ERIC Educational Resources Information Center
Ding, Cody S.; Davison, Mark L.
2010-01-01
Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…
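As a rough illustration of the criterion described above (a least squares loss combined with the number of estimated parameters), the sketch below uses the Gaussian-likelihood form of AIC for a least-squares fit; the dimensionalities, parameter counts, and SSE values are hypothetical and not taken from the paper.

```python
import numpy as np

def aic_least_squares(sse, n_obs, n_params):
    """Gaussian-likelihood form of AIC for a least-squares fit:
    AIC = n * ln(SSE / n) + 2k (additive constants dropped)."""
    return n_obs * np.log(sse / n_obs) + 2 * n_params

# Hypothetical scaling solutions of increasing dimensionality: the lowest AIC
# marks the preferred trade-off between fit and number of estimated parameters.
n_distances = 45
for dim, sse, n_params in [(1, 30.0, 10), (2, 12.0, 20), (3, 11.5, 30)]:
    print(dim, round(aic_least_squares(sse, n_distances, n_params), 2))
```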
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite-sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), Bayesian information criterion (BIC), and extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC, or EBIC. We illustrate the application of MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that aim to identify genes related to cancer. Our simulation studies and data examples demonstrate that CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
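A minimal sketch of the CV-AUC tuning idea in Python, using scikit-learn with an L1-penalized logistic model as a stand-in for the MCP penalty (scikit-learn does not implement MCP); the data, fold count, and tuning grid are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

# Synthetic sparse, high-dimensional binary-outcome data.
X, y = make_classification(n_samples=200, n_features=500, n_informative=10, random_state=0)

def cv_auc(X, y, C, n_splits=5):
    """Cross-validated AUC for one value of the tuning parameter."""
    aucs = []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=1).split(X, y):
        model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        model.fit(X[tr], y[tr])
        aucs.append(roc_auc_score(y[te], model.decision_function(X[te])))
    return np.mean(aucs)

grid = [0.01, 0.05, 0.1, 0.5, 1.0]
best_C = max(grid, key=lambda C: cv_auc(X, y, C))
print("tuning parameter selected by CV-AUC:", best_C)
```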
A stopping criterion for the iterative solution of partial differential equations
NASA Astrophysics Data System (ADS)
Rao, Kaustubh; Malan, Paul; Perot, J. Blair
2018-01-01
A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
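The core idea, estimating the remaining error from the history of solution changes under an assumption of roughly geometric convergence, can be sketched as follows; this illustrates the general principle only and is not the paper's exact estimator or its singular-value fallback.

```python
import numpy as np

def estimated_error(deltas):
    """Estimate the remaining solution error from the iterate changes,
    assuming roughly geometric convergence: ||x* - x_k|| ~ d_k * r / (1 - r),
    where r is the observed contraction rate."""
    d_prev, d_curr = deltas[-2], deltas[-1]
    r = d_curr / d_prev
    if r >= 1.0:            # noisy or stagnating changes: no reliable estimate
        return None
    return d_curr * r / (1.0 - r)

# Example: Jacobi iteration on a small diagonally dominant system.
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)
deltas = []
for _ in range(50):
    x_new = (b - (A - np.diag(np.diag(A))) @ x) / np.diag(A)
    deltas.append(np.linalg.norm(x_new - x))
    x = x_new
    if len(deltas) >= 2:
        err = estimated_error(deltas)
        if err is not None and err < 1e-8:
            break
print("iterations:", len(deltas), "true error:", np.linalg.norm(x - np.linalg.solve(A, b)))
```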
Adaptive selection and validation of models of complex systems in the presence of uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell-Maupin, Kathryn; Oden, J. T.
2017-08-01
This study describes versions of OPAL, the Occam-Plausibility Algorithm, in which the use of Bayesian model plausibilities is replaced with information theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
Entropic criterion for model selection
NASA Astrophysics Data System (ADS)
Tseng, Chih-Yuan
2006-10-01
Model or variable selection is usually achieved by ranking models in increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as a selection criterion. This raises two questions: why use this criterion, and are there other criteria? Moreover, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion by considering a physical problem, simple fluids, and the results are promising.
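As a toy illustration of ranking candidate models by relative entropy, the sketch below scores two discrete model distributions against an empirical one; the probabilities are invented.

```python
import numpy as np
from scipy.stats import entropy

# Rank candidate discrete models by relative entropy (Kullback-Leibler distance)
# to the empirical distribution; the lowest value is preferred.
empirical = np.array([0.42, 0.33, 0.15, 0.10])
models = {
    "model A": np.array([0.40, 0.35, 0.15, 0.10]),
    "model B": np.array([0.25, 0.25, 0.25, 0.25]),
}
for name, p in models.items():
    print(name, round(entropy(empirical, p), 4))   # D_KL(empirical || model)
```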
Evaluation of volatile organic emissions from hazardous waste incinerators.
Sedman, R M; Esparza, J R
1991-01-01
Conventional methods of risk assessment typically employed to evaluate the impact of hazardous waste incinerators on public health must rely on somewhat speculative emissions estimates or on complicated and expensive sampling and analytical methods. The limited amount of toxicological information concerning many of the compounds detected in stack emissions also complicates the evaluation of the public health impacts of these facilities. An alternative approach aimed at evaluating the public health impacts associated with volatile organic stack emissions is presented that relies on a screening criterion to evaluate total stack hydrocarbon emissions. If the concentration of hydrocarbons in ambient air is below the screening criterion, volatile emissions from the incinerator are judged not to pose a significant threat to public health. Both the screening criterion and a conventional method of risk assessment were employed to evaluate the emissions from 20 incinerators. Use of the screening criterion always yielded a substantially greater estimate of risk than that derived by the conventional method. Since the use of the screening criterion always yielded estimates of risk that were greater than that determined by conventional methods and measuring total hydrocarbon emissions is a relatively simple analytical procedure, the use of the screening criterion would appear to facilitate the evaluation of operating hazardous waste incinerators. PMID:1954928
Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an
2017-05-01
Minor components (MCs) play an important role in signal processing and data analysis, so developing MC extraction algorithms is valuable work. Based on the concepts of weighted subspace and optimum theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum, attained if and only if the state matrix is composed of the desired MCs of the autocorrelation matrix of an input signal. Using the gradient ascent method and the recursive least squares (RLS) method, two algorithms are developed for extracting multiple MCs. The global convergence of the proposed algorithms is also analyzed by the Lyapunov method. The proposed algorithms can extract multiple MCs in parallel and have an advantage in dealing with high-dimensional matrices. Since the weighting matrix does not require an accurate value, it facilitates the system design of the proposed algorithms for practical applications. The speed and computational advantages of the proposed algorithms are verified through simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.
2014-09-01
Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to predict future flood magnitudes from the magnitude and frequency of extreme rainfall events. This study analyses the application of a rainfall partial duration series (PDS) in the rapidly growing city of Madinah, located in the western part of Saudi Arabia. Several statistical distributions were applied (Normal, Log-Normal, Extreme Value Type I, Generalized Extreme Value, Pearson Type III, and Log-Pearson Type III), and their parameters were estimated using L-moments methods. Different model selection criteria were also applied: the Akaike Information Criterion (AIC), corrected Akaike Information Criterion (AICc), Bayesian Information Criterion (BIC), and Anderson-Darling Criterion (ADC). The analysis indicated the Generalized Extreme Value as the best-fitting statistical distribution for the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
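A rough sketch of the distribution-comparison step: several candidate distributions are fitted to a synthetic rainfall-like series and ranked by AIC, AICc, and BIC. Note that scipy fits by maximum likelihood rather than the L-moments method used in the study, and the data are simulated, not the Madinah series.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-in for a partial-duration daily rainfall series (mm);
# the study also considered Log-Normal and (Log-)Pearson Type III fits.
rainfall = stats.genextreme.rvs(c=-0.1, loc=30, scale=10, size=80, random_state=rng)

candidates = {"Normal": stats.norm, "Gumbel (EV-I)": stats.gumbel_r, "GEV": stats.genextreme}
n = len(rainfall)
for name, dist in candidates.items():
    params = dist.fit(rainfall)          # ML fit (the study used L-moments)
    loglik = np.sum(dist.logpdf(rainfall, *params))
    k = len(params)
    aic = 2 * k - 2 * loglik
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = k * np.log(n) - 2 * loglik
    print(f"{name:14s} AIC={aic:7.1f}  AICc={aicc:7.1f}  BIC={bic:7.1f}")
```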
Ternès, Nils; Rotolo, Federico; Michiels, Stefan
2016-07-10
Correct selection of prognostic biomarkers among multiple candidates is becoming increasingly challenging as the dimensionality of biological data becomes higher. Therefore, minimizing the false discovery rate (FDR) is of primary importance, while a low false negative rate (FNR) is a complementary measure. The lasso is a popular selection method in Cox regression, but its results depend heavily on the penalty parameter λ. Usually, λ is chosen using maximum cross-validated log-likelihood (max-cvl). However, this method often has a very high FDR. We review methods for a more conservative choice of λ. We propose an empirical extension of the cvl by adding a penalization term, which trades off between the goodness-of-fit and the parsimony of the model, leading to the selection of fewer biomarkers and, as we show, to a reduction of the FDR without a large increase in FNR. We conducted a simulation study considering null and moderately sparse alternative scenarios and compared our approach with the standard lasso and 10 other competitors: Akaike information criterion (AIC), corrected AIC, Bayesian information criterion (BIC), extended BIC, Hannan and Quinn information criterion (HQIC), risk information criterion (RIC), one-standard-error rule, adaptive lasso, stability selection, and percentile lasso. Our extension achieved the best compromise across all the scenarios between a reduction of the FDR and a limited rise of the FNR, followed by the AIC, the RIC, and the adaptive lasso, which performed well in some settings. We illustrate the methods using gene expression data of 523 breast cancer patients. In conclusion, we propose to apply our extension to the lasso whenever a stringent FDR with a limited FNR is targeted. Copyright © 2016 John Wiley & Sons, Ltd.
Minimal Polynomial Method for Estimating Parameters of Signals Received by an Antenna Array
NASA Astrophysics Data System (ADS)
Ermolaev, V. T.; Flaksman, A. G.; Elokhin, A. V.; Kuptsov, V. V.
2018-01-01
The effectiveness of the projection minimal polynomial method for solving the problem of determining the number of sources of signals acting on an antenna array (AA) with an arbitrary configuration and their angular directions has been studied. The method proposes estimating the degree of the minimal polynomial of the correlation matrix (CM) of the input process in the AA on the basis of a statistically validated root-mean-square criterion. Special attention is paid to the case of the ultrashort sample of the input process when the number of samples is considerably smaller than the number of AA elements, which is important for multielement AAs. It is shown that the proposed method is more effective in this case than methods based on the AIC (Akaike's Information Criterion) or minimum description length (MDL) criterion.
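For reference, the classical AIC/MDL estimators of the number of sources against which the minimal-polynomial method is compared can be written directly in terms of the eigenvalues of the sample covariance matrix (the Wax-Kailath form). The sketch below is a generic illustration on synthetic array data, not the authors' implementation.

```python
import numpy as np

def aic_mdl_source_count(eigvals, n_snapshots):
    """Classic AIC/MDL estimates of the number of sources from the eigenvalues
    of the array covariance matrix (Wax-Kailath form)."""
    M = len(eigvals)
    eigvals = np.sort(eigvals)[::-1]
    aic, mdl = [], []
    for k in range(M):
        noise = eigvals[k:]
        # log of (geometric mean / arithmetic mean) of the noise eigenvalues
        log_ratio = np.sum(np.log(noise)) - (M - k) * np.log(np.mean(noise))
        free = k * (2 * M - k)
        aic.append(-2 * n_snapshots * log_ratio + 2 * free)
        mdl.append(-n_snapshots * log_ratio + 0.5 * free * np.log(n_snapshots))
    return int(np.argmin(aic)), int(np.argmin(mdl))

# Two sources impinging on an 8-element array, short snapshot record.
rng = np.random.default_rng(0)
M, N, d = 8, 12, 2
A = np.exp(1j * np.outer(np.arange(M), [0.6, 1.9]))        # steering vectors
S = rng.normal(size=(d, N)) + 1j * rng.normal(size=(d, N))
X = A @ S + 0.3 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
R = X @ X.conj().T / N
print(aic_mdl_source_count(np.real(np.linalg.eigvalsh(R)), N))
```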
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, C_E, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, C_Ek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown C_Ek from the residuals during model calibration. The inferred C_Ek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using C_Ek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using C_Ek obtained from the iterative two-stage method also improved the predictive performance of the individual models and of model averaging in both the synthetic and experimental studies.
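The model-averaging weights referred to here all share the same exponential form in the criterion differences, which is why a modest IC gap already concentrates nearly all weight on one model. A minimal sketch (the IC values below are invented for illustration):

```python
import numpy as np

def averaging_weights(ic_values):
    """Model-averaging weights from information-criterion values; AIC, AICc,
    BIC, and KIC are all used in the same exp(-delta_IC / 2) form."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical IC values for four alternative conceptual models: a spread of
# only a few IC units already pushes almost all weight onto the "best" model.
print(averaging_weights([100.0, 110.0, 112.0, 125.0]).round(4))
```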
The Development of a Criterion Instrument for Counselor Selection.
ERIC Educational Resources Information Center
Remer, Rory; Sease, William
A measure of potential performance as a counselor is needed as an adjunct to the information presently employed in selection decisions. This article deals with one possible method of development of such a potential performance criterion and the steps taken, to date, in the attempt to validate it. It includes: the overall effectiveness of the…
NASA Astrophysics Data System (ADS)
Diamant, Idit; Shalhon, Moran; Goldberger, Jacob; Greenspan, Hayit
2016-03-01
Classification of clustered breast microcalcifications into benign and malignant categories is an extremely challenging task for computerized algorithms and expert radiologists alike. In this paper we present a novel method for feature selection based on mutual information (MI) criterion for automatic classification of microcalcifications. We explored the MI based feature selection for various texture features. The proposed method was evaluated on a standardized digital database for screening mammography (DDSM). Experimental results demonstrate the effectiveness and the advantage of using the MI-based feature selection to obtain the most relevant features for the task and thus to provide for improved performance as compared to using all features.
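A minimal sketch of MI-based feature ranking with scikit-learn, on synthetic data standing in for the texture features; the feature counts and the downstream classifier are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for texture features extracted from microcalcification ROIs.
X, y = make_classification(n_samples=300, n_features=60, n_informative=8, random_state=0)

# Rank features by their mutual information with the benign/malignant label
# and keep the top-k most relevant ones.
mi = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(mi)[::-1][:10]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("all features :", cross_val_score(clf, X, y, cv=5).mean().round(3))
print("MI-selected  :", cross_val_score(clf, X[:, top_k], y, cv=5).mean().round(3))
```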
ERIC Educational Resources Information Center
Beretvas, S. Natasha; Murphy, Daniel L.
2013-01-01
The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
Link, William; Sauer, John R.
2016-01-01
The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in the context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion (BPIC) and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion. We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
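For concreteness, WAIC can be computed from a matrix of pointwise log-likelihoods evaluated at posterior draws. Below is a minimal sketch with a toy Gaussian model (not the Breeding Bird Survey analysis).

```python
import numpy as np

def waic(log_lik):
    """WAIC from an (S x n) matrix of pointwise log-likelihoods evaluated at
    S posterior draws for n observations (standard formula, deviance scale)."""
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

# Toy example: Gaussian model with known unit variance, posterior draws of the mean.
rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=2000)
log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu_draws[:, None]) ** 2
print("WAIC:", round(waic(log_lik), 2))
```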
ERIC Educational Resources Information Center
Vrieze, Scott I.
2012-01-01
This article reviews the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. Theoretical statistical results in regression are discussed, and more important…
[Information value of "additional tasks" method to evaluate pilot's work load].
Gorbunov, V V
2005-01-01
"Additional task" method was used to evaluate pilot's work load in prolonged flight. Calculated through durations of latent periods of motor responses, quantitative criterion of work load is more informative for objective evaluation of pilot's involvement in his piloting functions rather than of other registered parameters.
Yu, Fang; Chen, Ming-Hui; Kuo, Lynn; Talbott, Heather; Davis, John S
2015-08-07
Recently, Bayesian methods have become more popular for analyzing high-dimensional gene expression data, as they allow us to borrow information across different genes and provide powerful estimators for evaluating gene expression levels. It is crucial to develop a simple but efficient gene selection algorithm for detecting differentially expressed (DE) genes based on the Bayesian estimators. In this paper, by extending the two-criterion idea of Chen et al. (Chen M-H, Ibrahim JG, Chi Y-Y. A new class of mixture models for differential gene expression in DNA microarray data. J Stat Plan Inference. 2008;138:387-404), we propose two new gene selection algorithms for general Bayesian models and name these new methods the confident difference criterion methods. One is based on the standardized differences between two mean expression values among genes; the other adds the differences between two variances to it. The proposed confident difference criterion methods first evaluate the posterior probability of a gene having different gene expressions between competitive samples and then declare a gene to be DE if the posterior probability is large. The theoretical connection between the proposed first method based on the means and the Bayes factor approach proposed by Yu et al. (Yu F, Chen M-H, Kuo L. Detecting differentially expressed genes using calibrated Bayes factors. Statistica Sinica. 2008;18:783-802) is established under the normal-normal model with equal variances between two samples. The empirical performance of the proposed methods is examined and compared to those of several existing methods via several simulations. The results from these simulation studies show that the proposed confident difference criterion methods outperform the existing methods when comparing gene expressions across different conditions for both microarray studies and sequence-based high-throughput studies. A real dataset is used to further demonstrate the proposed methodology. In the real data application, the confident difference criterion methods successfully identified more clinically important DE genes than the other methods. The confident difference criterion method proposed in this paper provides a new efficient approach for both microarray studies and sequence-based high-throughput studies to identify differentially expressed genes.
NASA Astrophysics Data System (ADS)
Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin
2015-03-01
Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables the automated analysis of several log-magnitude higher numbers of cells compared to microscopy-based approaches. However, rotational positioning of cells can occur, leading to discordance in spot counts. To address counting errors caused by overlapping spots, this study proposes a Gaussian mixture model (GMM) based classification method. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM fits are used as global image features for this classification method. Using a random forest classifier, the results show that the proposed method is able to detect closely overlapping spots that cannot be separated by existing image-segmentation-based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
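A small sketch of the AIC/BIC-as-features idea using scikit-learn's GaussianMixture on synthetic point clouds standing in for spot pixel data; the coordinates, component range, and use of a random forest downstream are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Toy stand-in for pixel coordinates of a fluorescent signal that may contain
# one spot or two closely overlapping spots.
single = rng.normal([5, 5], 0.8, size=(200, 2))
double = np.vstack([rng.normal([5, 5], 0.8, size=(100, 2)),
                    rng.normal([6.2, 5.3], 0.8, size=(100, 2))])

def gmm_ic_features(points, max_components=3):
    """AIC/BIC of GMM fits with 1..max_components components; these values can
    be fed to a downstream classifier (e.g., a random forest) as global features."""
    feats = []
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(points)
        feats += [gmm.aic(points), gmm.bic(points)]
    return np.array(feats)

print("single spot :", gmm_ic_features(single).round(1))
print("overlapping :", gmm_ic_features(double).round(1))
```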
Bayes' Theorem: An Old Tool Applicable to Today's Classroom Measurement Needs. ERIC/AE Digest.
ERIC Educational Resources Information Center
Rudner, Lawrence M.
This digest introduces ways of responding to the call for criterion-referenced information using Bayes' Theorem, a method that was coupled with criterion-referenced testing in the early 1970s (see R. Hambleton and M. Novick, 1973). To illustrate Bayes' Theorem, an example is given in which the goal is to classify an examinee as being a master or…
Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan
2015-01-01
Gene expression data typically are large, complex, and highly noisy. Their dimension is high with several thousand genes (i.e., features) but with only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings in the case of data sets involving undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a new and novel approach using maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further introduce and develop celebrated Akaike's information criterion (AIC), consistent Akaike's information criterion (CAIC), and the information theoretic measure of complexity (ICOMP) criterion of Bozdogan. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate to perform the PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions. PMID:25838836
NASA Astrophysics Data System (ADS)
He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2015-03-01
Coded exposure photography makes motion deblurring a well-posed problem. In coded exposure, the integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is significant for coded exposure. In this paper, an improved criterion for optimal code search is proposed by analyzing the relationship between code length and the number of ones in the code, and by considering the effect of noise on code selection with an affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time required to search for the optimal code decreases with the presented method, and the restored image has better subjective quality and superior objective evaluation values.
Water-sediment controversy in setting environmental standards for selenium
Hamilton, Steven J.; Lemly, A. Dennis
1999-01-01
A substantial amount of laboratory and field research on selenium effects to biota has been accomplished since the national water quality criterion was published for selenium in 1987. Many articles have documented adverse effects on biota at concentrations below the current chronic criterion of 5 μg/L. This commentary will present information to support a national water quality criterion for selenium of 2 μg/L, based on a wide array of support from federal, state, university, and international sources. Recently, two articles have argued for a sediment-based criterion and presented a model for deriving site-specific criteria. In one example, they calculate a criterion of 31 μg/L for a stream with a low sediment selenium toxicity threshold and low site-specific sediment total organic carbon content, which is substantially higher than the national criterion of 5 μg/L. Their basic premise for proposing a sediment-based method has been critically reviewed and problems in their approach are discussed.
Comparison of Nurse Staffing Measurements in Staffing-Outcomes Research.
Park, Shin Hye; Blegen, Mary A; Spetz, Joanne; Chapman, Susan A; De Groot, Holly A
2015-01-01
Investigators have used a variety of operational definitions of nursing hours of care in measuring nurse staffing for health services research. However, little is known about which approach is best for nurse staffing measurement. To examine whether various nursing hours measures yield different model estimations when predicting patient outcomes and to determine the best method to measure nurse staffing based on the model estimations. We analyzed data from the University HealthSystem Consortium for 2005. The sample comprised 208 hospital-quarter observations from 54 hospitals, representing information on 971 adult-care units and about 1 million inpatient discharges. We compared regression models using different combinations of staffing measures based on productive/nonproductive and direct-care/indirect-care hours. Akaike Information Criterion and Bayesian Information Criterion were used in the assessment of staffing measure performance. The models that included the staffing measure calculated from productive hours by direct-care providers were best, in general. However, the Akaike Information Criterion and Bayesian Information Criterion differences between models were small, indicating that distinguishing nonproductive and indirect-care hours from productive direct-care hours does not substantially affect the approximation of the relationship between nurse staffing and patient outcomes. This study is the first to explicitly evaluate various measures of nurse staffing. Productive hours by direct-care providers are the strongest measure related to patient outcomes and thus should be preferred in research on nurse staffing and patient outcomes.
Performance index and meta-optimization of a direct search optimization method
NASA Astrophysics Data System (ADS)
Krus, P.; Ölvander, J.
2013-10-01
Design optimization is becoming an increasingly important tool for design, often using simulation as part of the evaluation of the objective function. A measure of the efficiency of an optimization algorithm is of great importance when comparing methods. The main contribution of this article is the introduction of a singular performance criterion, the entropy rate index based on Shannon's information theory, taking both reliability and rate of convergence into account. It can also be used to characterize the difficulty of different optimization problems. Such a performance criterion can also be used for optimization of the optimization algorithms itself. In this article the Complex-RF optimization method is described and its performance evaluated and optimized using the established performance criterion. Finally, in order to be able to predict the resources needed for optimization an objective function temperament factor is defined that indicates the degree of difficulty of the objective function.
Accounting for uncertainty in health economic decision models by using model averaging.
Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D
2009-04-01
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
Spatiotemporal coding in the cortex: information flow-based learning in spiking neural networks.
Deco, G; Schürmann, B
1999-05-15
We introduce a learning paradigm for networks of integrate-and-fire spiking neurons that is based on an information-theoretic criterion. This criterion can be viewed as a first principle that demonstrates the experimentally observed fact that cortical neurons display synchronous firing for some stimuli and not for others. The principle can be regarded as the postulation of a nonparametric reconstruction method as optimization criteria for learning the required functional connectivity that justifies and explains synchronous firing for binding of features as a mechanism for spatiotemporal coding. This can be expressed in an information-theoretic way by maximizing the discrimination ability between different sensory inputs in minimal time.
How Can We Get the Information about Democracy? The Example of Social Studies Prospective Teachers
ERIC Educational Resources Information Center
Tonga, Deniz
2014-01-01
This research aims to examine the information that prospective social studies teachers have about democracy and how they interpret its sources. The research was planned as a survey research methodology and the participants were determined with the criterion sampling method. The data were collected through developed open-ended questions…
Shen, Chung-Wei; Chen, Yi-Hau
2018-03-13
We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.
Automatic discovery of optimal classes
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John; Freeman, Don; Self, Matthew
1986-01-01
A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classification) for a given set of examples. This criterion is transformed into an equivalent minimum message length criterion with an intuitive information interpretation. This criterion does not require that the number of classes be specified in advance; the number of classes is determined by the data. The minimum message length criterion includes the message length required to describe the classes, so there is a built-in bias against adding new classes unless they lead to a reduction in the message length required to describe the data. Unfortunately, the search space of possible classifications is too large to search exhaustively, so heuristic search methods, such as simulated annealing, are applied. Tutored learning and probabilistic prediction in particular cases are an important indirect result of optimal class discovery. Extensions to the basic class induction program include the ability to combine category and real-value data, hierarchical classes, independent classifications, and deciding for each class which attributes are relevant.
Kerridge, Bradley T.; Saha, Tulshi D.; Smith, Sharon; Chou, Patricia S.; Pickering, Roger P.; Huang, Boji; Ruan, June W.; Pulay, Attila J.
2012-01-01
Background Prior research has demonstrated the dimensionality of Diagnostic and Statistical Manual of Mental Disorders - Fourth Edition (DSM-IV) alcohol, nicotine, cannabis, cocaine and amphetamine abuse and dependence criteria. The purpose of this study was to examine the dimensionality of hallucinogen and inhalant/solvent abuse and dependence criteria. In addition, we assessed the impact of elimination of the legal problems abuse criterion on the information value of the aggregate abuse and dependence criteria, another proposed change for DSM-V currently lacking empirical justification. Methods Factor analyses and item response theory (IRT) analyses were used to explore the unidimensionality and psychometric properties of hallucinogen and inhalant/solvent abuse and dependence criteria using a large representative sample of the United States (U.S.) general population. Results Hallucinogen and inhalant/solvent abuse and dependence criteria formed unidimensional latent traits. For both substances, IRT models without the legal problems abuse criterion demonstrated better fit than the corresponding model with the legal problems abuse criterion. Further, there were no differences in the information value of the IRT models with and without the legal problems abuse criterion, supporting the elimination of that criterion. No bias in the new diagnoses was observed by sex, age and race-ethnicity. Conclusion Consistent with findings for alcohol, nicotine, cannabis, cocaine and amphetamine abuse and dependence criteria, hallucinogen and inhalant/solvent criteria reflect underlying dimensions of severity. The legal problems criterion associated with each of these substance use disorders can be eliminated with no loss in informational value and an advantage of parsimony. Taken together, these findings support the changes to substance use disorder diagnoses recommended by the DSM-V Substance and Related Disorders Workgroup, that is, combining DSM-IV abuse and dependence criteria and eliminating the legal problems abuse criterion. PMID:21621334
Comparing hierarchical models via the marginalized deviance information criterion.
Quintero, Adrian; Lesaffre, Emmanuel
2018-07-20
Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When the estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC) that incorporates the latent variables in the focus of the analysis and the marginalized DIC (mDIC) that integrates them out. Regardless of the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations in order to reduce the computational complexity for large-sample situations. The method is illustrated with simulated data sets and 2 medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
Modeling crime events by d-separation method
NASA Astrophysics Data System (ADS)
Aarthee, R.; Ezhilmaran, D.
2017-11-01
Problematic legal cases have recently called for a scientifically founded method of dealing with the qualitative and quantitative roles of evidence in a case [1]. To deal with the quantitative role, we propose a d-separation method for modeling crime events. d-separation is a graphical criterion for identifying independence in a directed acyclic graph. By developing a d-separation method, we aim to lay the foundations for the development of a software support tool that can handle evidential reasoning in legal cases. Such a tool is meant to be used by a judge or juror, in alliance with various experts who can provide information about the details. This will hopefully improve the communication between judges or jurors and experts. The proposed method can be used to uncover more valid independencies than other graphical criteria.
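A compact sketch of the d-separation criterion itself, implemented via the standard ancestral moral-graph construction and applied to a toy crime-event DAG; the variables and graph are invented for illustration and are not taken from the paper.

```python
import itertools

def d_separated(dag, x, y, z):
    """Test whether node sets x and y are d-separated by z in a DAG given as
    {node: [parents]}, using the ancestral moral-graph construction."""
    x, y, z = set(x), set(y), set(z)

    def ancestors(nodes):
        seen, stack = set(nodes), list(nodes)
        while stack:
            for parent in dag.get(stack.pop(), []):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

    keep = ancestors(x | y | z)                       # ancestral subgraph
    adj = {v: set() for v in keep}
    for child in keep:                                # moralize: undirect + marry parents
        parents = [p for p in dag.get(child, []) if p in keep]
        for p in parents:
            adj[child].add(p)
            adj[p].add(child)
        for p, q in itertools.combinations(parents, 2):
            adj[p].add(q)
            adj[q].add(p)

    frontier, seen = list(x - z), set(x - z)          # reachability avoiding z
    while frontier:
        v = frontier.pop()
        if v in y:
            return False
        for w in adj[v] - z:
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return True

# Toy crime-event DAG: motive -> crime <- opportunity, crime -> evidence.
dag = {"crime": ["motive", "opportunity"], "evidence": ["crime"],
       "motive": [], "opportunity": []}
print(d_separated(dag, {"motive"}, {"opportunity"}, set()))         # True: collider blocks
print(d_separated(dag, {"motive"}, {"opportunity"}, {"evidence"}))  # False: conditioning opens it
```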
Obuchowski, N A
2001-10-15
Electronic medical images are an efficient and convenient format in which to display, store and transmit radiographic information. Before electronic images can be used routinely to screen and diagnose patients, however, it must be shown that readers have the same diagnostic performance with this new format as traditional hard-copy film. Currently, there exist no suitable definitions of diagnostic equivalence. In this paper we propose two criteria for diagnostic equivalence. The first criterion ('population equivalence') considers the variability between and within readers, as well as the mean reader performance. This criterion is useful for most applications. The second criterion ('individual equivalence') involves a comparison of the test results for individual patients and is necessary when patients are followed radiographically over time. We present methods for testing both individual and population equivalence. The properties of the proposed methods are assessed in a Monte Carlo simulation study. Data from a mammography screening study is used to illustrate the proposed methods and compare them with results from more conventional methods of assessing equivalence and inter-procedure agreement. Copyright 2001 John Wiley & Sons, Ltd.
Analysis of the observed and intrinsic durations of Swift/BAT gamma-ray bursts
NASA Astrophysics Data System (ADS)
Tarnopolski, Mariusz
2016-07-01
The duration distributions of 947 GRBs observed by Swift/BAT, as well as of a subsample of 347 events with measured redshift, which allows the durations to be examined in both the observer and rest frames, are analyzed. Using a maximum log-likelihood method, mixtures of two and three standard Gaussians are fitted to each sample, and the adequate model is chosen based on the difference in the log-likelihoods, the Akaike information criterion, and the Bayesian information criterion. It is found that a two-Gaussian mixture is a better description than a three-Gaussian mixture, and that the presumed intermediate-duration class is unlikely to be present in the Swift duration data.
Final Environmental Assessment of Military Service Station Privatization at Five AETC Installations
2013-10-01
distinction (Criterion C); or • Have yielded, or may likely yield, information important in prehistory or history (Criterion D). Resources less than 50...important information in history or prehistory ; thus, it does not meet the requirement of Criterion D. Building 2109 is recommended not eligible for
NASA Astrophysics Data System (ADS)
Kotchasarn, Chirawat; Saengudomlert, Poompat
We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.
Emitter Number Estimation by the General Information Theoretic Criterion from Pulse Trains
2002-12-01
negative log likelihood function plus a penalty function. The general information criteria by Yin and Krishnaiah [11] are different from the regular...548-551, Victoria, BC, Canada, March 1999 DRDC Ottawa TR 2002-156 11 11. L. Zhao, P. P. Krishnaiah and Z. Bai, “On some nonparametric methods for
Comparison of case note review methods for evaluating quality and safety in health care.
Hutchinson, A; Coster, J E; Cooper, K L; McIntosh, A; Walters, S J; Bath, P A; Pearson, M; Young, T A; Rantell, K; Campbell, M J; Ratcliffe, J
2010-02-01
To determine which of two methods of case note review--holistic (implicit) and criterion-based (explicit)--provides the most useful and reliable information for quality and safety of care, and the level of agreement within and between groups of health-care professionals when they use the two methods to review the same record. To explore the process-outcome relationship between holistic and criterion-based quality-of-care measures and hospital-level outcome indicators. Case notes of patients at randomly selected hospitals in England. In the first part of the study, retrospective multiple reviews of 684 case notes were undertaken at nine acute hospitals using both holistic and criterion-based review methods. Quality-of-care measures included evidence-based review criteria and a quality-of-care rating scale. Textual commentary on the quality of care was provided as a component of holistic review. Review teams comprised combinations of: doctors (n = 16), specialist nurses (n = 10) and clinically trained audit staff (n = 3) and non-clinical audit staff (n = 9). In the second part of the study, process (quality and safety) of care data were collected from the case notes of 1565 people with either chronic obstructive pulmonary disease (COPD) or heart failure in 20 hospitals. Doctors collected criterion-based data from case notes and used implicit review methods to derive textual comments on the quality of care provided and score the care overall. Data were analysed for intrarater consistency, inter-rater reliability between pairs of staff using intraclass correlation coefficients (ICCs) and completeness of criterion data capture, and comparisons were made within and between staff groups and between review methods. To explore the process-outcome relationship, a range of publicly available health-care indicator data were used as proxy outcomes in a multilevel analysis. Overall, 1473 holistic and 1389 criterion-based reviews were undertaken in the first part of the study. When same staff-type reviewer pairs/groups reviewed the same record, holistic scale score inter-rater reliability was moderate within each of the three staff groups [intraclass correlation coefficient (ICC) 0.46-0.52], and inter-rater reliability for criterion-based scores was moderate to good (ICC 0.61-0.88). When different staff-type pairs/groups reviewed the same record, agreement between the reviewer pairs/groups was weak to moderate for overall care (ICC 0.24-0.43). Comparison of holistic review score and criterion-based score of case notes reviewed by doctors and by non-clinical audit staff showed a reasonable level of agreement (p-values for difference 0.406 and 0.223, respectively), although results from all three staff types showed no overall level of agreement (p-value for difference 0.057). Detailed qualitative analysis of the textual data indicated that the three staff types tended to provide different forms of commentary on quality of care, although there was some overlap between some groups. In the process-outcome study there generally were high criterion-based scores for all hospitals, whereas there was more interhospital variation between the holistic review overall scale scores. Textual commentary on the quality of care verified the holistic scale scores. Differences among hospitals with regard to the relationship between mortality and quality of care were not statistically significant. 
Using the holistic approach, the three groups of staff appeared to interpret the recorded care differently when they each reviewed the same record. When the same clinical record was reviewed by doctors and non-clinical audit staff, there was no significant difference between the assessments of quality of care generated by the two groups. All three staff groups performed reasonably well when using criterion-based review, although the quality and type of information provided by doctors was of greater value. Therefore, when measuring quality of care from case notes, consideration needs to be given to the method of review, the type of staff undertaking the review, and the methods of analysis available to the review team. Review can be enhanced using a combination of both criterion-based and structured holistic methods with textual commentary, and variation in quality of care can best be identified from a combination of holistic scale scores and textual data review.
Integration of an EEG biomarker with a clinician's ADHD evaluation
Snyder, Steven M; Rugino, Thomas A; Hornig, Mady; Stein, Mark A
2015-01-01
Background This study is the first to evaluate an assessment aid for attention-deficit/hyperactivity disorder (ADHD) according to both Class-I evidence standards of American Academy of Neurology and De Novo requirements of US Food and Drug Administration. The assessment aid involves a method to integrate an electroencephalographic (EEG) biomarker, theta/beta ratio (TBR), with a clinician's ADHD evaluation. The integration method is intended as a step to help improve certainty with criterion E (i.e., whether symptoms are better explained by another condition). Methods To evaluate the assessment aid, investigators conducted a prospective, triple-blinded, 13-site, clinical cohort study. Comprehensive clinical evaluation data were obtained from 275 children and adolescents presenting with attentional and behavioral concerns. A qualified clinician at each site performed differential diagnosis. EEG was collected by separate teams. The reference standard was consensus diagnosis by an independent, multidisciplinary team (psychiatrist, psychologist, and neurodevelopmental pediatrician), which is well-suited to evaluate criterion E in a complex clinical population. Results Of 209 patients meeting ADHD criteria per a site clinician's judgment, 93 were separately found by the multidisciplinary team to be less likely to meet criterion E, implying possible overdiagnosis by clinicians in 34% of the total clinical sample (93/275). Of those 93, 91% were also identified by EEG, showing a relatively lower TBR (85/93). Further, the integration method was in 97% agreement with the multidisciplinary team in the resolution of a clinician's uncertain cases (35/36). TBR showed statistical power specific to supporting certainty of criterion E per the multidisciplinary team (Cohen's d, 1.53). Patients with relatively lower TBR were more likely to have other conditions that could affect criterion E certainty (10 significant results; P ≤ 0.05). Integration of this information with a clinician's ADHD evaluation could help improve diagnostic accuracy from 61% to 88%. Conclusions The EEG-based assessment aid may help improve accuracy of ADHD diagnosis by supporting greater criterion E certainty. PMID:25798338
Breakdown parameter for kinetic modeling of multiscale gas flows.
Meng, Jianping; Dongari, Nishanth; Reese, Jason M; Zhang, Yonghao
2014-06-01
Multiscale methods built purely on the kinetic theory of gases provide information about the molecular velocity distribution function. It is therefore both important and feasible to establish new breakdown parameters for assessing the appropriateness of a fluid description at the continuum level by utilizing kinetic information rather than macroscopic flow quantities alone. We propose a new kinetic criterion to indirectly assess the errors introduced by a continuum-level description of the gas flow. The analysis, which includes numerical demonstrations, focuses on the validity of the Navier-Stokes-Fourier equations and corresponding kinetic models and reveals that the new criterion can consistently indicate the validity of continuum-level modeling in both low-speed and high-speed flows at different Knudsen numbers.
Variable selection with stepwise and best subset approaches
2016-01-01
While purposeful selection is performed partly by software and partly by hand, the stepwise and best subset approaches are automatically performed by software. Two R functions, stepAIC() and bestglm(), are well designed for stepwise and best subset regression, respectively. The stepAIC() function begins with a full or null model, and methods for stepwise regression can be specified in the direction argument with character values “forward”, “backward” and “both”. The bestglm() function begins with a data frame containing explanatory variables and response variables. The response variable should be in the last column. Varieties of goodness-of-fit criteria can be specified in the IC argument. The Bayesian information criterion (BIC) usually results in a more parsimonious model than the Akaike information criterion. PMID:27162786
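A short Python sketch of the forward-stepwise-by-AIC idea using statsmodels on synthetic data; it mirrors what stepAIC() with direction = "forward" automates in R but is not a reimplementation of that function.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

def forward_stepwise_aic(X, y):
    """Forward stepwise OLS selection by AIC: add the predictor that lowers
    AIC the most, stop when no addition improves it."""
    remaining, selected = list(range(X.shape[1])), []
    best_aic = sm.OLS(y, np.ones((len(y), 1))).fit().aic   # intercept-only model
    improved = True
    while improved and remaining:
        improved = False
        aics = {j: sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit().aic
                for j in remaining}
        j_best = min(aics, key=aics.get)
        if aics[j_best] < best_aic:
            best_aic = aics[j_best]
            selected.append(j_best)
            remaining.remove(j_best)
            improved = True
    return selected, best_aic

print(forward_stepwise_aic(X, y))
```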
Baumann, Martin; Keinath, Andreas; Krems, Josef F; Bengler, Klaus
2004-05-01
Despite the usefulness of new on-board information systems, one has to be concerned about the potential distraction effects that they impose on the driver. Therefore, methods and procedures are necessary to assess the visual demand associated with the use of an on-board system. The occlusion method is considered a strong candidate as a procedure for evaluating display designs with regard to their visual demand. This paper reports results from two experimental studies conducted to further evaluate this method. In the first study, performance in using an in-car navigation system was measured under three conditions: static (parking lot), occlusion (shutter glasses), and driving. The results show that the occlusion procedure can be used to simulate the visual requirements of real traffic conditions. In a second study, the occlusion method was compared to a global evaluation criterion based on total task time. It can be demonstrated that the occlusion method can identify tasks which meet this criterion but are yet irresolvable under driving conditions. It is concluded that the occlusion technique seems to be a reliable and valid method for evaluating visual and dialogue aspects of in-car information systems.
Tseng, Yi-Ju; Wu, Jung-Hsuan; Ping, Xiao-Ou; Lin, Hui-Chi; Chen, Ying-Yu; Shang, Rung-Ji; Chen, Ming-Yuan; Lai, Feipei
2012-01-01
Background The emergence and spread of multidrug-resistant organisms (MDROs) are causing a global crisis. Combating antimicrobial resistance requires prevention of transmission of resistant organisms and improved use of antimicrobials. Objectives To develop a Web-based information system for automatic integration, analysis, and interpretation of the antimicrobial susceptibility of all clinical isolates that incorporates rule-based classification and cluster analysis of MDROs and implements control chart analysis to facilitate outbreak detection. Methods Electronic microbiological data from a 2200-bed teaching hospital in Taiwan were classified according to predefined criteria of MDROs. The numbers of organisms, patients, and incident patients in each MDRO pattern were presented graphically to describe spatial and time information in a Web-based user interface. Hierarchical clustering with 7 upper control limits (UCL) was used to detect suspicious outbreaks. The system’s performance in outbreak detection was evaluated based on vancomycin-resistant enterococcal outbreaks determined by a hospital-wide prospective active surveillance database compiled by infection control personnel. Results The optimal UCL for MDRO outbreak detection was the upper 90% confidence interval (CI) using germ criterion with clustering (area under ROC curve (AUC) 0.93, 95% CI 0.91 to 0.95), upper 85% CI using patient criterion (AUC 0.87, 95% CI 0.80 to 0.93), and one standard deviation using incident patient criterion (AUC 0.84, 95% CI 0.75 to 0.92). The performance indicators of each UCL were statistically significantly higher with clustering than those without clustering in germ criterion (P < .001), patient criterion (P = .04), and incident patient criterion (P < .001). Conclusion This system automatically identifies MDROs and accurately detects suspicious outbreaks of MDROs based on the antimicrobial susceptibility of all clinical isolates. PMID:23195868
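A minimal sketch of the underlying control-chart rule (a fixed upper control limit over a baseline period); the z value, baseline length, and counts are illustrative assumptions and do not reproduce the system's seven UCL definitions or its clustering step.

```python
import numpy as np

def flag_outbreaks(weekly_counts, baseline_weeks=26, z=1.645):
    """Flag weeks whose MDRO count exceeds a one-sided limit of z standard
    deviations above the baseline mean (a simple upper control limit)."""
    counts = np.asarray(weekly_counts, dtype=float)
    mu, sd = counts[:baseline_weeks].mean(), counts[:baseline_weeks].std(ddof=1)
    ucl = mu + z * sd
    return [i for i, c in enumerate(counts[baseline_weeks:], start=baseline_weeks) if c > ucl]

rng = np.random.default_rng(3)
weekly = list(rng.poisson(4, size=40))
weekly[33] = 14          # injected outbreak-like spike
print("weeks flagged:", flag_outbreaks(weekly))
```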
Evaluation of Criterion Validity for Scales with Congeneric Measures
ERIC Educational Resources Information Center
Raykov, Tenko
2007-01-01
A method for estimating criterion validity of scales with homogeneous components is outlined. It accomplishes point and interval estimation of interrelationship indices between composite scores and criterion variables and is useful for testing hypotheses about criterion validity of measurement instruments. The method can also be used with missing…
Information theoretic methods for image processing algorithm optimization
NASA Astrophysics Data System (ADS)
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and the optimal results barely achievable in the manual calibration; thus an automated approach is a must. We will discuss an information theory based metric for evaluation of algorithm adaptive characteristics ("adaptivity criterion") using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of a physical "information restoration" rather than perceived image quality, it helps to reduce the set of the filter parameters to a smaller subset that is easier for a human operator to tune and achieve a better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot
Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki
2018-01-01
In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. The results support our theoretical outcomes. PMID:29872389
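The abstract above reduces action selection to monotone submodular maximization solved with a (lazy) greedy algorithm. Below is a minimal generic sketch of lazy greedy selection; the marginal-gain function `gain(action, selected)` is an assumed placeholder (in the paper's setting it would be a Monte Carlo estimate of the information gain), not part of the MHDP implementation.

```python
# Lazy greedy selection for monotone submodular maximization: cached marginal
# gains are valid upper bounds (by submodularity), so an element whose refreshed
# gain still beats the best cached bound can be selected without re-evaluating
# the rest.
import heapq

def lazy_greedy(actions, gain, budget):
    selected = []
    # Max-heap of (negative cached marginal gain, tie-breaker, action).
    heap = [(-gain(a, selected), i, a) for i, a in enumerate(actions)]
    heapq.heapify(heap)
    while heap and len(selected) < budget:
        neg_cached, i, a = heapq.heappop(heap)
        current = gain(a, selected)              # refresh the stale bound
        if not heap or current >= -heap[0][0]:   # still at least as good as the rest
            selected.append(a)
        else:
            heapq.heappush(heap, (-current, i, a))
    return selected
```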
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Torczon, Virginia
1998-01-01
We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
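As a hedged illustration of the idea described above, the sketch below runs a simple coordinate pattern search and stops when the pattern (mesh) size falls below a tolerance, standing in for the derivative-based test of the original subproblem. It is a generic sketch under simplifying assumptions (unconstrained objective, fixed contraction factor), not the Lewis-Torczon algorithm.

```python
# Coordinate pattern search with a pattern-size stopping criterion.
import numpy as np

def pattern_search(f, x0, delta=1.0, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if delta < tol:                    # stopping criterion: pattern size
            break
        improved = False
        for i in range(len(x)):
            for step in (+delta, -delta):  # poll the 2n coordinate directions
                trial = x.copy()
                trial[i] += step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            delta *= 0.5                   # unsuccessful poll: shrink the pattern
    return x, fx, delta
```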
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternate models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations, the number of observations and estimated parameters, and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that it may be expected that one parameter value will be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA.
Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
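As a hedged sketch of the default model-discrimination statistics listed above, the snippet below computes AIC, AICc, and BIC from a least-squares fit and turns any of them into posterior model probabilities (Akaike-type weights). KIC is omitted because it requires the Fisher information matrix; the constants follow common textbook conventions and may differ from MMA's implementation.

```python
# Information criteria from a Gaussian least-squares fit plus model weights.
import numpy as np

def information_criteria(rss, n, k):
    """rss: residual sum of squares, n: observations, k: estimated parameters."""
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1.0)
    aic = -2 * loglik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = -2 * loglik + k * np.log(n)
    return aic, aicc, bic

def model_probabilities(criteria):
    """Posterior model probabilities from any criterion (smaller is better)."""
    c = np.asarray(criteria, dtype=float)
    delta = c - c.min()
    weights = np.exp(-0.5 * delta)
    return weights / weights.sum()
```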
Lei, Xusheng; Li, Jingjing
2012-01-01
This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. Focusing on the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high frequency noises in the sensor output. Furthermore, to improve altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is proved by static tests, hovering flight and autonomous landing flight tests. PMID:23201993
The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.
Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C
2017-06-01
The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution are illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistic show that the EETE distribution provides a more reasonable fit than the competing distributions.
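The comparison workflow described above (maximum likelihood fits ranked by the minimized -log-likelihood, AIC and BIC) can be sketched as below. The EETE and ETE distributions are not available in SciPy, so two stock distributions stand in purely to illustrate the mechanics; the helper name is ours.

```python
# Fit candidate distributions by maximum likelihood and compare -logL, AIC, BIC.
import numpy as np
from scipy import stats

def compare_fits(data, candidates=(stats.expon, stats.gamma)):
    n = len(data)
    rows = []
    for dist in candidates:
        params = dist.fit(data)                      # maximum likelihood fit
        nll = -np.sum(dist.logpdf(data, *params))    # minimized -log-likelihood
        k = len(params)
        rows.append((dist.name, nll, 2 * k + 2 * nll, k * np.log(n) + 2 * nll))
    return rows  # (name, -logL, AIC, BIC); smaller is better
```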
Resolution improvement in positron emission tomography using anatomical Magnetic Resonance Imaging.
Chu, Yong; Su, Min-Ying; Mandelkern, Mark; Nalcioglu, Orhan
2006-08-01
An ideal imaging system should provide information with high sensitivity and high spatial and temporal resolution. Unfortunately, it is not possible to satisfy all of these desired features in a single modality. In this paper, we discuss methods to improve the spatial resolution of positron emission tomography (PET) using a priori information from Magnetic Resonance Imaging (MRI). Our approach uses an image restoration algorithm based on the maximization of mutual information (MMI), which has found significant success in optimizing multimodal image registration. The MMI criterion is used to estimate the parameters of the Sharpness-Constrained Wiener filter. The generated filter is then applied to restore PET images of a realistic digital brain phantom. The resulting restored images show improved resolution and a better signal-to-noise ratio compared to the interpolated PET images. We conclude that a Sharpness-Constrained Wiener filter with parameters optimized from an MMI criterion may be useful for restoring spatial resolution in PET based on a priori information from correlated MRI.
2008-06-01
The most common outranking methods are the preference ranking organization method for enrichment evaluation (PROMETHEE) and the elimination and ... Brans and Ph. Vincke, "A Preference Ranking Organization Method: (The PROMETHEE Method for Multiple Criteria Decision-Making)," Management Science 31 ... (PROMETHEE). This method needs a preference function for each criterion to compute the degree of preference. "The credibility of the outranking
Natural learning in NLDA networks.
González, Ana; Dorronsoro, José R
2007-07-01
Non-Linear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require the definition of an appropriate Riemannian structure on the NLDA weight space, we follow a simpler procedure based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X,W)] of a certain random vector Z, and we then define I = E[Z(X,W)Z(X,W)^T] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other square-error functions; the NLDA criterion J, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than standard gradient descent, even when its higher per-iteration cost is taken into account. While the faster convergence of natural MLP batch training can also be explained in terms of its relationship with the Gauss-Newton minimization method, this is not the case for NLDA training, as we show analytically and numerically that the Hessian and information matrices are different.
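A minimal numerical sketch of the update described above: the gradient is the sample mean of the per-example vectors Z(x, W), the information matrix is their average outer product, and the step follows I^{-1}∇J. The function `Z` is an assumed placeholder for the per-example gradient of the NLDA criterion, and the small ridge term is our addition, included only to keep the matrix invertible.

```python
# Natural-like gradient step: grad J = mean of Z(x, W), I = mean outer product
# of Z, update direction = I^{-1} grad J.
import numpy as np

def natural_gradient_step(W, X, Z, lr=0.1, ridge=1e-6):
    zs = np.array([Z(x, W) for x in X])               # one vector per example
    grad = zs.mean(axis=0)                            # estimate of grad J(W)
    info = zs.T @ zs / len(zs)                        # estimate of I = E[Z Z^T]
    direction = np.linalg.solve(info + ridge * np.eye(len(grad)), grad)
    return W - lr * direction
```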
Decohesion models informed by first-principles calculations: The ab initio tensile test
NASA Astrophysics Data System (ADS)
Enrique, Raúl A.; Van der Ven, Anton
2017-10-01
Extreme deformation and homogeneous fracture can be readily studied via ab initio methods by subjecting crystals to numerical "tensile tests", where the energy of locally stable crystal configurations corresponding to elongated and fractured states is evaluated by means of density functional method calculations. The information obtained can then be used to construct traction curves of cohesive zone models in order to address fracture at the macroscopic scale. In this work, we perform an in-depth analysis of traction curves and how ab initio calculations must be interpreted to rigorously parameterize an atomic-scale cohesive zone model, using crystalline Ag as an example. Our analysis of traction curves reveals the existence of two qualitatively distinct decohesion criteria: (i) an energy criterion whereby the released elastic energy equals the energy cost of creating two new surfaces and (ii) an instability criterion that occurs at a higher and size-independent stress than that of the energy criterion. We find that increasing the size of the simulation cell renders parts of the traction curve inaccessible to ab initio calculations involving the uniform decohesion of the crystal. We also find that the separation distance below which a crack heals is not a material parameter as has been proposed in the past. Finally, we show that a large energy barrier separates the uniformly stressed crystal from the decohered crystal, resolving a paradox predicted by a scaling law based on the energy criterion that implies that large crystals will decohere under vanishingly small stresses. This work clarifies confusion in the literature as to how a cohesive zone model is to be parameterized with ab initio "tensile tests" in the presence of internal relaxations.
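For the energy criterion above, a back-of-the-envelope version (our simplification, assuming uniform uniaxial stress σ in a slab of height L with Young's modulus E and surface energy γ) shows where the scaling paradox mentioned in the abstract comes from:

```latex
% Stored elastic energy per unit area of the slab equals the cost of two new surfaces:
\frac{\sigma^2}{2E}\,L = 2\gamma
\quad\Longrightarrow\quad
\sigma_c = 2\sqrt{\frac{\gamma E}{L}} \;\longrightarrow\; 0 \quad \text{as } L \to \infty .
```

An energy balance alone thus predicts a vanishing decohesion stress for large crystals, which is the paradox the size-independent instability criterion resolves.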
Phase object imaging inside the airy disc
NASA Astrophysics Data System (ADS)
Tychinsky, Vladimir P.
1991-03-01
The possibility of superresolution imaging of phase objects is theoretically justified. Measurements with the CPM "AIRYSCAN" showed that such structures can indeed be observed when the Airy disc diameter is 0.86 μm. SUMMARY It has been known that the amount of information contained in the image of any object is mostly determined by the number of independently measured points, that is, by the spatial resolution of the system. From the classical theory of optical systems it follows that for incoherent sources the spatial resolution is limited by the aperture, d ≈ 0.61λ/NA (the Rayleigh criterion, where λ is the wavelength and NA the numerical aperture). The use of this criterion is equivalent to the statement that any object inside the Airy disc of radius d, that is, the diffraction image of a point, is practically unresolved. However, under coherent illumination the intensity distribution in the image plane also depends upon the phase φ(r) of the wave scattered by the object, and this is the basis of the Zernike method of phase-contrast microscopy, differential interference contrast (DIC) and computer phase microscopy (CPM). In the theoretical foundation of these methods there was no doubt about the correctness of the Rayleigh criterion, since the phase information is derived from the intensity distribution, and as far as we know there were no experiments that disproved this
Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network
Jiang, Peng; Hu, Zhixin; Liu, Jun; Yu, Shanen; Wu, Feng
2016-01-01
Big sensor data provide significant potential for chemical fault diagnosis, which involves the baseline values of security, stability and reliability in chemical processes. A deep neural network (DNN) with novel active learning for inducing chemical fault diagnosis is presented in this study. The method uses a large amount of chemical sensor data and combines deep learning with an active learning criterion to target the difficulty of consecutive fault diagnosis. A DNN with a deep architecture, instead of a shallow one, is developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner using a stacked denoising auto-encoder (SDAE) and a layer-by-layer successive learning process. The features are fed to a top Softmax regression layer to construct the discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time-consuming labeling of sensor data in chemical applications, and in contrast to the available methods, we employ a novel active learning criterion tailored to chemical processes, a combination of a Best vs. Second Best (BvSB) criterion and a Lowest False Positive (LFP) criterion, for further fine-tuning of the diagnosis model in an active rather than passive manner. That is, we allow models to rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated on two well-known industrial datasets. Results indicate that the proposed method obtains superior diagnosis accuracy and provides significant improvements in accuracy and false positive rate with less labeled chemical sensor data through further active learning, compared with existing methods. PMID:27754386
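The BvSB half of the query criterion above ranks unlabeled samples by how close the two largest predicted class probabilities are. Below is a minimal sketch of that ranking step only; the LFP term and the SDAE network itself are omitted, and `probs` is an assumed array of softmax outputs.

```python
# Best-vs-Second-Best (BvSB) query: smallest margins = most informative samples.
import numpy as np

def bvsb_query(probs, n_query):
    probs = np.asarray(probs, dtype=float)
    top_two = np.sort(probs, axis=1)[:, -2:]      # second-best and best per row
    margin = top_two[:, 1] - top_two[:, 0]        # best minus second best
    return np.argsort(margin)[:n_query]           # indices of samples to label
```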
NASA Astrophysics Data System (ADS)
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces such as B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines a B-spline's appearance; its complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
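A hedged sketch of the AIC/BIC part of this model-selection problem, using a 1-D least-squares spline fit as a simple stand-in for the point-cloud setting: the number of spline coefficients plays the role of the number of control points. The data are assumed to be sorted and dense enough for the chosen knot vectors, and the Gaussian-error AIC/BIC formulas are the standard textbook ones, not the paper's exact implementation.

```python
# Score spline fits with increasing numbers of interior knots by AIC/BIC.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def select_control_points(x, y, max_interior=10, k=3):
    n, best = len(x), None
    for m in range(1, max_interior + 1):
        t = np.linspace(x[1], x[-2], m + 2)[1:-1]      # m interior knots
        spline = LSQUnivariateSpline(x, y, t, k=k)
        rss = float(np.sum((spline(x) - y) ** 2))
        p = len(spline.get_coeffs())                   # "number of control points"
        aic = n * np.log(rss / n) + 2 * p
        bic = n * np.log(rss / n) + p * np.log(n)
        if best is None or aic < best["aic"]:
            best = {"aic": aic, "bic": bic, "control_points": p, "knots": m}
    return best                                        # AIC-optimal configuration
```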
A multiple maximum scatter difference discriminant criterion for facial feature extraction.
Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei
2007-12-01
Maximum scatter difference (MSD) discriminant criterion was a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart--multiple MSD (MMSD) discriminant criterion for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database, FERET, show that the MMSD out-performs state-of-the-art facial feature-extraction methods such as null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
Spotted Towhee population dynamics in a riparian restoration context
Stacy L. Small; Frank R., III Thompson; Geoffery R. Geupel; John Faaborg
2007-01-01
We investigated factors at multiple scales that might influence nest predation risk for Spotted Towhees (Pipilo maculates) along the Sacramento River, California, within the context of large-scale riparian habitat restoration. We used the logistic-exposure method and Akaike's information criterion (AIC) for model selection to compare predator...
NASA Astrophysics Data System (ADS)
Jia, Chen; Chen, Yong
2015-05-01
In the work of Amann, Schmiedl and Seifert (2010 J. Chem. Phys. 132 041102), the authors derived a sufficient criterion to identify a non-equilibrium steady state (NESS) in a three-state Markov system based on the coarse-grained information of two-state trajectories. In this paper, we present a mathematical derivation and provide a probabilistic interpretation of the Amann-Schmiedl-Seifert (ASS) criterion. Moreover, the ASS criterion is compared with some other criteria for a NESS.
Research of facial feature extraction based on MMC
NASA Astrophysics Data System (ADS)
Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun
2017-07-01
Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction are proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and at improving the recognition rate. Experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method based on the statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. In addition, the relations between the maximum margin criterion and the Fisher criterion for feature extraction are revealed.
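A minimal sketch of the core MMC computation shared by this family of methods: build the between-class and within-class scatter matrices and project onto the leading eigenvectors of their difference. The statistically uncorrelated and orthogonal variants proposed in the paper add further constraints that are not shown here.

```python
# Maximum margin criterion directions: top eigenvectors of (S_b - S_w).
import numpy as np

def mmc_directions(X, y, n_dirs=2):
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)   # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                    # within-class scatter
    vals, vecs = np.linalg.eigh(Sb - Sw)                 # symmetric eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:n_dirs]]      # leading eigenvectors
```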
Economic weights for genetic improvement of lactation persistency and milk yield.
Togashi, K; Lin, C Y
2009-06-01
This study aimed to establish a criterion for measuring the relative weight of lactation persistency (the ratio of yield at 280 d in milk to peak yield) in restricted selection index for the improvement of net merit comprising 3-parity total yield and total lactation persistency. The restricted selection index was compared with selection based on first-lactation total milk yield (I(1)), the first-two-lactation total yield (I(2)), and first-three-lactation total yield (I(3)). Results show that genetic response in net merit due to selection on restricted selection index could be greater than, equal to, or less than that due to the unrestricted index depending upon the relative weight of lactation persistency and the restriction level imposed. When the relative weight of total lactation persistency is equal to the criterion, the restricted selection index is equal to the selection method compared (I(1), I(2), or I(3)). The restricted selection index yielded a greater response when the relative weight of total lactation persistency was above the criterion, but a lower response when it was below the criterion. The criterion varied depending upon the restriction level (c) imposed and the selection criteria compared. A curvilinear relationship (concave curve) exists between the criterion and the restricted level. The criterion increases as the restriction level deviates in either direction from 1.5. Without prior information of the economic weight of lactation persistency, the imposition of the restriction level of 1.5 on lactation persistency would maximize change in net merit. The procedure presented allows for simultaneous modification of multi-parity lactation curves.
The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model
ERIC Educational Resources Information Center
Choi, In-Hee; Paek, Insu; Cho, Sun-Joo
2017-01-01
The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC], Bayesian information criterion [BIC], and sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albeverio, Sergio; Chen Kai; Fei Shaoming
A necessary separability criterion that relates the structures of the total density matrix and its reductions is given. The method used is based on the realignment method [K. Chen and L. A. Wu, Quant. Inf. Comput. 3, 193 (2003)]. The separability criterion naturally generalizes the reduction separability criterion introduced independently in the previous work [M. Horodecki and P. Horodecki, Phys. Rev. A 59, 4206 (1999) and N. J. Cerf, C. Adami, and R. M. Gingrich, Phys. Rev. A 60, 898 (1999)]. In special cases, it recovers the previous reduction criterion and the recent generalized partial transposition criterion [K. Chen and L. A. Wu, Phys. Lett. A 306, 14 (2002)]. The criterion involves only simple matrix manipulations and can therefore be easily applied.
What constitutes evidence-based patient information? Overview of discussed criteria.
Bunge, Martina; Mühlhauser, Ingrid; Steckelberg, Anke
2010-03-01
To survey quality criteria for evidence-based patient information (EBPI) and to compile the evidence for the identified criteria. Databases PubMed, Cochrane Library, PsycINFO, PSYNDEX and Education Research Information Center (ERIC) were searched to update the pool of criteria for EBPI. A subsequent search aimed to identify evidence for each criterion. Only studies on health issues with cognitive outcome measures were included. Evidence for each criterion is presented using descriptive methods. 3 systematic reviews, 24 randomized-controlled studies and 1 non-systematic review were included. Presentation of numerical data, verbal presentation of risks and diagrams, graphics and charts are based on good evidence. Content of information and meta-information, loss- and gain-framing and patient-oriented outcome measures are based on ethical guidelines. There is a lack of studies on quality of evidence, pictures and drawings, patient narratives, cultural aspects, layout, language and development process. The results of this review allow specification of EBPI and may help to advance the discourse among related disciplines. Research gaps are highlighted. Findings outline the type and extent of content of EBPI, guide the presentation of information and describe the development process. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
The Brockport Physical Fitness Test Training Manual. [Project Target].
ERIC Educational Resources Information Center
Winnick, Joseph P.; Short, Francis X., Ed.
This training manual presents information on the Brockport Physical Fitness Test (BPFT), a criterion-referenced fitness test for children and adolescents with disabilities. The first chapter of the test training manual includes information dealing with health-related criterion-referenced testing, the interaction between physical activity and…
The Information a Test Provides on an Ability Parameter. Research Report. ETS RR-07-18
ERIC Educational Resources Information Center
Haberman, Shelby J.
2007-01-01
In item-response theory, if a latent-structure model has an ability variable, then elementary information theory may be employed to provide a criterion for evaluation of the information the test provides concerning ability. This criterion may be considered even in cases in which the latent-structure model is not valid, although interpretation of…
Method for Evaluating Information to Solve Problems of Control, Monitoring and Diagnostics
NASA Astrophysics Data System (ADS)
Vasil'ev, V. A.; Dobrynina, N. V.
2017-06-01
The article describes a method for evaluating information to solve problems of control, monitoring and diagnostics. The method reduces the dimensionality of the informational indicators of a situation, converts them to relative units, calculates generalized information indicators from them, ranks them by characteristic levels, and computes an efficiency criterion for a system operating in real time. On this basis, the design of an information evaluation system has been developed that allows analyzing, processing and assessing information about an object, where the object can be a complex technical, economic or social system. The method and the system based on it can find wide application in the analysis, processing and evaluation of information on the functioning of systems, regardless of their purpose, goals, tasks and complexity. For example, they can be used to assess the innovation capacities of industrial enterprises and of management decisions.
Development of advanced acreage estimation methods
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator)
1982-01-01
The development of an accurate and efficient algorithm for analyzing the structure of MSS data, the application of the Akaike information criterion to mixture models, and a research plan to delineate some of the technical issues and associated tasks in the area of rice scene radiation characterization are discussed. The AMOEBA clustering algorithm is refined and documented.
Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.
Dick, Bernhard
2014-01-14
A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.
Towards a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Hilburger, Mark W.
2003-01-01
A probability-based analysis method for predicting buckling loads of compression-loaded laminated-composite shells is presented, and its potential as a basis for a new shell-stability design criterion is demonstrated and discussed. In particular, a database containing information about specimen geometry, material properties, and measured initial geometric imperfections for a selected group of laminated-composite cylindrical shells is used to calculate new buckling-load "knockdown factors". These knockdown factors are shown to be substantially improved, and hence much less conservative than the corresponding deterministic knockdown factors that are presently used by industry. The probability integral associated with the analysis is evaluated by using two methods; that is, by using the exact Monte Carlo method and by using an approximate First-Order Second- Moment method. A comparison of the results from these two methods indicates that the First-Order Second-Moment method yields results that are conservative for the shells considered. Furthermore, the results show that the improved, reliability-based knockdown factor presented always yields a safe estimate of the buckling load for the shells examined.
Andrews, Arthur R.; Bridges, Ana J.; Gomez, Debbie
2014-01-01
Purpose The aims of the study were to evaluate the orthogonality of acculturation for Latinos. Design Regression analyses were used to examine acculturation in two Latino samples (N = 77; N = 40). In a third study (N = 673), confirmatory factor analyses compared unidimensional and bidimensional models. Method Acculturation was assessed with the ARSMA-II (Studies 1 and 2), and language proficiency items from the Children of Immigrants Longitudinal Study (Study 3). Results In Studies 1 and 2, the bidimensional model accounted for slightly more variance (R² = .11 in Study 1; R² = .21 in Study 2) than the unidimensional model (R² = .10 in Study 1; R² = .19 in Study 2). In Study 3, the bidimensional model evidenced better fit (Akaike information criterion = 167.36) than the unidimensional model (Akaike information criterion = 1204.92). Discussion/Conclusions Acculturation is multidimensional. Implications for Practice Care providers should examine acculturation as a bidimensional construct. PMID:23361579
Bayesian analysis of CCDM models
NASA Astrophysics Data System (ADS)
Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.
2017-09-01
Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared in terms of goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization in order to reduce parameter degeneracy is also developed.
Spectra of empirical autocorrelation matrices: A random-matrix-theory-inspired perspective
NASA Astrophysics Data System (ADS)
Jamali, Tayeb; Jafari, G. R.
2015-07-01
We construct an autocorrelation matrix of a time series and analyze it based on the random-matrix theory (RMT) approach. The autocorrelation matrix is capable of extracting information which is not easily accessible by direct analysis of the autocorrelation function. In order to draw precise conclusions from the information extracted from the autocorrelation matrix, the results must first be evaluated, that is, compared with some criterion that provides a basis for suitable and applicable conclusions. In the present study, the criterion is chosen to be the well-known fractional Gaussian noise (fGn). We illustrate the applicability of our method in the context of stock markets: despite the non-Gaussianity of stock-market returns, a remarkable agreement with the fGn is achieved.
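As a hedged sketch of the construction above, the snippet below estimates the sample autocorrelation function of a series, builds the corresponding Toeplitz autocorrelation matrix, and returns its eigenvalue spectrum; the comparison against simulated fGn, the paper's benchmark, is left out.

```python
# Eigenvalue spectrum of the empirical autocorrelation (Toeplitz) matrix.
import numpy as np
from scipy.linalg import toeplitz

def autocorr_spectrum(series, max_lag):
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    acf = np.array([np.dot(x[: len(x) - k], x[k:]) for k in range(max_lag + 1)])
    acf = acf / acf[0]                        # normalize so that acf[0] = 1
    return np.linalg.eigvalsh(toeplitz(acf))  # spectrum of the autocorrelation matrix
```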
Astrometry and exoplanets in the Gaia era: a Bayesian approach to detection and parameter recovery
NASA Astrophysics Data System (ADS)
Ranalli, P.; Hobbs, D.; Lindegren, L.
2018-06-01
The Gaia mission is expected to make a significant contribution to the knowledge of exoplanet systems, both in terms of their number and of their physical properties. We develop Bayesian methods and detection criteria for orbital fitting, and revise the detectability of exoplanets in light of the in-flight properties of Gaia. Limiting ourselves to one-planet systems as a first step of the development, we simulate Gaia data for exoplanet systems over a grid of S/N, orbital period, and eccentricity. The simulations are then fit using Markov chain Monte Carlo methods. We investigate the detection rate according to three information criteria and the Δχ2. For the Δχ2, the effective number of degrees of freedom depends on the mission length. We find that the choice of the Markov chain starting point can affect the quality of the results; we therefore consider two limit possibilities: an ideal case, and a very simple method that finds the starting point assuming circular orbits. We use 6644 and 4402 simulations to assess the fraction of false positive detections in a 5 yr and in a 10 yr mission, respectively; and 4968 and 4706 simulations to assess the detection rate and how the parameters are recovered. Using Jeffreys' scale of evidence, the fraction of false positives passing a strong evidence criterion is ≲0.2% (0.6%) when considering a 5 yr (10 yr) mission and using the Akaike information criterion or the Watanabe-Akaike information criterion, and <0.02% (<0.06%) when using the Bayesian information criterion. We find that there is a 50% chance of detecting a planet with a minimum S/N = 2.3 (1.7). This sets the maximum distance to which a planet is detectable to 70 pc and 3.5 pc for a Jupiter-mass and Neptune-mass planets, respectively, assuming a 10 yr mission, a 4 au semi-major axis, and a 1 M⊙ star. We show the distribution of the accuracy and precision with which orbital parameters are recovered. The period is the orbital parameter that can be determined with the best accuracy, with a median relative difference between input and output periods of 4.2% (2.9%) assuming a 5 yr (10 yr) mission. The median accuracy of the semi-major axis of the orbit can be recovered with a median relative error of 7% (6%). The eccentricity can also be recovered with a median absolute accuracy of 0.07 (0.06).
Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion
NASA Astrophysics Data System (ADS)
Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.
2017-09-01
Assessment of discriminant validity is a must in any research that involves latent variables, in order to prevent multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method for establishing discriminant validity has emerged: the heterotrait-monotrait (HTMT) ratio of correlations. This article therefore presents the results of discriminant validity assessment using both methods. Data from a previous study involving 429 respondents were used for empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, convergent, divergent and discriminant validity were established and admissible under the Fornell and Larcker criterion. However, discriminant validity was an issue when employing the HTMT criterion. This shows that the latent variables under study faced multicollinearity and should be examined in more detail. It also implies that the HTMT criterion is a stringent measure that can detect a possible lack of discrimination among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in discriminant validity and should be explored further.
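A hedged sketch of the HTMT computation for a pair of constructs, following the usual definition (average heterotrait-heteromethod correlation over the geometric mean of the average monotrait-heteromethod correlations). `R` is an assumed item correlation matrix, `idx_a`/`idx_b` are the item indices of the two constructs, and items are assumed to be coded so that correlations are positive; the Fornell-Larcker side (AVE versus squared construct correlations) is not shown.

```python
# HTMT ratio for two constructs from an item correlation matrix.
import numpy as np

def htmt(R, idx_a, idx_b):
    R = np.asarray(R, dtype=float)
    hetero = R[np.ix_(idx_a, idx_b)].mean()      # between-construct average
    def mono(idx):                               # within-construct average,
        block = R[np.ix_(idx, idx)]              # off-diagonal entries only
        mask = ~np.eye(len(idx), dtype=bool)
        return block[mask].mean()
    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))
```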
Comparison of Vocal Vibration-Dose Measures for Potential-Damage Risk Criteria
Hunter, Eric J.
2015-01-01
Purpose Schoolteachers have become a benchmark population for the study of occupational voice use. A decade of vibration-dose studies on the teacher population allows a comparison to be made between specific dose measures for eventual assessment of damage risk. Method Vibration dosimetry is reformulated with the inclusion of collision stress. Two methods of estimating amplitude of vocal-fold vibration are compared to capture variations in vocal intensity. Energy loss from collision is added to the energy-dissipation dose. An equal-energy-dissipation criterion is defined and used on the teacher corpus as a potential-damage risk criterion. Results Comparison of time-, cycle-, distance-, and energy-dose calculations for 57 teachers reveals a progression in information content in the ability to capture variations in duration, speaking pitch, and vocal intensity. The energy-dissipation dose carries the greatest promise in capturing excessive tissue stress and collision but also the greatest liability, due to uncertainty in parameters. Cycle dose is least correlated with the other doses. Conclusion As a first guide to damage risk in excessive voice use, the equal-energy-dissipation dose criterion can be used to structure trade-off relations between loudness, adduction, and duration of speech. PMID:26172434
Kurzeja, Patrick
2016-05-01
Modern imaging techniques, increased simulation capabilities and extended theoretical frameworks naturally drive the development of multiscale modelling by the question: which new information should be considered? Given the need for concise constitutive relationships and efficient data evaluation, however, one important question is often neglected: which information is sufficient? For this reason, this work introduces the formalized criterion of subscale sufficiency. This criterion states whether a chosen constitutive relationship transfers all necessary information from the micro to the macroscale within a multiscale framework. It further provides a scheme to improve constitutive relationships. Direct application to static capillary pressure demonstrates the usefulness of the criterion and the conditions for subscale sufficiency of saturation and interfacial areas.
Information Centralization of Organization Information Structures via Reports of Exceptions.
ERIC Educational Resources Information Center
Moskowitz, Herbert; Murnighan, John Keith
A team theoretic model that establishes a criterion (decision rule) for a financial institution branch to report exceptional loan requests to headquarters for action was compared to such choices made by graduate industrial management students acting as financial vice-presidents. Results showed that the loan size criterion specified by subjects was…
Diagnostic Group Differences in Parent and Teacher Ratings on the BRIEF and Conners' Scales
ERIC Educational Resources Information Center
Sullivan, Jeremy R.; Riccio, Cynthia A.
2007-01-01
Objective: Behavioral rating scales are common instruments used in evaluations of ADHD and executive function. It is important to explore how different diagnostic groups perform on these measures, as this information can be used to provide criterion-related validity evidence for the measures. Method: Data from 92 children and adolescents were used…
Comparing simple respiration models for eddy flux and dynamic chamber data
Andrew D. Richardson; Bobby H. Braswell; David Y. Hollinger; Prabir Burman; Eric A. Davidson; Robert S. Evans; Lawrence B. Flanagan; J. William Munger; Kathleen Savage; Shawn P. Urbanski; Steven C. Wofsy
2006-01-01
Selection of an appropriate model for respiration (R) is important for accurate gap-filling of CO2 flux data, and for partitioning measurements of net ecosystem exchange (NEE) to respiration and gross ecosystem exchange (GEE). Using cross-validation methods and a version of Akaike's Information Criterion (AIC), we evaluate a wide range of...
Territories typification technique with use of statistical models
NASA Astrophysics Data System (ADS)
Galkin, V. I.; Rastegaev, A. V.; Seredin, V. V.; Andrianov, A. V.
2018-05-01
Typification of territories is required for the solution of many problems. The results of geological zoning obtained by different methods do not always agree. The main goal of this research is therefore to develop a technique for obtaining a multidimensional standard classified indicator for geological zoning. In the course of the research, a probabilistic approach was used. In order to increase the reliability of the classification of geological information, the authors suggest using a complex multidimensional probabilistic indicator P_K as a classification criterion. The second criterion chosen is the multidimensional standard classified indicator Z. Both can serve as characteristics of classification in geological-engineering zoning. The indicators P_K and Z are in good correlation: correlation coefficient values for the entire territory, regardless of structural solidity, equal r = 0.95, so each indicator can be used in geological-engineering zoning. The suggested method has been tested and a schematic zoning map has been drawn.
Optimization of algorithm of coding of genetic information of Chlamydia
NASA Astrophysics Data System (ADS)
Feodorova, Valentina A.; Ulyanov, Sergey S.; Zaytsev, Sergey S.; Saltykov, Yury V.; Ulianova, Onega V.
2018-04-01
A new method of coding genetic information using coherent optical fields is developed. A universal technique for transforming nucleotide sequences of a bacterial gene into a laser speckle pattern is suggested. Reference speckle patterns of the nucleotide sequences of the omp1 gene of typical wild strains of Chlamydia trachomatis genovars D, E, F, G, J and K, as well as of Chlamydia psittaci serovar I, are generated. The algorithm for coding gene information into a speckle pattern is optimized. Fully developed speckles with Gaussian statistics are used as the optimization criterion for the gene-based speckle patterns.
Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling
2013-07-04
Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike's information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring.
The Effectiveness of Circular Equating as a Criterion for Evaluating Equating.
ERIC Educational Resources Information Center
Wang, Tianyou; Hanson, Bradley A.; Harris, Deborah J.
Equating a test form to itself through a chain of equatings, commonly referred to as circular equating, has been widely used as a criterion to evaluate the adequacy of equating. This paper uses both analytical methods and simulation methods to show that this criterion is in general invalid in serving this purpose. For the random groups design done…
NASA Astrophysics Data System (ADS)
Narukawa, Takafumi; Yamaguchi, Akira; Jang, Sunghyon; Amaya, Masaki
2018-02-01
For estimating fracture probability of fuel cladding tube under loss-of-coolant accident conditions of light-water-reactors, laboratory-scale integral thermal shock tests were conducted on non-irradiated Zircaloy-4 cladding tube specimens. Then, the obtained binary data with respect to fracture or non-fracture of the cladding tube specimen were analyzed statistically. A method to obtain the fracture probability curve as a function of equivalent cladding reacted (ECR) was proposed using Bayesian inference for generalized linear models: probit, logit, and log-probit models. Then, model selection was performed in terms of physical characteristics and information criteria, a widely applicable information criterion and a widely applicable Bayesian information criterion. As a result, it was clarified that the log-probit model was the best among the three models to estimate the fracture probability in terms of the degree of prediction accuracy for both next data to be obtained and the true model. Using the log-probit model, it was shown that 20% ECR corresponded to a 5% probability level with a 95% confidence of fracture of the cladding tube specimens.
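As background to the kind of model described above, the sketch below fits a log-probit fracture-probability model, P(fracture | ECR) = Φ(b0 + b1 ln ECR), by maximum likelihood; the ECR values and fracture outcomes are invented placeholders rather than the Zircaloy-4 test data, and this frequentist fit only stands in for the Bayesian inference actually used in the study.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Placeholder binary data: equivalent cladding reacted (%) and outcome (1 = fracture).
ecr = np.array([5., 8., 12., 15., 18., 20., 24., 28.])
frac = np.array([0, 0, 0, 1, 0, 1, 1, 1])

def neg_log_lik(beta):
    # Log-probit link: fracture probability as a function of log(ECR).
    p = norm.cdf(beta[0] + beta[1] * np.log(ecr))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(frac * np.log(p) + (1 - frac) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
b0, b1 = fit.x
# ECR at which the fitted fracture probability reaches 5% (cf. the 5% level in the abstract).
ecr_5pct = np.exp((norm.ppf(0.05) - b0) / b1)
print(b0, b1, ecr_5pct)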
Saha, Tulshi D; Compton, Wilson M; Chou, S Patricia; Smith, Sharon; Ruan, W June; Huang, Boji; Pickering, Roger P; Grant, Bridget F
2012-04-01
Prior research has demonstrated the dimensionality of alcohol, nicotine and cannabis use disorders criteria. The purpose of this study was to examine the unidimensionality of DSM-IV cocaine, amphetamine and prescription drug abuse and dependence criteria and to determine the impact of elimination of the legal problems criterion on the information value of the aggregate criteria. Factor analyses and Item Response Theory (IRT) analyses were used to explore the unidimensionality and psychometric properties of the illicit drug use criteria using a large representative sample of the U.S. population. All illicit drug abuse and dependence criteria formed unidimensional latent traits. For amphetamines, cocaine, sedatives, tranquilizers and opioids, IRT models fit better for models without legal problems criterion than models with legal problems criterion and there were no differences in the information value of the IRT models with and without the legal problems criterion, supporting the elimination of that criterion. Consistent with findings for alcohol, nicotine and cannabis, amphetamine, cocaine, sedative, tranquilizer and opioid abuse and dependence criteria reflect underlying unitary dimensions of severity. The legal problems criterion associated with each of these substance use disorders can be eliminated with no loss in informational value and an advantage of parsimony. Taken together, these findings support the changes to substance use disorder diagnoses recommended by the American Psychiatric Association's DSM-5 Substance and Related Disorders Workgroup. Published by Elsevier Ireland Ltd.
An information theory criteria based blind method for enumerating active users in DS-CDMA system
NASA Astrophysics Data System (ADS)
Samsami Khodadad, Farid; Abed Hodtani, Ghosheh
2014-11-01
In this paper, a new blind algorithm for active user enumeration in asynchronous direct sequence code division multiple access (DS-CDMA) in a multipath channel scenario is proposed. The proposed method is based on information theory criteria. Two main categories of information criteria are widely used in active user enumeration: the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criterion. The main difference between these two criteria is their penalty functions. Owing to this difference, MDL is a consistent enumerator with better performance at higher signal-to-noise ratios (SNR), whereas AIC is preferred at lower SNRs. We therefore propose an SNR-compliant method based on subspace analysis and a training genetic algorithm to obtain the performance of both. Moreover, our method uses only a single antenna, in contrast to previous methods, which decreases hardware complexity. Simulation results show that the proposed method is capable of estimating the number of active users without any prior knowledge, and demonstrate the efficiency of the method.
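The AIC and MDL enumerators referred to above are commonly computed from the eigenvalues of the sample covariance matrix, as in the classical Wax-Kailath formulation; the sketch below assumes that formulation (not the authors' subspace/genetic-algorithm refinement) and uses random placeholder data in place of DS-CDMA observations.

import numpy as np

def enumerate_sources(X):
    # X: p x N matrix of snapshots (placeholder for the received-signal observations).
    p, N = X.shape
    R = X @ X.conj().T / N                       # sample covariance matrix
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues in descending order
    aic, mdl = [], []
    for k in range(p):
        tail = lam[k:]                           # the p - k smallest eigenvalues
        g = np.exp(np.mean(np.log(tail))) / np.mean(tail)   # geometric / arithmetic mean
        aic.append(-2 * N * (p - k) * np.log(g) + 2 * k * (2 * p - k))
        mdl.append(-N * (p - k) * np.log(g) + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(aic)), int(np.argmin(mdl))

X = np.random.default_rng(0).standard_normal((8, 500))   # noise-only placeholder data
print(enumerate_sources(X))   # with pure noise, MDL should return 0; AIC may occasionally overestimate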
Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion
NASA Astrophysics Data System (ADS)
Zou, Cuiming; Kou, Kit Ian
2018-05-01
Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a class of special functions that have been shown to perform well in signal recovery. However, existing PSWF-based recovery methods use the mean square error (MSE) criterion, which depends on a Gaussianity assumption for the noise distribution. For non-Gaussian noise, such as impulsive noise or outliers, the MSE criterion is sensitive and may lead to large reconstruction errors. Unlike existing PSWF-based recovery methods, the proposed method employs the maximum correntropy criterion (MCC), which is independent of the noise distribution, and can therefore reduce the impact of large, non-Gaussian noise. Experimental results on synthetic signals with various types of noise show that the proposed MCC-based signal recovery method is more robust against noise than other existing methods.
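A minimal sketch of an MCC fit for a linear model, using the standard half-quadratic (iteratively reweighted least squares) device with a Gaussian kernel; the generic design matrix A stands in for the PSWF dictionary used in the paper, and the kernel width sigma is an assumed tuning parameter.

import numpy as np

def mcc_fit(A, y, sigma=1.0, iters=50):
    # Start from the ordinary least-squares (MSE) solution.
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        e = y - A @ x
        w = np.exp(-e**2 / (2 * sigma**2))       # Gaussian-kernel weights: outliers get ~0 weight
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = A @ x_true + 0.05 * rng.standard_normal(200)
y[::20] += 10.0                                  # impulsive outliers
print(mcc_fit(A, y, sigma=0.5))                  # close to x_true despite the outliers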
Model selection for the North American Breeding Bird Survey: A comparison of methods
Link, William; Sauer, John; Niven, Daniel
2017-01-01
The North American Breeding Bird Survey (BBS) provides data for >420 bird species at multiple geographic scales over 5 decades. Modern computational methods have facilitated the fitting of complex hierarchical models to these data. It is easy to propose and fit new models, but little attention has been given to model selection. Here, we discuss and illustrate model selection using leave-one-out cross validation, and the Bayesian Predictive Information Criterion (BPIC). Cross-validation is enormously computationally intensive; we thus evaluate the performance of the Watanabe-Akaike Information Criterion (WAIC) as a computationally efficient approximation to the BPIC. Our evaluation is based on analyses of 4 models as applied to 20 species covered by the BBS. Model selection based on BPIC provided no strong evidence of one model being consistently superior to the others; for 14/20 species, none of the models emerged as superior. For the remaining 6 species, a first-difference model of population trajectory was always among the best fitting. Our results show that WAIC is not reliable as a surrogate for BPIC. Development of appropriate model sets and their evaluation using BPIC is an important innovation for the analysis of BBS data.
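The WAIC evaluated above as a surrogate for BPIC can be computed from an S x n matrix of pointwise posterior log-likelihoods (S posterior draws, n observations); the sketch below uses the common lppd-minus-pWAIC form, with a random placeholder matrix in place of actual BBS model output.

import numpy as np
from scipy.special import logsumexp

def waic(loglik):
    # loglik: array of shape (S, n) with log p(y_i | theta_s).
    S = loglik.shape[0]
    lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))    # log pointwise predictive density
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))         # effective number of parameters
    return -2 * (lppd - p_waic)                             # deviance scale: smaller is better

loglik = np.random.default_rng(2).normal(-1.0, 0.1, size=(1000, 50))  # placeholder draws
print(waic(loglik))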
Computation of Anisotropic Bi-Material Interfacial Fracture Parameters and Delamination Criteria
NASA Technical Reports Server (NTRS)
Chow, W-T.; Wang, L.; Atluri, S. N.
1998-01-01
This report documents the recent developments in methodologies for the evaluation of the integrity and durability of composite structures, including i) the establishment of a stress-intensity-factor based fracture criterion for bimaterial interfacial cracks in anisotropic materials (see Sec. 2); ii) the development of a virtual crack closure integral method for the evaluation of the mixed-mode stress intensity factors for a bimaterial interfacial crack (see Sec. 3). Analytical and numerical results show that the proposed fracture criterion is a better fracture criterion than the total energy release rate criterion in the characterization of the bimaterial interfacial cracks. The proposed virtual crack closure integral method is an efficient and accurate numerical method for the evaluation of mixed-mode stress intensity factors.
Wiggins, Paul A
2015-07-21
This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
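As a rough illustration of information-criterion-based change-point detection, the sketch below scores a single candidate change in the mean and variance of a Gaussian signal; a generic BIC-style penalty is used here purely for illustration, in place of the frequentist information criterion introduced in the paper.

import numpy as np

def best_single_changepoint(x, penalty=None):
    n = len(x)
    if penalty is None:
        penalty = 3 * np.log(n)   # BIC-style charge for the extra mean, variance and location (an assumption)
    def cost(seg):
        # -2 * maximized Gaussian log-likelihood of a segment, up to additive constants
        return len(seg) * np.log(np.var(seg) + 1e-12)
    best_k, best_score = None, cost(x)
    for k in range(2, n - 1):
        score = cost(x[:k]) + cost(x[k:]) + penalty
        if score < best_score:
            best_k, best_score = k, score
    return best_k, best_score

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
print(best_single_changepoint(x))   # should report a change near index 100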
Optimal sensor placement for spatial lattice structure based on genetic algorithms
NASA Astrophysics Data System (ADS)
Liu, Wei; Gao, Wei-cheng; Sun, Yi; Xu, Min-jian
2008-10-01
Optimal sensor placement plays a key role in structural health monitoring of spatial lattice structures. This paper considers the problem of locating sensors on a spatial lattice structure with the aim of maximizing the data information so that structural dynamic behavior can be fully characterized. Based on the criterion of optimal sensor placement for modal tests, an improved genetic algorithm is introduced to find the optimal placement of sensors. The modal strain energy (MSE) and the modal assurance criterion (MAC) were taken as fitness functions, respectively, so that three placement designs were produced. A decimal two-dimensional array coding method, instead of binary coding, is proposed to code the solution. A forced mutation operator is introduced when identical genes appear via the crossover procedure. A computational simulation of a 12-bay plain truss model has been implemented to demonstrate the feasibility of the three optimal algorithms above. The optimal sensor placements obtained using the improved genetic algorithm are compared with those obtained by the existing genetic algorithm using the binary coding method. Further, a comparison criterion based on the mean square error between the finite element method (FEM) mode shapes and the Guyan expansion mode shapes identified by the data-driven stochastic subspace identification (SSI-DATA) method is employed to demonstrate the advantages of the different fitness functions. The results showed that the innovations to the genetic algorithm proposed in this paper can enlarge the gene storage and improve the convergence of the algorithm. More importantly, the three optimal sensor placement methods can all provide reliable results and identify the vibration characteristics of the 12-bay plain truss model accurately.
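The modal assurance criterion used above as one of the fitness functions is typically evaluated on the mode shapes restricted to a candidate set of sensor degrees of freedom; the sketch below computes the MAC matrix and the largest off-diagonal term, which a placement search such as the genetic algorithm would seek to minimize. The mode-shape matrix is a random placeholder, not the 12-bay truss model.

import numpy as np

def mac_matrix(Phi):
    # Phi: (n_selected_dofs, n_modes); MAC(i, j) = |phi_i^T phi_j|^2 / ((phi_i^T phi_i)(phi_j^T phi_j))
    num = np.abs(Phi.T @ Phi) ** 2
    diag = np.diag(Phi.T @ Phi)
    return num / np.outer(diag, diag)

Phi = np.random.default_rng(5).standard_normal((12, 4))   # placeholder mode shapes at 12 candidate DOFs
mac = mac_matrix(Phi)
offdiag_max = np.max(mac - np.diag(np.diag(mac)))         # placement objective: keep this small
print(offdiag_max)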
Bauser, G; Hendricks Franssen, Harrie-Jan; Stauffer, Fritz; Kaiser, Hans-Peter; Kuhlmann, U; Kinzelbach, W
2012-08-30
We present the comparison of two control criteria for the real-time management of a water well field. The criteria were used to simulate the operation of the Hardhof well field in the city of Zurich, Switzerland. This well field is threatened by diffuse pollution in the subsurface of the surrounding city area. The risk of attracting pollutants is higher if the pumping rates in four horizontal wells are increased, and can be reduced by increasing artificial recharge in several recharge basins and infiltration wells or by modifying the artificial recharge distribution. A three-dimensional finite element flow model was built for the Hardhof site. The first control criterion used hydraulic head differences (Δh-criterion) to control the management of the well field and the second criterion used a path line method (%s-criterion) to control the percentage of inflowing water from the city area. Both control methods adapt the allocation of artificial recharge (AR) for given pumping rates in time. The simulation results show that (1) historical management decisions were less effective than the optimal control according to the two criteria, and (2) the distributions of artificial recharge calculated with the two control criteria also differ from each other, with the %s-criterion giving better results than the Δh-criterion. The recharge management with the %s-criterion requires a smaller amount of water to be recharged. The ratio between average artificial recharge and average abstraction is 1.7 for the Δh-criterion and 1.5 for the %s-criterion. Both criteria were tested online. The methodologies were extended to a real-time control method using the Ensemble Kalman Filter method for assimilating 87 online available groundwater head measurements to update the model in real-time. The results of the operational implementation are also satisfactory with regard to a reduced risk of well contamination. Copyright © 2012 Elsevier Ltd. All rights reserved.
Model selection for multi-component frailty models.
Ha, Il Do; Lee, Youngjo; MacKenzie, Gilbert
2007-11-20
Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion and the particular criterion chosen may not be uniformly best. In this paper, we study an Akaike information criterion (AIC) on selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended, when attention is focussed on selecting the frailty structure rather than the fixed effects.
Improved Cluster Method Applied to the InSAR data of the 2007 Piton de la Fournaise eruption
NASA Astrophysics Data System (ADS)
Cayol, V.; Augier, A.; Froger, J. L.; Menassian, S.
2016-12-01
Interpretation of surface displacement induced by reservoirs, whether magmatic, hydrothermal or gaseous, can be done at reduced numerical cost and with little a priori knowledge using cluster methods, where reservoirs are represented by point sources embedded in an elastic half-space. Most of the time, the solution representing the best trade-off between the data fit and the model smoothness (L-curve criterion) is chosen. This study relies on synthetic tests to improve cluster methods in several ways. Firstly, to solve problems involving steep topographies, we construct unit sources numerically. Secondly, we show that the L-curve criterion leads to several plausible solutions where the most realistic are not necessarily the best fitting. We find that the cross-validation method, with data geographically grouped, is a more reliable way to determine the solution. Thirdly, we propose a new method, based on ranking sources according to their contribution and minimization of the Akaike information criterion, to retrieve reservoirs' geometry more accurately and to better reflect information contained in the data. We show that the solution is robust in the presence of correlated noise and that the reservoir complexity that can be retrieved decreases with increasing noise. We also show that it is inappropriate to use cluster methods for pressurized fractures. Finally, the method is applied to the summit deflation recorded by InSAR after the caldera collapse which occurred at Piton de la Fournaise in April 2007. Comparison with other data indicates that the deflation is probably related to poro-elastic compaction and fluid flow subsequent to the crater collapse.
Secret Sharing of a Quantum State.
Lu, He; Zhang, Zhen; Chen, Luo-Kan; Li, Zheng-Da; Liu, Chang; Li, Li; Liu, Nai-Le; Ma, Xiongfeng; Chen, Yu-Ao; Pan, Jian-Wei
2016-07-15
Secret sharing of a quantum state, or quantum secret sharing, in which a dealer wants to share a certain amount of quantum information with a few players, has wide applications in quantum information. The critical criterion in a threshold secret sharing scheme is confidentiality: with less than the designated number of players, no information can be recovered. Furthermore, in a quantum scenario, one additional critical criterion exists: the capability of sharing entangled and unknown quantum information. Here, by employing a six-photon entangled state, we demonstrate a quantum threshold scheme, where the shared quantum secrecy can be efficiently reconstructed with a state fidelity as high as 93%. By observing that any one or two parties cannot recover the secrecy, we show that our scheme meets the confidentiality criterion. Meanwhile, we also demonstrate that entangled quantum information can be shared and recovered via our setting, which shows that our implemented scheme is fully quantum. Moreover, our experimental setup can be treated as a decoding circuit of the five-qubit quantum error-correcting code with two erasure errors.
Kojima, Motohiro; Shimazaki, Hideyuki; Iwaya, Keiichi; Kage, Masayoshi; Akiba, Jun; Ohkura, Yasuo; Horiguchi, Shinichiro; Shomori, Kohei; Kushima, Ryoji; Ajioka, Yoichi; Nomura, Shogo; Ochiai, Atsushi
2013-01-01
Aims: The goal of this study is to create an objective pathological diagnostic system for blood and lymphatic vessel invasion (BLI). Methods: 1450 surgically resected colorectal cancer specimens from eight hospitals were reviewed. Our first step was to compare the current practice of pathology assessment among the eight hospitals. Then, H&E stained slides with or without histochemical/immunohistochemical staining were assessed by eight pathologists and the concordance of BLI diagnosis was checked. In addition, histological findings associated with BLI having good concordance were reviewed. Based on these results, a framework for developing a diagnostic criterion was developed, using the Delphi method. The new criterion was evaluated using 40 colorectal cancer specimens. Results: The frequency of BLI diagnoses and the number of blocks obtained and stained for assessment of BLI varied among the eight hospitals. Concordance was low for BLI diagnosis and was not any better when histochemical/immunohistochemical staining was provided. All histological findings associated with BLI from H&E staining were poor in agreement. However, observation of elastica-stained internal elastic membrane covering more than half of the circumference surrounding the tumour cluster, as well as the presence of D2-40-stained endothelial cells covering more than half of the circumference surrounding the tumour cluster, showed high concordance. Based on this observation, we developed a framework for a pathological diagnostic criterion, using the Delphi method. This criterion was found to be useful in improving the concordance of BLI diagnosis. Conclusions: A framework for a pathological diagnostic criterion was developed by reviewing concordance and using the Delphi method. The criterion developed may serve as the basis for creating a standardised procedure for pathological diagnosis. PMID:23592799
Testing the criterion for correct convergence in the complex Langevin method
NASA Astrophysics Data System (ADS)
Nagata, Keitaro; Nishimura, Jun; Shimasaki, Shinji
2018-05-01
Recently the complex Langevin method (CLM) has been attracting attention as a solution to the sign problem, which occurs in Monte Carlo calculations when the effective Boltzmann weight is not real positive. An undesirable feature of the method, however, is that in some parameter regions it can yield wrong results even if the Langevin process reaches equilibrium without any problem. In our previous work, we proposed a practical criterion for correct convergence based on the probability distribution of the drift term that appears in the complex Langevin equation. Here we demonstrate the usefulness of this criterion in two solvable theories with many dynamical degrees of freedom, i.e., two-dimensional Yang-Mills theory with a complex coupling constant and the chiral Random Matrix Theory for finite density QCD, which were studied by the CLM before. Our criterion can indeed identify the parameter regions in which the CLM gives correct results.
Methods of evaluating the effects of coding on SAR data
NASA Technical Reports Server (NTRS)
Dutkiewicz, Melanie; Cumming, Ian
1993-01-01
It is recognized that mean square error (MSE) is not a sufficient criterion for determining the acceptability of an image reconstructed from data that has been compressed and decompressed using an encoding algorithm. In the case of Synthetic Aperture Radar (SAR) data, it is also deemed to be insufficient to display the reconstructed image (and perhaps error image) alongside the original and make a (subjective) judgment as to the quality of the reconstructed data. In this paper we suggest a number of additional evaluation criteria which we feel should be included as evaluation metrics in SAR data encoding experiments. These criteria have been specifically chosen to provide a means of ensuring that the important information in the SAR data is preserved. The paper also presents the results of an investigation into the effects of coding on SAR data fidelity when the coding is applied in (1) the signal data domain, and (2) the image domain. An analysis of the results highlights the shortcomings of the MSE criterion, and shows which of the suggested additional criteria were found to be most important.
Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods
2016-11-01
ABBREVIATIONS: AICc, Akaike’s Information Criterion with small sample size correction; AZGFD, Arizona Game and Fish Department; BMGR, Barry M. Goldwater...; MNKA, Minimum Number Known Alive; N, Abundance; Ne, Effective Population Size; NGS, Noninvasive Genetic Sampling; NGS-CR, Noninvasive Genetic... Parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities and low capture biases. For NGS-CR, sample
Putting the Biological Species Concept to the Test: Using Mating Networks to Delimit Species
Lagache, Lélia; Leger, Jean-Benoist; Daudin, Jean-Jacques; Petit, Rémy J.; Vacher, Corinne
2013-01-01
Although interfertility is the key criterion upon which Mayr’s biological species concept is based, it has never been applied directly to delimit species under natural conditions. Our study fills this gap. We used the interfertility criterion to delimit two closely related oak species in a forest stand by analyzing the network of natural mating events between individuals. The results reveal two groups of interfertile individuals connected by only a few mating events. These two groups were largely congruent with those determined using other criteria (morphological similarity, genotypic similarity and individual relatedness). Our study, therefore, shows that the analysis of mating networks is an effective method to delimit species based on the interfertility criterion, provided that adequate network data can be assembled. Our study also shows that although species boundaries are highly congruent across methods of species delimitation, they are not exactly the same. Most of the differences stem from assignment of individuals to an intermediate category. The discrepancies between methods may reflect a biological reality. Indeed, the interfertility criterion is an environment-dependent criterion, as species abundances typically affect rates of hybridization under natural conditions. Thus, the methods of species delimitation based on the interfertility criterion are expected to give results slightly different from those based on environment-independent criteria (such as the genotypic similarity criteria). However, whatever the criterion chosen, the challenge we face when delimiting species is to summarize continuous but non-uniform variations in biological diversity. The grade of membership model that we use in this study appears to be an appropriate tool. PMID:23818990
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that do not account for such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
A parsimonious tree-grow method for haplotype inference.
Li, Zhenping; Zhou, Wenfeng; Zhang, Xiang-Sun; Chen, Luonan
2005-09-01
Haplotype information has become increasingly important in analyzing fine-scale molecular genetics data, such as disease gene mapping and drug design. Parsimony haplotyping is a haplotyping problem belonging to the NP-hard class. In this paper, we aim to develop a novel algorithm for the haplotype inference problem with the parsimony criterion, based on a parsimonious tree-grow method (PTG). PTG is a heuristic algorithm that can find the minimum number of distinct haplotypes based on the criterion of keeping all genotypes resolved during the tree-grow process. In addition, a block-partitioning method is also proposed to improve the computational efficiency. We show that the proposed approach is not only effective with a high accuracy, but also very efficient, with a computational complexity of the order of O(m^2 n) time for n single nucleotide polymorphism sites in m individual genotypes. The software is available upon request from the authors (chen@elec.osaka-sandai.ac.jp) or from http://zhangroup.aporc.org/bioinfo/ptg/. Supporting material is available from http://zhangroup.aporc.org/bioinfo/ptg/bti572supplementary.pdf
Validation of Cost-Effectiveness Criterion for Evaluating Noise Abatement Measures
DOT National Transportation Integrated Search
1999-04-01
This project will provide the Texas Department of Transportation (TxDOT)with information about the effects of the current cost-effectiveness criterion. The project has reviewed (1) the cost-effectiveness criteria used by other states, (2) the noise b...
Laurenson, Yan C S M; Kyriazakis, Ilias; Bishop, Stephen C
2013-10-18
Estimated breeding values (EBV) for faecal egg count (FEC) and genetic markers for host resistance to nematodes may be used to identify resistant animals for selective breeding programmes. Similarly, targeted selective treatment (TST) requires the ability to identify the animals that will benefit most from anthelmintic treatment. A mathematical model was used to combine the concepts and evaluate the potential of using genetic-based methods to identify animals for a TST regime. EBVs obtained by genomic prediction were predicted to be the best determinant criterion for TST in terms of the impact on average empty body weight and average FEC, whereas pedigree-based EBVs for FEC were predicted to be marginally worse than using phenotypic FEC as a determinant criterion. Whilst each method has financial implications, if the identification of host resistance is incorporated into wider genomic selection indices or selective breeding programmes, then genetic or genomic information may be plausibly included in TST regimes. Copyright © 2013 Elsevier B.V. All rights reserved.
Boosting for detection of gene-environment interactions.
Pashova, H; LeBlanc, M; Kooperberg, C
2013-01-30
In genetic association studies, it is typically thought that genetic variants and environmental variables jointly will explain more of the inheritance of a phenotype than either of these two components separately. Traditional methods to identify gene-environment interactions typically consider only one measured environmental variable at a time. However, in practice, multiple environmental factors may each be imprecise surrogates for the underlying physiological process that actually interacts with the genetic factors. In this paper, we develop a variant of L2 boosting that is specifically designed to identify combinations of environmental variables that jointly modify the effect of a gene on a phenotype. Because the effect modifiers might have a small signal compared with the main effects, working in a space that is orthogonal to the main predictors allows us to focus on the interaction space. In a simulation study that investigates some plausible underlying model assumptions, our method outperforms the least absolute shrinkage and selection operator (lasso) as well as the Akaike Information Criterion and Bayesian Information Criterion model selection procedures, achieving the lowest test error. In an example for the Women's Health Initiative-Population Architecture using Genomics and Epidemiology study, the dedicated boosting method was able to pick out two single-nucleotide polymorphisms for which effect modification appears present. The performance was evaluated on an independent test set, and the results are promising. Copyright © 2012 John Wiley & Sons, Ltd.
Procrustes Matching by Congruence Coefficients
ERIC Educational Resources Information Center
Korth, Bruce; Tucker, L. R.
1976-01-01
Matching by Procrustes methods involves the transformation of one matrix to match with another. A special least squares criterion, the congruence coefficient, has advantages as a criterion for some factor analytic interpretations. A Procrustes method maximizing the congruence coefficient is given. (Author/JKS)
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
Kerridge, Bradley T; Saha, Tulshi D; Smith, Sharon; Chou, Patricia S; Pickering, Roger P; Huang, Boji; Ruan, June W; Pulay, Attila J
2011-09-01
Prior research has demonstrated the dimensionality of Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition (DSM-IV) alcohol, nicotine, cannabis, cocaine and amphetamine abuse and dependence criteria. The purpose of this study was to examine the dimensionality of hallucinogen and inhalant/solvent abuse and dependence criteria. In addition, we assessed the impact of elimination of the legal problems abuse criterion on the information value of the aggregate abuse and dependence criteria, another proposed change for DSM-IV currently lacking empirical justification. Factor analyses and item response theory (IRT) analyses were used to explore the unidimensionality and psychometric properties of hallucinogen and inhalant/solvent abuse and dependence criteria using a large representative sample of the United States (U.S.) general population. Hallucinogen and inhalant/solvent abuse and dependence criteria formed unidimensional latent traits. For both substances, IRT models without the legal problems abuse criterion demonstrated better fit than the corresponding model with the legal problem abuse criterion. Further, there were no differences in the information value of the IRT models with and without the legal problems abuse criterion, supporting the elimination of that criterion. No bias in the new diagnoses was observed by sex, age and race-ethnicity. Consistent with findings for alcohol, nicotine, cannabis, cocaine and amphetamine abuse and dependence criteria, hallucinogen and inhalant/solvent criteria reflect underlying dimensions of severity. The legal problems criterion associated with each of these substance use disorders can be eliminated with no loss in informational value and an advantage of parsimony. Taken together, these findings support the changes to substance use disorder diagnoses recommended by the DSM-V Substance and Related Disorders Workgroup, that is, combining DSM-IV abuse and dependence criteria and eliminating the legal problems abuse criterion. Published by Elsevier Ltd.
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
NASA Astrophysics Data System (ADS)
Tauscher, Keith; Rapetti, David; Burns, Jack O.; Switzer, Eric
2018-02-01
The sky-averaged (global) highly redshifted 21 cm spectrum from neutral hydrogen is expected to appear in the VHF range of ∼20–200 MHz and its spectral shape and strength are determined by the heating properties of the first stars and black holes, by the nature and duration of reionization, and by the presence or absence of exotic physics. Measurements of the global signal would therefore provide us with a wealth of astrophysical and cosmological knowledge. However, the signal has not yet been detected because it must be seen through strong foregrounds weighted by a large beam, instrumental calibration errors, and ionospheric, ground, and radio-frequency-interference effects, which we collectively refer to as “systematics.” Here, we present a signal extraction method for global signal experiments which uses Singular Value Decomposition of “training sets” to produce systematics basis functions specifically suited to each observation. Instead of requiring precise absolute knowledge of the systematics, our method effectively requires precise knowledge of how the systematics can vary. After calculating eigenmodes for the signal and systematics, we perform a weighted least square fit of the corresponding coefficients and select the number of modes to include by minimizing an information criterion. We compare the performance of the signal extraction when minimizing various information criteria and find that minimizing the Deviance Information Criterion most consistently yields unbiased fits. The methods used here are built into our widely applicable, publicly available Python package, pylinex, which analytically calculates constraints on signals and systematics from given data, errors, and training sets.
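A stripped-down sketch of the training-set idea described above: the SVD of simulated systematics realizations provides basis modes, which are combined with an assumed signal basis and fit to the data by weighted least squares. The information-criterion selection of the number of modes (as implemented in pylinex) is omitted here and the mode counts are simply fixed; all quantities are synthetic placeholders.

import numpy as np

rng = np.random.default_rng(6)
nfreq, ntrain = 200, 500
# Placeholder training set of systematics realizations (rows) versus frequency channel (columns).
training_set = rng.standard_normal((ntrain, nfreq)) * np.linspace(1.0, 5.0, nfreq)
_, _, Vt = np.linalg.svd(training_set - training_set.mean(axis=0), full_matrices=False)
sys_modes = Vt[:5].T                                           # first 5 systematics eigenmodes
sig_modes = np.polynomial.legendre.legvander(np.linspace(-1, 1, nfreq), 3)  # toy signal basis

A = np.hstack([sys_modes, sig_modes])                          # combined design matrix
noise_std = 0.1 * np.ones(nfreq)
data = (training_set[0] + 0.02 * np.sin(np.linspace(0, 10, nfreq))
        + noise_std * rng.standard_normal(nfreq))              # systematics + toy signal + noise

W = np.diag(1.0 / noise_std**2)                                # inverse noise covariance
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ data)            # weighted least-squares coefficients
signal_estimate = sig_modes @ coef[5:]                         # reconstructed signal component
print(signal_estimate[:5])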
1978-09-01
ARI Technical Report TR-78-A31. Criterion-Referenced Measurement in the Army: Development of a Research-Based, Practical, Test Construction ...conducted to develop a Criterion-Referenced Tests (CRTs) Construction Manual. Major accomplishments were the preparation of a written review of the...survey of the literature on Criterion-Referenced Testing conducted in order to provide an information base for development of the CRT Construction
Model selection criterion in survival analysis
NASA Astrophysics Data System (ADS)
Karabey, Uǧur; Tutkun, Nihal Ata
2017-07-01
Survival analysis deals with the time until occurrence of an event of interest such as death, recurrence of an illness, the failure of an equipment or divorce. There are various survival models with semi-parametric or parametric approaches used in the medical, natural or social sciences. The decision on the most appropriate model for the data is an important point of the analysis. In the literature, the Akaike information criterion and the Bayesian information criterion are used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.
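For reference, the two criteria compared in this study take their standard forms: with $\hat{L}$ the maximized likelihood, $k$ the number of estimated parameters and $n$ the sample size, $\mathrm{AIC} = -2\ln\hat{L} + 2k$ and $\mathrm{BIC} = -2\ln\hat{L} + k\ln n$, the model with the smaller value being preferred. (These generic definitions are stated here as background; the abstract does not repeat them.)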
Validity and extension of the SCS-CN method for computing infiltration and rainfall-excess rates
NASA Astrophysics Data System (ADS)
Mishra, Surendra Kumar; Singh, Vijay P.
2004-12-01
A criterion is developed for determining the validity of the Soil Conservation Service curve number (SCS-CN) method. According to this criterion, the existing SCS-CN method is found to be applicable when the potential maximum retention, S, is less than or equal to twice the total rainfall amount. The criterion is tested using published data of two watersheds. Separating the steady infiltration from capillary infiltration, the method is extended for predicting infiltration and rainfall-excess rates. The extended SCS-CN method is tested using 55 sets of laboratory infiltration data on soils varying from Plainfield sand to Yolo light clay, and the computed and observed infiltration and rainfall-excess rates are found to be in good agreement.
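For context, the SCS-CN method referred to above is usually written in its standard handbook form (restated here as background, not taken from the abstract) as $Q = (P - I_a)^2/(P - I_a + S)$ for $P > I_a$, with initial abstraction $I_a = 0.2S$ and potential maximum retention $S = 25400/\mathrm{CN} - 254$ (in mm), where $Q$ is the rainfall excess, $P$ the total rainfall and CN the curve number. The validity criterion derived in the paper then reads $S \le 2P$.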
New Stopping Criteria for Segmenting DNA Sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Wentian
2001-06-18
We propose a solution to the stopping criterion problem in segmenting inhomogeneous DNA sequences with complex statistical patterns. The new stopping criterion is based on the Bayesian information criterion in the model selection framework. When this criterion is applied to the telomere of S. cerevisiae and the complete sequence of E. coli, borders of biologically meaningful units are identified and a more reasonable number of domains is obtained. We also introduce a measure called segmentation strength, which can be used to control the delineation of large domains. The relationship between the average domain size and the threshold of segmentation strength is determined for several genome sequences.
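As a generic illustration of the idea (not the authors' exact implementation), the sketch below accepts a split of a symbol sequence only if the two-segment multinomial model improves a BIC-style score over the one-segment model.

import numpy as np

def log_lik(counts):
    # Maximized multinomial log-likelihood of a segment given its symbol counts.
    n = counts.sum()
    p = counts[counts > 0] / n
    return np.sum(counts[counts > 0] * np.log(p))

def bic_split(seq, alphabet=4):
    n = len(seq)
    whole = np.bincount(seq, minlength=alphabet)
    best_k, best_gain = None, 0.0
    for k in range(50, n - 50):
        left = np.bincount(seq[:k], minlength=alphabet)
        right = whole - left
        delta_ll = log_lik(left) + log_lik(right) - log_lik(whole)
        # BIC penalty for the extra segment's free composition parameters (an assumed form).
        gain = 2 * delta_ll - (alphabet - 1) * np.log(n)
        if gain > best_gain:
            best_k, best_gain = k, gain
    return best_k, best_gain   # best_k is None if no split passes the criterion

rng = np.random.default_rng(7)
seq = np.concatenate([rng.choice(4, 500, p=[0.4, 0.3, 0.2, 0.1]),
                      rng.choice(4, 500, p=[0.1, 0.2, 0.3, 0.4])])
print(bic_split(seq))   # should report a border near position 500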
Optimization of global model composed of radial basis functions using the term-ranking approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Peng; Tao, Chao, E-mail: taochao@nju.edu.cn; Liu, Xiao-Jun
2014-03-15
A term-ranking method is put forward to optimize the global model composed of radial basis functions and improve the predictability of the model. The effectiveness of the proposed method is examined using numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to a real voice signal shows that the optimized global model can capture more of the predictable component in chaos-like voice data and simultaneously reduce the predictable component (periodic pitch) in the residual signal.
Multi-Sample Cluster Analysis Using Akaike’s Information Criterion.
1982-12-20
Posada, David; Buckley, Thomas R
2004-10-01
Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
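The AIC-based model averaging applied here typically uses Akaike weights: writing $\Delta_i = \mathrm{AIC}_i - \mathrm{AIC}_{\min}$, the weight of model $i$ is $w_i = \exp(-\Delta_i/2) / \sum_j \exp(-\Delta_j/2)$, and model-averaged estimates are weighted sums of the per-model estimates. This standard formulation is given here only as background to the abstract.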
Tamboer, Peter; Vorst, Harrie C M; Oort, Frans J
2014-04-01
Methods for identifying dyslexia in adults vary widely between studies. Researchers have to decide how many tests to use, which tests are considered to be the most reliable, and how to determine cut-off scores. The aim of this study was to develop an objective and powerful method for diagnosing dyslexia. We took various methodological measures, most of which are new compared to previous methods. We used a large sample of Dutch first-year psychology students, we considered several options for exclusion and inclusion criteria, we collected as many cognitive tests as possible, we used six independent sources of biographical information for a criterion of dyslexia, we compared the predictive power of discriminant analyses and logistic regression analyses, we used both sum scores and item scores as predictor variables, we used self-report questions as predictor variables, and we retested the reliability of predictions with repeated prediction analyses using an adjusted criterion. We were able to identify 74 dyslexic and 369 non-dyslexic students. For 37 students, various predictions were too inconsistent for a final classification. The most reliable predictions were acquired with item scores and self-report questions. The main conclusion is that it is possible to identify dyslexia with a high reliability, although the exact nature of dyslexia is still unknown. We therefore believe that this study yielded valuable information for future methods of identifying dyslexia in Dutch as well as in other languages, and that this would be beneficial for comparing studies across countries.
A Controlled Evaluation of the Distress Criterion for Binge Eating Disorder
ERIC Educational Resources Information Center
Grilo, Carlos M.; White, Marney A.
2011-01-01
Objective: Research has examined various aspects of the validity of the research criteria for binge eating disorder (BED) but has yet to evaluate the utility of Criterion C, "marked distress about binge eating." This study examined the significance of the marked distress criterion for BED using 2 complementary comparison groups. Method:…
Slope stability analysis using limit equilibrium method in nonlinear criterion.
Lin, Hang; Zhong, Wenwen; Xiong, Wei; Tang, Wenyu
2014-01-01
In slope stability analysis, the limit equilibrium method is usually used to calculate the safety factor of a slope based on the Mohr-Coulomb criterion. However, the Mohr-Coulomb criterion is restricted in its description of rock mass. To overcome its shortcomings, this paper combines the Hoek-Brown criterion and the limit equilibrium method and proposes an equation for calculating the safety factor of a slope with the limit equilibrium method under the Hoek-Brown criterion through the equivalent cohesive strength and friction angle. Moreover, this paper investigates the impact of the Hoek-Brown parameters on the safety factor of the slope, which reveals that there is a linear relation between the equivalent cohesive strength and the weakening factor D, but nonlinear relations between the equivalent cohesive strength and the Geological Strength Index (GSI), the uniaxial compressive strength of intact rock σ_ci, and the parameter of intact rock m_i. There is a nonlinear relation between the friction angle and all Hoek-Brown parameters. With the increase of D, the safety factor of the slope F decreases linearly; with the increase of GSI, F increases nonlinearly; when σ_ci is relatively small, the relation between F and σ_ci is nonlinear, but when σ_ci is relatively large, the relation is linear; with the increase of m_i, F decreases first and then increases.
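For reference, the generalized Hoek-Brown criterion from which the equivalent cohesive strength and friction angle are derived is commonly written (in its 2002 form, quoted here as background rather than as the authors' exact equations) as $\sigma_1' = \sigma_3' + \sigma_{ci}\,(m_b\,\sigma_3'/\sigma_{ci} + s)^a$, with $m_b = m_i\,\exp\big((\mathrm{GSI}-100)/(28-14D)\big)$, $s = \exp\big((\mathrm{GSI}-100)/(9-3D)\big)$ and $a = 1/2 + (1/6)\big(e^{-\mathrm{GSI}/15} - e^{-20/3}\big)$, which makes explicit how GSI, D, $\sigma_{ci}$ and $m_i$ enter the equivalent Mohr-Coulomb parameters.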
How to (properly) strengthen Bell's theorem using counterfactuals
NASA Astrophysics Data System (ADS)
Bigaj, Tomasz
Bell's theorem in its standard version demonstrates that the joint assumptions of the hidden-variable hypothesis and the principle of local causation lead to a conflict with quantum-mechanical predictions. In his latest counterfactual strengthening of Bell's theorem, Stapp attempts to prove that the locality assumption itself contradicts the quantum-mechanical predictions in the Hardy case. His method relies on constructing a complex, non-truth functional formula which consists of statements about measurements and outcomes in some region R, and whose truth value depends on the selection of a measurement setting in a space-like separated location L. Stapp argues that this fact shows that the information about the measurement selection made in L has to be present in R. I give detailed reasons why this conclusion can and should be resisted. Next I correct and formalize an informal argument by Shimony and Stein showing that the locality condition coupled with Einstein's criterion of reality is inconsistent with quantum-mechanical predictions. I discuss the possibility of avoiding the inconsistency by rejecting Einstein's criterion rather than the locality assumption.
Li, Qizhai; Hu, Jiyuan; Ding, Juan; Zheng, Gang
2014-04-01
A classical approach to combining independent test statistics is Fisher's combination of $p$-values, which follows the $\chi^2$ distribution. When the test statistics are dependent, the gamma distribution (GD) is commonly used for the Fisher's combination test (FCT). We propose to use two generalizations of the GD: the generalized and the exponentiated GDs. We study some properties of mis-using the GD for the FCT to combine dependent statistics when one of the two proposed distributions is true. Our results show that both generalizations have better control of type I error rates than the GD, which tends to have inflated type I error rates at more extreme tails. In practice, common model selection criteria (e.g. Akaike information criterion/Bayesian information criterion) can be used to help select a better distribution to use for the FCT. A simple strategy for applying the two generalizations of the GD in genome-wide association studies is discussed. Applications of the results to genetic pleiotropic associations are described, where multiple traits are tested for association with a single marker.
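As a concrete reference point, the sketch below computes the standard Fisher combination statistic and a moment-matched gamma approximation for dependent statistics (in the spirit of Brown's method); the generalized and exponentiated gamma fits proposed in the paper are not reproduced, and the variance input reflecting dependence is hypothetical.

```python
import numpy as np
from scipy import stats

def fisher_combination(pvals):
    """Independent case: T = -2*sum(log p) follows a chi-square with 2k df."""
    pvals = np.asarray(pvals, dtype=float)
    T = -2.0 * np.sum(np.log(pvals))
    return T, stats.chi2.sf(T, df=2 * len(pvals))

def gamma_approx_pvalue(T, k, var_T):
    """Dependent case: approximate T by a gamma whose mean (2k) is kept but whose
    variance var_T reflects the dependence (simple moment matching)."""
    mean_T = 2.0 * k
    scale = var_T / mean_T   # gamma scale
    shape = mean_T / scale   # gamma shape
    return stats.gamma.sf(T, a=shape, scale=scale)

pvals = [0.01, 0.04, 0.20, 0.03]
T, p_indep = fisher_combination(pvals)
# Hypothetical variance of T, inflated by positive dependence (independent case: 4k)
p_dep = gamma_approx_pvalue(T, k=len(pvals), var_T=1.5 * 4 * len(pvals))
print(T, p_indep, p_dep)
```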
Growth curves for ostriches (Struthio camelus) in a Brazilian population.
Ramos, S B; Caetano, S L; Savegnago, R P; Nunes, B N; Ramos, A A; Munari, D P
2013-01-01
The objective of this study was to fit growth curves using nonlinear and linear functions to describe the growth of ostriches in a Brazilian population. The data set consisted of 112 animals with BW measurements from hatching to 383 d of age. Two nonlinear growth functions (Gompertz and logistic) and a third-order polynomial function were applied. The parameters for the models were estimated using the least-squares method and the Gauss-Newton algorithm. The goodness-of-fit of the models was assessed using R² and the Akaike information criterion. The R² calculated for the logistic growth model was 0.945 for hens and 0.928 for cockerels, and for the Gompertz growth model, 0.938 for hens and 0.924 for cockerels. The third-order polynomial fit gave R² of 0.938 for hens and 0.924 for cockerels. Among the Akaike information criterion calculations, the logistic growth model presented the lowest values in this study, both for hens and for cockerels. Nonlinear models are more appropriate for describing the sigmoid nature of ostrich growth.
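For readers who want to reproduce this kind of comparison, the sketch below fits Gompertz and logistic curves by least squares and ranks them with the least-squares form of the AIC; the parameterizations and the synthetic weight data are assumptions, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, b, k):
    return A * np.exp(-b * np.exp(-k * t))

def logistic(t, A, b, k):
    return A / (1.0 + b * np.exp(-k * t))

def aic_ls(y, yhat, n_params):
    # AIC for least-squares fits with Gaussian errors
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_params

# Hypothetical age (days) and body weight (kg) measurements
t = np.array([0, 30, 60, 90, 150, 210, 270, 330, 383], dtype=float)
w = np.array([0.9, 4, 12, 25, 55, 80, 95, 103, 108], dtype=float)

for name, f, p0 in [("Gompertz", gompertz, (120, 4, 0.01)),
                    ("logistic", logistic, (120, 50, 0.02))]:
    popt, _ = curve_fit(f, t, w, p0=p0, maxfev=10000)
    print(name, "AIC =", round(aic_ls(w, f(t, *popt), len(popt)), 2))
```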
Characterizing the functional MRI response using Tikhonov regularization.
Vakorin, Vasily A; Borowsky, Ron; Sarty, Gordon E
2007-09-20
The problem of evaluating an averaged functional magnetic resonance imaging (fMRI) response for repeated block design experiments was considered within a semiparametric regression model with autocorrelated residuals. We applied functional data analysis (FDA) techniques that use a least-squares fitting of B-spline expansions with Tikhonov regularization. To deal with the noise autocorrelation, we proposed a regularization parameter selection method based on the idea of combining temporal smoothing with residual whitening. A criterion based on a generalized χ²-test of the residuals for white noise was compared with a generalized cross-validation scheme. We evaluated and compared the performance of the two criteria, based on their effect on the quality of the fMRI response. We found that the regularization parameter can be tuned to improve the noise autocorrelation structure, but the whitening criterion provides too much smoothing when compared with the cross-validation criterion. The ultimate goal of the proposed smoothing techniques is to facilitate the extraction of temporal features in the hemodynamic response for further analysis. In particular, these FDA methods allow us to compute derivatives and integrals of the fMRI signal so that fMRI data may be correlated with behavioral and physiological models. For example, positive and negative hemodynamic responses may be easily and robustly identified on the basis of the first derivative at an early time point in the response. Ultimately, these methods allow us to verify previously reported correlations between the hemodynamic response and the behavioral measures of accuracy and reaction time, showing the potential to recover new information from fMRI data. Copyright © 2007 John Wiley & Sons, Ltd.
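The core computation, penalized least-squares smoothing, can be sketched as below with a simple second-difference (Tikhonov-type) penalty on basis coefficients; the Gaussian-bump basis standing in for B-splines, the synthetic signal, and the grid of regularization values are assumptions, and the paper's whitening test is not reproduced.

```python
import numpy as np

def tikhonov_smooth(y, B, lam):
    """Solve min ||y - B c||^2 + lam * ||D c||^2, with D a second-difference matrix."""
    k = B.shape[1]
    D = np.diff(np.eye(k), n=2, axis=0)   # second-difference operator on coefficients
    lhs = B.T @ B + lam * D.T @ D
    c = np.linalg.solve(lhs, B.T @ y)
    return B @ c

# Hypothetical design: shifted Gaussian "bumps" standing in for a B-spline basis
t = np.linspace(0, 30, 200)
centers = np.linspace(0, 30, 15)
B = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 2.0) ** 2)

true = np.sin(t / 4.0)   # synthetic slowly varying response
y = true + 0.3 * np.random.default_rng(0).normal(size=t.size)

for lam in (0.1, 10.0, 1000.0):   # larger lam -> smoother fit
    fit = tikhonov_smooth(y, B, lam)
    print(lam, round(float(np.mean((fit - true) ** 2)), 4))
```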
14 CFR 255.4 - Display of information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and the weight given to each criterion and the specifications used by the system's programmers in constructing the algorithm. (c) Systems shall not use any factors directly or indirectly relating to carrier...” connecting flights; and (iv) The weight given to each criterion in paragraphs (c)(3)(ii) and (iii) of this...
The stressor criterion for posttraumatic stress disorder: Does it matter?
Roberts, Andrea L.; Dohrenwend, Bruce P.; Aiello, Allison; Wright, Rosalind J.; Maercker, Andreas; Galea, Sandro; Koenen, Karestan C.
2013-01-01
Objective The definition of the stressor criterion for posttraumatic stress disorder (“Criterion A1”) is hotly debated with major revisions being considered for DSM-V. We examine whether symptoms, course, and consequences of PTSD vary predictably with the type of stressful event that precipitates symptoms. Method We used data from the 2009 PTSD diagnostic subsample (N=3,013) of the Nurses Health Study II. We asked respondents about exposure to stressful events qualifying under 1) DSM-III, 2) DSM-IV, or 3) not qualifying under DSM Criterion A1. Respondents selected the event they considered worst and reported subsequent PTSD symptoms. Among participants who met all other DSM-IV PTSD criteria, we compared distress, symptom severity, duration, impairment, receipt of professional help, and nine physical, behavioral, and psychiatric sequelae (e.g. physical functioning, unemployment, depression) by precipitating event group. Various assessment tools were used to determine fulfillment of PTSD Criteria B through F and to assess these 14 outcomes. Results Participants with PTSD from DSM-III events reported on average 1 more symptom (DSM-III mean=11.8 symptoms, DSM-IV=10.7, non-DSM=10.9) and more often reported symptoms lasted one year or longer compared to participants with PTSD from other groups. However, sequelae of PTSD did not vary systematically with precipitating event type. Conclusions Results indicate the stressor criterion as defined by the DSM may not be informative in characterizing PTSD symptoms and sequelae. In the context of ongoing DSM-V revision, these results suggest that Criterion A1 could be expanded in DSM-V without much consequence for our understanding of PTSD phenomenology. Events not considered qualifying stressors under the DSM produced PTSD as consequential as PTSD following DSM-III events, suggesting PTSD may be an aberrantly severe but nonspecific stress response syndrome. PMID:22401487
Hao, Chen; LiJun, Chen; Albright, Thomas P.
2007-01-01
Invasive exotic species pose a growing threat to the economy, public health, and ecological integrity of nations worldwide. Explaining and predicting the spatial distribution of invasive exotic species is of great importance to prevention and early warning efforts. We are investigating the potential distribution of invasive exotic species, the environmental factors that influence these distributions, and the ability to predict them using statistical and information-theoretic approaches. For some species, detailed presence/absence occurrence data are available, allowing the use of a variety of standard statistical techniques. However, for most species, absence data are not available. Presented with the challenge of developing a model based on presence-only information, we developed an improved logistic regression approach using Information Theory and Frequency Statistics to produce a relative suitability map. In this paper we generated a variety of distributions of ragweed (Ambrosia artemisiifolia L.) from logistic regression models applied to herbarium specimen location data and a suite of GIS layers including climatic, topographic, and land cover information. Our logistic regression model was based on Akaike's Information Criterion (AIC) from a suite of ecologically reasonable predictor variables. Based on the results, we provide a new Frequency Statistics method to compartmentalize habitat suitability in the native range. Finally, we used the model and the compartmentalized criterion developed in the native range to "project" a potential distribution onto the exotic ranges to build habitat-suitability maps. © Science in China Press 2007.
NASA Astrophysics Data System (ADS)
Ma, Yuanxu; Huang, He Qing
2016-07-01
Accurate estimation of flow resistance is crucial for flood routing, flow discharge and velocity estimation, and engineering design. Various empirical and semiempirical flow resistance models have been developed during the past century; however, a universal flow resistance model for different types of rivers has remained elusive to date. In this study, hydrometric data sets from six stations in the lower Yellow River during 1958-1959 are used to calibrate three empirical flow resistance models (Eqs. (5)-(7)) and evaluate their predictability. A group of statistical measures have been used to evaluate the goodness of fit of these models, including root mean square error (RMSE), coefficient of determination (CD), the Nash coefficient (NA), mean relative error (MRE), mean symmetry error (MSE), percentage of data with a relative error ≤ 50% and 25% (P50, P25), and percentage of data with overestimated error (POE). Three model selection criteria are also employed to assess the model predictability: the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and a modified model selection criterion (MSC). The results show that mean flow depth (d) and water surface slope (S) can only explain a small proportion of variance in flow resistance. When channel width (w) and suspended sediment concentration (SSC) are involved, the new model (7) achieves a better performance than the previous ones. The MRE of model (7) is generally < 20%, which is apparently better than that reported by previous studies. This model is validated using the data sets from the corresponding stations during 1965-1966, and the results show larger uncertainties than for the calibration period. This probably resulted from a temporal shift in the dominant controls caused by channel changes under a varying flow regime. With the advancements of earth observation techniques, information about channel width, mean flow depth, and suspended sediment concentration can be effectively extracted from multisource satellite images. We expect that the empirical methods developed in this study can be used as an effective surrogate in the estimation of flow resistance in large sand-bed rivers like the lower Yellow River.
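The evaluation statistics named above are straightforward to compute; the sketch below implements a representative subset (RMSE, Nash coefficient, MRE, P50 and least-squares forms of AIC/BIC) for observed versus predicted resistance values, using hypothetical arrays rather than the Yellow River data.

```python
import numpy as np

def fit_statistics(obs, pred, n_params):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    n = obs.size
    resid = obs - pred
    rss = np.sum(resid ** 2)
    return {
        "RMSE": np.sqrt(rss / n),
        "NA":   1.0 - rss / np.sum((obs - obs.mean()) ** 2),  # Nash coefficient
        "MRE":  np.mean(np.abs(resid) / obs),                 # mean relative error
        "P50":  np.mean(np.abs(resid) / obs <= 0.5),          # share with relative error <= 50%
        "AIC":  n * np.log(rss / n) + 2 * n_params,           # least-squares form
        "BIC":  n * np.log(rss / n) + n_params * np.log(n),
    }

# Hypothetical observed and modeled flow-resistance values
obs  = np.array([0.021, 0.034, 0.018, 0.027, 0.041, 0.030])
pred = np.array([0.024, 0.031, 0.020, 0.029, 0.037, 0.033])
print(fit_statistics(obs, pred, n_params=3))
```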
Seves, Mauro; Haidl, Theresa; Eggers, Susanne; Rostamzadeh, Ayda; Genske, Anna; Jünger, Saskia; Woopen, Christiane; Jessen, Frank; Ruhrmann, Stephan
2018-01-01
Abstract Background Numerous studies suggest that health literacy (HL) plays a crucial role in maintaining and improving individual health. Furthermore, empirical findings highlight the relation between a person's level of HL and clinical outcomes. So far, there are no reviews investigating HL in individuals at risk for psychosis. The aim of the current review is to assess how individuals at risk of developing a first episode of psychosis gain access to, understand, evaluate and apply risk-related health information. Methods A mixed-methods approach was used to analyze and synthesize a variety of study types, including qualitative and quantitative studies. Search strategy, screening and data selection were carried out according to the PRISMA criteria. The systematic search was applied to peer-reviewed literature in PUBMED, Cochrane Library, PsycINFO and Web of Science. Studies were included if participants met clinical high risk criteria (CHR), including the basic symptom criterion (BS) and the ultra-high risk (UHR) criteria. The UHR criteria comprise the attenuated psychotic symptom criterion (APS), the brief limited psychotic symptom criterion (BLIPS) and the genetic risk and functional decline criterion (GRDP). Furthermore, studies must have used validated HL measures or any operationalization of the HL subdimensions (access, understanding, appraisal, decision-making or action) as a primary outcome. A third inclusion criterion was that the concept of HL or one of the four dimensions was mentioned in the title or abstract. Data extraction and synthesis were implemented according to existing recommendations for appraising evidence from different study types. The quality of the included studies was evaluated and related to the study results. Results The search string returned 10587 papers. After data extraction, 15 quantitative studies, 4 qualitative studies and 3 reviews were included. The quality assessment rated 12 publications as “good”, 9 as “fair” and one paper as “poor”. Only one of the studies assessed HL as a primary outcome. In the other studies, the five subdimensions of HL were investigated as secondary outcomes or mentioned in the paper. “Gaining access” was examined in 18 of the 22 studies. “Understanding” was assessed in 7 publications. “Appraise” was examined in 9 studies. “Apply decision making” and “Apply health behavior” were investigated in 1 of 8 studies. Since none of the included publications operationalized either HL or its subdimensions with a validated measure, no explicit influencing factors could be identified. Discussion Quantitative and qualitative evidence indicates that subjects at risk for psychosis describe a lack of understanding about their state and fear stigmatization, which might lead to dysfunctional coping strategies such as ignoring and hiding symptoms. Affected subjects are eager to be informed about their condition and describe favoured channels for obtaining information. The internet, family members, school personnel and GPs play a crucial role in gaining access to, understanding, evaluating and applying risk-related health information. The results clearly highlight that more research should be dedicated to HL in individuals at risk of developing a psychosis. Further studies should explore the relation between HL and clinical outcomes in this target population by assessing the underlying constructs with validated tools.
Jaman, Ajmery; Latif, Mahbub A H M; Bari, Wasimul; Wahed, Abdus S
2016-05-20
In generalized estimating equations (GEE), the correlation between the repeated observations on a subject is specified with a working correlation matrix. Correct specification of the working correlation structure ensures efficient estimators of the regression coefficients. Among the criteria used in practice for selecting a working correlation structure, the Rotnitzky-Jewell criterion, the Quasi Information Criterion (QIC) and the Correlation Information Criterion (CIC) are based on the fact that if the assumed working correlation structure is correct then the model-based (naive) and the sandwich (robust) covariance estimators of the regression coefficient estimators should be close to each other. The sandwich covariance estimator, used in defining the Rotnitzky-Jewell, QIC and CIC criteria, is biased downward and has a larger variability than the corresponding model-based covariance estimator. Motivated by this fact, a new criterion based on the bias-corrected sandwich covariance estimator is proposed in this paper for selecting an appropriate working correlation structure in GEE. A comparison of the proposed and the competing criteria is shown using simulation studies with correlated binary responses. The results revealed that the proposed criterion generally performs better than the competing criteria. An example of selecting the appropriate working correlation structure has also been shown using data from the Madras Schizophrenia Study. Copyright © 2015 John Wiley & Sons, Ltd.
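As background for the comparison logic, the sketch below computes a CIC-style discrepancy between the model-based (naive) and sandwich (robust) covariance estimates of the regression coefficients. It assumes both matrices are already available from a GEE fit and only illustrates the "closeness" idea behind these criteria, not the bias-corrected criterion proposed in the paper; the matrices are made up.

```python
import numpy as np

def cic_like(cov_naive, cov_robust):
    """trace(inv(V_naive) @ V_robust); if the working correlation is plausible,
    the value is close to the number of regression parameters."""
    return float(np.trace(np.linalg.solve(cov_naive, cov_robust)))

# Hypothetical 3x3 covariance estimates for three regression coefficients
cov_naive = np.array([[0.040, 0.002, 0.001],
                      [0.002, 0.050, 0.003],
                      [0.001, 0.003, 0.060]])
cov_robust = np.array([[0.048, 0.003, 0.001],
                       [0.003, 0.047, 0.004],
                       [0.001, 0.004, 0.075]])

# Compute the criterion for each candidate structure; the smallest value is preferred in the CIC approach
print(cic_like(cov_naive, cov_robust))
```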
ASRDI oxygen technology survey. Volume 4: Low temperature measurement
NASA Technical Reports Server (NTRS)
Sparks, L. L.
1974-01-01
Information is presented on temperature measurement between the triple point and critical point of liquid oxygen. The criterion selected is that all transducers which may reasonably be employed in the liquid oxygen (LO2) temperature range are considered. The temperature range for each transducer is the appropriate full range for the particular thermometer. The discussion of each thermometer or type of thermometer includes the following information: (1) useful temperature range, (2) general and particular methods of construction and the advantages of each type, (3) specifications (accuracy, reproducibility, response time, etc.), (4) associated instrumentation, (5) calibrations and procedures, and (6) analytical representations.
Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen
2017-12-01
A method is proposed and verified for selecting the optimum segmentation of a TEM reconstruction among the results of several segmentation algorithms. The selection criterion is the accuracy of the segmentation. To do this selection, a parameter for the comparison of the accuracies of the different segmentations has been defined. It consists of the mutual information value between the acquired TEM images of the sample and the Radon projections of the segmented volumes. In this work, it has been proved that this new mutual information parameter and the Jaccard coefficient between the segmented volume and the ideal one are correlated. In addition, the results of the new parameter are compared to the results obtained from another validated method to select the optimum segmentation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bürger, W; Streibelt, M
2015-02-01
Stepwise Occupational Reintegration (SOR) measures are of growing importance for the German statutory pension insurance. There is moderate evidence that patients with a poor prognosis in terms of a successful return to work profit most from SOR measures. However, it is not clear to what extent this information is utilized when recommending SOR to a patient. A questionnaire was sent to 40406 persons (up to 59 years old, excluding rehabilitation after hospital stay) before admission to a medical rehabilitation service. The survey data were matched with data from the discharge report and information on participation in an SOR measure. Initially, a single criterion was defined which describes the need for SOR measures. This criterion is based on 3 different items: patients with at least 12 weeks of sickness absence, (a) a SIBAR score >7 and/or (b) a perceived need for SOR. The main aspect of our analyses was to describe the association between the SOR need-criterion and participation in SOR measures, as well as the predictors of SOR participation among patients fulfilling the SOR need-criterion. The analyses were based on a multiple logistic regression model. For 16408 patients full data were available. The formal prerequisites for SOR were given for 33% of the sample, of whom 32% received SOR after rehabilitation and 43% fulfilled the SOR need-criterion. A negative relationship between these 2 categories was observed (phi=-0.08, p<0.01). For patients who fulfilled the need-criterion, the probability of participating in SOR decreased by 22% (RR=0.78). The probability of SOR participation increased with a decreasing SIBAR score (OR=0.56) and in patients who showed more confidence in being able to return to work. Participation in SOR measures cannot be predicted by the empirically defined SOR need-criterion: the probability even decreased when the criterion was fulfilled. Furthermore, the results of a multivariate analysis show a positive selection of the patients who participate in SOR measures. Our results point strongly to the need for an indication guideline for physicians in rehabilitation centres. Further research addressing the success of SOR measures has to show whether the information used in this case can serve as a basis for such a guideline. © Georg Thieme Verlag KG Stuttgart · New York.
Mixture of autoregressive modeling orders and its implication on single trial EEG classification
Atyabi, Adham; Shic, Frederick; Naples, Adam
2016-01-01
Autoregressive (AR) models are among the most commonly utilized feature types in electroencephalogram (EEG) studies because they offer better resolution, smoother spectra and applicability to short segments of data. Identifying the correct AR modeling order is an open challenge. Lower model orders poorly represent the signal while higher orders increase noise. Conventional methods for estimating the modeling order include the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond to the operator's thoughts more quickly and correctly. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resulting AR mixtures is assessed against several conventional approaches utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
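For reference, the sketch below fits AR(p) models by ordinary least squares over a range of orders and scores each with AIC, BIC and FPE; it uses a synthetic signal and the usual textbook formulas, not the evolutionary or ensemble mixing schemes the article proposes.

```python
import numpy as np

def ar_order_scores(x, max_order):
    """Fit AR(p) by least squares for p = 1..max_order and return (p, AIC, BIC, FPE)."""
    x = np.asarray(x, float)
    scores = []
    for p in range(1, max_order + 1):
        # Lagged design matrix: x[t] regressed on x[t-1..t-p]
        X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
        y = x[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        n = len(y)
        sigma2 = np.mean((y - X @ coef) ** 2)
        aic = n * np.log(sigma2) + 2 * p
        bic = n * np.log(sigma2) + p * np.log(n)
        fpe = sigma2 * (n + p + 1) / (n - p - 1)
        scores.append((p, aic, bic, fpe))
    return scores

rng = np.random.default_rng(1)
# Synthetic AR(4)-like signal standing in for an EEG segment
x = np.zeros(512)
for t in range(4, 512):
    x[t] = 0.6 * x[t-1] - 0.3 * x[t-2] + 0.2 * x[t-3] - 0.1 * x[t-4] + rng.normal()

scores = ar_order_scores(x, max_order=12)
print("order minimizing AIC:", min(scores, key=lambda s: s[1])[0])
```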
Assessment of selenium effects in lotic ecosystems
Hamilton, Steven J.; Palace, Vince
2001-01-01
The selenium literature has grown substantially in recent years to encompass new information in a variety of areas. Correspondingly, several different approaches to establishing a new water quality criterion for selenium have been proposed since establishment of the national water quality criterion in 1987. Diverging viewpoints and interpretations of the selenium literature have led to opposing perspectives on issues such as establishing a national criterion based on a sediment-based model, using hydrologic units to set criteria for stream reaches, and applying lentic-derived effects to lotic environments. This Commentary presents information on the lotic versus lentic controversy. Recently, an article was published that concluded that no adverse effects were occurring in a cutthroat trout population in a coldwater river with elevated selenium concentrations (C. J. Kennedy, L. E. McDonald, R. Loveridge, and M. M. Strosher, 2000, Arch. Environ. Contam. Toxicol. 39, 46–52). This article has added to the controversy rather than provided further insight into selenium toxicology. Information, or rather missing information, in the article has been critically reviewed and problems in the interpretations are discussed.
Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.
Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi
2015-01-01
Most popular clustering methods make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it.
Multispectral image fusion for illumination-invariant palmprint recognition
Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is used to construct the fusion coefficients at the decomposition level, making the images separate correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is not ideal. PMID:28558064
Multispectral image fusion for illumination-invariant palmprint recognition.
Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is used to construct the fusion coefficients at the decomposition level, making the images separate correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is not ideal.
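To make the weighted Fisher criterion concrete, the sketch below computes a per-band Fisher discriminant score from genuine and impostor matching samples and normalizes the scores into fusion weights; the band statistics are hypothetical and the actual decomposition-level weighting used in the paper is not reproduced.

```python
import numpy as np

def fisher_weights(class_a, class_b):
    """Per-band Fisher criterion J = (m_a - m_b)^2 / (v_a + v_b),
    normalized so the weights sum to 1 and can serve as fusion coefficients."""
    class_a, class_b = np.asarray(class_a, float), np.asarray(class_b, float)
    j = (class_a.mean(0) - class_b.mean(0)) ** 2 / (class_a.var(0) + class_b.var(0))
    return j / j.sum()

rng = np.random.default_rng(0)
# Hypothetical match scores for 4 spectral bands: rows = samples, columns = bands
genuine  = rng.normal(loc=[0.80, 0.70, 0.85, 0.60], scale=0.05, size=(200, 4))
impostor = rng.normal(loc=[0.40, 0.45, 0.35, 0.50], scale=0.08, size=(200, 4))

print(fisher_weights(genuine, impostor))   # better-separated bands receive larger weights
```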
Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation
Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi
2015-01-01
Most popular clustering methods make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133
Li, Pengxiang; Kim, Michelle M; Doshi, Jalpa A
2010-08-20
The Centers for Medicare and Medicaid Services (CMS) has implemented the CMS-Hierarchical Condition Category (CMS-HCC) model to risk adjust Medicare capitation payments. This study intends to assess the performance of the CMS-HCC risk adjustment method and to compare it to the Charlson and Elixhauser comorbidity measures in predicting in-hospital and six-month mortality in Medicare beneficiaries. The study used the 2005-2006 Chronic Condition Data Warehouse (CCW) 5% Medicare files. The primary study sample included all community-dwelling fee-for-service Medicare beneficiaries with a hospital admission between January 1st, 2006 and June 30th, 2006. Additionally, four disease-specific samples consisting of subgroups of patients with principal diagnoses of congestive heart failure (CHF), stroke, diabetes mellitus (DM), and acute myocardial infarction (AMI) were also selected. Four analytic files were generated for each sample by extracting inpatient and/or outpatient claims for each patient. Logistic regressions were used to compare the methods. Model performance was assessed using the c-statistic, Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and their 95% confidence intervals estimated using bootstrapping. The CMS-HCC had a statistically significantly higher c-statistic and lower AIC and BIC values than the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality across all samples in analytic files that included claims from the index hospitalization. Exclusion of claims for the index hospitalization generally led to drops in model performance across all methods, with the largest drops for the CMS-HCC method. However, the CMS-HCC still performed as well as or better than the other two methods. The CMS-HCC method demonstrated better performance relative to the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality. The CMS-HCC model is preferred over the Charlson and Elixhauser methods if information about the patient's diagnoses prior to the index hospitalization is available and used to code the risk adjusters. However, caution should be exercised in studies evaluating inpatient processes of care and where data on pre-index admission diagnoses are unavailable.
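As an illustration of the comparison machinery (not the CMS-HCC, Charlson or Elixhauser coding itself), the sketch below fits a logistic model, reads off the AIC and BIC, computes the c-statistic, and bootstraps a confidence interval for it; the data are simulated stand-ins for risk-adjuster scores and mortality outcomes.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                      # stand-ins for risk-adjuster scores
logit = -2.0 + X @ np.array([0.8, 0.5, 0.3])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # simulated mortality outcome

res = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
p = res.predict(sm.add_constant(X))
print("AIC:", round(res.aic, 1), "BIC:", round(res.bic, 1),
      "c-statistic:", round(roc_auc_score(y, p), 3))

# Bootstrap 95% CI for the c-statistic
boots = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boots.append(roc_auc_score(y[idx], p[idx]))
print("c-statistic 95% CI:", np.percentile(boots, [2.5, 97.5]).round(3))
```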
Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness
NASA Astrophysics Data System (ADS)
Zhang, Jin; Liu, Jun-fei; Jiao, Hai-xing; Shen, Yi; Liu, Shu-yuan
To address the trustworthiness problem of industry software, an approach that constructs an industry software trustworthiness criterion around the business is proposed. Based on the triangle model of "trustworthy grade definition - trustworthy evidence model - trustworthy evaluation", business trustworthiness is embodied in each aspect of the model for a specific industry application, the power producing management system (PPMS). Business trustworthiness is the core of the constructed criterion. By fusing international standards and industry rules, the constructed criterion strengthens operability and reliability, and the quantitative evaluation method makes the evaluation results intuitive and comparable.
Multi-Informant Assessment of Temperament in Children with Externalizing Behavior Problems
ERIC Educational Resources Information Center
Copeland, William; Landry, Kerry; Stanger, Catherine; Hudziak, James J.
2004-01-01
We examined the criterion validity of parent and self-report versions of the Junior Temperament and Character Inventory (JTCI) in children with high levels of externalizing problems. The sample included 412 children (206 participants and 206 siblings) participating in a family study of attention and aggressive behavior problems. Criterion validity…
ERIC Educational Resources Information Center
Messick, Samuel
Cognitive styles--defined as information processing habits--should be considered as a criterion variable in the evaluation of instruction. Research findings identify the characteristics of different cognitive styles. Used in educational practice and evaluation, cognitive styles would be new process variables extending the assessment of mental…
The Validity of the Instructional Reading Level.
ERIC Educational Resources Information Center
Powell, William R.
Presented is a critical inquiry about the product of the informal reading inventory (IRI) and about some of the elements used in the process of determining that product. Recent developments on this topic are briefly reviewed. Questions are raised concerning what is a suitable criterion level for word recognition. The original criterion of 95…
On the predictive information criteria for model determination in seismic hazard analysis
NASA Astrophysics Data System (ADS)
Varini, Elisa; Rotondi, Renata
2016-04-01
Many statistical tools have been developed for evaluating, understanding, and comparing models, from both frequentist and Bayesian perspectives. In particular, the problem of model selection can be addressed according to whether the primary goal is explanation or, alternatively, prediction. In the former case, the criteria for model selection are defined over the parameter space whose physical interpretation can be difficult; in the latter case, they are defined over the space of the observations, which has a more direct physical meaning. In the frequentist approaches, model selection is generally based on an asymptotic approximation which may be poor for small data sets (e.g. the F-test, the Kolmogorov-Smirnov test, etc.); moreover, these methods often apply under specific assumptions on models (e.g. models have to be nested in the likelihood ratio test). In the Bayesian context, among the criteria for explanation, the ratio of the observed marginal densities for two competing models, named Bayes Factor (BF), is commonly used for both model choice and model averaging (Kass and Raftery, J. Am. Stat. Ass., 1995). But BF does not apply to improper priors and, even when the prior is proper, it is not robust to the specification of the prior. These limitations can be extended to two famous penalized likelihood methods as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), since they are proved to be approximations of -2log BF . In the perspective that a model is as good as its predictions, the predictive information criteria aim at evaluating the predictive accuracy of Bayesian models or, in other words, at estimating expected out-of-sample prediction error using a bias-correction adjustment of within-sample error (Gelman et al., Stat. Comput., 2014). In particular, the Watanabe criterion is fully Bayesian because it averages the predictive distribution over the posterior distribution of parameters rather than conditioning on a point estimate, but it is hardly applicable to data which are not independent given parameters (Watanabe, J. Mach. Learn. Res., 2010). A solution is given by Ando and Tsay criterion where the joint density may be decomposed into the product of the conditional densities (Ando and Tsay, Int. J. Forecast., 2010). The above mentioned criteria are global summary measures of model performance, but more detailed analysis could be required to discover the reasons for poor global performance. In this latter case, a retrospective predictive analysis is performed on each individual observation. In this study we performed the Bayesian analysis of Italian data sets by four versions of a long-term hazard model known as the stress release model (Vere-Jones, J. Physics Earth, 1978; Bebbington and Harte, Geophys. J. Int., 2003; Varini and Rotondi, Environ. Ecol. Stat., 2015). Then we illustrate the results on their performance evaluated by Bayes Factor, predictive information criteria and retrospective predictive analysis.
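To make the fully Bayesian predictive criterion concrete, the sketch below computes the WAIC (Watanabe criterion) from a matrix of pointwise log-likelihoods evaluated over posterior draws; the log-likelihood matrix here is simulated and stands in for the output of whatever sampler fits the stress release model.

```python
import numpy as np
from scipy.special import logsumexp

def waic(loglik):
    """loglik: array of shape (S posterior draws, N observations) of log p(y_i | theta_s)."""
    S = loglik.shape[0]
    lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))   # log pointwise predictive density
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))        # effective number of parameters
    return -2.0 * (lppd - p_waic), p_waic

# Simulated stand-in: 1000 posterior draws, 50 observations
rng = np.random.default_rng(0)
loglik = rng.normal(loc=-1.2, scale=0.3, size=(1000, 50))
print(waic(loglik))   # lower WAIC indicates better expected out-of-sample predictive accuracy
```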
A method for tailoring the information content of a software process model
NASA Technical Reports Server (NTRS)
Perkins, Sharon; Arend, Mark B.
1990-01-01
The framework is defined for a general method for selecting a necessary and sufficient subset of a general software life cycle's information products, to support a new software development process. Procedures for characterizing problem domains in general and mapping to a tailored set of life cycle processes and products are presented. An overview of the method is shown using the following steps: (1) During the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) Generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) Translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) Map the quality criteria to a set of accepted processes and products for achieving each criterion; (5) Select the information products which match or support the accepted processes and products of step 4; and (6) Select the design methodology which produces the information products selected in step 5.
Optimal experimental designs for fMRI when the model matrix is uncertain.
Kao, Ming-Hung; Zhou, Lin
2017-07-15
This study concerns optimal designs for functional magnetic resonance imaging (fMRI) experiments when the model matrix of the statistical model depends on both the selected stimulus sequence (fMRI design) and the subject's uncertain feedback (e.g. answer) to each mental stimulus (e.g. question) presented to her/him. While practically important, this design issue is challenging, mainly because the information matrix cannot be fully determined at the design stage, making it difficult to evaluate the quality of the selected designs. To tackle this challenging issue, we propose an easy-to-use optimality criterion for evaluating the quality of designs, and an efficient approach for obtaining designs optimizing this criterion. Compared with a previously proposed method, our approach requires much less computing time to achieve designs with high statistical efficiencies. Copyright © 2017 Elsevier Inc. All rights reserved.
Digital focusing of OCT images based on scalar diffraction theory and information entropy.
Liu, Guozhong; Zhi, Zhongwei; Wang, Ruikang K
2012-11-01
This paper describes a digital method that is capable of automatically focusing optical coherence tomography (OCT) en face images without prior knowledge of the point spread function of the imaging system. The method utilizes a scalar diffraction model to simulate wave propagation from out-of-focus scatter to the focal plane, from which the propagation distance between the out-of-focus plane and the focal plane is determined automatically via an image-definition-evaluation criterion based on information entropy theory. By use of the proposed approach, we demonstrate that the lateral resolution close to that at the focal plane can be recovered from the imaging planes outside the depth of field region with minimal loss of resolution. Fresh onion tissues and mouse fat tissues are used in the experiments to show the performance of the proposed method.
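The two ingredients of the method, scalar (angular spectrum) propagation and an entropy-based sharpness criterion, can be sketched as follows. The propagation step, the entropy definition, the choice of minimum entropy as the focus indicator, and the synthetic speckle field are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel, dz):
    """Propagate a complex 2-D field by distance dz using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel)
    fy = np.fft.fftfreq(ny, d=pixel)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))            # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def image_entropy(field):
    """Shannon entropy of the normalized intensity; sharper images tend to have lower entropy."""
    I = np.abs(field)**2
    p = I / I.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Hypothetical defocused en face field (random speckle stands in for real OCT data)
rng = np.random.default_rng(0)
field = rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128))

distances = np.linspace(-200e-6, 200e-6, 21)        # candidate refocus distances (m)
entropies = [image_entropy(angular_spectrum_propagate(field, 1.3e-6, 10e-6, dz))
             for dz in distances]
print("best candidate distance:", distances[int(np.argmin(entropies))])
```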
Segmentation and clustering as complementary sources of information
NASA Astrophysics Data System (ADS)
Dale, Michael B.; Allison, Lloyd; Dale, Patricia E. R.
2007-03-01
This paper examines the effects of using a segmentation method to identify change-points or edges in vegetation. It identifies coherence (spatial or temporal) in place of unconstrained clustering. The segmentation method involves change-point detection along a sequence of observations so that each cluster formed is composed of adjacent samples; this is a form of constrained clustering. The protocol identifies one or more models, one for each section identified, and the quality of each is assessed using a minimum message length criterion, which provides a rational basis for selecting an appropriate model. Although the segmentation is less efficient than clustering, it does provide other information because it incorporates textural similarity as well as homogeneity. In addition it can be useful in determining various scales of variation that may apply to the data, providing a general method of small-scale pattern analysis.
A differentiable reformulation for E-optimal design of experiments in nonlinear dynamic biosystems.
Telen, Dries; Van Riet, Nick; Logist, Flip; Van Impe, Jan
2015-06-01
Informative experiments are highly valuable for estimating parameters in nonlinear dynamic bioprocesses. Techniques for optimal experiment design ensure the systematic design of such informative experiments. The E-criterion, which can be used as objective function in optimal experiment design, requires the maximization of the smallest eigenvalue of the Fisher information matrix. However, one problem with the minimal eigenvalue function is that it can be nondifferentiable. In addition, no closed form expression exists for the computation of eigenvalues of a matrix larger than a 4 by 4 one. As eigenvalues are normally computed with iterative methods, state-of-the-art optimal control solvers are not able to exploit automatic differentiation to compute the derivatives with respect to the decision variables. In the current paper a reformulation strategy from the field of convex optimization is suggested to circumvent these difficulties. This reformulation requires the inclusion of a matrix inequality constraint involving positive semidefiniteness. In this paper, this positive semidefiniteness constraint is imposed via Sylvester's criterion. As a result the maximization of the minimum eigenvalue function can be formulated in standard optimal control solvers through the addition of nonlinear constraints. The presented methodology is successfully illustrated with a case study from the field of predictive microbiology. Copyright © 2015. Published by Elsevier Inc.
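The E-criterion itself (maximize the smallest eigenvalue of the Fisher information matrix) can be written as a semidefinite program. The sketch below does this for a toy static design-weights problem with cvxpy, which handles the positive semidefiniteness constraint directly rather than through Sylvester's criterion as in the paper; the candidate regression points and the quadratic model are made up.

```python
import numpy as np
import cvxpy as cp

# Candidate experimental conditions and their regression vectors (hypothetical)
x = np.linspace(0.1, 2.0, 8)
F_rows = np.column_stack([np.ones_like(x), x, x**2])   # model: y = b0 + b1*x + b2*x^2

w = cp.Variable(len(x), nonneg=True)                   # design weights

# Fisher information matrix of the weighted design (affine in w)
M = sum(w[i] * np.outer(F_rows[i], F_rows[i]) for i in range(len(x)))

# E-optimal design: maximize the smallest eigenvalue of M
prob = cp.Problem(cp.Maximize(cp.lambda_min(M)), [cp.sum(w) == 1])
prob.solve(solver=cp.SCS)
print(np.round(w.value, 3))
```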
May, Michael R; Moore, Brian R
2016-11-01
Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models; and (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified ≈30% of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers-in order to clarify whether these methods can make reliable inferences from empirical datasets-and to theoretical biologists-in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
May, Michael R.; Moore, Brian R.
2016-01-01
Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models, and; (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified ≈30% of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers—in order to clarify whether these methods can make reliable inferences from empirical datasets—and to theoretical biologists—in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.] PMID:27037081
NASA Astrophysics Data System (ADS)
Gromov, V. A.; Sharygin, G. S.; Mironov, M. V.
2012-08-01
An interval method of radar signal detection and selection based on a non-energetic polarization parameter, the ellipticity angle, is suggested. The examined method is optimal by the Neyman-Pearson criterion. The probability of correct detection for a preset probability of false alarm is calculated for different signal-to-noise ratios. Recommendations for optimization of the given method are provided.
Multimodel predictive system for carbon dioxide solubility in saline formation waters.
Wang, Zan; Small, Mitchell J; Karamalidis, Athanasios K
2013-02-05
The prediction of carbon dioxide solubility in brine at conditions relevant to carbon sequestration (i.e., high temperature, pressure, and salt concentration (T-P-X)) is crucial when this technology is applied. Eleven mathematical models for predicting CO2 solubility in brine are compared and considered for inclusion in a multimodel predictive system. Model goodness of fit is evaluated over the temperature range 304-433 K, pressure range 74-500 bar, and salt concentration range 0-7 m (NaCl equivalent), using 173 published CO2 solubility measurements, particularly selected for those conditions. The performance of each model is assessed using various statistical methods, including the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Different models emerge as best fits for different subranges of the input conditions. A classification tree is generated using machine learning methods to predict the best-performing model under different T-P-X subranges, allowing development of a multimodel predictive system (MMoPS) that selects and applies the model expected to yield the most accurate CO2 solubility prediction. Statistical analysis of the MMoPS predictions, including a stratified 5-fold cross validation, shows that MMoPS outperforms each individual model and increases the overall accuracy of CO2 solubility prediction across the range of T-P-X conditions likely to be encountered in carbon sequestration applications.
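The "pick the best model per condition" step can be emulated with a small classification tree, as sketched below: given labels saying which candidate model fits best in each T-P-X cell (labels that would come from the AIC/BIC comparison), a tree learns the subranges in which to apply each model. Both the feature grid and the labels here are fabricated for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 400
T = rng.uniform(304, 433, n)          # temperature, K
P = rng.uniform(74, 500, n)           # pressure, bar
X = rng.uniform(0, 7, n)              # salinity, m NaCl equivalent

# Hypothetical "best model" labels, as if chosen by AIC/BIC in each condition
best = np.where(X > 4, "model_A", np.where(T > 380, "model_B", "model_C"))

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(np.column_stack([T, P, X]), best)
print(export_text(tree, feature_names=["T", "P", "X"]))   # human-readable selection rules
```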
A Statistical Approach to Provide Individualized Privacy for Surveys
Esponda, Fernando; Huerta, Kael; Guerrero, Victor M.
2016-01-01
In this paper we propose an instrument for collecting sensitive data that allows for each participant to customize the amount of information that she is comfortable revealing. Current methods adopt a uniform approach where all subjects are afforded the same privacy guarantees; however, privacy is a highly subjective property with intermediate points between total disclosure and non-disclosure: each respondent has a different criterion regarding the sensitivity of a particular topic. The method we propose empowers respondents in this respect while still allowing for the discovery of interesting findings through the application of well-known inferential procedures. PMID:26824758
Registration of segmented histological images using thin plate splines and belief propagation
NASA Astrophysics Data System (ADS)
Kybic, Jan
2014-03-01
We register images based on their multiclass segmentations, for cases when correspondence of local features cannot be established. A discrete mutual information is used as a similarity criterion. It is evaluated at a sparse set of locations on the interfaces between classes. A thin-plate spline regularization is approximated by pairwise interactions. The problem is cast into a discrete setting and solved efficiently by belief propagation. Further speedup and robustness are provided by a multiresolution framework. Preliminary experiments suggest that our method can provide similar registration quality to standard methods at a fraction of the computational cost.
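The similarity term is essentially the mutual information of the two label maps' joint histogram; the sketch below computes it for two small integer-labeled segmentations (made-up arrays), which is the kind of quantity one would evaluate at candidate displacements during the discrete optimization.

```python
import numpy as np

def discrete_mutual_information(labels_a, labels_b):
    """Mutual information (in nats) between two equally shaped integer label maps."""
    a = np.asarray(labels_a).ravel()
    b = np.asarray(labels_b).ravel()
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1)              # joint label histogram
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Hypothetical 3-class segmentations of the same scene
seg_fixed  = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 1, 1]])
seg_moving = np.array([[0, 0, 1, 1], [0, 2, 1, 1], [2, 2, 2, 1]])
print(discrete_mutual_information(seg_fixed, seg_moving))
```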
A Bayesian truth serum for subjective data.
Prelec, Drazen
2004-10-15
Subjective judgments, an essential information source for science and policy, are problematic because there are no public criteria for assessing judgmental truthfulness. I present a scoring method for eliciting truthful subjective data in situations where objective truth is unknowable. The method assigns high scores not to the most common answers but to the answers that are more common than collectively predicted, with predictions drawn from the same population. This simple adjustment in the scoring criterion removes all bias in favor of consensus: Truthful answers maximize expected score even for respondents who believe that their answer represents a minority view.
ERIC Educational Resources Information Center
Meredith, Keith E.; Sabers, Darrell L.
Data required for evaluating a Criterion Referenced Measurement (CRM) is described with a matrix. The information within the matrix consists of the "pass-fail" decisions of two CRMs. By differentially defining these two CRMs, different concepts of reliability and validity can be examined. Indices suggested for analyzing the matrix are listed with…
Water-Sediment Controversy in Setting Environmental Standards for Selenium
Steven J. Hamilton; A. Dennis Lemly
1999-01-01
A substantial amount of laboratory and field research on selenium effects to biota has been accomplished since the national water quality criterion was published for selenium in 1987. Many articles have documented adverse effects on biota at concentrations below the current chronic criterion of 5 µg/L. This commentary will present information to support a national...
ERIC Educational Resources Information Center
Tibbetts, Katherine A.; And Others
This paper describes the development of a criterion-referenced, performance-based measure of third grade reading comprehension. The primary purpose of the assessment is to contribute unique and valid information for use in the formative evaluation of a whole literacy program. A secondary purpose is to supplement other program efforts to…
ERIC Educational Resources Information Center
Hirschi, Andreas
2009-01-01
Interest differentiation and elevation are supposed to provide important information about a person's state of interest development, yet little is known about their development and criterion validity. The present study explored these constructs among a group of Swiss adolescents. Study 1 applied a cross-sectional design with 210 students in 11th…
A Humanistic Approach to Criterion Referenced Testing.
ERIC Educational Resources Information Center
Wilson, H. A.
Test construction is not the strictly logical process that we might wish it to be. This is particularly true in a large on-going project such as the National Assessment of Educational Progress (NAEP). Most of the really deep questions can only be answered by the exercise of well-informed human judgment. Criterion-referenced testing is still a term…
Changing the criterion for memory conformity in free recall and recognition.
Wright, Daniel B; Gabbert, Fiona; Memon, Amina; London, Kamala
2008-02-01
People's responses during memory studies are affected by what other people say. This memory conformity effect has been shown in both free recall and recognition. Here we examine whether accurate, inaccurate, and suggested answers are affected similarly when the response criterion is varied. In the first study, participants saw four pictures of detailed scenes and then discussed the content of these scenes with another participant who saw the same scenes, but with a couple of details changed. Participants were either told to recall everything they could and not to worry about making mistakes (lenient), or only to recall items if they were sure that they were accurate (strict). The strict instructions reduced the amount of inaccurate information reported that the other person suggested, but also reduced the number of accurate details recalled. In the second study, participants were shown a large set of faces and then their memory recognition was tested with a confederate on these and fillers. Here also, the criterion manipulation shifted both accurate and inaccurate responses, and those suggested by the confederate. The results are largely consistent with a shift in response criterion affecting accurate, inaccurate, and suggested information. In addition we varied the level of secrecy in the participants' responses. The effects of secrecy were complex and depended on the level of response criterion. Implications for interviewing eyewitnesses and line-ups are discussed.
Takahashi, M; Onozawa, S; Ogawa, R; Uesawa, Y; Echizen, H
2015-02-01
Clinical pharmacists have a challenging task when answering patients' question about whether they can take specific drugs with grapefruit juice (GFJ) without risk of drug interaction. To identify the most practicable method for predicting clinically relevant changes in plasma concentrations of orally administered drugs caused by the ingestion of GFJ, we compared the predictive performance of three methods using data obtained from the literature. We undertook a systematic search of drug interactions associated with GFJ using MEDLINE and the Metabolism & Transport Drug Interaction Database (DIDB version 4.0). We considered an elevation of the area under the plasma concentration-time curve (AUC) of 2 or greater relative to the control value [AUC ratio (AUCR) ≥ 2.0] as a clinically significant interaction. The data from 74 drugs (194 data sets) were analysed. When the reported information of CYP3A involvement in the metabolism of a drug of interest was adopted as a predictive criterion for GFJ-drug interaction, the performance assessed by positive predictive value (PPV) was low (0.26), but that assessed by negative predictive value (NPV) and sensitivity was high (1.00 for both). When the reported oral bioavailability of ≤ 0.1 was used as a criterion, the PPV improved to 0.50 with an acceptable NPV of 0.81, but sensitivity was reduced to 0.21. When the reported AUCR was ≥ 10 after co-administration of a typical CYP3A inhibitor, the corresponding values were 0.64, 0.79 and 0.19, respectively. We consider that an oral bioavailability of ≤ 0.1 or an AUCR of ≥ 10 caused by a CYP3A inhibitor of a drug of interest may be a practical prediction criterion for avoiding significant interactions with GFJ. Information about the involvement of CYP3A in their metabolism should also be taken into account for drugs with narrow therapeutic ranges. © 2014 John Wiley & Sons Ltd.
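As a side note, the predictive performance figures quoted above (PPV, NPV, sensitivity) follow directly from a 2x2 classification table; a minimal sketch, using invented counts rather than the study's data:

```python
# Minimal sketch: predictive performance of a binary prediction criterion,
# computed from a 2x2 classification table. The counts below are invented
# for illustration and are not the study's data.

def predictive_performance(tp, fp, fn, tn):
    """Return PPV, NPV and sensitivity for a 2x2 classification table."""
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    sensitivity = tp / (tp + fn)  # true positive rate
    return ppv, npv, sensitivity

# Hypothetical counts for a criterion such as "CYP3A involvement reported"
ppv, npv, sens = predictive_performance(tp=18, fp=52, fn=0, tn=124)
print(f"PPV={ppv:.2f}, NPV={npv:.2f}, sensitivity={sens:.2f}")
```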
Restoration of STORM images from sparse subset of localizations (Conference Presentation)
NASA Astrophysics Data System (ADS)
Moiseev, Alexander A.; Gelikonov, Grigory V.; Gelikonov, Valentine M.
2016-02-01
To construct a Stochastic Optical Reconstruction Microscopy (STORM) image, one should collect a sufficient number of localized fluorophores to satisfy the Nyquist criterion. This requirement limits the time resolution of the method. In this work we propose a probabilistic approach to construct STORM images from a subset of localized fluorophores 3-4 times sparser than the Nyquist criterion requires. Using a set of STORM images constructed from a number of localizations sufficient for the Nyquist criterion, we derive a model that predicts the probability of every location being occupied by a fluorophore at the end of a hypothetical acquisition, taking as input the distribution of already localized fluorophores in the proximity of that location. We show that the probability map obtained from a number of fluorophores 3-4 times smaller than the Nyquist criterion requires may itself be used as the superresolution image. Thus we are able to construct a STORM image from a subset of localized fluorophores 3-4 times sparser than the Nyquist criterion requires, proportionally decreasing STORM data acquisition time. This method may be used in combination with other approaches designed to increase STORM time resolution.
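For orientation, the Nyquist-type density requirement invoked above is often stated as needing at least two localizations per resolution length in each dimension; the short sketch below applies that rule of thumb with illustrative numbers (the convention and values are assumptions, not taken from the presentation):

```python
# Rough sketch of the Nyquist-type density requirement commonly used in
# localization microscopy: at least two localizations per resolution length
# in each spatial dimension. The convention and numbers are illustrative
# assumptions, not values from the presentation.

def nyquist_density(target_resolution_nm, dims=2):
    """Minimum localization density (per nm^dims) for a target resolution."""
    return (2.0 / target_resolution_nm) ** dims

rho_per_nm2 = nyquist_density(50.0)     # density needed for 50 nm resolution
print(f"~{rho_per_nm2 * 1e6:.0f} localizations per um^2")
```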
Selection criteria of residents for residency programs in Kuwait
2013-01-01
Background In Kuwait, 21 residency training programs were offered in the year 2011; however, no data are available regarding the criteria used to select residents for these programs. This study aims to provide information about the importance of these criteria. Methods A self-administered questionnaire was used to collect data from members (e.g., chairmen, directors, and assistants) of residency programs in Kuwait. A total of 108 members were invited to participate. They were asked to rate the importance level (scale from 1 to 5) of criteria that may affect the acceptance of an applicant to their residency programs. Average scores were calculated for each criterion. Results Of the 108 members invited to participate, only 12 (11.1%) declined to participate. Interview performance was ranked as the most important criterion for selecting residents (average score: 4.63/5.00), followed by grade point average (average score: 3.78/5.00) and honors during medical school (average score: 3.67/5.00). On the other hand, receiving disciplinary action during medical school and failure in a required clerkship were considered the most concerning among the criteria used to reject applicants (average scores: 3.83/5.00 and 3.54/5.00, respectively). Minor differences regarding the importance level of each criterion were noted across different programs. Conclusions This study provided general information about the criteria used to accept or reject applicants to residency programs in Kuwait. Future studies should investigate each criterion individually and assess whether these criteria are related to residents' success during their training. PMID:23331670
NASA Astrophysics Data System (ADS)
Wu, Li; Adoko, Amoussou Coffi; Li, Bo
2018-04-01
In tunneling, determining the rock mass strength parameters of the Hoek-Brown (HB) failure criterion quantitatively is useful since it can improve the reliability of tunnel support system design. In this study, a quantitative method is proposed to determine the rock mass quality parameters of the HB failure criterion, namely the Geological Strength Index (GSI) and the disturbance factor (D), based on the structure of the drilling core and the weathering condition of the rock mass, combined with an acoustic wave test to calculate the rock mass strength. The Rock Mass Structure Index and the Rock Mass Weathering Index are used to quantify the GSI, while the longitudinal wave velocity (Vp) is employed to derive the value of D. The DK383+338 tunnel face of the Yaojia tunnel on the Shanghai-Kunming passenger dedicated line serves as an illustration of how the methodology is implemented. The values of GSI and D are obtained using the HB criterion and then using the proposed method. The measured in situ stress is used to evaluate their accuracy. To this end, the major and minor principal stresses are calculated based on the GSI and D given by the HB criterion and by the proposed method. The results indicate that both methods were close to the field observations, which suggests that the proposed method can also be used for determining the rock mass quality parameters quantitatively. However, these results remain valid only for rock mass quality and rock types similar to those of the DK383+338 tunnel face of the Yaojia tunnel.
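For readers unfamiliar with the HB parameters, a minimal sketch of the generalized Hoek-Brown relations that turn GSI and D into the rock-mass constants mb, s and a, and then into the major principal stress at failure; the intact-rock inputs (sigma_ci, mi) and the numbers used are illustrative assumptions, not values from the Yaojia tunnel case:

```python
import math

# Sketch of the generalized Hoek-Brown criterion: GSI and the disturbance
# factor D map to the rock-mass constants mb, s and a, which give the major
# principal stress at failure. sigma_ci and mi are assumed intact-rock
# inputs; all numbers are illustrative, not the Yaojia tunnel data.

def hoek_brown_params(gsi, d, mi):
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * d))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * d))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

def sigma1_at_failure(sigma3, sigma_ci, gsi, d, mi):
    mb, s, a = hoek_brown_params(gsi, d, mi)
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

# Hypothetical rock mass: sigma_ci = 60 MPa, mi = 10, GSI = 55, D = 0.3
print(f"{sigma1_at_failure(2.0, 60.0, gsi=55, d=0.3, mi=10):.1f} MPa")
```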
Beymer, Matthew R; Weiss, Robert E; Sugar, Catherine A; Bourque, Linda B; Gee, Gilbert C; Morisky, Donald E; Shu, Suzanne B; Javanbakht, Marjan; Bolan, Robert K
2017-01-01
Preexposure prophylaxis (PrEP) has emerged as a human immunodeficiency virus (HIV) prevention tool for populations at highest risk for HIV infection. Current US Centers for Disease Control and Prevention (CDC) guidelines for identifying PrEP candidates may not be specific enough to identify gay, bisexual, and other men who have sex with men (MSM) at the highest risk for HIV infection. We created an HIV risk score for HIV-negative MSM based on Syndemics Theory to develop a more targeted criterion for assessing PrEP candidacy. Behavioral risk assessment and HIV testing data were analyzed for HIV-negative MSM attending the Los Angeles LGBT Center between January 2009 and June 2014 (n = 9481). Syndemics Theory informed the selection of variables for a multivariable Cox proportional hazards model. Estimated coefficients were summed to create an HIV risk score, and model fit was compared between our model and CDC guidelines using the Akaike Information Criterion and Bayesian Information Criterion. Approximately 51% of MSM were above a cutpoint that we chose as an illustrative risk score to qualify for PrEP, identifying 75% of all seroconverting MSM. Our model demonstrated a better overall fit when compared with the CDC guidelines (Akaike Information Criterion Difference = 68) in addition to identifying a greater proportion of HIV infections. Current CDC PrEP guidelines should be expanded to incorporate substance use, partner-level, and other Syndemic variables that have been shown to contribute to HIV acquisition. Deployment of such personalized algorithms may better hone PrEP criteria and allow providers and their patients to make a more informed decision prior to PrEP use.
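A schematic of the summed-coefficient risk score and cutpoint idea described above; the coefficients, covariates, cutpoint and outcomes are invented for illustration and are not the study's fitted Cox model or the CDC guideline:

```python
import numpy as np

# Schematic of a summed-coefficient risk score with a screening cutpoint.
# Coefficients, covariates, outcomes and the cutpoint are hypothetical.

coef = {"condomless_anal_sex": 0.9, "methamphetamine_use": 0.7,
        "prior_sti": 0.5, "multiple_partners": 0.4}

def risk_score(person):
    return sum(coef[k] * person.get(k, 0) for k in coef)

people = [{"condomless_anal_sex": 1, "prior_sti": 1},
          {"multiple_partners": 1},
          {"condomless_anal_sex": 1, "methamphetamine_use": 1, "prior_sti": 1}]
seroconverted = np.array([1, 0, 1])          # hypothetical outcomes

scores = np.array([risk_score(p) for p in people])
cutpoint = 1.0                               # illustrative threshold
flagged = scores >= cutpoint
sensitivity = (flagged & (seroconverted == 1)).sum() / seroconverted.sum()
print(f"flagged {flagged.mean():.0%} of sample, sensitivity {sensitivity:.0%}")
```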
Muscle Fiber Orientation Angle Dependence of the Tensile Fracture Behavior of Frozen Fish Muscle
NASA Astrophysics Data System (ADS)
Hagura, Yoshio; Okamoto, Kiyoshi; Suzuki, Kanichi; Kubota, Kiyoshi
We have proposed a new cutting method for frozen fish named "cryo-cutting". This method applies a tensile or bending fracture force to the frozen fish at appropriately low temperatures. In this paper, to clarify the cryo-cutting mechanism, we analyzed the tensile fracture behavior of frozen fish muscle. In the analysis, the frozen fish muscle was treated as a unidirectionally fiber-reinforced composite material consisting of fibers (muscle fibers) and a matrix (connective tissue). Fracture criteria for unidirectionally fiber-reinforced composite materials (the maximum stress criterion and the Tsai-Hill criterion) were used. The following results were obtained: (1) using the Tsai-Hill criterion, the dependence of the tensile fracture stress on the muscle fiber orientation angle could be calculated; (2) using the maximum stress criterion jointly with the Tsai-Hill criterion, the dependence of the fracture mode of the frozen fish muscle on the fiber orientation angle could be estimated.
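As background, the off-axis strength predictions of the two criteria named above can be sketched for a unidirectional composite loaded in tension at an angle to the fibers; the strength parameters X, Y and S are placeholders, not measured values for frozen fish muscle:

```python
import numpy as np

# Sketch: off-axis tensile strength of a unidirectional composite predicted
# by the maximum stress and Tsai-Hill criteria. X, Y, S are placeholder
# longitudinal, transverse and shear strengths (not fish-muscle data).

X, Y, S = 40.0, 4.0, 6.0   # MPa, illustrative

def tsai_hill_strength(theta_deg):
    t = np.radians(theta_deg)
    c2, s2 = np.cos(t) ** 2, np.sin(t) ** 2
    inv_sq = c2**2 / X**2 + (1.0 / S**2 - 1.0 / X**2) * s2 * c2 + s2**2 / Y**2
    return inv_sq ** -0.5

def max_stress_strength(theta_deg):
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    candidates = [X / c**2 if c > 0 else np.inf,   # fiber-direction failure
                  Y / s**2 if s > 0 else np.inf,   # transverse failure
                  S / (s * c) if s * c > 0 else np.inf]  # shear failure
    return min(candidates)

for angle in (0, 15, 30, 45, 60, 90):
    print(angle, round(tsai_hill_strength(angle), 2),
          round(max_stress_strength(angle), 2))
```

The active branch of the maximum stress criterion also indicates the expected fracture mode at each angle, which is the point made in result (2) above.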
Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong
2012-10-01
In this paper, a novel lesion segmentation method for breast ultrasound (BUS) images based on the cellular automata principle is proposed. Its energy transition function is formulated from global and local image information differences using different energy transfer strategies. First, an energy decrease strategy is used to model the spatial relation information of pixels. To model the global image information difference, a seed information comparison function is developed using an energy preserve strategy. Then, a texture information comparison function is proposed to account for local image differences in different regions, which helps handle blurry boundaries. Moreover, two neighborhood systems (the von Neumann and Moore systems) are integrated as the evolution environment, and a similarity-based criterion is used to suppress noise and reduce computational complexity. The proposed method was applied to 205 clinical BUS images to study its characteristics and functionality, and several overlapping-area error metrics and statistical evaluation methods were used to evaluate its performance. The experimental results demonstrate that the proposed method handles BUS images with blurry boundaries and low contrast well and can segment breast lesions accurately and effectively.
ERIC Educational Resources Information Center
Stewart, Kelise K.; Carr, James E.; Brandt, Charles W.; McHenry, Meade M.
2007-01-01
The present study evaluated the effects of both a traditional lecture and the conservative dual-criterion (CDC) judgment aid on the ability of 6 university students to visually inspect AB-design line graphs. The traditional lecture reliably failed to improve visual inspection accuracy, whereas the CDC method substantially improved the performance…
NASA Technical Reports Server (NTRS)
Eigen, D. J.; Fromm, F. R.; Northouse, R. A.
1974-01-01
A new clustering algorithm is presented that is based on dimensional information. The algorithm includes an inherent feature selection criterion, which is discussed. Further, a heuristic method for choosing the proper number of intervals for a frequency distribution histogram, a feature necessary for the algorithm, is presented. The algorithm, although usable as a stand-alone clustering technique, is then utilized as a global approximator. Local clustering techniques and configuration of a global-local scheme are discussed, and finally the complete global-local and feature selector configuration is shown in application to a real-time adaptive classification scheme for the analysis of remote sensed multispectral scanner data.
Universal first-order reliability concept applied to semistatic structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1994-01-01
A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and reliability selection criterion. The method provides a reliability design factor derived from the reliability criterion which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable for air and surface vehicles semistatic structures.
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...
2016-02-02
The Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
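A minimal sketch of the component-selection step: fit one- and two-component Weibull mixtures by maximum likelihood and compare AIC and BIC. The data are synthetic and the starting values are assumptions, not the paper's aggregated wind power datasets:

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import minimize

# Sketch: choose the number of Weibull mixture components by AIC and BIC
# on synthetic data drawn from a two-component Weibull mixture.

rng = np.random.default_rng(1)
x = np.concatenate([4.0 * rng.weibull(2.0, 600),     # component 1
                    12.0 * rng.weibull(1.2, 400)])   # component 2

def neg_loglik(params, k):
    if k == 1:
        c, lam = params
        pdf = weibull_min.pdf(x, c, scale=lam)
    else:  # two components with mixing weight p
        p, c1, l1, c2, l2 = params
        pdf = p * weibull_min.pdf(x, c1, scale=l1) + \
              (1 - p) * weibull_min.pdf(x, c2, scale=l2)
    return -np.sum(np.log(pdf + 1e-300))

fits = {
    1: minimize(neg_loglik, x0=[1.5, 6.0], args=(1,),
                bounds=[(0.1, 10), (0.1, 50)]),
    2: minimize(neg_loglik, x0=[0.5, 2.0, 4.0, 1.2, 12.0], args=(2,),
                bounds=[(0.01, 0.99), (0.1, 10), (0.1, 50), (0.1, 10), (0.1, 50)]),
}
n = len(x)
for k, res in fits.items():
    n_par = len(res.x)
    aic = 2 * n_par + 2 * res.fun
    bic = n_par * np.log(n) + 2 * res.fun
    print(f"{k}-component Weibull: AIC={aic:.1f}, BIC={bic:.1f}")
```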
Schiffman, Eric L.; Truelove, Edmond L.; Ohrbach, Richard; Anderson, Gary C.; John, Mike T.; List, Thomas; Look, John O.
2011-01-01
AIMS The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. An overview is presented, including Axis I and II methodology and descriptive statistics for the study participant sample. This paper details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. Validity testing for the Axis II biobehavioral instruments was based on previously validated reference standards. METHODS The Axis I reference standards were based on the consensus of 2 criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion exam reliability was also assessed within study sites. RESULTS Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion exam agreement with reference standards was excellent (k ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). CONCLUSION The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods. PMID:20213028
Promoted Combustion Test Data Re-Examined
NASA Technical Reports Server (NTRS)
Lewis, Michelle; Jeffers, Nathan; Stoltzfus, Joel
2010-01-01
Promoted combustion testing of metallic materials has been performed by NASA since the mid-1980s to determine the burn resistance of materials in oxygen-enriched environments. As the technology has advanced, the method of interpreting, presenting, and applying the promoted combustion data has advanced as well. Recently NASA changed the burn criterion from 15 cm (6 in.) to 3 cm (1.2 in.). This new burn criterion was adopted for ASTM G 124, Standard Test Method for Determining the Combustion Behavior of Metallic Materials in Oxygen-Enriched Atmospheres. Its effect on the test data and the latest method to display the test data will be discussed. Two specific examples that illustrate how this new criterion affects the burn/no-burn thresholds of metal alloys will also be presented.
Procedures for Constructing and Using Criterion-Referenced Performance Tests.
ERIC Educational Resources Information Center
Campbell, Clifton P.; Allender, Bill R.
1988-01-01
Criterion-referenced performance tests (CRPT) provide a realistic method for objectively measuring task proficiency against predetermined attainment standards. This article explains the procedures of constructing, validating, and scoring CRPTs and includes a checklist for a welding test. (JOW)
Simulation of Thermal Neutron Transport Processes Directly from the Evaluated Nuclear Data Files
NASA Astrophysics Data System (ADS)
Androsenko, P. A.; Malkov, M. R.
The main idea of the method proposed in this paper is to directly extract the required information for Monte Carlo calculations from nuclear data files. The method being developed allows the data obtained from libraries to be utilized directly and seems to be the most accurate technique. Direct simulation of neutron scattering in the thermal energy range using File 7 of the ENDF-6 format within the BRAND code system has been achieved. The simulation algorithms have been verified using the χ² criterion.
NASA Astrophysics Data System (ADS)
Bai, F.; Gagar, D.; Foote, P.; Zhao, Y.
2017-02-01
Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the time difference of the signal generated by the damage event arriving at different sensors in an array is essential in performing localisation. Currently, this is determined using a fixed threshold which is particularly prone to errors when not set to optimal values. This paper presents three new methods for determining the onset of AE signals without the need for a predetermined threshold. The performance of the techniques is evaluated using AE signals generated during fatigue crack growth and compared to the established Akaike Information Criterion (AIC) and fixed threshold methods. It was found that the 1D location accuracy of the new methods was within the range of < 1 - 7.1 % of the monitored region compared to 2.7% for the AIC method and a range of 1.8-9.4% for the conventional Fixed Threshold method at different threshold levels.
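For reference, the established AIC onset picker mentioned above is commonly written in the Maeda form, sketched below on a synthetic waveform (this standard formulation is assumed here, not taken from the paper):

```python
import numpy as np

# Sketch of the AIC onset picker (Maeda form):
# AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])),
# with the onset taken at the minimum. The waveform below is synthetic,
# not a recorded acoustic emission signal.

def aic_onset(x):
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):            # keep both segments non-trivial
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        aic[k] = k * np.log(v1 + 1e-20) + (n - k - 1) * np.log(v2 + 1e-20)
    return int(np.argmin(aic))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.05, 400)
x[250:] += np.sin(0.3 * np.arange(150))  # signal arrives at sample 250
print("estimated onset sample:", aic_onset(x))
```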
Digital focusing of OCT images based on scalar diffraction theory and information entropy
Liu, Guozhong; Zhi, Zhongwei; Wang, Ruikang K.
2012-01-01
This paper describes a digital method that is capable of automatically focusing optical coherence tomography (OCT) en face images without prior knowledge of the point spread function of the imaging system. The method utilizes a scalar diffraction model to simulate wave propagation from out-of-focus scatter to the focal plane, from which the propagation distance between the out-of-focus plane and the focal plane is determined automatically via an image-definition-evaluation criterion based on information entropy theory. By use of the proposed approach, we demonstrate that the lateral resolution close to that at the focal plane can be recovered from the imaging planes outside the depth of field region with minimal loss of resolution. Fresh onion tissues and mouse fat tissues are used in the experiments to show the performance of the proposed method. PMID:23162717
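A toy sketch of an entropy-based image-definition criterion of the kind described: normalize the intensity to a probability distribution and compute its Shannon entropy, with lower entropy indicating a sharper image. This is a generic metric, not the authors' exact formulation, and it omits the scalar-diffraction propagation step:

```python
import numpy as np

# Toy sketch of an information-entropy image-definition criterion: sharper
# (more concentrated) intensity distributions give lower entropy. Generic
# metric only; the diffraction propagation step is not shown.

def image_entropy(img):
    p = np.abs(img).astype(float)
    p /= p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

sharp = np.zeros((64, 64))
sharp[30:34, 30:34] = 1.0          # energy concentrated in a few pixels
blurred = np.ones((64, 64))        # energy spread over the whole frame
print(image_entropy(sharp), image_entropy(blurred))  # sharp < blurred
```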
NASA Astrophysics Data System (ADS)
Massiot, Cécile; Townend, John; Nicol, Andrew; McNamara, David D.
2017-08-01
Acoustic borehole televiewer (BHTV) logs provide measurements of fracture attributes (orientations, thickness, and spacing) at depth. Orientation, censoring, and truncation sampling biases similar to those described for one-dimensional outcrop scanlines, and other logging or drilling artifacts specific to BHTV logs, can affect the interpretation of fracture attributes from BHTV logs. K-means, fuzzy K-means, and agglomerative clustering methods provide transparent means of separating fracture groups on the basis of their orientation. Fracture spacing is calculated for each of these fracture sets. Maximum likelihood estimation using truncated distributions permits the fitting of several probability distributions to the fracture attribute data sets within truncation limits, which can then be extrapolated over the entire range where they naturally occur. Akaike Information Criterion (AIC) and Schwartz Bayesian Criterion (SBC) statistical information criteria rank the distributions by how well they fit the data. We demonstrate these attribute analysis methods with a data set derived from three BHTV logs acquired from the high-temperature Rotokawa geothermal field, New Zealand. Varying BHTV log quality reduces the number of input data points, but careful selection of the quality levels where fractures are deemed fully sampled increases the reliability of the analysis. Spacing data analysis comprising up to 300 data points and spanning three orders of magnitude can be approximated similarly well (similar AIC rankings) with several distributions. Several clustering configurations and probability distributions can often characterize the data at similar levels of statistical criteria. Thus, several scenarios should be considered when using BHTV log data to constrain numerical fracture models.
Criterion distances and environmental correlates of active commuting to school in children
2011-01-01
Background Active commuting to school can contribute to daily physical activity levels in children. Insight into the determinants of active commuting is needed, to promote such behavior in children living within a feasible commuting distance from school. This study determined feasible distances for walking and cycling to school (criterion distances) in 11- to 12-year-old Belgian children. For children living within these criterion distances from school, the correlation between parental perceptions of the environment, the number of motorized vehicles per family and the commuting mode (active/passive) to school was investigated. Methods Parents (n = 696) were contacted through 44 randomly selected classes of the final year (sixth grade) in elementary schools in East- and West-Flanders. Parental environmental perceptions were obtained using the parent version of Neighborhood Environment Walkability Scale for Youth (NEWS-Y). Information about active commuting to school was obtained using a self-reported questionnaire for parents. Distances from the children's home to school were objectively measured with Routenet online route planner. Criterion distances were set at the distance in which at least 85% of the active commuters lived. After the determination of these criterion distances, multilevel analyses were conducted to determine correlates of active commuting to school within these distances. Results Almost sixty percent (59.3%) of the total sample commuted actively to school. Criterion distances were set at 1.5 kilometers for walking and 3.0 kilometers for cycling. In the range of 2.01 - 2.50 kilometers household distance from school, the number of passive commuters exceeded the number of active commuters. For children who were living less than 3.0 kilometers away from school, only perceived accessibility by the parents was positively associated with active commuting to school. Within the group of active commuters, a longer distance to school was associated with more cycling to school compared to walking to school. Conclusions Household distance from school is an important correlate of transport mode to school in children. Interventions to promote active commuting in 11-12 year olds should be focusing on children who are living within the criterion distance of 3.0 kilometers from school by improving the accessibility en route from children's home to school. PMID:21831276
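The criterion-distance rule described above (the distance within which at least 85% of active commuters live) can be read as an 85th percentile of active commuters' distances; a short sketch with made-up distances, under that interpretation:

```python
import numpy as np

# Sketch of the criterion-distance rule, interpreted as the 85th percentile
# of active commuters' household distances from school. Distances are made up.

walk_km = np.array([0.3, 0.5, 0.7, 0.9, 1.0, 1.1, 1.2, 1.4, 1.6, 2.2])
cycle_km = np.array([0.8, 1.2, 1.6, 2.0, 2.3, 2.6, 2.9, 3.1, 3.4, 4.8])

walk_criterion = np.percentile(walk_km, 85)
cycle_criterion = np.percentile(cycle_km, 85)
print(f"walking criterion ~{walk_criterion:.1f} km, "
      f"cycling criterion ~{cycle_criterion:.1f} km")
```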
Progressive Failure Analysis Methodology for Laminated Composite Structures
NASA Technical Reports Server (NTRS)
Sleight, David W.
1999-01-01
A progressive failure analysis method has been developed for predicting the failure of laminated composite structures under geometrically nonlinear deformations. The progressive failure analysis uses C¹ shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms, and several options are available to degrade the material properties after failure. The progressive failure analysis method is implemented in the COMET finite element analysis code and can predict the damage and response of laminated composite structures from initial loading to final failure. The different failure criteria and material degradation methods are compared and assessed by performing analyses of several laminated composite structures. Results from the progressive failure method show good correlation with the existing test data, except in structural applications where interlaminar stresses are important, since these may cause failure mechanisms such as debonding or delamination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumgartner, S.; Bieli, R.; Bergmann, U. C.
2012-07-01
An overview is given of existing CPR design criteria and the methods used in BWR reload analysis to evaluate the impact of channel bow on CPR margins. Potential weaknesses in today's methodologies are discussed. Westinghouse in collaboration with KKL and Axpo - operator and owner of the Leibstadt NPP - has developed an optimized CPR methodology based on a new criterion to protect against dryout during normal operation and with a more rigorous treatment of channel bow. The new steady-state criterion is expressed in terms of an upper limit of 0.01 for the dryout failure probability per year. This is considered a meaningful and appropriate criterion that can be directly related to the probabilistic criteria set up for the analyses of Anticipated Operational Occurrences (AOOs) and accidents. In the Monte Carlo approach, statistical modeling of channel bow and an accurate evaluation of CPR response functions allow the associated CPR penalties to be included directly in the plant SLMCPR and OLMCPR in a best-estimate manner. In this way, the treatment of channel bow is equivalent to all other uncertainties affecting CPR. Emphasis is put on quantifying the statistical distribution of channel bow throughout the core using measurement data. The optimized CPR methodology has been implemented in the Westinghouse Monte Carlo code, McSLAP. The methodology improves the quality of dryout safety assessments by supplying more valuable information and better control of conservatisms in establishing operational limits for CPR. The methodology is demonstrated with application examples from the introduction at KKL. (authors)
Varying the valuating function and the presentable bank in computerized adaptive testing.
Barrada, Juan Ramón; Abad, Francisco José; Olea, Julio
2011-05-01
In computerized adaptive testing, the most commonly used valuating function is the Fisher information function. When the goal is to keep item bank security at a maximum, the valuating function that seems most convenient is the matching criterion, valuating the distance between the estimated trait level and the point where the maximum of the information function is located. Recently, it has been proposed not to keep the same valuating function constant for all the items in the test. In this study we expand the idea of combining the matching criterion with the Fisher information function. We also manipulate the number of strata into which the bank is divided. We find that the manipulation of the number of items administered with each function makes it possible to move from the pole of high accuracy and low security to the opposite pole. It is possible to greatly improve item bank security with much fewer losses in accuracy by selecting several items with the matching criterion. In general, it seems more appropriate not to stratify the bank.
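To make the two valuating functions concrete, a small sketch for a 2PL item bank: Fisher information versus the matching criterion, which picks the item whose maximum-information point (the difficulty b for the 2PL) lies closest to the current trait estimate. The item parameters and trait estimate are invented:

```python
import numpy as np

# Sketch of two item-selection valuating functions under a 2PL model:
# Fisher information a^2 * P * (1 - P), versus the matching criterion,
# which selects the item whose difficulty b is closest to the current
# trait estimate. Item bank values are invented.

a = np.array([1.8, 1.2, 0.9, 1.5, 2.0])      # discriminations
b = np.array([-1.0, -0.2, 0.3, 0.8, 1.5])    # difficulties
theta_hat = 0.4                              # current trait estimate

def fisher_info(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

fisher_pick = int(np.argmax(fisher_info(theta_hat)))
matching_pick = int(np.argmin(np.abs(b - theta_hat)))
print("Fisher information selects item", fisher_pick)
print("matching criterion selects item", matching_pick)
```

With these invented parameters the two rules pick different items, which is the kind of divergence that drives the accuracy/security trade-off discussed above.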
Bayesian meta-analysis of Cronbach's coefficient alpha to evaluate informative hypotheses.
Okada, Kensuke
2015-12-01
This paper proposes a new method to evaluate informative hypotheses for meta-analysis of Cronbach's coefficient alpha using a Bayesian approach. The coefficient alpha is one of the most widely used reliability indices. In meta-analyses of reliability, researchers typically form specific informative hypotheses beforehand, such as 'alpha of this test is greater than 0.8' or 'alpha of one form of a test is greater than the others.' The proposed method enables direct evaluation of these informative hypotheses. To this end, a Bayes factor is calculated to evaluate the informative hypothesis against its complement. It allows researchers to summarize the evidence provided by previous studies in favor of their informative hypothesis. The proposed approach can be seen as a natural extension of the Bayesian meta-analysis of coefficient alpha recently proposed in this journal (Brannick and Zhang, 2013). The proposed method is illustrated through two meta-analyses of real data that evaluate different kinds of informative hypotheses on superpopulation: one is that alpha of a particular test is above the criterion value, and the other is that alphas among different test versions have ordered relationships. Informative hypotheses are supported from the data in both cases, suggesting that the proposed approach is promising for application. Copyright © 2015 John Wiley & Sons, Ltd.
SU-F-T-272: Patient Specific Quality Assurance of Prostate VMAT Plans with Portal Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darko, J; Osei, E; University of Waterloo, Waterloo, ON
Purpose: To evaluate the effectiveness of using the Portal Dosimetry (PD) method for patient-specific quality assurance of prostate VMAT plans. Methods: As per institutional protocol, all VMAT plans were measured using the Varian Portal Dosimetry (PD) method. A gamma evaluation criterion of 3%-3mm with a minimum area gamma pass rate (gamma <1) of 95% is used clinically for all plans. We retrospectively evaluated the portal dosimetry results for 170 prostate patients treated with the VMAT technique. Three sets of criteria were adopted for re-evaluating the measurements: 3%-3mm, 2%-2mm and 1%-1mm. For all criteria, two areas, Field+1cm and MLC-CIAO, were analysed. To ascertain the effectiveness of the portal dosimetry technique in determining the delivery accuracy of prostate VMAT plans, 10 patients previously measured with portal dosimetry were randomly selected and their measurements repeated using the ArcCHECK method. The same criteria used in the analysis of PD were used for the ArcCHECK measurements. Results: All patient plans reviewed met the institutional criteria for area gamma pass rate. Overall, the gamma pass rate (gamma <1) decreases for the 3%-3mm, 2%-2mm and 1%-1mm criteria. For each criterion the pass rate was significantly reduced when the MLC-CIAO was used instead of Field+1cm. There was a noticeable change in sensitivity for MLC-CIAO with the 2%-2mm criterion and a much more significant reduction at 1%-1mm. Comparable results were obtained for the ArcCHECK measurements. Although differences were observed between the clockwise and counterclockwise plans in both the PD and ArcCHECK measurements, these were not deemed statistically significant. Conclusion: This work demonstrates that the Portal Dosimetry technique can be effectively used for quality assurance of VMAT plans. The results obtained show sensitivity similar to ArcCHECK. To reveal certain delivery inaccuracies, using a combination of criteria may provide an effective way to improve the overall sensitivity of PD. Funding provided in part by the Prostate Ride for Dad, Kitchener-Waterloo, Canada.
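For context, the gamma evaluation referred to above combines a dose-difference tolerance with a distance-to-agreement (DTA); a simplified 1D sketch, assuming a global dose criterion and synthetic profiles rather than clinical portal images:

```python
import numpy as np

# Simplified 1D sketch of the gamma evaluation used for plan QA: for each
# evaluated point, gamma is the minimum over reference points of
# sqrt((dx/DTA)^2 + (dDose/tolerance)^2); a point passes if gamma <= 1.
# Profiles are synthetic and a global dose criterion is assumed.

def gamma_pass_rate(x, dose_ref, dose_eval, dd=0.03, dta_mm=3.0):
    tol = dd * dose_ref.max()                       # global dose tolerance
    passes = []
    for xe, de in zip(x, dose_eval):
        dist = (x - xe) / dta_mm
        diff = (dose_ref - de) / tol
        gamma = np.sqrt(dist**2 + diff**2).min()
        passes.append(gamma <= 1.0)
    return np.mean(passes)

x = np.arange(0.0, 100.0, 1.0)                      # positions in mm
dose_ref = np.exp(-((x - 50) / 20.0) ** 2)          # synthetic reference
dose_eval = np.exp(-((x - 51) / 20.0) ** 2) * 1.01  # slightly shifted/scaled
print(f"gamma pass rate (3%/3mm): {gamma_pass_rate(x, dose_ref, dose_eval):.1%}")
```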
Heuristic Bayesian segmentation for discovery of coexpressed genes within genomic regions.
Pehkonen, Petri; Wong, Garry; Törönen, Petri
2010-01-01
Segmentation aims to separate homogeneous areas from the sequential data, and plays a central role in data mining. It has applications ranging from finance to molecular biology, where bioinformatics tasks such as genome data analysis are active application fields. In this paper, we present a novel application of segmentation in locating genomic regions with coexpressed genes. We aim at automated discovery of such regions without requirement for user-given parameters. In order to perform the segmentation within a reasonable time, we use heuristics. Most of the heuristic segmentation algorithms require some decision on the number of segments. This is usually accomplished by using asymptotic model selection methods like the Bayesian information criterion. Such methods are based on some simplification, which can limit their usage. In this paper, we propose a Bayesian model selection to choose the most proper result from heuristic segmentation. Our Bayesian model presents a simple prior for the segmentation solutions with various segment numbers and a modified Dirichlet prior for modeling multinomial data. We show with various artificial data sets in our benchmark system that our model selection criterion has the best overall performance. The application of our method in yeast cell-cycle gene expression data reveals potential active and passive regions of the genome.
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo
2014-04-15
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-11-18
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Quantifying Human Movement Using the Movn Smartphone App: Validation and Field Study
2017-01-01
Background The use of embedded smartphone sensors offers opportunities to measure physical activity (PA) and human movement. Big data—which includes billions of digital traces—offers scientists a new lens to examine PA in fine-grained detail and allows us to track people’s geocoded movement patterns to determine their interaction with the environment. Objective The objective of this study was to examine the validity of the Movn smartphone app (Moving Analytics) for collecting PA and human movement data. Methods The criterion and convergent validity of the Movn smartphone app for estimating energy expenditure (EE) were assessed in both laboratory and free-living settings, compared with indirect calorimetry (criterion reference) and a stand-alone accelerometer that is commonly used in PA research (GT1m, ActiGraph Corp, convergent reference). A supporting cross-validation study assessed the consistency of activity data when collected across different smartphone devices. Global positioning system (GPS) and accelerometer data were integrated with geographical information software to demonstrate the feasibility of geospatial analysis of human movement. Results A total of 21 participants contributed to linear regression analysis to estimate EE from Movn activity counts (standard error of estimation [SEE]=1.94 kcal/min). The equation was cross-validated in an independent sample (N=42, SEE=1.10 kcal/min). During laboratory-based treadmill exercise, EE from Movn was comparable to calorimetry (bias=0.36 [−0.07 to 0.78] kcal/min, t82=1.66, P=.10) but overestimated as compared with the ActiGraph accelerometer (bias=0.93 [0.58-1.29] kcal/min, t89=5.27, P<.001). The absolute magnitude of criterion biases increased as a function of locomotive speed (F1,4=7.54, P<.001) but was relatively consistent for the convergent comparison (F1,4=1.26, P<.29). Furthermore, 95% limits of agreement were consistent for criterion and convergent biases, and EE from Movn was strongly correlated with both reference measures (criterion r=.91, convergent r=.92, both P<.001). Movn overestimated EE during free-living activities (bias=1.00 [0.98-1.02] kcal/min, t6123=101.49, P<.001), and biases were larger during high-intensity activities (F3,6120=1550.51, P<.001). In addition, 95% limits of agreement for convergent biases were heterogeneous across free-living activity intensity levels, but Movn and ActiGraph measures were strongly correlated (r=.87, P<.001). Integration of GPS and accelerometer data within a geographic information system (GIS) enabled creation of individual temporospatial maps. Conclusions The Movn smartphone app can provide valid passive measurement of EE and can enrich these data with contextualizing temporospatial information. Although enhanced understanding of geographic and temporal variation in human movement patterns could inform intervention development, it also presents challenges for data processing and analytics. PMID:28818819
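The bias and 95% limits-of-agreement figures reported above come from a Bland-Altman style analysis; a brief sketch with invented paired energy-expenditure values:

```python
import numpy as np

# Sketch of a Bland-Altman style agreement analysis: mean bias and 95% limits
# of agreement between an app estimate and a reference measure. The paired
# energy-expenditure values below are invented.

app_kcal = np.array([3.1, 4.0, 5.2, 6.8, 7.9, 9.4, 10.8])
calorimetry_kcal = np.array([2.8, 3.9, 4.9, 6.1, 7.5, 8.6, 9.9])

diff = app_kcal - calorimetry_kcal
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(f"bias = {bias:.2f} kcal/min, 95% LoA = ({loa[0]:.2f}, {loa[1]:.2f})")
```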
Gaudin, Valérie
2017-09-01
Screening methods are used as a first-line approach to detect the presence of antibiotic residues in food of animal origin. The validation process guarantees that the method is fit-for-purpose, suited to regulatory requirements, and provides evidence of its performance. This article is focused on intra-laboratory validation. The first step in validation is characterisation of performance, and the second step is the validation itself with regard to pre-established criteria. The validation approaches can be absolute (a single method) or relative (comparison of methods), overall (combination of several characteristics in one) or criterion-by-criterion. Various approaches to validation, in the form of regulations, guidelines or standards, are presented and discussed to draw conclusions on their potential application for different residue screening methods, and to determine whether or not they reach the same conclusions. The approach by comparison of methods is not suitable for screening methods for antibiotic residues. The overall approaches, such as probability of detection (POD) and accuracy profile, are increasingly used in other fields of application. They may be of interest for screening methods for antibiotic residues. Finally, the criterion-by-criterion approach (Decision 2002/657/EC and of European guideline for the validation of screening methods), usually applied to the screening methods for antibiotic residues, introduced a major characteristic and an improvement in the validation, i.e. the detection capability (CCβ). In conclusion, screening methods are constantly evolving, thanks to the development of new biosensors or liquid chromatography coupled to tandem-mass spectrometry (LC-MS/MS) methods. There have been clear changes in validation approaches these last 20 years. Continued progress is required and perspectives for future development of guidelines, regulations and standards for validation are presented here.
Bayesian Decision Tree for the Classification of the Mode of Motion in Single-Molecule Trajectories
Türkcan, Silvan; Masson, Jean-Baptiste
2013-01-01
Membrane proteins move in heterogeneous environments with spatially (sometimes temporally) varying friction and with biochemical interactions with various partners. It is important to reliably distinguish different modes of motion to improve our knowledge of the membrane architecture and to understand the nature of interactions between membrane proteins and their environments. Here, we present an analysis technique for single molecule tracking (SMT) trajectories that can determine the preferred model of motion that best matches observed trajectories. The method is based on Bayesian inference to calculate the posteriori probability of an observed trajectory according to a certain model. Information theory criteria, such as the Bayesian information criterion (BIC), the Akaike information criterion (AIC), and modified AIC (AICc), are used to select the preferred model. The considered group of models includes free Brownian motion, and confined motion in 2nd or 4th order potentials. We determine the best information criteria for classifying trajectories. We tested its limits through simulations matching large sets of experimental conditions and we built a decision tree. This decision tree first uses the BIC to distinguish between free Brownian motion and confined motion. In a second step, it classifies the confining potential further using the AIC. We apply the method to experimental Clostridium Perfingens -toxin (CPT) receptor trajectories to show that these receptors are confined by a spring-like potential. An adaptation of this technique was applied on a sliding window in the temporal dimension along the trajectory. We applied this adaptation to experimental CPT trajectories that lose confinement due to disaggregation of confining domains. This new technique adds another dimension to the discussion of SMT data. The mode of motion of a receptor might hold more biologically relevant information than the diffusion coefficient or domain size and may be a better tool to classify and compare different SMT experiments. PMID:24376584
Time series ARIMA models for daily price of palm oil
NASA Astrophysics Data System (ADS)
Ariff, Noratiqah Mohd; Zamhawari, Nor Hashimah; Bakar, Mohd Aftar Abu
2015-02-01
Palm oil is deemed one of the most important commodities forming the economic backbone of Malaysia. Modeling and forecasting the daily price of palm oil is of great interest for Malaysia's economic growth. In this study, time series ARIMA models are used to fit the daily price of palm oil. The Akaike Information Criterion (AIC), the Akaike Information Criterion with a correction for finite sample sizes (AICc), and the Bayesian Information Criterion (BIC) are used to compare the different ARIMA models considered. It is found that the ARIMA(1,2,1) model is suitable for the daily price of crude palm oil in Malaysia for the years 2010 to 2012.
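A minimal sketch of the order-selection step using statsmodels, comparing AIC, AICc and BIC across candidate ARIMA orders on a synthetic series (not the actual crude palm oil prices); AICc is computed manually from AIC with the small-sample correction:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Sketch: compare ARIMA orders by AIC, AICc and BIC on a synthetic price
# series. AICc = AIC + 2k(k+1)/(n-k-1) is computed manually.

rng = np.random.default_rng(42)
price = pd.Series(100 + np.cumsum(rng.normal(0, 1, 300)))  # synthetic series

candidates = [(1, 1, 0), (0, 1, 1), (1, 1, 1), (1, 2, 1)]
for order in candidates:
    res = ARIMA(price, order=order).fit()
    k, n = len(res.params), res.nobs
    aicc = res.aic + 2 * k * (k + 1) / (n - k - 1)
    print(order, f"AIC={res.aic:.1f}  AICc={aicc:.1f}  BIC={res.bic:.1f}")
```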
ERIC Educational Resources Information Center
Shriver, Edgar L.; And Others
This document furnishes a complete copy of the Test Subject's Instructions and the Test Administrator's Handbook for a battery of criterion referenced Job Task Performance Tests (JTPT) for electronic maintenance. General information is provided on soldering, Radar Set AN/APN-147(v), Radar Set Special Equipment, Radar Set Bench Test Set-Up, and…
Palinkas, Lawrence A; Horwitz, Sarah M; Green, Carla A; Wisdom, Jennifer P; Duan, Naihua; Hoagwood, Kimberly
2015-09-01
Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.
Methods for threshold determination in multiplexed assays
Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J
2014-06-24
Methods for determination of threshold values of signatures comprised in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established and a threshold for that signature is determined as a point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results together with a method for determination of a desired limit of detection of a signature in an assay are also described.
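The core of the threshold-determination idea can be sketched as follows: the false positive rate at a threshold t is the fraction of negative-sample signals exceeding t, so the threshold meeting a false positive criterion alpha is the (1 - alpha) quantile of the negative distribution. An empirical quantile stands in here for the fitted probability density function described in the patent, and the signals are synthetic:

```python
import numpy as np

# Sketch of threshold setting from negative samples: choose the threshold
# where the false positive rate curve meets the false positive criterion.
# Synthetic negative-sample signals; empirical quantile in place of a
# fitted probability density function.

rng = np.random.default_rng(7)
negative_signals = rng.normal(loc=100.0, scale=15.0, size=5000)

def threshold_for_fpr(neg, alpha):
    return np.quantile(neg, 1.0 - alpha)

for alpha in (0.05, 0.01, 0.001):
    t = threshold_for_fpr(negative_signals, alpha)
    achieved = np.mean(negative_signals > t)
    print(f"criterion {alpha:>6}: threshold {t:7.1f}, empirical FPR {achieved:.4f}")
```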
Thoughts on Information Literacy and the 21st Century Workplace.
ERIC Educational Resources Information Center
Beam, Walter R.
2001-01-01
Discusses changes in society that have led to literacy skills being a criterion for employment. Topics include reading; communication skills; writing; cognitive processes; math; computers, the Internet, and the information revolution; information needs and access; information cross-linking; information literacy; and hardware and software use. (LRW)
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc
2015-10-01
This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which were calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods which are the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variant A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
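Three of the averaging schemes discussed above can be sketched compactly: the simple arithmetic mean, AIC-based weights, and Granger-Ramanathan variant A (unconstrained least squares of the observations on the member simulations). The flows, observations and AIC values below are invented:

```python
import numpy as np

# Sketch of three model-averaging schemes: simple arithmetic mean (SAM),
# AIC-based weights, and Granger-Ramanathan variant A (ordinary least
# squares of observations on member simulations, no intercept).
# Simulated flows, observations and AIC values are invented.

rng = np.random.default_rng(3)
obs = np.sin(np.linspace(0, 6, 200)) * 10 + 20
members = np.column_stack([obs + rng.normal(0, s, 200) for s in (1.0, 2.0, 3.0)])

sam = members.mean(axis=1)                      # simple arithmetic mean

aic = np.array([410.0, 423.0, 441.0])           # hypothetical member AICs
w_aic = np.exp(-0.5 * (aic - aic.min()))
w_aic /= w_aic.sum()
aic_avg = members @ w_aic                       # AIC-weighted average

w_gra, *_ = np.linalg.lstsq(members, obs, rcond=None)
gra_avg = members @ w_gra                       # Granger-Ramanathan variant A

def nse(sim, o):
    return 1 - np.sum((o - sim) ** 2) / np.sum((o - o.mean()) ** 2)

print("NSE  SAM:", round(nse(sam, obs), 3),
      " AIC:", round(nse(aic_avg, obs), 3),
      " GRA:", round(nse(gra_avg, obs), 3))
```

Unlike the AIC weights, the Granger-Ramanathan weights are unconstrained, so they need not be positive or sum to one.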
Lee, Donghyun; Lee, Hojun
2016-01-01
Background Internet search query data reflect the attitudes of the users, using which we can measure the past orientation to commit suicide. Examinations of past orientation often highlight certain predispositions of attitude, many of which can be suicide risk factors. Objective To investigate the relationship between past orientation and suicide rate by examining Google search queries. Methods We measured the past orientation using Google search query data by comparing the search volumes of the past year and those of the future year, across the 50 US states and the District of Columbia during the period from 2004 to 2012. We constructed a panel dataset with independent variables as control variables; we then undertook an analysis using multiple ordinary least squares regression and methods that leverage the Akaike information criterion and the Bayesian information criterion. Results It was found that past orientation had a positive relationship with the suicide rate (P≤.001) and that it improves the goodness-of-fit of the model regarding the suicide rate. Unemployment rate (P≤.001 in Models 3 and 4), Gini coefficient (P≤.001), and population growth rate (P≤.001) had a positive relationship with the suicide rate, whereas the gross state product (P≤.001) showed a negative relationship with the suicide rate. Conclusions We empirically identified the positive relationship between the suicide rate and past orientation, which was measured by big data-driven Google search query. PMID:26868917
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
A survey of kernel-type estimators for copula and their applications
NASA Astrophysics Data System (ADS)
Sumarjaya, I. W.
2017-10-01
Copulas have been widely used to model nonlinear dependence structure. Main applications of copulas include areas such as finance, insurance, hydrology, rainfall to name but a few. The flexibility of copula allows researchers to model dependence structure beyond Gaussian distribution. Basically, a copula is a function that couples multivariate distribution functions to their one-dimensional marginal distribution functions. In general, there are three methods to estimate copula. These are parametric, nonparametric, and semiparametric method. In this article we survey kernel-type estimators for copula such as mirror reflection kernel, beta kernel, transformation method and local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
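As one concrete example of the estimators surveyed, a sketch of the beta-kernel copula density estimator evaluated on pseudo-observations; the data are simulated from a Gaussian copula, not the Asian stock indexes analysed in the paper, and the bandwidth is an arbitrary choice:

```python
import numpy as np
from scipy.stats import beta

# Sketch of the beta-kernel copula density estimator: average, over the
# pseudo-observations, the product of Beta(u/bw + 1, (1 - u)/bw + 1) kernels
# in each margin. Simulated data; bandwidth chosen arbitrarily.

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
x, y = z[:, 0], z[:, 1]

# Pseudo-observations: ranks scaled to the open unit interval
u_i = (np.argsort(np.argsort(x)) + 1) / (len(x) + 1.0)
v_i = (np.argsort(np.argsort(y)) + 1) / (len(y) + 1.0)

def beta_kernel_copula(u, v, bw=0.05):
    ku = beta.pdf(u_i, u / bw + 1.0, (1.0 - u) / bw + 1.0)
    kv = beta.pdf(v_i, v / bw + 1.0, (1.0 - v) / bw + 1.0)
    return np.mean(ku * kv)

print(beta_kernel_copula(0.5, 0.5), beta_kernel_copula(0.9, 0.1))
```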
NASA Astrophysics Data System (ADS)
Park, Ju H.; Kwon, O. M.
In the letter, the global asymptotic stability of bidirectional associative memory (BAM) neural networks with delays is investigated. The delay is assumed to be time-varying and belongs to a given interval. A novel stability criterion for the stability is presented based on the Lyapunov method. The criterion is represented in terms of linear matrix inequality (LMI), which can be solved easily by various optimization algorithms. Two numerical examples are illustrated to show the effectiveness of our new result.
Automatic control systems satisfying certain general criterions on transient behavior
NASA Technical Reports Server (NTRS)
Boksenbom, Aaron S; Hood, Richard
1952-01-01
An analytic method for the design of automatic controls is developed that starts from certain arbitrary criterions on the behavior of the controlled system and gives those physically realizable equations that the control system can follow in order to realize this behavior. The criterions used are developed in the form of certain time integrals. General results are shown for systems of second order and of any number of degrees of freedom. Detailed examples for several cases in the control of a turbojet engine are presented.
Microcephaly in north-east Brazil: a retrospective study on neonates born between 2012 and 2015
Soares de Araújo, Juliana Sousa; Regis, Cláudio Teixeira; Gomes, Renata Grigório Silva; Tavares, Thiago Ribeiro; Rocha dos Santos, Cícera; Assunção, Patrícia Melo; Nóbrega, Renata Valéria; Pinto, Diana de Fátima Alves; Bezerra, Bruno Vinícius Dantas
2016-01-01
Abstract Objective To assess the number of children born with microcephaly in the State of Paraíba, north-east Brazil. Methods We contacted 21 maternity centres belonging to a paediatric cardiology network, with access to information regarding more than 100 000 neonates born between 1 January 2012 and 31 December 2015. For 10% of these neonates, nurses were requested to retrieve head circumference measurements from delivery-room books. We used three separate criteria to classify whether a neonate had microcephaly: (i) the Brazilian Ministry of Health proposed criterion: term neonates (gestational age ≥ 37 weeks) with a head circumference of less than 32 cm; (ii) Fenton curves: neonates with a head circumference of less than −3 standard deviations for age and gender; or (iii) the proportionality criterion: neonates with a head circumference of less than (height/2 + 10) ± 2. Findings Between 1 and 31 December 2015, nurses obtained data for 16 208 neonates. Depending on which criterion we used, the number of neonates with microcephaly varied from 678 to 1272 (4.2–8.2%). Two per cent (316) of the neonates fulfilled all three criteria. We observed temporal fluctuations of microcephaly prevalence from late 2012. Conclusion The numbers of microcephaly cases reported here are much higher than the 6.4 per 10 000 live births reported by the Brazilian live birth information system. The results raise questions about the notification system, the appropriateness of the diagnostic criteria and future implications for the affected children and their families. More studies are needed to understand the epidemiology and the implications for the Brazilian health system. PMID:27821886
Criteria for clinical audit of women friendly care and providers' perception in Malawi
Kongnyuy, Eugene J; van den Broek, Nynke
2008-01-01
Background There are two dimensions of quality of maternity care, namely quality of health outcomes and quality as perceived by clients. The feasibility of using clinical audit to assess and improve the quality of maternity care as perceived by women was studied in Malawi. Objective We sought to (a) establish standards for women friendly care and (b) explore attitudinal barriers which could impede the proper implementation of clinical audit. Methods We used evidence from Malawi national guidelines and World Health Organisation manuals to establish local standards for women friendly care in three districts. We also conducted a survey of health care providers to explore their attitudes towards criterion based audit. Results The standards addressed different aspects of care given to women in maternity units, namely (i) reception, (ii) attitudes towards women, (iii) respect for culture, (iv) respect for women, (v) waiting time, (vi) enabling environment, (vii) provision of information, (viii) individualised care, (ix) provision of skilled attendance at birth and emergency obstetric care, (x) confidentiality, and (xi) proper management of patient information. The health providers in Malawi generally held a favourable attitude towards clinical audit: 100.0% (54/54) agreed that criterion based audit will improve the quality of care and 92.6% believed that clinical audit is a good educational tool. However, there are concerns that criterion based audit would create a feeling of blame among providers (35.2%), and that managers would use clinical audit to identify and punish providers who fail to meet standards (27.8%). Conclusion Developing standards of maternity care that are acceptable to, and valued by, women requires consideration of both the research evidence and cultural values. Clinical audit is acceptable to health professionals in Malawi although there are concerns about its negative implications for providers. PMID:18647388
Zhang, Xujun; Pang, Yuanyuan; Cui, Mengjing; Stallones, Lorann; Xiang, Huiyun
2015-02-01
Road traffic injuries have become a major public health problem in China. This study aimed to develop statistical models for predicting road traffic deaths and to analyze seasonality of deaths in China. A seasonal autoregressive integrated moving average (SARIMA) model was used to fit the data from 2000 to 2011. Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were used to evaluate the constructed models. Autocorrelation function and partial autocorrelation function of residuals and the Ljung-Box test were used to compare the goodness-of-fit between the different models. The SARIMA model was used to forecast monthly road traffic deaths in 2012. The seasonal pattern of road traffic mortality data was statistically significant in China. The SARIMA (1, 1, 1) (0, 1, 1)12 model was the best-fitting model among various candidate models; the Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were -483.679, -475.053, and 4.937, respectively. Goodness-of-fit testing showed no autocorrelation in the residuals of the model (Ljung-Box test, Q = 4.86, P = .993). The fitted deaths using the SARIMA (1, 1, 1) (0, 1, 1)12 model for years 2000 to 2011 closely followed the observed number of road traffic deaths for the same years. The predicted and observed deaths were also very close for 2012. This study suggests that accurate forecasting of road traffic death incidence is possible using the SARIMA model. The SARIMA model applied to historical road traffic death data could provide important evidence of the burden of road traffic injuries in China. Copyright © 2015 Elsevier Inc. All rights reserved.
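As a rough illustration of the model-selection workflow described above (not the authors' code, and using synthetic monthly data in place of the Chinese mortality series), the sketch below fits a few SARIMA candidates with statsmodels, compares them by AIC/BIC, checks residual autocorrelation with the Ljung-Box test, and forecasts the next year.

```python
# Hedged sketch: SARIMA candidate comparison by AIC/BIC on a synthetic monthly series.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
idx = pd.date_range("2000-01", periods=144, freq="MS")          # 2000-2011, monthly
deaths = pd.Series(7000 + 500 * np.sin(2 * np.pi * idx.month / 12)   # placeholder data
                   + rng.normal(0, 150, len(idx)), index=idx)

candidates = [((1, 1, 1), (0, 1, 1, 12)),
              ((0, 1, 1), (0, 1, 1, 12)),
              ((1, 1, 0), (1, 1, 0, 12))]
fits = {}
for order, seasonal in candidates:
    res = SARIMAX(deaths, order=order, seasonal_order=seasonal).fit(disp=False)
    fits[(order, seasonal)] = res
    print(order, seasonal, "AIC=%.1f  BIC=%.1f" % (res.aic, res.bic))

best = min(fits.values(), key=lambda r: r.aic)    # lowest-AIC candidate
print(acorr_ljungbox(best.resid, lags=[12]))      # residual autocorrelation check
print(best.forecast(steps=12))                    # forecast the next 12 months
```

In the study, this kind of comparison singled out the SARIMA (1, 1, 1) (0, 1, 1)12 specification as the best-fitting candidate.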
de Geus, Eveline; Aalfs, Cora M; Menko, Fred H; Sijmons, Rolf H; Verdam, Mathilde G E; de Haes, Hanneke C J M; Smets, Ellen M A
2015-08-01
Despite the use of genetic services, counselees do not always share hereditary cancer information with at-risk relatives. Reasons for not informing relatives may be categorized as a lack of knowledge, motivation, and/or self-efficacy. This study aims to develop and test the psychometric properties of the Informing Relatives Inventory, a battery of instruments that intends to measure counselees' knowledge, motivation, and self-efficacy regarding the disclosure of hereditary cancer risk information to at-risk relatives. Guided by the proposed conceptual framework, existing instruments were selected and new instruments were developed. We tested the instruments' acceptability, dimensionality, reliability, and criterion-related validity in consecutive index patients visiting the Clinical Genetics department with questions regarding hereditary breast and/or ovarian cancer or colon cancer. Data from 211 index patients were included (response rate = 62%). The Informing Relatives Inventory (IRI) assesses three barriers in disclosure representing seven domains. Instruments assessing index patients' (positive) motivation and self-efficacy were acceptable and reliable and suggested good criterion-related validity. Psychometric properties of instruments assessing index patients' knowledge were disputable. These items were moderately accepted by index patients and the criterion-related validity was weaker. This study presents a first conceptual framework and associated inventory (IRI) that improves insight into index patients' barriers regarding the disclosure of genetic cancer information to at-risk relatives. Instruments assessing (positive) motivation and self-efficacy proved to be reliable measurements. Measuring index patients' knowledge appeared to be more challenging. Further research is necessary to ensure IRI's dimensionality and sensitivity to change.
Hierarchical semi-numeric method for pairwise fuzzy group decision making.
Marimin, M; Umano, M; Hatono, I; Tamura, H
2002-01-01
Gradual improvements to a single-level semi-numeric method, i.e., representation of linguistic-label preferences by fuzzy-set computation for pairwise fuzzy group decision making, are summarized. The method is extended to solve pairwise fuzzy group decision-making problems with a multiple-criteria hierarchical structure. The problems are hierarchically structured into focus, criteria, and alternatives. Decision makers express their evaluations of criteria and alternatives based on each criterion by using linguistic labels. The labels are converted into and processed as triangular fuzzy numbers (TFNs). Evaluations of criteria yield relative criteria weights. Evaluations of the alternatives, based on each criterion, yield a degree of preference for each alternative or a degree of satisfaction for each preference value. By using a neat ordered weighted average (OWA) or a fuzzy weighted average operator, solutions obtained based on each criterion are aggregated into final solutions. The hierarchical semi-numeric method is suitable for solving larger and more complex pairwise fuzzy group decision-making problems. The proposed method has been verified and applied to solve some real cases and is compared to Saaty's (1996) analytic hierarchy process (AHP) method.
Huber, J; Hüsler, J; Dieppe, P; Günther, K P; Dreinhöfer, K; Judge, A
2016-03-01
To validate a new method to identify responders (relative effect per patient (REPP) > 0.2), using the OMERACT-OARSI criteria as the gold standard, in a large multicentre sample. The REPP ([score before treatment - score after treatment]/score before treatment) was calculated for 845 patients of a large multicentre European cohort study of total hip replacement (THR). Patients with a REPP > 0.2 were defined as responders. The responder rate was compared to the gold standard (OMERACT-OARSI criteria) using receiver operating characteristic (ROC) curve analysis for sensitivity, specificity and percentage of appropriately classified patients. With the criterion REPP > 0.2, 85.4% of the patients were classified as responders; applying the OMERACT-OARSI criteria, 85.7% were. The new method had 98.8% sensitivity and 94.2% specificity, and 98.1% of the patients were correctly classified compared to the gold standard. The external validation showed high sensitivity and specificity of the new criterion for identifying a responder compared to the gold standard method. It is simple and, being a single classification criterion, has no uncertainties. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Perekhodtseva, E. V.
2009-09-01
The development of a successful method for forecasting storm winds, including squalls and tornadoes, and heavy rainfall, which often result in human and material losses, would allow proper measures to be taken to protect people and prevent the destruction of buildings. A successful forecast issued well in advance (from 12 to 48 hours) makes it possible to reduce losses. Until recently, prediction of these phenomena has been a very difficult problem for forecasters, and the existing graphical and calculation methods still depend on the subjective decisions of an operator. At present there is no hydrodynamic model in Russia for forecasting maximal precipitation and wind velocities V > 25 m/s, so the main tools of objective forecasting are statistical methods that use the dependence of these phenomena on a number of atmospheric parameters (predictors). Statistical decision rules for the alternative and probabilistic forecasts of these events were obtained in accordance with the "perfect prognosis" concept using objective analysis data. For this purpose, separate training samples for the presence and absence of storm wind and heavy rainfall were assembled automatically, containing the values of forty physically substantiated potential predictors. An empirical statistical method was then applied that involves diagonalization of the mean correlation matrix R of the predictors and extraction of diagonal blocks of strongly correlated predictors; in this way the most informative predictors were selected without loss of information. The statistical decision rules U(X) for diagnosis and prognosis of the phenomena were calculated for the chosen informative vector of predictors, using the Mahalanobis distance criterion and the Vapnik-Chervonenkis minimum-entropy criterion for predictor selection. The successful development of hydrodynamic models for short-term forecasting and the improvement of 36-48 h forecasts of pressure, temperature and other parameters allowed the prognostic fields of those models to be used to calculate the discriminant functions at the nodes of a 150 x 150 km grid, together with the probabilities P of dangerous wind, and thus to obtain fully automated forecasts. To convert to an alternative (yes/no) forecast, empirical threshold values are proposed for each phenomenon and a lead time of 36 hours. According to the Pirsey-Obukhov criterion T, the skill of these automated statistical forecasts of squalls and tornadoes 36-48 hours ahead and of heavy rainfall in the warm season over Italy, Spain and the Balkan countries is T = 1 - a - b = 0.54 to 0.78 in the author's experiments. Many examples of very successful forecasts of summer storm winds and heavy rainfall over Italy and Spain are presented in this report. The same decision rules were also applied to forecasts of these phenomena during the cold period of this year, when heavy snowfalls and storm winds were observed very often over Spain and Italy, and these forecasts were likewise successful.
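The abstract does not give the exact form of the decision rules, so the following is only a generic two-class linear discriminant sketch under assumed Gaussian classes: class means and a pooled covariance are estimated from hypothetical "event present"/"event absent" training samples, the Mahalanobis distance between the classes indicates how informative the predictor set is, and thresholding the discriminant function U(x) gives the alternative (yes/no) forecast.

```python
# Hedged illustration, not the author's operational forecasting code.
import numpy as np

rng = np.random.default_rng(1)
X_pos = rng.normal(1.0, 1.0, size=(200, 5))   # placeholder "event occurred" predictor vectors
X_neg = rng.normal(0.0, 1.0, size=(400, 5))   # placeholder "event absent" predictor vectors

m1, m0 = X_pos.mean(axis=0), X_neg.mean(axis=0)
S = ((len(X_pos) - 1) * np.cov(X_pos, rowvar=False)
     + (len(X_neg) - 1) * np.cov(X_neg, rowvar=False)) / (len(X_pos) + len(X_neg) - 2)
S_inv = np.linalg.inv(S)                       # pooled covariance inverse

d2 = (m1 - m0) @ S_inv @ (m1 - m0)             # squared Mahalanobis distance between classes
print("Mahalanobis distance between classes:", np.sqrt(d2))

w = S_inv @ (m1 - m0)                          # discriminant direction
threshold = w @ (m1 + m0) / 2                  # midpoint decision threshold

def U(x):
    """Discriminant function U(x); forecast the event when U(x) exceeds the threshold."""
    return w @ x

x_new = rng.normal(0.5, 1.0, size=5)
print("forecast 'event' for x_new:", U(x_new) > threshold)
```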
Janknegt, R; Steenhoek, A
1997-04-01
Rational drug selection for formulary purposes is important. Besides rational selection criteria, other factors play a role in drug decision making, such as emotional, personal financial and even unconscious criteria. It is agreed that these factors should be excluded as much as possible in the decision making process. A model for drug decision making for formulary purposes is described, the System of Objectified Judgement Analysis (SOJA). In the SOJA method, selection criteria for a given group of drugs are prospectively defined and the extent to which each drug fulfils the requirements for each criterion is determined. Each criterion is given a relative weight, i.e. the more important a given selection criterion is considered, the higher the relative weight. Both the relative scores for each drug per selection criterion and the relative weight of each criterion are determined by a panel of experts in this field. The following selection criteria are applied in all SOJA scores: clinical efficacy, incidence and severity of adverse effects, dosage frequency, drug interactions, acquisition cost, documentation, pharmacokinetics and pharmaceutical aspects. Besides these criteria, group specific criteria are also used, such as development of resistance when a SOJA score was made for antimicrobial agents. The relative weight that is assigned to each criterion will always be a subject of discussion. Therefore, interactive software programs for use on a personal computer have been developed, in which the user of the system may enter their own personal relative weight to each selection criterion and make their own personal SOJA score. The main advantage of the SOJA method is that all nonrational selection criteria are excluded and that drug decision making is based solely on rational criteria. The use of the interactive SOJA discs makes the decision process fully transparent as it becomes clear on which criteria and weighting decisions are based. We have seen that the use of this method for drug decision making greatly aids the discussion in the formulary committee, as discussion becomes much more concrete. The SOJA method is time dependent. Documentation on most products is still increasing and the score for this criterion will therefore change continuously. New products are introduced and prices are also subject to change. To overcome the time-dependence of the SOJA method, regular updates of interactive software programs are being made, in which changes in acquisition cost, documentation or a different weighting of criteria are included, as well as newly introduced products. The possibility of changing the official acquisition cost into the actual purchasing costs for the hospital in question provides a tailor-made interactive program.
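A minimal sketch of the weighted-score arithmetic behind a SOJA-style comparison (all weights and scores below are invented for illustration and are not an official SOJA score): each drug receives a relative score per selection criterion, each criterion a relative weight, and the overall score is the weighted average. Re-running with user-supplied weights mimics the interactive re-weighting described above.

```python
# Hypothetical criterion weights and per-drug scores; not actual SOJA data.
criteria_weights = {
    "clinical efficacy": 30, "adverse effects": 20, "dosage frequency": 10,
    "drug interactions": 10, "acquisition cost": 15, "documentation": 10,
    "pharmacokinetics": 3, "pharmaceutical aspects": 2,
}
drug_scores = {
    "drug A": {"clinical efficacy": 90, "adverse effects": 70, "dosage frequency": 100,
               "drug interactions": 80, "acquisition cost": 40, "documentation": 90,
               "pharmacokinetics": 80, "pharmaceutical aspects": 70},
    "drug B": {"clinical efficacy": 80, "adverse effects": 85, "dosage frequency": 60,
               "drug interactions": 90, "acquisition cost": 90, "documentation": 60,
               "pharmacokinetics": 70, "pharmaceutical aspects": 80},
}

def soja_score(scores, weights):
    """Weighted average of per-criterion scores, normalized by the total weight."""
    total_weight = sum(weights.values())
    return sum(weights[c] * scores[c] for c in weights) / total_weight

for drug, scores in drug_scores.items():
    print(drug, round(soja_score(scores, criteria_weights), 1))
```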
On the measurement of criterion noise in signal detection theory: the case of recognition memory.
Kellen, David; Klauer, Karl Christoph; Singmann, Henrik
2012-07-01
Traditional approaches within the framework of signal detection theory (SDT; Green & Swets, 1966), especially in the field of recognition memory, assume that the positioning of response criteria is not a noisy process. Recent work (Benjamin, Diaz, & Wee, 2009; Mueller & Weidemann, 2008) has challenged this assumption, arguing not only for the existence of criterion noise but also for its large magnitude and substantive contribution to individuals' performance. A review of these recent approaches for the measurement of criterion noise in SDT identifies several shortcomings and confoundings. A reanalysis of Benjamin et al.'s (2009) data sets as well as the results from a new experimental method indicate that the different forms of criterion noise proposed in the recognition memory literature are of very low magnitudes, and they do not provide a significant improvement over the account already given by traditional SDT without criterion noise. Copyright 2012 APA, all rights reserved.
Dilatancy Criteria for Salt Cavern Design: A Comparison Between Stress- and Strain-Based Approaches
NASA Astrophysics Data System (ADS)
Labaune, P.; Rouabhi, A.; Tijani, M.; Blanco-Martín, L.; You, T.
2018-02-01
This paper presents a new approach for salt cavern design, based on the use of the onset of dilatancy as a design threshold. In the proposed approach, a rheological model that includes dilatancy at the constitutive level is developed, and a strain-based dilatancy criterion is defined. As compared to classical design methods that consist of simulating cavern behavior through creep laws (fitted on long-term tests) and then using a criterion (derived from short-term tests or experience) to determine the stability of the excavation, the proposed approach is consistent both with short- and long-term conditions. The new strain-based dilatancy criterion is compared to a stress-based dilatancy criterion through numerical simulations of salt caverns under cyclic loading conditions. The dilatancy zones predicted by the strain-based criterion are larger than those predicted by the stress-based criterion, which is conservative yet constructive for design purposes.
Sequential decision tree using the analytic hierarchy process for decision support in rectal cancer.
Suner, Aslı; Çelikoğlu, Can Cengiz; Dicle, Oğuz; Sökmen, Selman
2012-09-01
The aim of the study is to determine the most appropriate method for construction of a sequential decision tree in the management of rectal cancer, using various patient-specific criteria and treatments such as surgery, chemotherapy, and radiotherapy. An analytic hierarchy process (AHP) was used to determine the priorities of variables. Relevant criteria used in two decision steps and their relative priorities were established by a panel of five general surgeons. Data were collected via a web-based application and analyzed using the "Expert Choice" software specifically developed for the AHP. Consistency ratios in the AHP method were calculated for each set of judgments, and the priorities of sub-criteria were determined. A sequential decision tree was constructed for the best treatment decision process, using priorities determined by the AHP method. Consistency ratios in the AHP method were calculated for each decision step, and the judgments were considered consistent. The tumor-related criterion "presence of perforation" (0.331) and the patient-surgeon-related criterion "surgeon's experience" (0.630) had the highest priority in the first decision step. In the second decision step, the tumor-related criterion "the stage of the disease" (0.230) and the patient-surgeon-related criterion "surgeon's experience" (0.281) were the paramount criteria. The results showed some variation in the ranking of criteria between the decision steps. In the second decision step, for instance, the tumor-related criterion "presence of perforation" was just the fifth. The consistency of decision support systems largely depends on the quality of the underlying decision tree. When several choices and variables have to be considered in a decision, it is very important to determine priorities. The AHP method seems to be effective for this purpose. The decision algorithm developed by this method is more realistic and will improve the quality of the decision tree. Copyright © 2012 Elsevier B.V. All rights reserved.
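For readers unfamiliar with the AHP computations referred to above, here is a hedged sketch using an invented 3x3 pairwise comparison matrix: priorities are obtained from the principal eigenvector of the matrix, and the judgments are considered consistent when the consistency ratio CR = CI/RI stays below the conventional 0.10 threshold.

```python
# Generic AHP priority and consistency-ratio calculation; pairwise values are made up.
import numpy as np

A = np.array([[1.0,  3.0, 5.0],     # hypothetical pairwise comparisons of 3 criteria
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals[k].real                        # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
priorities = w / w.sum()                          # normalized priority vector

n = A.shape[0]
CI = (lam_max - n) / (n - 1)                      # consistency index
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
CR = CI / RI                                      # consistency ratio (accept if < 0.10)
print("priorities:", np.round(priorities, 3), "CR = %.3f" % CR)
```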
Identification and analysis of damaged or porous hair.
Hill, Virginia; Loni, Elvan; Cairns, Thomas; Sommer, Jonathan; Schaffer, Michael
2014-06-01
Cosmetic hair treatments have been referred to as 'the pitfall' of hair analysis. However, most cosmetic treatments, when applied to the hair as instructed by the product vendors, do not interfere with analysis, provided such treatments can be identified by the laboratory and the samples analyzed and reported appropriately for the condition of the hair. This paper provides methods for identifying damaged or porous hair samples using digestion rates of hair in dithiothreitol with and without proteinase K, as well as a protein measurement method applied to dithiothreitol-digested samples. Extremely damaged samples may be unsuitable for analysis. Aggressive and extended aqueous washing of hair samples is a proven method for removing or identifying externally derived drug contamination of hair. In addition to this wash procedure, we have developed an alternative wash procedure using 90% ethanol for washing damaged or porous hair. The procedure, like the aqueous wash procedure, requires analysis of the last of five washes to evaluate the effectiveness of the washing procedure. This evaluation, termed the Wash Criterion, is derived from studies of the kinetics of washing of hair samples that have been experimentally contaminated and of hair from drug users. To study decontamination methods, in vitro contaminated drug-negative hair samples were washed by both the aqueous buffer method and a 90% ethanol method. Analysis of cocaine and methamphetamine was by liquid chromatography-tandem mass spectrometry (LC/MS/MS). Porous hair samples from drug users, when washed in 90% ethanol, pass the wash criterion although they may fail the aqueous wash criterion. Those samples that fail both the ethanolic and aqueous wash criterion are not reported as positive for ingestion. Similar ratios of the metabolite amphetamine relative to methamphetamine in the last wash and the hair is an additional criterion for assessing contamination vs. ingestion of methamphetamine. Copyright © 2014 John Wiley & Sons, Ltd.
Schiffman, Eric L; Truelove, Edmond L; Ohrbach, Richard; Anderson, Gary C; John, Mike T; List, Thomas; Look, John O
2010-01-01
The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. The aim of this article is to provide an overview of the project's methodology, descriptive statistics, and data for the study participant sample. This article also details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. The Axis I reference standards were based on the consensus of two criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion examination reliability was also assessed within study sites. Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, κ = 0.53). Intrasite criterion examiner agreement with reference standards was excellent (κ ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (κ = 0.71 and 0.84, respectively). The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods.
Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith GM; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter
2015-01-01
Background Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. Objective This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. Methods We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher’s tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). Results An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Conclusions Results suggest that automated analysis and interpretation of times series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use. PMID:26254160
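AutoVAR itself is not reproduced here, but the underlying workflow can be sketched with statsmodels: fit VAR models of several lag orders to EMA-style multivariate data, compare them by the Akaike and Bayesian information criteria, and run a Granger causality test on the selected model. The variable names and data below are hypothetical.

```python
# Hedged sketch of VAR order selection and Granger testing on synthetic EMA-style data.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
n = 90                                            # e.g. 90 daily EMA measurements
activity = rng.normal(size=n).cumsum() * 0.1 + rng.normal(size=n)
mood = 0.4 * np.roll(activity, 1) + rng.normal(size=n)   # mood lags activity by one day
data = pd.DataFrame({"activity": activity, "mood": mood}).iloc[1:]

model = VAR(data)
order_sel = model.select_order(maxlags=5)         # AIC/BIC/HQIC/FPE per lag order
print(order_sel.summary())

res = model.fit(order_sel.selected_orders["aic"])
print("AIC = %.2f, BIC = %.2f" % (res.aic, res.bic))
print(res.test_causality("mood", ["activity"], kind="f").summary())
```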
A new tracer‐density criterion for heterogeneous porous media
Barth, Gilbert R.; Illangasekare, Tissa H.; Hill, Mary C.; Rajaram, Harihar
2001-01-01
Tracer experiments provide information about aquifer material properties vital for accurate site characterization. Unfortunately, density‐induced sinking can distort tracer movement, leading to an inaccurate assessment of material properties. Yet existing criteria for selecting appropriate tracer concentrations are based on analysis of homogeneous media instead of media with heterogeneities typical of field sites. This work introduces a hydraulic‐gradient correction for heterogeneous media and applies it to a criterion previously used to indicate density‐induced instabilities in homogeneous media. The modified criterion was tested using a series of two‐dimensional heterogeneous intermediate‐scale tracer experiments and data from several detailed field tracer tests. The intermediate‐scale experimental facility (10.0×1.2×0.06 m) included both homogeneous and heterogeneous (σ_ln k² = 1.22) zones. The field tracer tests were less heterogeneous (0.24 < σ_ln k² < 0.37), but measurements were sufficient to detect density‐induced sinking. Evaluation of the modified criterion using the experiments and field tests demonstrates that the new criterion appears to account for the change in density‐induced sinking due to heterogeneity. The criterion demonstrates the importance of accounting for heterogeneity to predict density‐induced sinking and differences in the onset of density‐induced sinking in two‐ and three‐dimensional systems.
Contribution of criterion A2 to PTSD screening in the presence of traumatic events.
Pereda, Noemí; Forero, Carlos G
2012-10-01
Criterion A2 according to the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV; American Psychiatric Association [APA], 1994) for posttraumatic stress disorder (PTSD) aims to assess the individual's subjective appraisal of an event, but it has been claimed that it might not be sufficiently specific for diagnostic purposes. We analyse the contribution of Criterion A2 and DSM-IV criteria to detect PTSD for the most distressing life events experienced by our subjects. Young adults (N = 1,033) reported their most distressing life events, together with PTSD criteria (Criteria A2, B, C, D, E, and F). PTSD prevalence and criterion specificity and agreement with probable diagnoses were estimated. Our results indicate 80.30% of the individuals experienced traumatic events and met one or more PTSD criteria; 13.22% of cases received a positive diagnosis of PTSD. Criterion A2 showed poor agreement with the final probable PTSD diagnosis (correlation with PTSD .13, specificity = .10); excluding it from PTSD diagnosis did not change the estimated disorder prevalence significantly. Based on these findings it appears that Criterion A2 has low specificity and provides little information to confirm a probable PTSD case. Copyright © 2012 International Society for Traumatic Stress Studies.
Gromisch, Elizabeth S; Zemon, Vance; Holtzer, Roee; Chiaravalloti, Nancy D; DeLuca, John; Beier, Meghan; Farrell, Eileen; Snyder, Stacey; Schairer, Laura C; Glukhovsky, Lisa; Botvinick, Jason; Sloan, Jessica; Picone, Mary Ann; Kim, Sonya; Foley, Frederick W
2016-10-01
Cognitive dysfunction is prevalent in multiple sclerosis. As self-reported cognitive functioning is unreliable, brief objective screening measures are needed. Utilizing widely used full-length neuropsychological tests, this study aimed to establish the criterion validity of highly abbreviated versions of the Brief Visuospatial Memory Test - Revised (BVMT-R), Symbol Digit Modalities Test (SDMT), Delis-Kaplan Executive Function System (D-KEFS) Sorting Test, and Controlled Oral Word Association Test (COWAT) in order to begin developing an MS-specific screening battery. Participants from Holy Name Medical Center and the Kessler Foundation were administered one or more of these four measures. Using test-specific criterion to identify impairment at both -1.5 and -2.0 SD, receiver-operating-characteristic (ROC) analyses of BVMT-R Trial 1, Trial 2, and Trial 1 + 2 raw data (N = 286) were run to calculate the classification accuracy of the abbreviated version, as well as the sensitivity and specificity. The same methods were used for SDMT 30-s and 60-s (N = 321), D-KEFS Sorting Free Card Sort 1 (N = 120), and COWAT letters F and A (N = 298). Using these definitions of impairment, each analysis yielded high classification accuracy (89.3 to 94.3%). BVMT-R Trial 1, SDMT 30-s, D-KEFS Free Card Sort 1, and COWAT F possess good criterion validity in detecting impairment on their respective overall measure, capturing much of the same information as the full version. Along with the first two trials of the California Verbal Learning Test - Second Edition (CVLT-II), these five highly abbreviated measures may be used to develop a brief screening battery.
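A hedged illustration of the ROC-based validation logic (synthetic scores, not the study's neuropsychological data): an abbreviated score is used to classify impairment defined by the full-length test at the -1.5 SD cutoff, and AUC, sensitivity, and specificity are read off at the Youden-optimal threshold.

```python
# Generic ROC validation of an abbreviated score against a full-length criterion.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)
full = rng.normal(0, 1, 300)                        # z-scores on the full-length test
brief = 0.85 * full + rng.normal(0, 0.4, 300)       # correlated abbreviated score
impaired = (full <= -1.5).astype(int)               # criterion: -1.5 SD cutoff

fpr, tpr, thresholds = roc_curve(impaired, -brief)  # lower brief score = more impaired
auc = roc_auc_score(impaired, -brief)
j = np.argmax(tpr - fpr)                            # Youden's J optimal cutoff
print("AUC=%.3f  sensitivity=%.2f  specificity=%.2f  cutoff=%.2f"
      % (auc, tpr[j], 1 - fpr[j], -thresholds[j]))
```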
Sindall, Paul; Lenton, John P.; Whytock, Katie; Tolfrey, Keith; Oyster, Michelle L.; Cooper, Rory A.; Goosey-Tolfrey, Victoria L.
2013-01-01
Purpose To compare the criterion validity and accuracy of a 1 Hz non-differential global positioning system (GPS) and data logger device (DL) for the measurement of wheelchair tennis court movement variables. Methods Initial validation of the DL device was performed. GPS and DL were fitted to the wheelchair and used to record distance (m) and speed (m/second) during (a) tennis field (b) linear track, and (c) match-play test scenarios. Fifteen participants were monitored at the Wheelchair British Tennis Open. Results Data logging validation showed underestimations for distance in right (DLR) and left (DLL) logging devices at speeds >2.5 m/second. In tennis-field tests, GPS underestimated distance in five drills. DLL was lower than both (a) criterion and (b) DLR in drills moving forward. Reversing drill direction showed that DLR was lower than (a) criterion and (b) DLL. GPS values for distance and average speed for match play were significantly lower than equivalent values obtained by DL (distance: 2816 (844) vs. 3952 (1109) m, P = 0.0001; average speed: 0.7 (0.2) vs. 1.0 (0.2) m/second, P = 0.0001). Higher peak speeds were observed in DL (3.4 (0.4) vs. 3.1 (0.5) m/second, P = 0.004) during tennis match play. Conclusions Sampling frequencies of 1 Hz are too low to accurately measure distance and speed during wheelchair tennis. GPS units with a higher sampling rate should be advocated in further studies. Modifications to existing DL devices may be required to increase measurement precision. Further research into the validity of movement devices during match play will further inform the demands and movement patterns associated with wheelchair tennis. PMID:23820154
Evaluation of Measurement Instrument Criterion Validity in Finite Mixture Settings
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong
2016-01-01
A method for evaluating the validity of multicomponent measurement instruments in heterogeneous populations is discussed. The procedure can be used for point and interval estimation of criterion validity of linear composites in populations representing mixtures of an unknown number of latent classes. The approach permits also the evaluation of…
Selecting Items for Criterion-Referenced Tests.
ERIC Educational Resources Information Center
Mellenbergh, Gideon J.; van der Linden, Wim J.
1982-01-01
Three item selection methods for criterion-referenced tests are examined: the classical theory of item difficulty and item-test correlation; the latent trait theory of item characteristic curves; and a decision-theoretic approach for optimal item selection. Item contribution to the standardized expected utility of mastery testing is discussed. (CM)
Setting Meaningful Criterion-Reference Cut Scores as an Effective Professional Development
ERIC Educational Resources Information Center
Munyofu, Paul
2010-01-01
The state of Pennsylvania, like many organizations interested in performance improvement, routinely engages in professional development activities. Educators in this hands-on activity engaged in setting meaningful criterion-referenced cut scores for career and technical education assessments using two methods. The main purposes of this study were…
Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2012-01-01
A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…
NASA Astrophysics Data System (ADS)
Bai, Yan-Kui; Li, Shu-Shen; Zheng, Hou-Zhi
2005-11-01
We present a method for checking the Peres separability criterion in an arbitrary bipartite quantum state ρ_AB within the local operations and classical communication (LOCC) scenario. The method does not require the noise operation that is needed to make the partial transposition map physically implementable. The main task for the two observers, Alice and Bob, is to measure some specific functions of the partially transposed matrix. With these functions, they can determine the eigenvalues of ρ_AB^(T_B), the minimum of which serves as an entanglement witness.
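Independently of the LOCC measurement scheme proposed in the abstract, the Peres criterion itself is easy to check numerically once the full density matrix is known: take the partial transpose over subsystem B and look for negative eigenvalues. The sketch below does this for a two-qubit Werner state (a standard textbook example, not taken from the paper).

```python
# Illustration of the Peres (PPT) criterion on a two-qubit Werner state.
import numpy as np

def partial_transpose_B(rho, dA=2, dB=2):
    """Transpose subsystem B of rho acting on C^dA (tensor) C^dB."""
    r = rho.reshape(dA, dB, dA, dB)          # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

# Werner state: p * |psi-><psi-| + (1 - p) * I/4, entangled for p > 1/3
psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
for p in (0.2, 0.5, 0.9):
    rho = p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4
    min_eig = np.linalg.eigvalsh(partial_transpose_B(rho)).min()
    print("p=%.1f  min eigenvalue of rho^T_B = %+.3f  -> %s"
          % (p, min_eig, "entangled" if min_eig < -1e-12 else "PPT"))
```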
Two-dimensional thermal modeling of power monolithic microwave integrated circuits (MMIC's)
NASA Technical Reports Server (NTRS)
Fan, Mark S.; Christou, Aris; Pecht, Michael G.
1992-01-01
Numerical simulations of the two-dimensional temperature distributions for a typical GaAs MMIC circuit are conducted, aiming at understanding the heat conduction process of the circuit chip and providing temperature information for device reliability analysis. The method used is to solve the two-dimensional heat conduction equation with a control-volume-based finite difference scheme. In particular, the effects of the power dissipation and the ambient temperature are examined, and the criterion for the worst operating environment is discussed in terms of the allowed highest device junction temperature.
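As a generic illustration of the kind of calculation described above (placeholder geometry and material numbers, not the authors' MMIC model), the sketch below integrates the 2-D heat conduction equation with an explicit finite-difference scheme, a localized volumetric power-dissipation source, and ambient-temperature boundaries, and reports the peak temperature rise.

```python
# Toy 2-D heat conduction solver; all dimensions, properties and loads are placeholders.
import numpy as np

nx, ny = 60, 30                       # grid nodes across the chip cross-section
dx = dy = 10e-6                       # 10 um spacing
alpha = 2.6e-5                        # GaAs thermal diffusivity, m^2/s (approx.)
k_th = 46.0                           # GaAs thermal conductivity, W/(m K) (approx.)
T_ambient = 300.0                     # K
dt = 0.2 * dx**2 / alpha              # stable explicit step (< dx^2 / (4 alpha))

T = np.full((ny, nx), T_ambient)
q = np.zeros((ny, nx))
q[10:14, 25:35] = 5e12                # volumetric heat source, W/m^3 (placeholder)

for _ in range(20000):
    lap = ((T[1:-1, 2:] - 2*T[1:-1, 1:-1] + T[1:-1, :-2]) / dx**2 +
           (T[2:, 1:-1] - 2*T[1:-1, 1:-1] + T[:-2, 1:-1]) / dy**2)
    T[1:-1, 1:-1] += dt * (alpha * lap + (alpha / k_th) * q[1:-1, 1:-1])
    T[0, :] = T[-1, :] = T_ambient    # ambient (Dirichlet) boundaries
    T[:, 0] = T[:, -1] = T_ambient

print("peak temperature rise: %.1f K" % (T.max() - T_ambient))
```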
Entanglement detection in the vicinity of arbitrary Dicke states.
Duan, L-M
2011-10-28
Dicke states represent a class of multipartite entangled states that can be generated experimentally with many applications in quantum information. We propose a method to experimentally detect genuine multipartite entanglement in the vicinity of arbitrary Dicke states. The detection scheme can be used to experimentally quantify the entanglement depth of many-body systems and is easy to implement as it requires measurement of only three collective spin operators. The detection criterion is strong as it heralds multipartite entanglement even in cases where the state fidelity goes down exponentially with the number of qubits.
NASA Astrophysics Data System (ADS)
Zheng, Yu-Lin; Zhen, Yi-Zheng; Chen, Zeng-Bing; Liu, Nai-Le; Chen, Kai; Pan, Jian-Wei
2017-01-01
The striking and distinctive nonlocal features of quantum mechanics, beyond classical physics, were discovered by Einstein, Podolsky, and Rosen (EPR). At the core of the EPR argument is "steering", the concept Schrödinger proposed in 1935. Besides its fundamental significance, quantum steering opens up novel applications in quantum communication. Recent work has precisely characterized its properties; however, witnessing EPR nonlocality remains a big challenge under arbitrary local measurements. Here we present an alternative linear criterion that complements existing results and can efficiently test for steering in high-dimensional systems in practice. By developing a novel analytical method to tackle the maximization problem in deriving the bound of a steering criterion, we show how observed correlations can powerfully reveal EPR nonlocality in an easily accessible manner. Although the criterion is not necessary and sufficient, it can recover some known results under a few settings of local measurements and is applicable even if the dimension of the system or the number of measurement settings is large. Remarkably, a deep connection is explicitly established between steering and the amount of entanglement. The results promise viable paths towards secure communication with an untrusted source, provide optional loophole-free tests of EPR nonlocality for high-dimensional states, and motivate solutions to other related problems in quantum information theory.
NASA Astrophysics Data System (ADS)
Park, N.; Huh, H.; Yoon, J. W.
2017-09-01
This paper deals with the prediction of fracture initiation in square cup drawing of DP980 steel sheet with a thickness of 1.2 mm. In an attempt to consider the influence of material anisotropy on the fracture initiation, an uncoupled anisotropic ductile fracture criterion is developed based on the Lou-Huh ductile fracture criterion. Tensile tests are carried out at different loading directions of 0°, 45°, and 90° to the rolling direction of the sheet using various specimen geometries including pure shear, dog-bone, and flat grooved specimens so as to calibrate the parameters of the proposed fracture criterion. The equivalent plastic strain distribution on the specimen surface is computed using the Digital Image Correlation (DIC) method until a surface crack initiates. The proposed fracture criterion is implemented into the commercial finite element code ABAQUS/Explicit through a Vectorized User-defined MATerial (VUMAT) subroutine that features the non-associated flow rule. Simulation results of the square cup drawing test clearly show that the proposed fracture criterion is capable of predicting the fracture initiation with sufficient accuracy considering the material anisotropy.
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xu; Tuo, Rui; Jeff Wu, C. F.
2017-01-31
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
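The EQI/EQIE criteria themselves are not reproduced here, but the baseline expected-improvement (EI) criterion they are compared against has a standard closed form, sketched below from an assumed kriging predictive mean and standard deviation at each candidate input.

```python
# Standard expected-improvement acquisition for minimization; predictions are made up.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI given the kriging posterior mean/sd and the current best observed value."""
    sigma = np.maximum(sigma, 1e-12)            # guard against zero predictive sd
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# toy usage with hypothetical predictions at three candidate inputs
mu = np.array([1.2, 0.9, 1.5])
sigma = np.array([0.05, 0.30, 0.40])
print(expected_improvement(mu, sigma, f_best=1.0))   # pick the candidate with largest EI
```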
A Joint Optimization Criterion for Blind DS-CDMA Detection
NASA Astrophysics Data System (ADS)
Durán-Díaz, Iván; Cruces-Alvarez, Sergio A.
2006-12-01
This paper addresses the problem of the blind detection of a desired user in an asynchronous DS-CDMA communications system with multipath propagation channels. Starting from the inverse filter criterion introduced by Tugnait and Li in 2001, we propose to tackle the problem in the context of the blind signal extraction methods for ICA. In order to improve the performance of the detector, we present a criterion based on the joint optimization of several higher-order statistics of the outputs. An algorithm that optimizes the proposed criterion is described, and its improved performance and robustness with respect to the near-far problem are corroborated through simulations. Additionally, a simulation using measurements on a real software-radio platform at 5 GHz has also been performed.
Platzer, Christine; Bröder, Arndt; Heck, Daniel W
2014-05-01
Decision situations are typically characterized by uncertainty: Individuals do not know the values of different options on a criterion dimension. For example, consumers do not know which is the healthiest of several products. To make a decision, individuals can use information about cues that are probabilistically related to the criterion dimension, such as sugar content or the concentration of natural vitamins. In two experiments, we investigated how the accessibility of cue information in memory affects which decision strategy individuals rely on. The accessibility of cue information was manipulated by means of a newly developed paradigm, the spatial-memory-cueing paradigm, which is based on a combination of the looking-at-nothing phenomenon and the spatial-cueing paradigm. The results indicated that people use different decision strategies, depending on the validity of easily accessible information. If the easily accessible information is valid, people stop information search and decide according to a simple take-the-best heuristic. If, however, information that comes to mind easily has a low predictive validity, people are more likely to integrate all available cue information in a compensatory manner.
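For readers unfamiliar with the heuristic mentioned above, here is a minimal take-the-best sketch (cue names and validities are invented): cues are inspected in order of validity, and the first cue that discriminates between the two options determines the choice.

```python
# Minimal take-the-best heuristic with hypothetical cues and validities.
def take_the_best(option_a, option_b, cue_validities):
    """Check cues from most to least valid; the first discriminating cue decides."""
    for cue in sorted(cue_validities, key=cue_validities.get, reverse=True):
        if option_a[cue] != option_b[cue]:
            return "A" if option_a[cue] > option_b[cue] else "B"
    return "guess"

cues = {"natural vitamins": 0.8, "low sugar": 0.7, "whole grain": 0.6}
product_a = {"natural vitamins": 1, "low sugar": 0, "whole grain": 0}
product_b = {"natural vitamins": 1, "low sugar": 1, "whole grain": 0}
print(take_the_best(product_a, product_b, cues))   # -> "B" (low sugar discriminates)
```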
Information presentation format moderates the unconscious-thought effect: The role of recollection.
Abadie, Marlène; Waroquier, Laurent; Terrier, Patrice
2016-09-01
The unconscious-thought effect occurs when distraction improves complex decision-making. In two experiments using the unconscious-thought paradigm, we investigated the effect of presentation format of decision information (i) on memory for decision-relevant information and (ii) on the quality of decisions made after distraction, conscious deliberation or immediately. We used the process-dissociation procedure to measure recollection and familiarity. The two studies showed that presenting information blocked per criterion led participants to recollect more decision-relevant details compared to a presentation by option. Moreover, a Bayesian meta-analysis of the two studies provided strong evidence that conscious deliberation resulted in better decisions when the information was presented blocked per criterion and substantial evidence that distraction improved decision quality when the information was presented blocked per option. Finally, Study 2 revealed that the recollection of decision-relevant details mediated the effect of presentation format on decision quality in the deliberation condition. This suggests that recollection contributes to conscious deliberation efficacy.
Zhang, Xingwu; Wang, Chenxi; Gao, Robert X.; Yan, Ruqiang; Chen, Xuefeng; Wang, Shibin
2016-01-01
Milling vibration is one of the most serious factors affecting machining quality and precision. In this paper a novel hybrid error criterion-based frequency-domain LMS active control method is constructed and used for vibration suppression of milling processes by piezoelectric actuators and sensors, in which only one Fast Fourier Transform (FFT) is used and no Inverse Fast Fourier Transform (IFFT) is involved. The correction formulas are derived by a steepest descent procedure and the control parameters are analyzed and optimized. Then, a novel hybrid error criterion is constructed to improve the adaptability, reliability and anti-interference ability of the constructed control algorithm. Finally, based on piezoelectric actuators and acceleration sensors, a simulation of a spindle and a milling process experiment are presented to verify the proposed method. Besides, a protection program is added in the control flow to enhance the reliability of the control method in applications. The simulation and experiment results indicate that the proposed method is an effective and reliable way for on-line vibration suppression, and the machining quality can be obviously improved. PMID:26751448
Soft Clustering Criterion Functions for Partitional Document Clustering
2004-05-26
… in the cluster that it already belongs to. The refinement phase ends as soon as we perform an iteration in which no documents moved between … it with the one obtained by the hard criterion functions. We present a comprehensive experimental evaluation involving twelve different datasets …
Optimal Tikhonov regularization for DEER spectroscopy
NASA Astrophysics Data System (ADS)
Edwards, Thomas H.; Stoll, Stefan
2018-03-01
Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α , and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
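A generic Tikhonov sketch (a toy smoothing kernel, not the DEER kernel or the paper's code): the regularized least-squares problem min ||Kx - y||^2 + alpha^2 ||Lx||^2 is solved with a second-derivative operator L, and each alpha is scored with the standard generalized cross-validation (GCV) criterion, one of the selection methods the study compares.

```python
# Generic Tikhonov regularization with GCV-based alpha selection; synthetic problem.
import numpy as np

rng = np.random.default_rng(3)
n = 100
t = np.linspace(0, 1, n)
K = np.exp(-np.abs(t[:, None] - t[None, :]) * 8)       # placeholder smoothing kernel
x_true = np.exp(-(t - 0.4)**2 / 0.005) + 0.6 * np.exp(-(t - 0.7)**2 / 0.01)
y = K @ x_true + rng.normal(0, 0.05, n)                 # noisy synthetic data

L = np.diff(np.eye(n), n=2, axis=0)                     # second-derivative operator

def tikhonov(alpha):
    A = K.T @ K + alpha**2 * L.T @ L
    x = np.linalg.solve(A, K.T @ y)                     # regularized solution
    H = K @ np.linalg.solve(A, K.T)                     # influence (hat) matrix
    resid = y - K @ x
    gcv = n * (resid @ resid) / (n - np.trace(H))**2    # generalized cross-validation score
    return x, gcv

alphas = np.logspace(-3, 1, 30)
scores = [tikhonov(a)[1] for a in alphas]
best_alpha = alphas[int(np.argmin(scores))]
x_hat, _ = tikhonov(best_alpha)
print("GCV-selected alpha = %.3g" % best_alpha)
```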
Neuropathological diagnostic criteria for Alzheimer's disease.
Murayama, Shigeo; Saito, Yuko
2004-09-01
Neuropathological diagnostic criteria for Alzheimer's disease (AD) are based on tau-related pathology: NFT or neuritic plaques (NP). The Consortium to Establish a Registry for Alzheimer's disease (CERAD) criterion evaluates the highest density of neocortical NP from 0 (none) to C (abundant). Clinical documentation of dementia and NP stage A in younger cases, B in young old cases and C in older cases fulfils the criterion of AD. The CERAD criterion is most frequently used in clinical outcome studies because of its inclusion of clinical information. Braak and Braak's criterion evaluates the density and distribution of NFT and classifies them into: I/II, entorhinal; III/IV, limbic; and V/VI, neocortical stage. These three stages correspond to normal cognition, cognitive impairment and dementia, respectively. As Braak's criterion is based on morphological evaluation of the brain alone, this criterion is usually adopted in the research setting. The National Institute for Aging and Ronald and Nancy Reagan Institute of the Alzheimer's Association criterion combines these two criteria and categorizes cases into NFT V/VI and NP C, NFT III/IV and NP B, and NFT I/II and NP A, corresponding to high, middle and low probability of AD, respectively. As most AD cases in the aged population are categorized into Braak tangle stage IV and CERAD stage C, the usefulness of this criterion has not yet been determined. The combination of Braak's NFT stage equal to or above IV and Braak's senile plaque Stage C provides, arguably, the highest sensitivity and specificity. In future, the criteria should include in vivo dynamic neuropathological data, including 3D MRI, PET scan and CSF biomarkers, as well as more sensitive and specific immunohistochemical and immunochemical grading of AD.
Palinkas, Lawrence A.; Horwitz, Sarah M.; Green, Carla A.; Wisdom, Jennifer P.; Duan, Naihua; Hoagwood, Kimberly
2013-01-01
Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research. PMID:24193818
NASA Astrophysics Data System (ADS)
Wu, Jiangning; Wang, Xiaohuan
The rapidly increasing number of mobile phone users and types of services leads to a great accumulation of complaint information. How to use this information to enhance the quality of customer services is a pressing issue. To handle this kind of problem, this paper presents an approach to constructing a domain knowledge map for navigating explicit and tacit knowledge in two ways: building a Topic Map-based explicit knowledge navigation model, which includes domain TM construction, a semantic topic expansion algorithm and VSM-based similarity calculation; and building a Social Network Analysis-based tacit knowledge navigation model, which includes a multi-relational expert navigation algorithm and criteria to evaluate the performance of expert networks. In doing so, both customer managers and operators in call centers can find the appropriate knowledge and experts quickly and accurately. The experimental results show that the method is very effective for knowledge navigation.
Reliable two-dimensional phase unwrapping method using region growing and local linear estimation.
Zhou, Kun; Zaitsev, Maxim; Bao, Shanglian
2009-10-01
In MRI, phase maps can provide useful information about parameters such as field inhomogeneity, velocity of blood flow, and the chemical shift between water and fat. As phase is defined in the (-pi,pi] range, however, phase wraps often occur, which complicates image analysis and interpretation. This work presents a two-dimensional phase unwrapping algorithm that uses quality-guided region growing and local linear estimation. The quality map employs the variance of the second-order partial derivatives of the phase as the quality criterion. Phase information from unwrapped neighboring pixels is used to predict the correct phase of the current pixel using a linear regression method. The algorithm was tested on both simulated and real data, and is shown to successfully unwrap phase images that are corrupted by noise and have rapidly changing phase. (c) 2009 Wiley-Liss, Inc.
A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation
NASA Astrophysics Data System (ADS)
Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava
2015-12-01
In this paper, we address the issue of over-segmented regions produced by the watershed transform by merging the regions using a global feature. The global feature information is obtained by clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. Further to this, the global feature information is optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion for merging the over-segmented watershed regions, which are represented by a region adjacency graph (RAG). The proposed method has been tested on a simulated digital brain phantom dataset to segment white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) soft-tissue regions. The experiments showed that the proposed method performs statistically better than immersion watershed, with an average of 95.242% of regions merged, and yields an average accuracy improvement of 8.850% in comparison with RAG-based immersion watershed merging using global and local features.
Optimal atlas construction through hierarchical image registration
NASA Astrophysics Data System (ADS)
Grevera, George J.; Udupa, Jayaram K.; Odhner, Dewey; Torigian, Drew A.
2016-03-01
Atlases (digital or otherwise) are common in medicine. However, there is no standard framework for creating them from medical images. One traditional approach is to pick a representative subject and then proceed to label structures/regions of interest in this image. Another is to create a "mean" or average subject. Atlases may also contain more than a single representative (e.g., the Visible Human contains both a male and a female data set). Other criteria besides gender may be used as well, and the atlas may contain many examples for a given criterion. In this work, we propose that atlases be created in an optimal manner using a well-established graph-theoretic approach based on a minimum spanning tree (or, more generally, a collection of them). The resulting atlases may contain many examples for a given criterion. In fact, our framework allows for the addition of new subjects to the atlas so that it can evolve over time. Furthermore, one can apply segmentation methods to the graph (e.g., graph-cut, fuzzy connectedness, or cluster analysis), which allow it to be separated into "sub-atlases" as it evolves. We demonstrate our method by applying it to 50 3D CT data sets of the chest region and by comparing it, for rigid registration, to a number of traditional methods using measures such as Mean Squared Difference, Mattes Mutual Information, and Correlation. Our results demonstrate that optimal atlases can be constructed in this manner and outperform other methods of construction using freely available software.
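As an illustration of the graph-theoretic idea (not the authors' implementation), a minimum spanning tree can be computed over a hypothetical pairwise dissimilarity matrix between already-registered images, and a medoid-like representative picked from it; the dissimilarity values and the representative-selection rule below are assumptions made for the sketch.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical pairwise dissimilarity matrix between N registered images
# (e.g., 1 - correlation, or mean squared difference); symmetric, zero diagonal.
rng = np.random.default_rng(0)
N = 6
A = rng.random((N, N))
D = (A + A.T) / 2.0
np.fill_diagonal(D, 0.0)

# Minimum spanning tree over the complete image-similarity graph.
mst = minimum_spanning_tree(D).toarray()        # upper-triangular edge weights
edges = np.transpose(np.nonzero(mst))

# One simple way to pick a "representative" subject: the node whose summed
# dissimilarity to all others is smallest (a medoid-like choice).
representative = int(np.argmin(D.sum(axis=1)))

print("MST edges (i, j, weight):")
for i, j in edges:
    print(i, j, round(mst[i, j], 3))
print("representative subject index:", representative)
```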
Bhagat, Shaum P.; Bass, Johnnie K.; White, Stephanie T.; Qaddoumi, Ibrahim; Wilson, Matthew W.; Wu, Jianrong; Rodriguez-Galindo, Carlos
2016-01-01
Objective Carboplatin, a common chemotherapy agent with potential ototoxic side effects, is used to treat a variety of pediatric cancers, including retinoblastoma. Retinoblastoma is a malignant tumor of the retina that is usually diagnosed in young children. Distortion-product otoacoustic emission tests offer an effective method of monitoring for ototoxicity in young children. This study was designed to compare measurements of distortion-product otoacoustic emissions obtained before and after several courses of carboplatin chemotherapy in order to examine whether (a) mean distortion-product otoacoustic emission levels were significantly different and (b) criterion reductions in distortion-product otoacoustic emission levels were observed in individual children. Methods A prospective repeated measures study. Ten children with a median age of 7.6 months (range, 3–72 months) diagnosed with unilateral or bilateral retinoblastoma were examined. Distortion-product otoacoustic emissions were acquired from both ears of the children with 65/55 dB SPL primary tones (f2 = 793–7996 Hz) and a frequency resolution of 3 points/octave. Distortion-product otoacoustic emission levels in dB SPL were measured before chemotherapy treatment (baseline measurement) and after 3–4 courses of chemotherapy (interim measurement). Comparisons were made between baseline and interim distortion-product otoacoustic emission levels (collapsed across ears). Evidence of ototoxicity was based on criterion reductions (≥ 6 dB) in distortion-product otoacoustic emission levels. Results Significant differences between baseline and interim mean distortion-product otoacoustic emission levels were only observed at f2 = 7996 Hz. Four children exhibited criterion reductions in distortion-product otoacoustic emission levels. Conclusions Mean distortion-product otoacoustic emission levels at most frequencies were not changed following 3–4 courses of carboplatin chemotherapy in children with retinoblastoma. However, on an individual basis, children receiving higher doses of carboplatin exhibited criterion reductions in distortion-product otoacoustic emission levels at several frequencies. These findings suggest that higher doses of carboplatin affect outer hair cell function, and distortion-product otoacoustic emission tests can provide useful information when monitoring children at risk of developing carboplatin ototoxicity. PMID:20667604
Inter-Identity Autobiographical Amnesia in Patients with Dissociative Identity Disorder
Huntjens, Rafaële J. C.; Verschuere, Bruno; McNally, Richard J.
2012-01-01
Background A major symptom of Dissociative Identity Disorder (DID; formerly Multiple Personality Disorder) is dissociative amnesia, the inability to recall important personal information. Only two case studies have directly addressed autobiographical memory in DID. Both provided evidence suggestive of dissociative amnesia. The aim of the current study was to objectively assess transfer of autobiographical information between identities in a larger sample of DID patients. Methods Using a concealed information task, we assessed recognition of autobiographical details in an amnesic identity. Eleven DID patients, 27 normal controls, and 23 controls simulating DID participated. Controls and simulators were matched to patients on age, education level, and type of autobiographical memory tested. Findings Although patients subjectively reported amnesia for the autobiographical details included in the task, the results indicated transfer of information between identities. Conclusion The results call for a revision of the DID definition. The amnesia criterion should be modified to emphasize its subjective nature. PMID:22815769
Image restoration, uncertainty, and information.
Yu, F T
1969-01-01
Some of the physical interpretations of image restoration are discussed. From the theory of information, the unrealizability of an inverse filter can be explained by degradation of information, which is due to distortion of the recorded image. Image restoration is a time and space problem, which can be recognized from the theory of relativity (the problem of image restoration is related to Heisenberg's uncertainty principle in quantum mechanics). A detailed discussion of the relationship between information and energy is given. Two general results may be stated: (1) the restoration of the image from the distorted signal is possible only if it satisfies the detectability condition; however, the restored image can, at best, only approach the maximum allowable time criterion. (2) The restoration of an image by superimposing the distorted signal (due to smearing) is a physically unrealizable method; however, this restoration procedure may be achieved by the expenditure of an infinite amount of energy.
A comparison of the fractal and JPEG algorithms
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Shahshahani, M.
1991-01-01
A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root mean square criterion and a peak signal to noise criterion.
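For reference, the two comparison criteria mentioned are straightforward to compute; the sketch below (with synthetic stand-in images, not the study's data) shows the usual root-mean-square error and PSNR definitions.

```python
import numpy as np

def rms_error(original, decoded):
    """Root-mean-square difference between two images (same shape, float)."""
    return float(np.sqrt(np.mean((original - decoded) ** 2)))

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    rmse = rms_error(original, decoded)
    return float("inf") if rmse == 0 else 20.0 * np.log10(peak / rmse)

# Toy comparison: the codec with the higher PSNR (lower RMSE) at the same
# compression ratio would be judged superior under these criteria.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
codec_a = img + rng.normal(0, 2.0, img.shape)   # stand-in for one codec's output
codec_b = img + rng.normal(0, 5.0, img.shape)   # stand-in for the other codec
print("PSNR A:", round(psnr(img, codec_a), 2), "dB")
print("PSNR B:", round(psnr(img, codec_b), 2), "dB")
```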
A Nonpharmacologic Method for Enhancing Sleep in PTSD
2015-10-01
Fragmentary criterion excerpt: substances considered include alcohol (during intoxication or withdrawal), cannabis (during intoxication), hallucinogens (during intoxication), phencyclidine, sedatives/hypnotics/anxiolytics, and stimulants; when such medications are taken solely under appropriate medical supervision, this criterion is not considered to be met.
Evaluation of Weighted Scale Reliability and Criterion Validity: A Latent Variable Modeling Approach
ERIC Educational Resources Information Center
Raykov, Tenko
2007-01-01
A method is outlined for evaluating the reliability and criterion validity of weighted scales based on sets of unidimensional measures. The approach is developed within the framework of latent variable modeling methodology and is useful for point and interval estimation of these measurement quality coefficients in counseling and education…
A measurable Lawson criterion and hydro-equivalent curves for inertial confinement fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, C. D.; Betti, R.
2008-01-01
This article demonstrates how the ignition condition (Lawson criterion) for inertial confinement fusion (ICF) can be cast in a form depending on the only two parameters of the compressed fuel assembly that can be measured with methods already in existence: the hot spot ion temperature and the total areal density.
Chen, Liang-Hsuan; Hsueh, Chan-Ching
2007-06-01
Fuzzy regression models are useful to investigate the relationship between explanatory and response variables with fuzzy observations. Different from previous studies, this correspondence proposes a mathematical programming method to construct a fuzzy regression model based on a distance criterion. The objective of the mathematical programming is to minimize the sum of distances between the estimated and observed responses on the X axis, such that the fuzzy regression model constructed has the minimal total estimation error in distance. Only several alpha-cuts of fuzzy observations are needed as inputs to the mathematical programming model; therefore, the applications are not restricted to triangular fuzzy numbers. Three examples, adopted in the previous studies, and a larger example, modified from the crisp case, are used to illustrate the performance of the proposed approach. The results indicate that the proposed model has better performance than those in the previous studies based on either distance criterion or Kim and Bishu's criterion. In addition, the efficiency and effectiveness for solving the larger example by the proposed model are also satisfactory.
Renard, Bernhard Y.; Xu, Buote; Kirchner, Marc; Zickmann, Franziska; Winter, Dominic; Korten, Simone; Brattig, Norbert W.; Tzur, Amit; Hamprecht, Fred A.; Steen, Hanno
2012-01-01
Currently, the reliable identification of peptides and proteins is only feasible when thoroughly annotated sequence databases are available. Although sequencing capacities continue to grow, many organisms remain without reliable, fully annotated reference genomes required for proteomic analyses. Standard database search algorithms fail to identify peptides that are not exactly contained in a protein database. De novo searches are generally hindered by their restricted reliability, and current error-tolerant search strategies are limited by global, heuristic tradeoffs between database and spectral information. We propose a Bayesian information criterion-driven error-tolerant peptide search (BICEPS) and offer an open source implementation based on this statistical criterion to automatically balance the information of each single spectrum and the database, while limiting the run time. We show that BICEPS performs as well as current database search algorithms when such algorithms are applied to sequenced organisms, whereas BICEPS only uses a remotely related organism database. For instance, we use a chicken instead of a human database corresponding to an evolutionary distance of more than 300 million years (International Chicken Genome Sequencing Consortium (2004) Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716). We demonstrate the successful application to cross-species proteomics with a 33% increase in the number of identified proteins for a filarial nematode sample of Litomosoides sigmodontis. PMID:22493179
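The abstract does not give the likelihood model, so the sketch below only illustrates the generic BIC trade-off that such a criterion-driven search balances: a better-fitting but more heavily parameterized explanation must lower the BIC to be preferred. The numbers are hypothetical.

```python
import numpy as np

def bic(log_likelihood, n_params, n_observations):
    """Bayesian information criterion: lower values are preferred."""
    return n_params * np.log(n_observations) - 2.0 * log_likelihood

# Hypothetical comparison: a plain database peptide vs. an error-tolerant
# variant with extra free parameters, scored against the same spectrum.
print("database peptide :", round(bic(-120.0, n_params=2, n_observations=50), 1))
print("tolerant variant :", round(bic(-115.0, n_params=6, n_observations=50), 1))
```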
Information hidden in the velocity distribution of ions and the exact kinetic Bohm criterion
NASA Astrophysics Data System (ADS)
Tsankov, Tsanko V.; Czarnetzki, Uwe
2017-05-01
Non-equilibrium distribution functions of electrons and ions play an important role in plasma physics. A prominent example is the kinetic Bohm criterion. Since its first introduction it has been controversial for theoretical reasons and due to the lack of experimental data, in particular on the ion distribution function. Here we resolve the theoretical as well as the experimental difficulties by an exact solution of the kinetic Boltzmann equation including charge exchange collisions and ionization. This also allows for the first time non-invasive measurement of spatially resolved ion velocity distributions, absolute values of the ion and electron densities, temperatures, and mean energies as well as the electric field and the plasma potential in the entire plasma. The non-invasive access to the spatially resolved distribution functions of electrons and ions is applied to the problem of the kinetic Bohm criterion. Theoretically a so far missing term in the criterion is derived and shown to be of key importance. With the new term the validity of the kinetic criterion at high collisionality and its agreement with the fluid picture are restored. All findings are supported by experimental data, theory and a numerical model with excellent agreement throughout.
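For orientation, the classical kinetic Bohm criterion (the Harrison-Thompson form, assuming Boltzmann-distributed electrons) is usually written as below; the additional term derived in the paper is not reproduced here.

```latex
\frac{1}{n_i}\int \frac{f_i(v)}{v^{2}}\,\mathrm{d}v \;\le\; \frac{m_i}{k_B T_e}
```

Here $f_i$ is the ion velocity distribution at the sheath edge, $n_i$ the ion density, $m_i$ the ion mass and $T_e$ the electron temperature; for a cold ion beam this reduces to the familiar requirement that the ions reach the Bohm speed.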
A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image
NASA Astrophysics Data System (ADS)
Barat, Christian; Phlypo, Ronald
2010-12-01
We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.
Satisfying the Einstein-Podolsky-Rosen criterion with massive particles
NASA Astrophysics Data System (ADS)
Peise, J.; Kruse, I.; Lange, K.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Hammerer, K.; Santos, L.; Smerzi, A.; Klempt, C.
2016-03-01
In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, as shown successfully with light fields. Here, we report on the production of massive particles which meet the EPR criterion for continuous phase/amplitude variables. The created quantum state of ultracold atoms shows an EPR parameter of 0.18(3), which is 2.4 standard deviations below the threshold of 1/4. Our state presents a resource for tests of quantum nonlocality with massive particles and a wide variety of applications in the field of continuous-variable quantum information and metrology.
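The reported EPR parameter and its 1/4 bound correspond to the product form of the Reid criterion; in the normalization where the conjugate quadratures satisfy $[\hat X,\hat P]=i$, it reads as below (a standard textbook form, not taken from the paper itself).

```latex
\mathcal{E}_{\mathrm{EPR}} \;=\; \Delta^{2}_{\mathrm{inf}}\hat{X}\,\;\Delta^{2}_{\mathrm{inf}}\hat{P} \;<\; \tfrac{1}{4}
```

The inferred variances quantify how well one party's quadratures can be predicted from measurements on the other; a product below the bound signals the EPR paradox.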
Delineating slowly and rapidly evolving fractions of the Drosophila genome.
Keith, Jonathan M; Adams, Peter; Stephen, Stuart; Mattick, John S
2008-05-01
Evolutionary conservation is an important indicator of function and a major component of bioinformatic methods to identify non-protein-coding genes. We present a new Bayesian method for segmenting pairwise alignments of eukaryotic genomes while simultaneously classifying segments into slowly and rapidly evolving fractions. We also describe an information criterion similar to the Akaike Information Criterion (AIC) for determining the number of classes. Working with pairwise alignments enables detection of differences in conservation patterns among closely related species. We analyzed three whole-genome and three partial-genome pairwise alignments among eight Drosophila species. Three distinct classes of conservation level were detected. Sequences comprising the most slowly evolving component were consistent across a range of species pairs, and constituted approximately 62-66% of the D. melanogaster genome. Almost all (>90%) of the aligned protein-coding sequence is in this fraction, suggesting much of it (comprising the majority of the Drosophila genome, including approximately 56% of non-protein-coding sequences) is functional. The size and content of the most rapidly evolving component was species dependent, and varied from 1.6% to 4.8%. This fraction is also enriched for protein-coding sequence (while containing significant amounts of non-protein-coding sequence), suggesting it is under positive selection. We also classified segments according to conservation and GC content simultaneously. This analysis identified numerous sub-classes of those identified on the basis of conservation alone, but was nevertheless consistent with that classification. Software, data, and results are available at www.maths.qut.edu.au/~keithj/. Genomic segments comprising the conservation classes are available in BED format.
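For reference, the standard AIC to which the authors' criterion is likened is

```latex
\mathrm{AIC} \;=\; 2k \;-\; 2\ln \hat{L}
```

with $k$ the number of free parameters and $\hat{L}$ the maximized likelihood; lower values indicate the preferred number of classes.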
Improved Multi-Axial, Temperature and Time Dependent (MATT) Failure Model
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.
2002-01-01
An extensive effort has recently been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to completely characterize the effects of multi-axial loading, temperature and time on the failure characteristics of three filled epoxy adhesives (TIGA 321, EA913NA, EA946). As part of this effort, a single general failure criterion was developed that accounts for these effects simultaneously. This model was named the Multi-Axial, Temperature, and Time Dependent, or MATT, failure criterion. Due to the intricate nature of the failure criterion, some parameters had to be calculated using complex equations or numerical methods. This paper documents some simple but accurate modifications to the failure criterion that allow failure conditions to be calculated without complex equations or numerical techniques.
Medical privacy protection based on granular computing.
Wang, Da-Wei; Liau, Churn-Jung; Hsu, Tsan-Sheng
2004-10-01
Based on granular computing methodology, we propose two criteria to quantitatively measure privacy invasion. The total cost criterion measures the effort needed for a data recipient to find private information. The average benefit criterion measures the benefit a data recipient obtains when he received the released data. These two criteria remedy the inadequacy of the deterministic privacy formulation proposed in Proceedings of Asia Pacific Medical Informatics Conference, 2000; Int J Med Inform 2003;71:17-23. Granular computing methodology provides a unified framework for these quantitative measurements and previous bin size and logical approaches. These two new criteria are implemented in a prototype system Cellsecu 2.0. Preliminary system performance evaluation is conducted and reviewed.
ERIC Educational Resources Information Center
Xi, Xiaoming
2008-01-01
Although the primary use of the speaking section of the Test of English as a Foreign Language™ Internet-based test (TOEFL® iBT Speaking test) is to inform admissions decisions at English medium universities, it may also be useful as an initial screening measure for international teaching assistants (ITAs). This study provides criterion-related…
Study on the criterion to determine the bottom deployment modes of a coilable mast
NASA Astrophysics Data System (ADS)
Ma, Haibo; Huang, Hai; Han, Jianbin; Zhang, Wei; Wang, Xinsheng
2017-12-01
A practical design criterion that allows the bottom of a coilable mast to deploy in local coil mode is proposed. The criterion is defined in terms of the initial bottom helical angle and was obtained from analyses of the bottom deformation. Discretizing the longerons into short rods, the analyses were conducted based on the cylinder assumption and Kirchhoff's kinetic analogy theory. Iterative calculations were then carried out for the bottom four rods. A critical bottom helical angle was obtained when the rate of change of the angle equaled zero. This critical value is defined as the criterion for judging the bottom deployment mode. Subsequently, micro-gravity deployment tests were carried out and bottom deployment simulations based on the finite element method were developed. Through comparisons of the bottom helical angles in the critical state, the proposed criterion was evaluated and modified; that is, an initial bottom helical angle less than the critical value, with a design margin of -13.7%, can ensure that the mast bottom deploys in local coil mode and thereby ensure a successful local coil deployment of the entire coilable mast.
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of a precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with a spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, an optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the proposed limited-feedback precoded scheme with a linear zero-forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
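As a numerical aside, the SVD decoupling that ideal precoding relies on is easy to verify; the sketch below (a bare 2x2 example with arbitrary symbols and noise level, omitting the coding, interleaving and feedback aspects of the paper) shows the channel reducing to parallel scalar subchannels.

```python
import numpy as np

# With H = U S V^H, transmitting V @ x and rotating the received vector by
# U^H turns the channel into parallel scalar subchannels whose gains are the
# singular values; the noise statistics are unchanged because U is unitary.
rng = np.random.default_rng(3)
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
U, s, Vh = np.linalg.svd(H)

x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)     # unit-energy QPSK-like symbols
noise = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))

y = H @ (Vh.conj().T @ x) + noise                # precode with V, pass channel
z = U.conj().T @ y                               # receiver rotation

print("singular values:", np.round(s, 3))
print("z / s (should be close to x):", np.round(z / s, 3))
```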
Efficient structure from motion on large scenes using UAV with position and pose information
NASA Astrophysics Data System (ADS)
Teng, Xichao; Yu, Qifeng; Shang, Yang; Luo, Jing; Wang, Gang
2018-04-01
In this paper, we exploit prior information from global positioning systems and inertial measurement units to speed up the process of large-scene reconstruction from images acquired by Unmanned Aerial Vehicles. We utilize weak pose information and intrinsic parameters to obtain the projection matrix for each view. Since topographic relief can usually be ignored compared to the flight altitude of unmanned aerial vehicles, we assume that the scene is flat and use a weak perspective camera model to obtain projective transformations between two views. Furthermore, we propose an overlap criterion and select potentially matching view pairs among the projectively transformed views. A robust global structure-from-motion method is used for image-based reconstruction. Our real-world experiments show that the approach is accurate, scalable and computationally efficient. Moreover, the projective transformations between views can also be used to eliminate false matches.
Trial-to-trial carry-over of item- and relational-information in auditory short-term memory
Visscher, Kristina M.; Kahana, Michael J.; Sekuler, Robert
2009-01-01
Using a short-term recognition memory task we evaluated the carry-over across trials of two types of auditory information: the characteristics of individual study sounds (item information), and the relationships between the study sounds (relational information). On each trial, subjects heard two successive broadband study sounds and then decided whether a subsequently presented probe sound had been in the study set. On some trials, the probe item's similarity to stimuli presented on the preceding trial was manipulated. This item information interfered with recognition, increasing false alarms from 0.4% to 4.4%. Moreover, the interference was tuned so that only stimuli very similar to each other interfered. On other trials, the relationship among stimuli was manipulated in order to alter the criterion subjects used in making recognition judgments. The effect of this manipulation was confined to the very trial on which the criterion change was generated, and did not affect the subsequent trial. These results demonstrate the existence of a sharply-tuned carry-over of auditory item information, but no carry-over of relational information. PMID:19210080
[Evaluation and improvement of the management of informed consent in the emergency department].
del Pozo, P; García, J A; Escribano, M; Soria, V; Campillo-Soto, A; Aguayo-Albasini, J L
2009-01-01
To assess the preoperative management in our emergency surgical service and to improve the quality of the care provided to patients. In order to find the causes of non-compliance, the Ishikawa Fishbone diagram was used and eight assessment criteria were chosen. The first assessment includes 120 patients operated on from January to April 2007. Corrective measures were implemented, which consisted of meetings and conferences with doctors and nurses, insisting on the importance of the informed consent as a legal document which must be signed by patients, and the obligation of giving a copy to patients or relatives. The second assessment includes the period from July to October 2007 (n=120). We observed a high non-compliance of C1 signing of surgical consent (CRITERION 1: all patients or relatives have to sign the surgical informed consent for the operation to be performed [27.5%]) and C2 giving a copy of the surgical consent (CRITERION 2: all patients or relatives must have received a copy of the surgical informed consent for the Surgery to be performed [72.5%]) and C4 anaesthetic consent copy (CRITERION 4: all patients or relatives must have received a copy of the Anaesthesia informed consent corresponding to the operation performed [90%]). After implementing corrective measures a significant improvement was observed in the compliance of C2 and C4. In C1 there was an improvement without statistical significance. The carrying out of an improvement cycle enabled the main objective of this paper to be achieved: to improve the management of informed consent and the quality of the care and information provided to our patients.
Slope histogram distribution-based parametrisation of Martian geomorphic features
NASA Astrophysics Data System (ADS)
Balint, Zita; Székely, Balázs; Kovács, Gábor
2014-05-01
The application of geomorphometric methods to the large Martian digital topographic datasets paves the way to analysing Martian areomorphic processes in more detail. One of the numerous possible methods is the analysis of local slope distributions. To this end, a visualization program was developed that calculates local slope histograms and compares them based on a Kolmogorov distance criterion. As input data we used digital elevation models (DTMs) derived from HRSC (High Resolution Stereo Camera) images of various Martian regions. The Kolmogorov-criterion-based discrimination produces classes of slope histograms that are displayed using colour coding to obtain an image map. In this image map the distributions can be visualized through the different colours representing the various classes. Our goal is to create a local slope histogram-based classification for large Martian areas in order to obtain information about the general morphological characteristics of a region. This is a contribution of the TMIS.ascrea project, financed by the Austrian Research Promotion Agency (FFG). The present research is partly realized in the frames of the TÁMOP 4.2.4.A/2-11-1-2012-0001 high priority "National Excellence Program - Elaborating and Operating an Inland Student and Researcher Personal Support System" convergence program project's scholarship support, using Hungarian state and European Union funds and co-financing from the European Social Fund.
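As an illustration of the comparison step, the Kolmogorov distance between two slope histograms defined on the same bins is simply the maximum absolute difference of their cumulative distributions; the histogram counts below are made up for the sketch.

```python
import numpy as np

def kolmogorov_distance(hist_p, hist_q):
    """Maximum absolute difference between the cumulative distributions
    of two histograms defined on the same slope bins."""
    p = np.asarray(hist_p, dtype=float)
    q = np.asarray(hist_q, dtype=float)
    cdf_p = np.cumsum(p) / p.sum()
    cdf_q = np.cumsum(q) / q.sum()
    return float(np.max(np.abs(cdf_p - cdf_q)))

# Two hypothetical local slope histograms (counts per 5-degree bin, 0-45 deg).
h1 = np.array([120, 90, 60, 30, 15, 8, 4, 2, 1])
h2 = np.array([40, 70, 90, 80, 50, 25, 10, 5, 2])
print("Kolmogorov distance:", round(kolmogorov_distance(h1, h2), 3))
# Tiles whose pairwise distance stays below a chosen threshold would be
# assigned to the same class (colour) in the image map.
```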
Deblurring traffic sign images based on exemplars
Qiu, Tianshuang; Luan, Shengyang; Song, Haiyu; Wu, Linxiu
2018-01-01
Motion blur appearing in traffic sign images may lead to poor recognition results, and it is therefore of great significance to study how to deblur such images. In this paper, a novel method for deblurring traffic signs is proposed based on exemplars, and several related improvements are also made. First, an exemplar dataset construction method based on a multiple-size partition strategy is proposed to lower the computational cost of exemplar matching. Second, a matching criterion based on gradient information and the entropy correlation coefficient is proposed to enhance matching accuracy. Third, the L0.5-norm is introduced as the regularization term to maintain the sparsity of the blur kernel. Experiments verify the superiority of the proposed approaches, and extensive evaluations against state-of-the-art methods demonstrate the effectiveness of the proposed algorithm. PMID:29513677
NASA Technical Reports Server (NTRS)
Kubala, A.; Black, D.; Szebehely, V.
1993-01-01
A comparison is made between the stability criteria of Hill and that of Laplace to determine the stability of outer planetary orbits encircling binary stars. The restricted, analytically determined results of Hill's method by Szebehely and coworkers and the general, numerically integrated results of Laplace's method by Graziani and Black (1981) are compared for varying values of the mass parameter mu. For mu = 0 to 0.15, the closest orbit (lower limit of radius) an outer planet in a binary system can have and still remain stable is determined by Hill's stability criterion. For mu greater than 0.15, the critical radius is determined by Laplace's stability criterion. It appears that the Graziani-Black stability criterion describes the critical orbit within a few percent for all values of mu.
Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.
2010-01-01
Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
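For reference, the global criterion being partitioned is the standard DIC,

```latex
\mathrm{DIC} \;=\; \bar{D} + p_D, \qquad
p_D \;=\; \bar{D} - D(\bar{\theta}), \qquad
D(\theta) \;=\; -2\log p(y \mid \theta)
```

where $\bar{D}$ is the posterior mean deviance and $D(\bar{\theta})$ the deviance at the posterior means; the local DIC allocates these quantities to individual observations or groups of observations.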
Experiment design for pilot identification in compensatory tracking tasks
NASA Technical Reports Server (NTRS)
Wells, W. R.
1976-01-01
A design criterion for input functions in laboratory tracking tasks resulting in efficient parameter estimation is formulated. The criterion is that the statistical correlations between pairs of parameters be reduced in order to minimize the problem of nonuniqueness in the extraction process. The effectiveness of the method is demonstrated for a lower order dynamic system.
Factors Affecting the Identification of Research Problems in Educational Administration Studies
ERIC Educational Resources Information Center
Yalçin, Mikail; Bektas, Fatih; Öztekin, Özge; Karadag, Engin
2016-01-01
The purpose of this study is to reveal the factors that affect the identification of research problems in educational administration studies. The study was designed using the case study method. Criterion sampling was used to determine the work group; the criterion used to select the participants was that of having a study in the field of…
da Silva, Wanderson Roberto; Dias, Juliana Chioda Ribeiro; Maroco, João; Campos, Juliana Alvares Duarte Bonini
2014-09-01
This study aimed at evaluating the validity, reliability, and factorial invariance of the complete (34-item) and shortened (8-item and 16-item) versions of the Body Shape Questionnaire (BSQ) when applied to Brazilian university students. A total of 739 female students with a mean age of 20.44 (standard deviation = 2.45) years participated. Confirmatory factor analysis was conducted to verify the degree to which the one-factor structure satisfies the proposal for the BSQ's expected structure. Two items of the 34-item version were excluded because they had factor weights (λ) < .40. All models had adequate convergent validity (average variance extracted = .43-.58; composite reliability = .85-.97) and internal consistency (α = .85-.97). The 8-item B version was considered the best shortened BSQ version (Akaike information criterion = 84.07, Bayes information criterion = 157.75, Browne-Cudeck criterion = 84.46), with strong invariance for independent samples (Δχ²λ(7) = 5.06, Δχ²Cov(8) = 5.11, Δχ²Res(16) = 19.30). Copyright © 2014 Elsevier Ltd. All rights reserved.
Zheng, Wenming; Lin, Zhouchen; Wang, Haixian
2014-04-01
A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration reduces to a convex programming problem with a guaranteed closed-form solution. Moreover, we also generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, yielding the proposed L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.
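Schematically, for two classes the contrast between the conventional and the proposed kind of criterion can be written as below; this is a generic illustration, and the paper's multi-class, Bayes-optimal formulation differs in detail.

```latex
J_{L_2}(\mathbf{w}) \;=\;
\frac{\bigl|\mathbf{w}^{\top}(\mathbf{m}_1-\mathbf{m}_2)\bigr|^{2}}
     {\sum_{c}\sum_{\mathbf{x}\in c}\bigl|\mathbf{w}^{\top}(\mathbf{x}-\mathbf{m}_c)\bigr|^{2}}
\quad\longrightarrow\quad
J_{L_1}(\mathbf{w}) \;=\;
\frac{\bigl|\mathbf{w}^{\top}(\mathbf{m}_1-\mathbf{m}_2)\bigr|}
     {\sum_{c}\sum_{\mathbf{x}\in c}\bigl|\mathbf{w}^{\top}(\mathbf{x}-\mathbf{m}_c)\bigr|}
```

Large deviations (outliers) then contribute linearly rather than quadratically to the within-class term, which is the source of the improved robustness.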
NASA Astrophysics Data System (ADS)
Perekhodtseva, Elvira V.
2010-05-01
Development of a successful method for forecasting storm winds, including squalls and tornadoes, which often cause human and material losses, would allow proper measures to be taken to protect people and prevent the destruction of buildings. A successful forecast issued well in advance (12 to 48 hours) makes it possible to reduce the losses. Until recently, prediction of these phenomena was a very difficult problem for forecasters. The existing graphical and calculation methods still depend on the subjective decision of an operator. At present there is no hydrodynamic model in Russia for forecasting maximal wind velocities V > 25 m/s, so the main tools of objective forecasting are statistical methods that use the dependence of the phenomena on a number of atmospheric parameters (predictors). A statistical decision rule for the alternative and probabilistic forecast of these events was obtained in accordance with the "perfect prognosis" concept using objective analysis data. For this purpose, training samples for the presence and absence of storm wind and rainfall were assembled automatically, containing the values of forty physically substantiated potential predictors. An empirical statistical method was then used that involves diagonalization of the mean correlation matrix R of the predictors and extraction of diagonal blocks of strongly correlated predictors. In this way the most informative predictors were selected for these phenomena without loss of information. The statistical decision rules U(X) for diagnosis and prognosis of the phenomena were then calculated for the chosen informative vector-predictor. The Mahalanobis distance criterion and the Vapnik-Chervonenkis minimum-entropy criterion were used for predictor selection. Successful development of hydrodynamic models for short-term forecasting and improvement of 36-48 h forecasts of pressure, temperature and other parameters allowed us to use the prognostic fields of those models to calculate the discriminant functions at the nodes of a 75x75 km grid and the probabilities P of dangerous wind, and thus to obtain fully automated forecasts. In order to apply the alternative forecast to the European part of Russia and to Europe, the author proposes empirical threshold values specified for this phenomenon and a lead time of 36 hours. According to the Pirsey-Obukhov criterion (T), the success of this hydrometeorological-statistical method of forecasting storm winds and tornadoes 36-48 hours ahead in the warm season over the European part of Russia and Siberia is T = 1 - a - b = 0.54-0.78 in independent and author experiments during the period 2004-2009. Many examples of very successful forecasts over Europe and Russia are presented in this report. The same decision rules were also applied to forecasting these phenomena during the cold period of 2009-2010. In the first months of 2010 many cases of storm wind with heavy snowfall were observed, and were forecast, over the territory of France, Italy and Germany.
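To make the predictor-screening and discriminant step concrete, the toy sketch below builds a two-class linear discriminant from a pooled covariance matrix and reports the squared Mahalanobis distance between the class means; the predictor values, class sizes and threshold are invented and unrelated to the operational forecasting scheme.

```python
import numpy as np

# Toy two-class setup: rows are cases (storm / no storm), columns are predictors.
rng = np.random.default_rng(2)
X_storm = rng.normal(loc=[1.0, 2.0, 0.5], scale=1.0, size=(40, 3))
X_calm  = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(60, 3))

mu1, mu0 = X_storm.mean(axis=0), X_calm.mean(axis=0)
# Pooled within-class covariance.
S = (np.cov(X_storm, rowvar=False) * (len(X_storm) - 1) +
     np.cov(X_calm,  rowvar=False) * (len(X_calm)  - 1)) / (len(X_storm) + len(X_calm) - 2)

# Squared Mahalanobis distance between class means: a screening score for how
# well a candidate predictor set separates the two classes.
diff = mu1 - mu0
d2 = float(diff @ np.linalg.solve(S, diff))
print("squared Mahalanobis distance:", round(d2, 2))

# Linear discriminant U(x): positive values favour the "storm" class.
w = np.linalg.solve(S, diff)
threshold = 0.5 * (w @ (mu1 + mu0))
def U(x):
    return float(w @ x - threshold)
print("U(new case):", round(U(np.array([0.8, 1.5, 0.3])), 2))
```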
Automated quality control in a file-based broadcasting workflow
NASA Astrophysics Data System (ADS)
Zhang, Lina
2014-04-01
Benefiting from the development of information and internet technologies, television broadcasting is transforming from inefficient tape-based production and distribution to integrated file-based workflows. However, no matter how many changes have taken place, successful broadcasting still depends on the ability to deliver a consistently high-quality signal to the audience. After the transition from tape to file, traditional methods of manual quality control (QC) become inadequate, subjective, and inefficient. Based on China Central Television's fully file-based workflow at its new site, this paper introduces an automated quality control test system for accurate detection of hidden problems in media content. It discusses the system framework and workflow control when automated QC is added. It puts forward a QC criterion and presents QC software that follows this criterion. It also reports experiments on QC speed using parallel processing and distributed computing. The performance of the test system shows that the adoption of automated QC can make production effective and efficient, and help the station achieve a competitive advantage in the media market.
NASA Astrophysics Data System (ADS)
Sun, Kai; Xu, Jin-Shi; Ye, Xiang-Jun; Wu, Yu-Chun; Chen, Jing-Ling; Li, Chuan-Feng; Guo, Guang-Can
2014-10-01
Einstein-Podolsky-Rosen (EPR) steering, a generalization of the original concept of "steering" proposed by Schrödinger, describes the ability of one system to nonlocally affect another system's states through local measurements. Some experimental efforts to test EPR steering in terms of inequalities have been made, which usually require many measurement settings. By analogy with the "all-versus-nothing" (AVN) proof of Bell's theorem without inequalities, testing steerability without inequalities would be stronger and require fewer resources. Moreover, the practical meaning of steering implies that it should also be possible to store the state information on the side to be steered, a result that has not yet been experimentally demonstrated. Using a recent AVN criterion for two-qubit entangled states, we experimentally implement a practical steering game using quantum memory. Furthermore, we develop a theoretical method to deal with the noise and finite measurement statistics within the AVN framework and apply it to analyze the experimental data. Our results clearly demonstrate the usefulness of the AVN criterion for testing steerability and provide a particularly strong perspective for understanding EPR steering.
Ranking Schools' Academic Performance Using a Fuzzy VIKOR
NASA Astrophysics Data System (ADS)
Musani, Suhaina; Aziz Jemain, Abdul
2015-06-01
Rank determination is the structuring of alternatives in order of priority, based on criteria evaluated for each alternative involved. The criteria are evaluated and a composite index is then computed for each alternative so that the alternatives can be arranged in order of preference. This practice is known as multiple criteria decision making (MCDM). There are several common approaches to MCDM; one of them is VIKOR (Multi-criteria Optimization and Compromise Solution). The objective of this study is to develop a rational method for school ranking based on linguistic information about each criterion. A school represents an alternative, while the results for a number of subjects serve as the criteria. The results of the examination for a subject are given as the percentage of students attaining each grade. Five grades - excellence, honours, average, pass and fail - are used to indicate the level of achievement linguistically. The linguistic variables are transformed into fuzzy numbers to form a composite index of school performance. The results show that fuzzy set theory can overcome the limitations of MCDM when uncertainty exists in the data.
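For readers unfamiliar with VIKOR, the sketch below implements the crisp (non-fuzzy) version of the ranking index on made-up school scores; the fuzzy extension used in the paper additionally propagates fuzzy numbers through these steps.

```python
import numpy as np

def vikor(F, w, v=0.5):
    """Crisp VIKOR ranking. F: alternatives x criteria (larger is better),
    w: criterion weights summing to 1, v: strategy weight.
    Returns Q scores (smaller Q = better compromise ranking)."""
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    norm = np.where(f_best == f_worst, 1.0, f_best - f_worst)  # avoid /0
    regret = w * (f_best - F) / norm          # weighted distance to the ideal
    S = regret.sum(axis=1)                    # group utility
    R = regret.max(axis=1)                    # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min() + 1e-12)
         + (1 - v) * (R - R.min()) / (R.max() - R.min() + 1e-12))
    return Q

# Hypothetical example: 4 schools scored on 3 subjects (e.g., share of
# excellence + honours grades), equal weights.
F = np.array([[0.62, 0.55, 0.70],
              [0.48, 0.60, 0.52],
              [0.75, 0.40, 0.58],
              [0.66, 0.58, 0.61]])
w = np.array([1/3, 1/3, 1/3])
Q = vikor(F, w)
print("ranking (best first):", np.argsort(Q))
```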
Wilson, G. Terence; Sysko, Robyn
2013-01-01
Objective In DSM-IV, to be diagnosed with Bulimia Nervosa (BN) or the provisional diagnosis of Binge Eating Disorder (BED), an individual must experience episodes of binge eating "at least twice a week" on average, for three or six months respectively. The purpose of this review was to examine the validity and utility of the frequency criterion for BN and BED. Method Published studies evaluating the frequency criterion were reviewed. Results Our review found little evidence to support the validity or utility of the DSM-IV frequency criterion of twice-a-week binge eating; however, the number of studies available for our review was limited. Conclusion A number of options are available for the frequency criterion in DSM-V, and the optimal diagnostic threshold for binge eating remains to be determined. PMID:19610014
A variational dynamic programming approach to robot-path planning with a distance-safety criterion
NASA Technical Reports Server (NTRS)
Suh, Suk-Hwan; Shin, Kang G.
1988-01-01
An approach to robot-path planning is developed by considering both the traveling distance and the safety of the robot. A computationally-efficient algorithm is developed to find a near-optimal path with a weighted distance-safety criterion by using a variational calculus and dynamic programming (VCDP) method. The algorithm is readily applicable to any factory environment by representing the free workspace as channels. A method for deriving these channels is also proposed. Although it is developed mainly for two-dimensional problems, this method can be easily extended to a class of three-dimensional problems. Numerical examples are presented to demonstrate the utility and power of this method.
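A bare-bones way to see the weighted distance-safety trade-off is a stage-wise dynamic program over candidate points, as sketched below; the channel representation, weighting, and clearance model here are simplifications invented for the illustration, not the paper's VCDP formulation.

```python
import numpy as np

def plan_path(stages, w=0.7):
    """Stage-wise dynamic programming over candidate points (x, y, clearance).
    The edge cost mixes travelled distance with a safety penalty that grows
    as clearance to obstacles shrinks; w weights distance against safety."""
    def cost(p, q):
        dist = np.hypot(q[0] - p[0], q[1] - p[1])
        return w * dist + (1.0 - w) / max(q[2], 1e-3)

    best = np.zeros(len(stages[0]))          # cost-to-come for stage 0 points
    back = []                                # backpointers, one array per stage
    for k in range(1, len(stages)):
        new_best = np.full(len(stages[k]), np.inf)
        ptr = np.zeros(len(stages[k]), dtype=int)
        for j, q in enumerate(stages[k]):
            for i, p in enumerate(stages[k - 1]):
                c = best[i] + cost(p, q)
                if c < new_best[j]:
                    new_best[j], ptr[j] = c, i
        best = new_best
        back.append(ptr)

    j = int(np.argmin(best))                 # cheapest end point, then trace back
    path = [j]
    for ptr in reversed(back):
        j = int(ptr[j])
        path.append(j)
    return list(reversed(path)), float(np.min(best))

stages = [[(0, 0, 1.0)],
          [(1, -1, 0.4), (1, 1, 1.2)],
          [(2, 0, 0.8), (2, 2, 0.3)],
          [(3, 0, 1.0)]]
print(plan_path(stages, w=0.7))
```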
Bayesian transformation cure frailty models with multivariate failure time data.
Yin, Guosheng
2008-12-10
We propose a class of transformation cure frailty models to accommodate a survival fraction in multivariate failure time data. Established through a general power transformation, this family of cure frailty models includes the proportional hazards and the proportional odds modeling structures as two special cases. Within the Bayesian paradigm, we obtain the joint posterior distribution and the corresponding full conditional distributions of the model parameters for the implementation of Gibbs sampling. Model selection is based on the conditional predictive ordinate statistic and deviance information criterion. As an illustration, we apply the proposed method to a real data set from dentistry.
NASA Technical Reports Server (NTRS)
Wu, S. T.
1974-01-01
The responses of the solar atmosphere to an outward-propagating shock are examined by employing the Lax-Wendroff method to solve the set of nonlinear partial differential equations in the model of the solar atmosphere. It is found that this theoretical model can be used to explain the solar phenomena of surge and spray. A criterion to discriminate between surge and spray is established, and detailed information concerning the density, velocity, and temperature distributions with respect to height and time is presented. The complete computer program is also included.
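For readers unfamiliar with the scheme, the sketch below applies Lax-Wendroff to the simplest possible case, 1-D linear advection with periodic boundaries; the paper's solar-atmosphere application involves the full nonlinear system and is far more involved.

```python
import numpy as np

a, dx, dt, nsteps = 1.0, 0.02, 0.01, 40
c = a * dt / dx                       # Courant number; stability needs |c| <= 1
x = np.arange(0.0, 1.0, dx)
u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial Gaussian pulse

for _ in range(nsteps):
    up = np.roll(u, -1)               # u_{j+1} (periodic)
    um = np.roll(u, 1)                # u_{j-1}
    u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

# The pulse should have advected from x = 0.3 to roughly x = 0.3 + a*nsteps*dt = 0.7.
print("peak now near x =", round(float(x[np.argmax(u)]), 2))
```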
The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis
NASA Astrophysics Data System (ADS)
Xu, X.; Tong, S.; Wang, L.
2017-12-01
Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on the principle of minimum output energy of the seismic signal; this criterion relies on second-order statistics and cannot achieve multiple attenuation when the primaries and multiples are non-orthogonal. In order to solve this problem, we combine a feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, in order to match the predicted multiples to the real multiples in amplitude and phase, we design an expanded pseudo-multichannel matching filtering method to obtain a more accurate matching result. Finally, we apply an improved FastICA algorithm, based on the maximum non-Gaussianity criterion for the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that no prior information is needed to predict the multiples, and a better separation result can be achieved. The method has been applied to several synthetic datasets generated by the finite-difference modelling technique and to the Sigsbee2B model multiple data; the primaries and multiples are non-orthogonal in these models. The experiments show that after three to four iterations we obtain satisfactory multiple predictions. Using our matching method and FastICA adaptive multiple subtraction, we can not only effectively preserve the energy of the effective waves in the seismic records, but also effectively suppress the free-surface multiples, especially the multiples related to the middle and deep areas.
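As a toy stand-in for the separation step (not the authors' seismic workflow), scikit-learn's FastICA, which also maximizes non-Gaussianity, can unmix two synthetic traces that are different mixtures of a "primary" and a "multiple" waveform; the signals and mixing matrix below are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two non-Gaussian sources mixed into two observed traces.
t = np.linspace(0.0, 1.0, 1000)
primary = np.sign(np.sin(7.0 * 2 * np.pi * t))        # square-wave "primary"
multiple = np.sin(13.0 * 2 * np.pi * t + 0.5)         # sinusoidal "multiple"
S = np.c_[primary, multiple]
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                             # mixing matrix
X = S @ A.T                                            # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                           # separated components
corr = np.corrcoef(S_est.T, S.T)[:2, 2:]
print("|correlation| with true sources:\n", np.round(np.abs(corr), 2))
```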
Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan
2017-01-01
This study aimed at assessing the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and at finding a simple model describing the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by an echocardiographic method and compared with 35 euthyroid patients (E-group) and 25 healthy people (C-group). In order to identify the factors causing pulmonary hypertension, the statistical method of comparing arithmetic means was used. The functional relation between the two random variables (PAPs and each of the factors determined within our study) can be expressed by a linear or non-linear function. By applying the linear regression method described by a first-degree equation, the regression line (linear model) was determined; by applying the non-linear regression method described by a second-degree equation, a parabola-type regression curve (non-linear or polynomial model) was determined. We compared and validated these two models by calculating the coefficient of determination (criterion 1), comparing residuals (criterion 2), applying the AIC (criterion 3) and using the F-test (criterion 4). Of the H-group, 47% had pulmonary hypertension that was completely reversible on reaching euthyroidism. The factors causing pulmonary hypertension were identified: previously known factors - the level of free thyroxine, pulmonary vascular resistance and cardiac output; new factors identified in this study - pretreatment period, age and systolic blood pressure. According to the four criteria and to clinical judgment, we consider the polynomial model (graphically a parabola) to be better than the linear one. The better model describing the functional relation between pulmonary hypertension in hyperthyroidism and the factors identified in this study is thus given by a second-degree polynomial equation whose graphical representation is a parabola.
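The model comparison logic can be sketched generically: fit first- and second-degree polynomials to data with mild curvature and compare R² and a Gaussian-likelihood AIC. The data below are synthetic, not the patient measurements.

```python
import numpy as np

def fit_poly(x, y, degree):
    """Least-squares polynomial fit; returns R^2 and a Gaussian-likelihood AIC."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    rss = float(resid @ resid)
    tss = float(((y - y.mean()) ** 2).sum())
    n, k = len(y), degree + 2          # polynomial coefficients + error variance
    r2 = 1.0 - rss / tss
    aic = n * np.log(rss / n) + 2 * k  # AIC up to an additive constant
    return r2, aic

# Synthetic data with mild curvature, so the quadratic (parabola-type) model
# should be preferred by both criteria.
rng = np.random.default_rng(4)
x = np.linspace(0, 10, 53)
y = 20 + 1.5 * x + 0.4 * x**2 + rng.normal(0, 3, x.size)

for d, name in [(1, "linear"), (2, "polynomial")]:
    r2, aic = fit_poly(x, y, d)
    print(f"{name:10s}  R^2 = {r2:.3f}   AIC = {aic:.1f}")
```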
A Controlled Evaluation of the Distress Criterion for Binge Eating Disorder
Grilo, Carlos M.; White, Marney A.
2012-01-01
Objective Research has examined various aspects of the validity of the research criteria for binge eating disorder (BED) but has yet to evaluate the utility of criterion C “marked distress about binge eating.” This study examined the significance of the marked distress criterion for BED using two complementary comparisons groups. Method A total of 1075 community volunteers completed a battery of self-report instruments as part of an internet study. Analyses compared body mass index (BMI), eating-disorder psychopathology, and depressive levels in four groups: 97 participants with BED except for the distress criterion (BED-ND), 221 participants with BED including the distress criterion (BED), 79 participants with bulimia nervosa (BN), and 489 obese participants without binge-eating or purging (NBPO). Parallel analyses compared these study groups using the broadened frequency criterion (i.e., once-weekly for binge/purge behaviors) proposed for DSM-5 and the DSM-IV twice-weekly frequency criterion. Results The BED group had significantly greater eating-disorder psychopathology and depressive levels than the BED-ND group. The BED group, but not the BED-ND group, had significantly greater eating-disorder psychopathology than the NBPO comparison group. The BN group had significantly greater eating-disorder psychopathology and depressive levels than all three other groups. The group differences existed even after controlling for depression levels, BMI, and demographic variables, although some differences between the BN and BED groups were attenuated when controlling for depression levels. Conclusions These findings provide support for the validity of the “marked distress” criterion for the diagnosis of BED. PMID:21707133
Salloum, Alison; Scheeringa, Michael S.; Cohen, Judith A.; Storch, Eric A.
2014-01-01
Background In order to develop Stepped Care Trauma-Focused Cognitive Behavioral Therapy (TF-CBT), a definition of early response/non-response is needed to guide decisions about the need for subsequent treatment. Objective The purpose of this article is to (1) establish a criterion for defining an early indicator of response/non-response to the first step within Stepped Care TF-CBT, and (2) explore the preliminary clinical utility of the early response/non-response criterion. Method Data from two studies were used: (1) treatment outcome data from a clinical trial in which 17 young children (ages 3 to 6 years) received therapist-directed CBT for children with PTSS were examined to empirically establish the number of posttraumatic stress symptoms defining early treatment response/non-response; and (2) three case examples of young children in Stepped Care TF-CBT were used to explore the utility of the treatment response criterion. Results For defining responder status, an algorithm of either 3 or fewer PTSS on a clinician-rated measure or being below the clinical cutoff score on a parent-rated measure of childhood PTSS, together with being rated as improved, much improved or free of symptoms, functioned well for determining whether or not to step up to more intensive treatment. Case examples demonstrated how the criterion was used to guide subsequent treatment, and showed that responder status after Step One may or may not be aligned with parent preference. Conclusion Although further investigation is needed, the responder status criterion used for young children after Step One of Stepped Care TF-CBT appears promising. PMID:25663796
Pak, Mehmet; Gülci, Sercan; Okumuş, Arif
2018-01-06
This study focuses on the geo-statistical assessment of spatial estimation models of forest crimes. Geographic information systems (GIS), used widely in the assessment of crime and crime-related variables, help with the detection of forest crimes in rural regions. In this study, forest crimes (forest encroachment, illegal use, illegal timber logging, etc.) are assessed holistically and the modeling was performed with ten different independent variables in a GIS environment. The research areas are three Forest Enterprise Chiefs (Baskonus, Cinarpinar, and Hartlap) affiliated to the Kahramanmaras Forest Regional Directorate in Kahramanmaras. An estimation model was designed using ordinary least squares (OLS) and geographically weighted regression (GWR) methods, which are often used in spatial association analyses. Three different models were proposed in order to increase the accuracy of the estimation: variables with a variance inflation factor (VIF) lower than 7.5 were used in Model I, variables with a VIF lower than 4 in Model II, and only variables with significant robust probability values in Model III. Afterwards, the lowest corrected Akaike Information Criterion (AICc) and the highest R² value were used as the criteria for comparing the models. Consequently, Model III proved to be more accurate than the other models. For Model III, while AICc was 328,491 and R² was 0.634 for the OLS-3 model, AICc was 318,489 and R² was 0.741 for the GWR-3 model. In this respect, the use of GIS for combating forest crimes provides different scenarios and tangible information that will help to take political and strategic measures.
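For reference, the small-sample correction used for model comparison is the standard one,

```latex
\mathrm{AIC}_c \;=\; \mathrm{AIC} + \frac{2k(k+1)}{n-k-1}
```

where $k$ is the number of estimated parameters and $n$ the number of observations; lower AICc (together with higher R²) indicates the preferred specification.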
ERIC Educational Resources Information Center
Combrinck, Celeste; Scherman, Vanessa; Maree, David
2016-01-01
This study describes how criterion-referenced feedback was produced from English language, mathematics and natural sciences monitoring assessments. The assessments were designed for grades 8 to 11 to give an overall indication of curriculum-standards attained in a given subject over the course of a year (N = 1113). The Rasch Item Map method was…
ERIC Educational Resources Information Center
Naji Qasem, Mamun Ali; Ahmad Gul, Showkeen Bilal
2014-01-01
The study was conducted to determine the effect of item direction (positive or negative) on the factorial construction and criterion-related validity of a Likert scale. The descriptive survey research method was used for the study, and the sample consisted of 510 undergraduate students selected by using a random sampling technique. A scale developed by…
ERIC Educational Resources Information Center
Li, Zhi; Feng, Hui-Hsien; Saricaoglu, Aysel
2017-01-01
This classroom-based study employs a mixed-methods approach to exploring both short-term and long-term effects of Criterion feedback on ESL students' development of grammatical accuracy. The results of multilevel growth modeling indicate that Criterion feedback helps students in both intermediate-high and advanced-low levels reduce errors in eight…
Harris, Janet L; Booth, Andrew; Cargo, Margaret; Hannes, Karin; Harden, Angela; Flemming, Kate; Garside, Ruth; Pantoja, Tomas; Thomas, James; Noyes, Jane
2018-05-01
This paper updates previous Cochrane guidance on question formulation, searching, and protocol development, reflecting recent developments in methods for conducting qualitative evidence syntheses to inform Cochrane intervention reviews. Examples are used to illustrate how decisions about boundaries for a review are formed via an iterative process of constructing lines of inquiry and mapping the available information to ascertain whether evidence exists to answer questions related to effectiveness, implementation, feasibility, appropriateness, economic evidence, and equity. The process of question formulation allows reviewers to situate the topic in relation to how it informs and explains effectiveness, using the criterion of meaningfulness, appropriateness, feasibility, and implementation. Questions related to complex questions and interventions can be structured by drawing on an increasingly wide range of question frameworks. Logic models and theoretical frameworks are useful tools for conceptually mapping the literature to illustrate the complexity of the phenomenon of interest. Furthermore, protocol development may require iterative question formulation and searching. Consequently, the final protocol may function as a guide rather than a prescriptive route map, particularly in qualitative reviews that ask more exploratory and open-ended questions. Copyright © 2017 Elsevier Inc. All rights reserved.
Shear velocity criterion for incipient motion of sediment
Simoes, Francisco J.
2014-01-01
The prediction of incipient motion has had great importance to the theory of sediment transport. The most commonly used methods are based on the concept of critical shear stress and employ an approach similar, or identical, to the Shields diagram. An alternative method that uses the movability number, defined as the ratio of the shear velocity to the particle’s settling velocity, was employed in this study. A large amount of experimental data were used to develop an empirical incipient motion criterion based on the movability number. It is shown that this approach can provide a simple and accurate method of computing the threshold condition for sediment motion.
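Since the movability number is just the ratio of shear velocity to settling velocity, a minimal sketch of the threshold check looks as follows; the 0.2 cut-off used here is an assumed placeholder, not the empirical criterion fitted in the study (which varies with dimensionless particle size).

```python
def movability_number(shear_velocity, settling_velocity):
    """Movability number = u* / w_s (shear velocity over particle settling velocity)."""
    return shear_velocity / settling_velocity

# Illustrative values in m/s; the 0.2 threshold is a placeholder, not the paper's fitted curve.
u_star, w_s = 0.03, 0.12
k = movability_number(u_star, w_s)
print(f"movability number = {k:.2f}; incipient motion predicted: {k > 0.2}")
```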
[A new method of investigation of "child's" behavior (infant-mother attachment) of newborn rats].
Stovolosov, I S; Dubynin, V A; Kamenskiĭ, A A
2010-01-01
A new method of studying the "child's" (maternal bonding) behavior of newborn rats was developed. The efficiency of the method was demonstrated in an assessment of the dopaminergic control of infant-mother attachment. The selective D2-antagonist clebopride, applied in doses subthreshold for motor activity, caused a decrease in the pups' striving to be in contact with the dam. On the basis of the features analyzed (latent periods and expression of various behavioral components), an integrated criterion for the assessment of "child's" reactions was suggested. Application of this criterion made it possible to neutralize the high individual variability of behavior typical of newborns.
Thermal Signature Identification System (TheSIS)
NASA Technical Reports Server (NTRS)
Merritt, Scott; Bean, Brian
2015-01-01
We characterize both nonlinear and high order linear responses of fiber-optic and optoelectronic components using spread spectrum temperature cycling methods. This Thermal Signature Identification System (TheSIS) provides much more detail than conventional narrowband or quasi-static temperature profiling methods. This detail allows us to match components more thoroughly, detect subtle reversible shifts in performance, and investigate the cause of instabilities or irreversible changes. In particular, we create parameterized models of athermal fiber Bragg gratings (FBGs), delay line interferometers (DLIs), and distributed feedback (DFB) lasers, then subject the alternative models to selection via the Akaike Information Criterion (AIC). Detailed pairing of components, e.g. FBGs, is accomplished by means of weighted distance metrics or norms, rather than on the basis of a single parameter, such as center wavelength.
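A schematic of the AIC-based selection step among alternative parameterized models might look like the sketch below; the thermal-response data and the two candidate model forms are invented stand-ins, not the actual FBG, DLI or DFB parameterizations used by TheSIS.

```python
import numpy as np
from scipy.optimize import curve_fit

def aic_from_rss(rss, n, k):
    """Gaussian-likelihood AIC: n*ln(RSS/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

# Hypothetical temperature-response data; the candidate models below stand in for
# alternative component parameterizations.
rng = np.random.default_rng(1)
T = np.linspace(-10, 60, 120)
y = 1.5 + 0.02 * T + 0.0004 * T**2 + rng.normal(scale=0.01, size=T.size)

models = {
    "linear":    lambda t, a, b: a + b * t,
    "quadratic": lambda t, a, b, c: a + b * t + c * t**2,
}
for name, f in models.items():
    popt, _ = curve_fit(f, T, y)
    rss = np.sum((y - f(T, *popt)) ** 2)
    print(name, "AIC =", round(aic_from_rss(rss, T.size, len(popt)), 1))
```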
Lindström, Martin; Axén, Elin; Lindström, Christine; Beckman, Anders; Moghaddassi, Mahnaz; Merlo, Juan
2006-12-01
The aim of this study was to investigate the influence of contextual (social capital and administrative/neo-materialist) and individual factors on lack of access to a regular doctor. The 2000 public health survey in Scania is a cross-sectional study. A total of 13,715 persons answered a postal questionnaire, representing 59% of the random sample. A multilevel logistic regression model, with individuals at the first level and municipalities at the second, was performed. The effect (intra-class correlations, cross-level modification and odds ratios) of individual and municipality (social capital and health care district) factors on lack of access to a regular doctor was analysed using a simulation method. The Deviance Information Criterion (DIC) was used as the information criterion for the models. The second-level municipality variance in lack of access to a regular doctor is substantial even in the final models with all individual and contextual variables included. The model that results in the largest reduction in DIC is the model including age, sex and individual social participation (which is a network aspect of social capital), but the models which include administrative and social capital second-level factors also reduced the DIC values. This study suggests that both administrative health care district and social capital may partly explain the individual's self-reported lack of access to a regular doctor.
Ng, Edmond S-W; Diaz-Ordaz, Karla; Grieve, Richard; Nixon, Richard M; Thompson, Simon G; Carpenter, James R
2016-10-01
Multilevel models provide a flexible modelling framework for cost-effectiveness analyses that use cluster randomised trial data. However, there is a lack of guidance on how to choose the most appropriate multilevel models. This paper illustrates an approach for deciding what level of model complexity is warranted; in particular how best to accommodate complex variance-covariance structures, right-skewed costs and missing data. Our proposed models differ according to whether or not they allow individual-level variances and correlations to differ across treatment arms or clusters and by the assumed cost distribution (Normal, Gamma, Inverse Gaussian). The models are fitted by Markov chain Monte Carlo methods. Our approach to model choice is based on four main criteria: the characteristics of the data, model pre-specification informed by the previous literature, diagnostic plots and assessment of model appropriateness. This is illustrated by re-analysing a previous cost-effectiveness analysis that uses data from a cluster randomised trial. We find that the most useful criterion for model choice was the deviance information criterion, which distinguishes amongst models with alternative variance-covariance structures, as well as between those with different cost distributions. This strategy for model choice can help cost-effectiveness analyses provide reliable inferences for policy-making when using cluster trials, including those with missing data. © The Author(s) 2013.
A Group Action Method for Construction of Strong Substitution Box
NASA Astrophysics Data System (ADS)
Jamal, Sajjad Shaukat; Shah, Tariq; Attaullah, Atta
2017-06-01
In this paper, a method to develop a cryptographically strong substitution box is presented, which can be used in multimedia security and data hiding techniques. The construction algorithm depends on the action of a projective general linear group over the set of units of a finite commutative ring. The strength of the substitution box and its ability to create confusion are assessed with different available analyses. Moreover, its ability to resist malicious attacks is also evaluated. The substitution box is examined by the bit independence criterion, the strict avalanche criterion, the nonlinearity test, the linear approximation probability test and the differential approximation probability test. The substitution box is compared with well-recognized substitution boxes such as AES, Gray, APA, S8, prime of residue, Xyi and Skipjack. The comparison shows encouraging results about the strength of the proposed box. The majority logic criterion is also calculated to analyze its strength and practical implementation.
Measures and Interpretations of Vigilance Performance: Evidence Against the Detection Criterion
NASA Technical Reports Server (NTRS)
Balakrishnan, J. D.
1998-01-01
Operators' performance in a vigilance task is often assumed to depend on their choice of a detection criterion. When the signal rate is low this criterion is set high, causing the hit and false alarm rates to be low. With increasing time on task the criterion presumably tends to increase even further, thereby further decreasing the hit and false alarm rates. Virtually all of the empirical evidence for this simple interpretation is based on estimates of the bias measure Beta from signal detection theory. In this article, I describe a new approach to studying decision making that does not require the technical assumptions of signal detection theory. The results of this new analysis suggest that the detection criterion is never biased toward either response, even when the signal rate is low and the time on task is long. Two modifications of the signal detection theory framework are considered to account for this seemingly paradoxical result. The first assumes that the signal rate affects the relative sizes of the variances of the information distributions; the second assumes that the signal rate affects the logic of the operator's stopping rule. Actual or potential applications of this research include the improved training and performance assessment of operators in areas such as product quality control, air traffic control, and medical and clinical diagnosis.
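For context, the conventional equal-variance signal detection quantities discussed above (sensitivity d', criterion location c and the bias measure Beta) can be computed from hit and false-alarm rates as in this sketch; the rates shown are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance signal detection measures from hit and false-alarm rates."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f                # sensitivity
    c = -0.5 * (z_h + z_f)             # criterion location
    beta = float(np.exp(c * d_prime))  # likelihood-ratio bias (Beta)
    return d_prime, c, beta

# Illustrative vigilance-style rates: low signal rate, low hit and false-alarm rates.
print(sdt_measures(0.60, 0.02))
```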
Event-based cluster synchronization of coupled genetic regulatory networks
NASA Astrophysics Data System (ADS)
Yue, Dandan; Guan, Zhi-Hong; Li, Tao; Liao, Rui-Quan; Liu, Feng; Lai, Qiang
2017-09-01
In this paper, the cluster synchronization of coupled genetic regulatory networks with a directed topology is studied by using the event-based strategy and pinning control. An event-triggered condition with a threshold consisting of the neighbors' discrete states at their own event time instants and a state-independent exponential decay function is proposed. The intra-cluster states information and extra-cluster states information are involved in the threshold in different ways. By using the Lyapunov function approach and the theories of matrices and inequalities, we establish the cluster synchronization criterion. It is shown that both the avoidance of continuous transmission of information and the exclusion of the Zeno behavior are ensured under the presented triggering condition. Explicit conditions on the parameters in the threshold are obtained for synchronization. The stability criterion of a single GRN is also given under the reduced triggering condition. Numerical examples are provided to validate the theoretical results.
Sampling in the light of Wigner distribution.
Stern, Adrian; Javidi, Bahram
2004-03-01
We propose a new method for analysis of the sampling and reconstruction conditions of real and complex signals by use of the Wigner domain. It is shown that the Wigner domain may provide a better understanding of the sampling process than the traditional Fourier domain. For example, it explains how certain non-bandlimited complex functions can be sampled and perfectly reconstructed. On the basis of observations in the Wigner domain, we derive a generalization to the Nyquist sampling criterion. By using this criterion, we demonstrate simple preprocessing operations that can adapt a signal that does not fulfill the Nyquist sampling criterion. The preprocessing operations demonstrated can be easily implemented by optical means.
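For comparison with the Wigner-domain generalization described above, the classical Fourier-domain Nyquist check for a real bandlimited signal is simply a rate comparison, as in this sketch (bandwidth and sampling rates are illustrative).

```python
def nyquist_ok(signal_bandwidth_hz, sampling_rate_hz):
    """Classical Nyquist criterion for a real bandlimited signal: fs > 2*B."""
    return sampling_rate_hz > 2.0 * signal_bandwidth_hz

# A 3 kHz-bandwidth signal sampled at 8 kHz satisfies the classical criterion.
print(nyquist_ok(3_000, 8_000))   # True
print(nyquist_ok(3_000, 5_000))   # False: aliasing expected under Fourier analysis
```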
NASA Astrophysics Data System (ADS)
Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping
2018-05-01
Aiming at providing a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in the structure, is used instead of the displacement mode shape to enhance model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate between the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide global information about the structure, are used to guarantee the accuracy of the modal properties of the global model. Then, the weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is used as the objective function in the proposed dynamic FE model updating procedure. A hybrid genetic/pattern-search optimization algorithm is adopted to perform the updating. Numerical simulation and a model updating experiment on a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method can update the uncertain parameters with good robustness, and that the updated dynamic FE model of the beam structure, which correctly predicts both the natural frequencies and the local dynamic strains, is reliable for subsequent dynamic analysis and dynamic strength evaluation.
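The coordinate strain modal assurance criterion builds on the usual modal assurance criterion (MAC) correlation between experimental and analytical mode shapes; a minimal sketch of the plain MAC for two strain mode shape vectors is shown below, with made-up gauge values (this is not the weighted objective function of the paper itself).

```python
import numpy as np

def mac(phi_exp, phi_ana):
    """Modal assurance criterion between two real mode-shape vectors (1 = perfectly correlated)."""
    num = np.abs(phi_exp @ phi_ana) ** 2
    den = (phi_exp @ phi_exp) * (phi_ana @ phi_ana)
    return num / den

# Hypothetical strain mode shapes sampled at six gauge locations on a beam.
phi_test = np.array([0.10, 0.35, 0.62, 0.80, 0.55, 0.20])
phi_fem  = np.array([0.12, 0.33, 0.60, 0.83, 0.50, 0.22])
print(f"MAC = {mac(phi_test, phi_fem):.3f}")
```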
Meissner, Christian A; Tredoux, Colin G; Parker, Janat F; MacLin, Otto H
2005-07-01
Many eyewitness researchers have argued for the application of a sequential alternative to the traditional simultaneous lineup, given its role in decreasing false identifications of innocent suspects (sequential superiority effect). However, Ebbesen and Flowe (2002) have recently noted that sequential lineups may merely bring about a shift in response criterion, having no effect on discrimination accuracy. We explored this claim, using a method that allows signal detection theory measures to be collected from eyewitnesses. In three experiments, lineup type was factorially combined with conditions expected to influence response criterion and/or discrimination accuracy. Results were consistent with signal detection theory predictions, including that of a conservative criterion shift with the sequential presentation of lineups. In a fourth experiment, we explored the phenomenological basis for the criterion shift, using the remember-know-guess procedure. In accord with previous research, the criterion shift in sequential lineups was associated with a reduction in familiarity-based responding. It is proposed that the relative similarity between lineup members may create a context in which fluency-based processing is facilitated to a greater extent when lineup members are presented simultaneously.
Establishment of Application Guidance for OTC non-Kampo Crude Drug Extract Products in Japan
Somekawa, Layla; Maegawa, Hikoichiro; Tsukada, Shinsuke; Nakamura, Takatoshi
2017-01-01
Currently, there are no standardized regulatory systems for herbal medicinal products worldwide. Communication and sharing of knowledge between different regulatory systems will lead to mutual understanding and might help identify topics which deserve further discussion in the establishment of common standards. Regulatory information on traditional herbal medicinal products in Japan has been updated by the establishment of the Application Guidance for over-the-counter non-Kampo Crude Drug Extract Products. We report here on this updated regulatory information. The guidance indicates methods for comparing a Crude Drug Extract formulation with the standard decoction, criteria for application, and the key points to consider for each criterion. Establishment of the guidance contributes to improvements in public health. We hope that the regulatory information about traditional herbal medicinal products in Japan will contribute to tackling the challenging task of regulating traditional herbal products worldwide. PMID:28894633
Prediction of Hot Tearing Using a Dimensionless Niyama Criterion
NASA Astrophysics Data System (ADS)
Monroe, Charles; Beckermann, Christoph
2014-08-01
The dimensionless form of the well-known Niyama criterion is extended to include the effect of applied strain. Under applied tensile strain, the pressure drop in the mushy zone is enhanced and pores grow beyond typical shrinkage porosity without deformation. This porosity growth can be expected to align perpendicular to the applied strain and to contribute to hot tearing. A model to capture this coupled effect of solidification shrinkage and applied strain on the mushy zone is derived. The dimensionless Niyama criterion can be used to determine the critical liquid fraction value below which porosity forms. This critical value is a function of alloy properties, solidification conditions, and strain rate. Once a dimensionless Niyama criterion value is obtained from thermal and mechanical simulation results, the corresponding shrinkage and deformation pore volume fractions can be calculated. The novelty of the proposed method lies in using the critical liquid fraction at the critical pressure drop within the mushy zone to determine the onset of hot tearing. The magnitude of pore growth due to shrinkage and deformation is plotted as a function of the dimensionless Niyama criterion for an Al-Cu alloy as an example. Furthermore, a typical hot tear "lambda"-shaped curve showing deformation pore volume as a function of alloy content is produced for two Niyama criterion values.
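For orientation only, the sketch below evaluates the classical Niyama value Ny = G/√(cooling rate) from local thermal-simulation output; the dimensionless form discussed above additionally folds in alloy properties, strain rate and the critical pressure drop, which are not reproduced here.

```python
import math

def niyama(thermal_gradient, cooling_rate):
    """Classical Niyama criterion Ny = G / sqrt(dT/dt)."""
    return thermal_gradient / math.sqrt(cooling_rate)

# Illustrative local values from a casting simulation (keep units consistent,
# e.g. G in K/mm and cooling rate in K/s).
print(f"Ny = {niyama(1.2, 0.5):.2f}")
```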
Aragón-Noriega, Eugenio Alberto
2013-09-01
Growth models of marine animals, for fisheries and/or aquaculture purposes, are usually based on the popular von Bertalanffy model. This tool is mostly used because its parameters feed other fisheries models, such as yield per recruit; nevertheless, there are other alternatives (such as Gompertz, Logistic and Schnute) not yet widely used by fishery scientists that may prove useful depending on the studied species. The penshell Atrina maura has been studied for fisheries and aquaculture purposes, but its individual growth has not been studied before. The aim of this study was to model the absolute growth of the penshell A. maura using length-at-age data. For this, five models were assessed to obtain growth parameters: von Bertalanffy, Gompertz, Logistic, Schnute case 1, and Schnute and Richards. The criteria used to select the best model were the Akaike information criterion, the residual sum of squares and the adjusted R². To obtain the average asymptotic length, the multi-model inference approach was used. According to the Akaike information criterion, the Gompertz model best described the absolute growth of A. maura. Following the multi-model inference approach, the average asymptotic shell length was 218.9 mm (CI 212.3-225.5). I conclude that the multi-model approach together with the Akaike information criterion represented the most robust method for growth parameter estimation of A. maura, and that the von Bertalanffy growth model should not be selected a priori as the true model of absolute growth in bivalve mollusks such as the species studied here.
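The multi-model inference step can be reproduced schematically with Akaike weights, w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), followed by a weighted average of each model's asymptotic-length estimate; the AIC values and lengths below are placeholders rather than the fitted values for A. maura.

```python
import numpy as np

# Hypothetical (AIC, asymptotic length in mm) per growth model -- not the paper's values.
models = {
    "von Bertalanffy": (512.4, 231.0),
    "Gompertz":        (505.1, 219.5),
    "Logistic":        (509.8, 214.2),
}
aic = np.array([v[0] for v in models.values()])
linf = np.array([v[1] for v in models.values()])

delta = aic - aic.min()            # AIC differences
weights = np.exp(-0.5 * delta)     # Akaike weights (unnormalized)
weights /= weights.sum()

for (name, _), w in zip(models.items(), weights):
    print(f"{name}: Akaike weight = {w:.3f}")
print(f"model-averaged asymptotic length = {np.sum(weights * linf):.1f} mm")
```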
New segmentation-based tone mapping algorithm for high dynamic range image
NASA Astrophysics Data System (ADS)
Duan, Weiwei; Guo, Huinan; Zhou, Zuofeng; Huang, Huimin; Cao, Jianzhong
2017-07-01
Traditional tone mapping algorithms for displaying high dynamic range (HDR) images have the drawback of losing brightness impression, contrast and color information. To overcome this, we propose a new tone mapping algorithm based on dividing the image into different exposure regions. Firstly, the over-exposure region is determined using the Local Binary Pattern information of the HDR image. Then, based on the peak and average gray level of the histogram, the under-exposure and normal-exposure regions of the HDR image are selected separately. Finally, the different exposure regions are mapped by differentiated tone mapping methods to get the final result. The experimental results show that the proposed algorithm achieves better performance than other algorithms in both visual quality and an objective contrast criterion.
A Shot Number Based Approach to Performance Analysis in Table Tennis
Yoshida, Kazuto; Yamada, Koshi
2017-01-01
The current study proposes a novel approach that improves the conventional performance analysis in table tennis by introducing the concept of frequency, or the number of shots, for each shot number. The improvements over the conventional method are as follows: better accuracy in the evaluation of players' skills and tactics, additional insights into scoring and returning skills, and ease of understanding the results through a single criterion. A performance analysis of matches played at the 2012 Summer Olympics in London was conducted using the proposed method. The results showed effects of shot number and gender differences in table tennis. Furthermore, comparisons were made between Chinese players and players from other countries, which shed light on the skills and tactics of the Chinese players. The present findings demonstrate that the proposed method provides useful information and has some advantages over the conventional method. PMID:28210334
Diederich, A; Schreier, M
2010-09-01
In order to achieve broad acceptance of priority setting in healthcare, a public debate that includes the preferences of the general public seems essential. In Germany, objections to public involvement are to some extent based on the perception that individuals have an inherent personal bias and cannot represent interests other than their own. The following excerpt from a more comprehensive study reports on the acceptance of personal responsibility as a criterion for prioritizing. A mixed-methods design combines a qualitative interview study with a quantitative survey representative of the German public. Both the interview study and the survey demonstrate that behavior harmful to one's health is generally accepted as a criterion for posteriorizing patients, mostly regardless of self-interest. In addition, the interview study shows reasons for acceptance or refusal of the self-inflicted-behavior criterion.
van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter
2015-08-07
Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of times series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use.
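A minimal sketch, assuming the EMA diary is already in a pandas DataFrame, of what one automated VAR step can look like with statsmodels (lag order chosen by an information criterion, followed by a Granger causality test); AutoVAR itself searches a larger model space and applies further checks, and the variables used here are invented.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical two-variable EMA diary (e.g. daily mood and activity scores).
rng = np.random.default_rng(2)
n = 90
activity = rng.normal(size=n).cumsum() * 0.1 + 5
mood = 0.4 * np.roll(activity, 1) + rng.normal(scale=0.5, size=n) + 2
df = pd.DataFrame({"mood": mood[1:], "activity": activity[1:]})

model = VAR(df)
fit = model.fit(maxlags=5, ic="bic")        # lag order chosen by BIC
print("selected lag order:", fit.k_ar)
print("AIC:", round(fit.aic, 2), "BIC:", round(fit.bic, 2))
gc = fit.test_causality("mood", ["activity"], kind="f")
print("Granger causality p-value (activity -> mood):", round(gc.pvalue, 4))
```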
Feature combinations and the divergence criterion
NASA Technical Reports Server (NTRS)
Decell, H. P., Jr.; Mayekar, S. M.
1976-01-01
Classifying large quantities of multidimensional remotely sensed agricultural data requires efficient and effective classification techniques and the construction of certain transformations of a dimension reducing, information preserving nature. The construction of transformations that minimally degrade information (i.e., class separability) is described. Linear dimension reducing transformations for multivariate normal populations are presented. Information content is measured by divergence.
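The divergence between two normally distributed classes, which underlies this kind of feature-selection criterion, has a closed form in terms of the class means and covariances; the sketch below evaluates it for two made-up spectral classes.

```python
import numpy as np

def divergence(mu1, cov1, mu2, cov2):
    """Divergence (symmetric KL-type separability measure) between two Gaussian classes."""
    inv1, inv2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    d = (mu1 - mu2).reshape(-1, 1)
    term_cov = 0.5 * np.trace((cov1 - cov2) @ (inv2 - inv1))
    term_mean = 0.5 * np.trace((inv1 + inv2) @ (d @ d.T))
    return term_cov + term_mean

# Two hypothetical spectral classes in a 3-band feature space.
mu_a, mu_b = np.array([0.2, 0.5, 0.1]), np.array([0.3, 0.4, 0.3])
cov_a = np.diag([0.010, 0.020, 0.015])
cov_b = np.diag([0.012, 0.018, 0.020])
print(f"divergence = {divergence(mu_a, cov_a, mu_b, cov_b):.2f}")
```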
Orbital Evasive Target Tracking and Sensor Management
2012-03-30
The goal of sensor management is to maximize the total information gain in the observer-to-target assignment. We compare the information-based approach with a game-theoretic criterion for orbital evasive target tracking with multiple space-borne observers. The results indicate that the game-theoretic approach is more effective than the information-based approach.
Dolejsi, Erich; Bodenstorfer, Bernhard; Frommlet, Florian
2014-01-01
The prevailing method of analyzing GWAS data is still to test each marker individually, although from a statistical point of view it is quite obvious that in the case of complex traits such single-marker tests are not ideal. Recently several model selection approaches for GWAS have been suggested, most of them based on LASSO-type procedures. Here we discuss an alternative model selection approach based on a modification of the Bayesian Information Criterion (mBIC2), which was previously shown to have certain asymptotic optimality properties in terms of minimizing the misclassification error. Heuristic search strategies are introduced which attempt to find the model that minimizes mBIC2 and which are efficient enough to allow the analysis of GWAS data. Our approach is implemented in a software package called MOSGWA. Its performance in case-control GWAS is compared with the two algorithms HLASSO and d-GWASelect, as well as with single-marker tests, in a simulation study based on real SNP data from the POPRES sample. Our results show that MOSGWA performs slightly better than HLASSO, where specifically for more complex models MOSGWA is more powerful with only a slight increase in Type I error. On the other hand, according to our simulations, GWASelect does not control the Type I error at all when used to automatically determine the number of important SNPs. We also reanalyze the GWAS data from the Wellcome Trust Case-Control Consortium and compare the findings of the different procedures, where MOSGWA detects for complex diseases a number of interesting SNPs which are not found by other methods. PMID:25061809
Tateishi, Seiichiro; Watase, Mariko; Fujino, Yoshihisa; Mori, Koji
2016-01-01
In Japan, employee fitness for work is determined by annual medical examinations. It may be possible to reduce the variability in the results of work fitness determination if there is consensus among experts regarding consideration of limitation of work on the basis of a single parameter. Consensus building was attempted among 104 occupational physicians employing a 3-round Delphi method. For the medical examination parameters for which at least 50% of participants agreed in the 3rd round of the survey that the parameter would independently merit consideration of limitation of work, the values proposed as criterion values that trigger such consideration were sought. Parameters, along with their most frequently proposed criterion values, were defined in the study group meeting as parameters for which consensus was reached. Consensus was obtained for 8 parameters: systolic blood pressure 180 mmHg (86.6%), diastolic blood pressure 110 mmHg (85.9%), postprandial plasma glucose 300 mg/dl (76.9%), fasting plasma glucose 200 mg/dl (69.1%), creatinine 2.0 mg/dl (67.2%), HbA1c (JDS) 10% (62.3%), ALT 200 U/l (61.6%), and Hb 8 g/dl (58.5%). To support physicians who advise employers on work-related measures based on the results of general medical examinations of employees, expert consensus information was obtained that can serve as background material for making judgements. It is expected that the use of this information will facilitate taking appropriate measures after medical examination of employees.
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike’s Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which not necessarily need be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we also should not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size. PMID:24671204
ERIC Educational Resources Information Center
Todd, Richard D.; Huang, Hongyan; Henderson, Cynthia A.
2008-01-01
Background: To test whether the retrospective reporting of the age of onset impairment criterion for attention deficit/hyperactivity disorder (ADHD) required in the "Diagnostic and Statistical Manual of Mental Disorders-IV" (DSM-IV) complicates identification of new and known child and adolescent cases later in life. Methods: A birth-records-based…
Continuous control of chaos based on the stability criterion.
Yu, Hong Jie; Liu, Yan Zhu; Peng, Jian Hua
2004-06-01
A method of chaos control based on a stability criterion is proposed in the present paper. This method can stabilize chaotic systems onto a desired periodic orbit by a small time-continuous nonlinear feedback perturbation. It does not require linearization of the system around the stabilized orbit, and only an approximate location of the desired periodic orbit is required, which can be detected automatically in the control process. The control can be started at any moment by choosing an appropriate perturbation restriction condition. Greater flexibility and convenience appear to be the main advantages of this method. The control of the attitude motion of a spacecraft, the Rössler system, and two coupled Duffing oscillators is discussed as numerical examples.
Prince, Martin J; de Rodriguez, Juan Llibre; Noriega, L; Lopez, A; Acosta, Daisy; Albanese, Emiliano; Arizaga, Raul; Copeland, John RM; Dewey, Michael; Ferri, Cleusa P; Guerra, Mariella; Huang, Yueqin; Jacob, KS; Krishnamoorthy, ES; McKeigue, Paul; Sousa, Renata; Stewart, Robert J; Salas, Aquiles; Sosa, Ana Luisa; Uwakwa, Richard
2008-01-01
Background The criterion for dementia implicit in DSM-IV is widely used in research but not fully operationalised. The 10/66 Dementia Research Group sought to do this using assessments from their one-phase dementia diagnostic research interview, and to validate the resulting algorithm in a population-based study in Cuba. Methods The criterion was operationalised as a computerised algorithm, applying clinical principles, based upon the 10/66 cognitive tests, clinical interview and informant reports: the Community Screening Instrument for Dementia, the CERAD 10-word list learning and animal naming tests, the Geriatric Mental State, and the History and Aetiology Schedule - Dementia Diagnosis and Subtype. This was validated in Cuba against a local clinician DSM-IV diagnosis and the 10/66 dementia diagnosis (originally calibrated probabilistically against clinician DSM-IV diagnoses in the 10/66 pilot study). Results The DSM-IV sub-criteria were plausibly distributed among clinically diagnosed dementia cases and controls. The clinician diagnoses agreed better with the 10/66 dementia diagnosis than with the more conservative computerised DSM-IV algorithm. The DSM-IV algorithm was particularly likely to miss less severe dementia cases. Those with a 10/66 dementia diagnosis who did not meet the DSM-IV criterion were less cognitively and functionally impaired than the DSM-IV-confirmed cases, but still grossly impaired compared with those free of dementia. Conclusion The DSM-IV criterion, strictly applied, defines a narrow category of unambiguous dementia characterized by marked impairment. It may be specific but incompletely sensitive to clinically relevant cases. The 10/66 dementia diagnosis defines a broader category that may be more sensitive, identifying genuine cases beyond those defined by our DSM-IV algorithm, with relevance to the estimation of the population burden of this disorder. PMID:18577205
Persoskie, Alexander; Nguyen, Anh B.; Kaufman, Annette R.; Tworek, Cindy
2017-01-01
Beliefs about the relative harmfulness of one product compared to another (perceived relative harm) are central to research and regulation concerning tobacco and nicotine-containing products, but techniques for measuring such beliefs vary widely. We compared the validity of direct and indirect measures of perceived harm of e-cigarettes and smokeless tobacco (SLT) compared to cigarettes. On direct measures, participants explicitly compare the harmfulness of each product. On indirect measures, participants rate the harmfulness of each product separately, and ratings are compared. The U.S. Health Information National Trends Survey (HINTS-FDA-2015; N=3738) included direct measures of perceived harm of e-cigarettes and SLT compared to cigarettes. Indirect measures were created by comparing ratings of harm from e-cigarettes, SLT, and cigarettes on 3-point scales. Logistic regressions tested validity by assessing whether direct and indirect measures were associated with criterion variables including: ever-trying e-cigarettes, ever-trying snus, and SLT use status. Compared to the indirect measures, the direct measures of harm were more consistently associated with criterion variables. On direct measures, 26% of adults rated e-cigarettes as less harmful than cigarettes, and 11% rated SLT as less harmful than cigarettes. Direct measures appear to provide valid information about individuals’ harm beliefs, which may be used to inform research and tobacco control policy. Further validation research is encouraged. PMID:28073035
Davison, Kirsten K.; Austin, S. Bryn; Giles, Catherine; Cradock, Angie L.; Lee, Rebekka M.; Gortmaker, Steven L.
2017-01-01
Interest in evaluating and improving children’s diets in afterschool settings has grown, necessitating the development of feasible yet valid measures for capturing children’s intake in such settings. This study’s purpose was to test the criterion validity and cost of three unobtrusive visual estimation methods compared to a plate-weighing method: direct on-site observation using a 4-category rating scale and off-site rating of digital photographs taken on-site using 4- and 10-category scales. Participants were 111 children in grades 1–6 attending four afterschool programs in Boston, MA in December 2011. Researchers observed and photographed 174 total snack meals consumed across two days at each program. Visual estimates of consumption were compared to weighed estimates (the criterion measure) using intra-class correlations. All three methods were highly correlated with the criterion measure, ranging from 0.92–0.94 for total calories consumed, 0.86–0.94 for consumption of pre-packaged beverages, 0.90–0.93 for consumption of fruits/vegetables, and 0.92–0.96 for consumption of grains. For water, which was not pre-portioned, coefficients ranged from 0.47–0.52. The photographic methods also demonstrated excellent inter-rater reliability: 0.84–0.92 for the 4-point and 0.92–0.95 for the 10-point scale. The costs of the methods for estimating intake ranged from $0.62 per observation for the on-site direct visual method to $0.95 per observation for the criterion measure. This study demonstrates that feasible, inexpensive methods can validly and reliably measure children’s dietary intake in afterschool settings. Improving precision in measures of children’s dietary intake can reduce the likelihood of spurious or null findings in future studies. PMID:25596895
A new criterion needed to evaluate reliability of digital protective relays
NASA Astrophysics Data System (ADS)
Gurevich, Vladimir
2012-11-01
There is a wide range of criteria and features for evaluating reliability in engineering; but as many as there are, only one of them has been chosen to evaluate reliability of Digital Protective Relays (DPR) in the technical documentation: Mean (operating) Time Between Failures (MTBF), which has gained universal currency and has been specified in technical manuals, information sheets, tender documentation as the key indicator of DPR reliability. But is the choice of this criterion indeed wise? The answer to this question is being sought by the author of this article.
Analysis of neighborhood behavior in lead optimization and array design.
Papadatos, George; Cooper, Anthony W J; Kadirkamanathan, Visakan; Macdonald, Simon J F; McLay, Iain M; Pickett, Stephen D; Pritchard, John M; Willett, Peter; Gillet, Valerie J
2009-02-01
Neighborhood behavior describes the extent to which small structural changes defined by a molecular descriptor are likely to lead to small property changes. This study evaluates two methods for the quantification of neighborhood behavior: the optimal diagonal method of Patterson et al. and the optimality criterion method of Horvath and Jeandenans. The methods are evaluated using twelve different types of fingerprint (both 2D and 3D) with screening data derived from several lead optimization projects at GlaxoSmithKline. The principal focus of the work is the design of chemical arrays during lead optimization, and the study hence considers not only biological activity but also important drug properties such as metabolic stability, permeability, and lipophilicity. Evidence is provided to suggest that the optimality criterion method may provide a better quantitative description of neighborhood behavior than the optimal diagonal method.
A Scheme for Obtaining Secure S-Boxes Based on Chaotic Baker's Map
NASA Astrophysics Data System (ADS)
Gondal, Muhammad Asif; Abdul Raheem; Hussain, Iqtadar
2014-09-01
In this paper, a method for obtaining cryptographically strong 8 × 8 substitution boxes (S-boxes) is presented. The method is based on chaotic baker's map and a "mini version" of a new block cipher with block size 8 bits and can be easily and efficiently performed on a computer. The cryptographic strength of some 8 × 8 S-boxes randomly produced by the method is analyzed. The results show (1) all of them are bijective; (2) the nonlinearity of each output bit of them is usually about 100; (3) all of them approximately satisfy the strict avalanche criterion and output bits independence criterion; (4) they all have an almost equiprobable input/output XOR distribution.
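As an illustration of one of the tests named above, the sketch below estimates the strict avalanche criterion for an arbitrary 8-bit S-box: flipping any single input bit should change each output bit with probability close to 0.5. The S-box used is a random permutation, not the baker's-map construction of the paper.

```python
import numpy as np

def sac_matrix(sbox, n_bits=8):
    """SAC matrix: entry (i, j) = P(output bit j flips | input bit i is flipped)."""
    sac = np.zeros((n_bits, n_bits))
    for x in range(2 ** n_bits):
        for i in range(n_bits):
            diff = sbox[x] ^ sbox[x ^ (1 << i)]   # output difference for a single-bit input flip
            for j in range(n_bits):
                sac[i, j] += (diff >> j) & 1
    return sac / 2 ** n_bits

rng = np.random.default_rng(3)
random_sbox = rng.permutation(256)                # stand-in S-box, bijective by construction
sac = sac_matrix(random_sbox)
print(f"mean SAC value = {sac.mean():.3f} (ideal: 0.5)")
```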
NASA Astrophysics Data System (ADS)
Kellici, Tahsin F.; Ntountaniotis, Dimitrios; Vanioti, Marianna; Golic Grdadolnik, Simona; Simcic, Mihael; Michas, Georgios; Moutevelis-Minakakis, Panagiota; Mavromoustakos, Thomas
2017-02-01
During the synthesis of new pyrrolidinone analogs possessing biological activity, it is intriguing to assign their absolute stereochemistry, as it is well known that drug potency is influenced by stereochemistry. The combination of J-coupling information with theoretical results was used in order to establish their total stereochemistry when the chiral center of the starting material has a known absolute configuration. The J-coupling can thus be used as a sole criterion to identify the correct stereochemistry of novel synthetic analogs. This approach is extremely useful, especially in the case of analogs whose 2D NOESY spectra cannot provide this information. A few synthetic examples are given to demonstrate the significance of this approach.
Failure prediction of thin beryllium sheets used in spacecraft structures
NASA Technical Reports Server (NTRS)
Roschke, Paul N.; Mascorro, Edward; Papados, Photios; Serna, Oscar R.
1991-01-01
The primary objective of this study is to develop a method for predicting failure of thin beryllium sheets that undergo complex states of stress. Major components of the research include experimental evaluation of strength parameters for cross-rolled beryllium sheet, application of the Tsai-Wu failure criterion to plate bending problems, development of a higher-order failure criterion, application of the new criterion to a variety of structures, and incorporation of both failure criteria into a finite element code. A Tsai-Wu failure model for SR-200 sheet material is developed from available tensile data, experiments carried out by NASA on two circular plates, and compression and off-axis experiments performed in this study. The failure surface obtained from the resulting criterion forms an ellipsoid. By supplementing the experimental data used in the two-dimensional criterion and modifying previously suggested failure criteria, a multi-dimensional failure surface is proposed for thin beryllium structures. The new criterion for orthotropic material is represented by a failure surface in six-dimensional stress space. In order to determine the coefficients of the governing equation, a number of uniaxial, biaxial, and triaxial experiments are required. These experiments and a complementary ultrasonic investigation are described in detail. Finally, the validity of the criterion and the newly determined mechanical properties is established through experiments on structures composed of SR-200 sheet material. These experiments include a plate-plug arrangement under a complex state of stress and a series of plates with an out-of-plane central point load. Both criteria have been incorporated into a general-purpose finite element analysis code. The numerical simulation incrementally applies loads to the structural component being designed and checks each nodal point in the model for exceedance of a failure criterion. If stresses at all locations do not exceed the failure criterion, the load is increased and the process is repeated. Failure results for the plate-plug and clamped plate tests are accurate to within 2 percent.
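For reference, a plane-stress Tsai-Wu check of the kind incorporated into a finite element code can be sketched as below; the strength values are placeholders rather than measured SR-200 properties, and the interaction term F12 uses the common -0.5*sqrt(F11*F22) approximation.

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index; failure is predicted when the index reaches 1."""
    F1, F2 = 1 / Xt - 1 / Xc, 1 / Yt - 1 / Yc
    F11, F22, F66 = 1 / (Xt * Xc), 1 / (Yt * Yc), 1 / S**2
    F12 = -0.5 * math.sqrt(F11 * F22)            # common interaction-term approximation
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * t12**2 + 2 * F12 * s1 * s2)

# Placeholder strengths (MPa) and a trial in-plane stress state -- illustrative only.
idx = tsai_wu_index(s1=180, s2=40, t12=25, Xt=350, Xc=300, Yt=320, Yc=280, S=120)
print(f"Tsai-Wu index = {idx:.2f}; failure predicted: {idx >= 1.0}")
```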
Reliability of unstable periodic orbit based control strategies in biological systems.
Mishra, Nagender; Hasse, Maria; Biswal, B; Singh, Harinder P
2015-04-01
Presence of recurrent and statistically significant unstable periodic orbits (UPOs) in time series obtained from biological systems is now routinely used as evidence for low dimensional chaos. Extracting accurate dynamical information from the detected UPO trajectories is vital for successful control strategies that either aim to stabilize the system near the fixed point or steer the system away from the periodic orbits. A hybrid UPO detection method from return maps that combines topological recurrence criterion, matrix fit algorithm, and stringent criterion for fixed point location gives accurate and statistically significant UPOs even in the presence of significant noise. Geometry of the return map, frequency of UPOs visiting the same trajectory, length of the data set, strength of the noise, and degree of nonstationarity affect the efficacy of the proposed method. Results suggest that establishing determinism from unambiguous UPO detection is often possible in short data sets with significant noise, but derived dynamical properties are rarely accurate and adequate for controlling the dynamics around these UPOs. A repeat chaos control experiment on epileptic hippocampal slices through more stringent control strategy and adaptive UPO tracking is reinterpreted in this context through simulation of similar control experiments on an analogous but stochastic computer model of epileptic brain slices. Reproduction of equivalent results suggests that far more stringent criteria are needed for linking apparent success of control in such experiments with possible determinism in the underlying dynamics.
Huang, Yawen; Shao, Ling; Frangi, Alejandro F
2018-03-01
Multi-modality medical imaging is increasingly used for the comprehensive assessment of complex diseases, in either diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. Additionally, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. In this paper, we propose a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis while considering the fact that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To employ both paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. Then, we propose a unified model by integrating such a criterion into the joint dictionary learning and the observed common feature space for associating cross-modality data for the purpose of synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate the superior performance of the proposed model over state-of-the-art methods.
Objective Model Selection for Identifying the Human Feedforward Response in Manual Control.
Drop, Frank M; Pool, Daan M; van Paassen, Marinus Rene M; Mulder, Max; Bulthoff, Heinrich H
2018-01-01
Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at an objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false-positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.
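A schematic version of the penalized selection rule described above: a Gaussian-likelihood BIC plus an additional weighted penalty on the number of parameters, with the weight to be tuned in simulation. The exact form and weighting used in the paper may differ; this sketch only conveys the general shape, and the residual sums of squares are invented.

```python
import numpy as np

def modified_bic(rss, n, k, extra_weight=2.0):
    """BIC = n*ln(RSS/n) + k*ln(n), plus an extra complexity penalty extra_weight*k."""
    return n * np.log(rss / n) + k * np.log(n) + extra_weight * k

# Hypothetical residual sums of squares for a feedback-only and a feedback+feedforward ARX model.
n = 2000
candidates = {"feedback only (k=6)": (152.0, 6), "feedback + feedforward (k=10)": (150.5, 10)}
scores = {name: modified_bic(rss, n, k) for name, (rss, k) in candidates.items()}
best = min(scores, key=scores.get)
print(scores, "->", best)
```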
Morgado, José Mário T; Sánchez-Muñoz, Laura; Teodósio, Cristina G; Jara-Acevedo, Maria; Alvarez-Twose, Iván; Matito, Almudena; Fernández-Nuñez, Elisa; García-Montero, Andrés; Orfao, Alberto; Escribano, Luís
2012-04-01
Aberrant expression of CD2 and/or CD25 by bone marrow, peripheral blood or other extracutaneous tissue mast cells is currently used as a minor World Health Organization diagnostic criterion for systemic mastocytosis. However, the diagnostic utility of CD2 versus CD25 expression by mast cells has not been prospectively evaluated in a large series of systemic mastocytosis. Here we evaluate the sensitivity and specificity of CD2 versus CD25 expression in the diagnosis of systemic mastocytosis. Mast cells from a total of 886 bone marrow and 153 other non-bone marrow extracutaneous tissue samples were analysed by multiparameter flow cytometry following the guidelines of the Spanish Network on Mastocytosis at two different laboratories. The 'CD25+ and/or CD2+ bone marrow mast cells' World Health Organization criterion showed an overall sensitivity of 100% with 99.0% specificity for the diagnosis of systemic mastocytosis whereas CD25 expression alone presented a similar sensitivity (100%) with a slightly higher specificity (99.2%). Inclusion of CD2 did not improve the sensitivity of the test and it decreased its specificity. In tissues other than bone marrow, the mast cell phenotypic criterion proved to be less sensitive. In summary, CD2 expression does not contribute to improve the diagnosis of systemic mastocytosis when compared with aberrant CD25 expression alone, which supports the need to update and replace the minor World Health Organization 'CD25+ and/or CD2+' mast cell phenotypic diagnostic criterion by a major criterion based exclusively on CD25 expression.
Inference of gene regulatory networks from time series by Tsallis entropy
2011-01-01
Background The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems of Systems Biology nowadays. Many techniques and models have been proposed for this task. However, it is not generally possible to recover the original topology with great accuracy, mainly due to the short time series data in the face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRN inference methods based on entropy (mutual information), a new criterion function is here proposed. Results In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, and the conditional entropy is applied as the criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer function is obtained by random drawing from the set of possible Boolean functions, thus creating its dynamics. On the other hand, the DREAM time series data present variation of network size, and their topologies are based on real networks. The dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions A remarkable improvement in accuracy was observed in the experimental results, the non-Shannon entropy reducing the number of false connections in the inferred topology. The best value obtained for the free parameter of the Tsallis entropy was on average in the range 2.5 ≤ q ≤ 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/. PMID:21545720
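For reference, the Tsallis entropy that generalizes the Shannon term in such a criterion function is S_q = (1 - Σ_i p_i^q)/(q - 1), which recovers the Shannon entropy as q → 1; the sketch below evaluates it for an arbitrary discrete distribution (the probabilities are illustrative).

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1); q -> 1 recovers Shannon entropy (nats)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Distribution over the states of a hypothetical target gene given its predictors.
p = [0.6, 0.3, 0.1]
for q in (1.0, 2.5, 3.5):
    print(f"q = {q}: S_q = {tsallis_entropy(p, q):.3f}")
```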
[GSH fermentation process modeling using entropy-criterion based RBF neural network model].
Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng
2008-05-01
The prediction accuracy and generalization of GSH fermentation process models are often degraded by noise in the corresponding experimental data. In order to avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Unlike traditional MSE-criterion-based parameter learning, it considers the whole distribution structure of the training data set during parameter learning, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization and robustness, making it a promising tool for GSH fermentation process modeling.
NASA Astrophysics Data System (ADS)
Nasta, Paolo; Romano, Nunzio
2016-01-01
This study explores the feasibility of identifying the effective soil hydraulic parameterization of a layered soil profile by using a conventional unsteady drainage experiment leading to field capacity. The flux-based field capacity criterion is attained by subjecting the soil profile to a synthetic drainage process implemented numerically in the Soil-Water-Atmosphere-Plant (SWAP) model. The effective hydraulic parameterization is associated with either aggregated or equivalent parameters, the former being determined by geometrical scaling theory while the latter is obtained through an inverse modeling approach. Outcomes from both of these methods depend on information that is sometimes difficult to retrieve at the local scale and rather challenging or virtually impossible to obtain at larger scales. Knowledge of the topsoil hydraulic properties alone, for example as retrieved by a near-surface field campaign or a data assimilation technique, is often exploited as a proxy to determine the effective soil hydraulic parameterization at the largest spatial scales. The effective soil hydraulic characterizations provided by these three methods are compared by discussing the implications for their use and accounting for the trade-offs between required input information and model output reliability. To better highlight the epistemic errors associated with the different effective soil hydraulic properties and to provide more practical guidance, the layered soil profiles are then grouped by using the FAO textural classes. For the moderately heterogeneous soil profiles available, all three approaches guarantee generally good predictability of the actual field capacity values and provide adequate identification of the effective hydraulic parameters. Conversely, worse performance is encountered for highly variable vertical heterogeneity, especially when resorting to the "topsoil-only" information. In general, the best performance is guaranteed by the equivalent parameters, which might be considered a reference for comparisons with other techniques. As might be expected, the information content of the soil hydraulic properties pertaining only to the uppermost soil horizon is rather limited and is not capable of mapping out the hydrologic behavior of the real vertical soil heterogeneity, since the drainage process is significantly affected by profile layering in almost all cases.
Global Design Optimization for Fluid Machinery Applications
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa
2000-01-01
Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements, which grow with the number of design variables, and methods for predicting the model performance. Examples of applications selected from rocket propulsion components, including a supersonic turbine, an injector element and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
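To make the idea of a polynomial surrogate concrete, the sketch below fits a quadratic response surface to a small, noisy design-of-experiments sample and queries it to locate a promising design. The design variables, objective function and sample sizes are illustrative assumptions, not data from the studies summarized above.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Toy "design of experiments": two design variables, one noisy objective.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(30, 2))                 # sampled design points
y = 1.0 - X[:, 0]**2 - 0.5 * X[:, 1]**2 + 0.05 * rng.normal(size=30)

# Quadratic response surface fitted by least squares; it smooths the noise
# and gives a cheap global approximation of the objective.
surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(X, y)

# The surrogate can be queried densely to explore the design space.
grid = np.array(np.meshgrid(np.linspace(-1, 1, 51),
                            np.linspace(-1, 1, 51))).reshape(2, -1).T
best_design = grid[np.argmax(surrogate.predict(grid))]
```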
A study of finite mixture model: Bayesian approach on financial time series data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-07-01
Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model represents a statistical distribution as a mixture of component distributions, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because it has asymptotic properties that provide remarkable results. In addition, the Bayesian method also shows consistency, which means the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is studied by using the Bayesian Information Criterion. Identifying the number of components is important because an incorrect choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results showed that there is a negative relationship between rubber prices and stock market prices for all selected countries.
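A minimal sketch of BIC-based selection of the number of mixture components is shown below on synthetic data standing in for the price series. Note the assumption: the paper fits the mixture by Bayesian methods, whereas scikit-learn's `GaussianMixture` uses EM; the example only illustrates how the BIC comparison over k works.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic two-component data standing in for the price series in the paper.
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(0.0, 1.0, 300),
                    rng.normal(4.0, 0.7, 200)]).reshape(-1, 1)

# Fit k-component Gaussian mixtures and keep the k with the lowest BIC.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
        for k in range(1, 6)}
best_k = min(bics, key=bics.get)
```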
Multivariate Time Series Decomposition into Oscillation Components.
Matsuda, Takeru; Komaki, Fumiyasu
2017-08-01
Many time series are considered to be a superposition of several oscillation components. We have proposed a method for decomposing univariate time series into oscillation components and estimating their phases (Matsuda & Komaki, 2017 ). In this study, we extend that method to multivariate time series. We assume that several oscillators underlie the given multivariate time series and that each variable corresponds to a superposition of the projections of the oscillators. Thus, the oscillators superpose on each variable with amplitude and phase modulation. Based on this idea, we develop gaussian linear state-space models and use them to decompose the given multivariate time series. The model parameters are estimated from data using the empirical Bayes method, and the number of oscillators is determined using the Akaike information criterion. Therefore, the proposed method extracts underlying oscillators in a data-driven manner and enables investigation of phase dynamics in a given multivariate time series. Numerical results show the effectiveness of the proposed method. From monthly mean north-south sunspot number data, the proposed method reveals an interesting phase relationship.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minjarez-Sosa, J. Adolfo, E-mail: aminjare@gauss.mat.uson.mx; Luque-Vasquez, Fernando
This paper deals with two-person zero-sum semi-Markov games with a possibly unbounded payoff function, under a discounted payoff criterion. Assuming that the distribution of the holding times H is unknown to one of the players, we combine suitable methods of statistical estimation of H with control procedures to construct an asymptotically discount-optimal pair of strategies.
ERIC Educational Resources Information Center
Daly-Smith, Andy J. W.; McKenna, Jim; Radley, Duncan; Long, Jonathan
2011-01-01
Objective: To investigate the value of additional days of active commuting for meeting a criterion of 300+ minutes of moderate-to-vigorous physical activity (MVPA; 60+ mins/day x 5) during the school week. Methods: Based on seven-day diaries supported by teachers, binary logistic regression analyses were used to predict achievement of MVPA…
Detection of fallen trees in ALS point clouds using a Normalized Cut approach trained by simulation
NASA Astrophysics Data System (ADS)
Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe
2015-07-01
Downed dead wood is regarded as an important part of forest ecosystems from an ecological perspective, which drives the need for investigating its spatial distribution. Based on several studies, Airborne Laser Scanning (ALS) has proven to be a valuable remote sensing technique for obtaining such information. This paper describes a unified approach to the detection of fallen trees from ALS point clouds based on merging short segments into whole stems using the Normalized Cut algorithm. We introduce a new method of defining the segment similarity function for the clustering procedure, where the attribute weights are learned from labeled data. Based on a relationship between Normalized Cut's similarity function and a class of regression models, we show how to learn the similarity function by training a classifier. Furthermore, we propose using an appearance-based stopping criterion for the graph cut algorithm as an alternative to the standard Normalized Cut threshold approach. We set up a virtual fallen tree generation scheme to simulate complex forest scenarios with multiple overlapping fallen stems. This simulated data is then used as a basis to learn both the similarity function and the stopping criterion for Normalized Cut. We evaluate our approach on 5 plots from the strictly protected mixed mountain forest within the Bavarian Forest National Park using reference data obtained via a manual field inventory. The experimental results show that our method is able to detect up to 90% of fallen stems in plots having 30-40% overstory cover with a correctness exceeding 80%, even in quite complex forest scenes. Moreover, the performance for feature weights trained on simulated data is competitive with the case when the weights are calculated using a grid search on the test data, which indicates that the learned similarity function and stopping criterion can generalize well on new plots.
Measurement of the lowest dosage of phenobarbital that can produce drug discrimination in rats
Overton, Donald A.; Stanwood, Gregg D.; Patel, Bhavesh N.; Pragada, Sreenivasa R.; Gordon, M. Kathleen
2009-01-01
Rationale Accurate measurement of the threshold dosage of phenobarbital that can produce drug discrimination (DD) may improve our understanding of the mechanisms and properties of such discrimination. Objectives Compare three methods for determining the threshold dosage for phenobarbital (D) versus no drug (N) DD. Methods Rats learned a D versus N DD in 2-lever operant training chambers. A titration scheme was employed to increase or decrease dosage at the end of each 18-day block of sessions depending on whether the rat had achieved criterion accuracy during the sessions just completed. Three criterion rules were employed, all based on average percent drug lever responses during initial links of the last 6 D and 6 N sessions of a block. The criteria were: D%>66 and N%<33; D%>50 and N%<50; (D%-N%)>33. Two squads of rats were trained, one immediately after the other. Results All rats discriminated drug versus no drug. In most rats, dosage decreased to low levels and then oscillated near the minimum level required to maintain criterion performance. The lowest discriminated dosage significantly differed under the three criterion rules. The squad that was trained 2nd may have benefited by partially duplicating the lever choices of the previous squad. Conclusions The lowest discriminated dosage is influenced by the criterion of discriminative control that is employed, and is higher than the absolute threshold at which discrimination entirely disappears. Threshold estimations closer to absolute threshold can be obtained when criteria are employed that are permissive, and that allow rats to maintain lever preferences. PMID:19082992
The ethical duty to preserve the quality of scientific information
NASA Astrophysics Data System (ADS)
Arattano, Massimo; Gatti, Albertina; Eusebio, Elisa
2016-04-01
The commitment to communicate and disseminate the knowledge acquired during his/her professional activity is certainly one of the ethical duties of the geologist. Nowadays, however, in the Internet era, the spreading of knowledge involves potential risks that the geologist should be aware of. These risks require a careful analysis aimed at mitigating their effects. The Internet may in fact contribute to spreading (e.g., through websites like Wikipedia) information that is badly or even incorrectly presented. The final result could be an impediment to the diffusion of knowledge and a reduction of its effectiveness, which is precisely the opposite of the goal that a geologist should pursue. Specific criteria aimed at recognizing incorrect or inadequate information would therefore be extremely useful. Their development and application might avoid, or at least reduce, the above-mentioned risk. Ideally, such criteria could also be used to develop specific algorithms to automatically verify the quality of information available all over the Internet. A possible criterion will be presented here for the quality control of knowledge and scientific information. An example of its application in the field of geology will be provided, to verify and correct a piece of information available on the Internet. The proposed criterion could also be used to simplify scientific information and increase its informative efficacy.
Correlates of constipation in people with Parkinson's.
Gage, H; Kaye, J; Kimber, A; Storey, L; Egan, M; Qiao, Y; Trend, P
2011-02-01
To investigate clinical, demographic and dietary factors associated with constipation in a sample of community dwelling people with Parkinson's disease, recruited through a specialist outpatient clinic. Partners/carers provided a convenience control group. Participants completed a baseline questionnaire (background information, diet and exercise, activities of daily living: mobility and manual dexterity, health-related quality of life (SF-12), stool frequency and characteristics, extent of concern due to constipation, laxative taking), and a four-week stool diary. The Rome criterion was used to determine constipation status. Multiple regression methods were used to explore the correlates of constipation. Baseline data were provided by 121 people with Parkinson's, (54 controls), of whom 73% (25%) met the Rome criterion. Prospective diary data from 106 people with Parkinson's (43 controls) showed lower proportions: 35% (7%) meeting the Rome criterion. Among all study subjects, i.e. Parkinson's patients and controls taken together, the presence of constipation is predicted by having Parkinson's disease (p = .003; odds ratio 4.80, 95% CI 1.64-14.04) and mobility score (p = .04; odds ratio 1.15, 95% CI 1.01-1.31), but not by dietary factors. Amongst people with Parkinson's constipation is predicted by number of medications (p = .027). Laxative taking masks constipation, and is significantly associated with wearing protection against bowel incontinence (p = .009; odds ratio 4.80, 95% CI: 1.48-15.52). Constipation is disease-related, not a lifestyle factor. More research is needed on optimal management and laxative use. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Palsson, Olafur S. (Inventor); Harris, Randall L., Sr. (Inventor); Pope, Alan T. (Inventor)
2002-01-01
Apparatus and methods for modulating the control authority (i.e., control function) of a computer simulation or game input device (e.g., joystick, button control) using physiological information so as to affect the user's ability to impact or control the simulation or game with the input device. One aspect is to use the present invention, along with a computer simulation or game, to affect physiological state or physiological self-regulation according to some programmed criterion (e.g., increase, decrease, or maintain) in order to perform better at the game task. When the affected physiological state or physiological self-regulation is the target of self-regulation or biofeedback training, the simulation or game play reinforces therapeutic changes in the physiological signal(s).
Leckman, James F.; Denys, Damiaan; Simpson, H. Blair; Mataix-Cols, David; Hollander, Eric; Saxena, Sanjaya; Miguel, Euripedes C.; Rauch, Scott L.; Goodman, Wayne K.; Phillips, Katharine A.; Stein, Dan J.
2014-01-01
Background Since the publication of the DSM-IV in 1994, research on obsessive–compulsive disorder (OCD) has continued to expand. It is timely to reconsider the nosology of this disorder, assessing whether changes to diagnostic criteria as well as subtypes and specifiers may improve diagnostic validity and clinical utility. Methods The existing criteria were evaluated. Key issues were identified. Electronic databases of PubMed, ScienceDirect, and PsycINFO were searched for relevant studies. Results This review presents a number of options and preliminary recommendations to be considered for DSM-V. These include: (1) clarifying and simplifying the definition of obsessions and compulsions (criterion A); (2) possibly deleting the requirement that people recognize that their obsessions or compulsions are excessive or unreasonable (criterion B); (3) rethinking the clinical significance criterion (criterion C) and, in the interim, possibly adjusting what is considered “time-consuming” for OCD; (4) listing additional disorders to help with the differential diagnosis (criterion D); (5) rethinking the medical exclusion criterion (criterion E) and clarifying what is meant by a “general medical condition”; (6) revising the specifiers (i.e., clarifying that OCD can involve a range of insight, in addition to “poor insight,” and adding “tic-related OCD”); and (7) highlighting in the DSM-V text important clinical features of OCD that are not currently mentioned in the criteria (e.g., the major symptom dimensions). Conclusions A number of changes to the existing diagnostic criteria for OCD are proposed. These proposed criteria may change as the DSM-V process progresses. PMID:20217853
Lavoie, Jacques; Marchand, Geneviève; Cloutier, Yves; Lavoué, Jérôme
2011-08-01
Dust accumulation in the components of heating, ventilation, and air-conditioning (HVAC) systems is a potential source of contaminants. To date, very little information is available on recognized methods for assessing dust buildup in these systems. The few existing methods are either objective in nature, involving numerical values, or subjective in nature, based on experts' judgments. An earlier project aimed at assessing different methods of sampling dust in ducts was carried out in the laboratories of the Institut de recherche Robert-Sauvé en santé et en sécurité du travail (IRSST). This laboratory study showed that all the sampling methods were practicable, provided that a specific surface-dust cleaning initiation criterion was used for each method. However, these conclusions were reached on the basis of ideal conditions in a laboratory using a reference dust. The objective of this present study was to validate these laboratory results in the field. To this end, the laboratory sampling templates were replicated in real ducts and the three sampling methods (the IRSST method, the method of the U.S. organization National Air Duct Cleaner Association [NADCA] and that of the French organization Association pour la Prévention et l'Étude de la Contamination [ASPEC]) were used simultaneously in a statistically representative number of systems. The air return and supply ducts were also compared. Cleaning initiation criteria under real conditions were found to be 6.0 mg/100 cm(2) using the IRSST method, 2.0 mg/100 cm(2) using the NADCA method, and 23 mg/100 cm(2) using the ASPEC method. In the laboratory study, the criteria using the same methods were 6.0 for the IRSST method, 2.0 for the NADCA method, and 3.0 for the ASPEC method. The laboratory criteria for the IRSST and NADCA methods were therefore validated in the field. The ASPEC criterion was the only one to change. The ASPEC method therefore allows for the most accurate evaluation of dust accumulation in HVAC ductwork. We therefore recommend using the latter method to objectively assess dust accumulation levels in HVAC ductwork.
Fatigue crack identification method based on strain amplitude changing
NASA Astrophysics Data System (ADS)
Guo, Tiancai; Gao, Jun; Wang, Yonghong; Xu, Youliang
2017-09-01
Aiming at the difficulty of identifying the location and time of crack initiation in castings of the helicopter transmission system during fatigue tests, and by introducing classification diagnostic criteria for similar failure modes to identify the similarity of fatigue crack initiation among castings, an engineering method and a quantitative criterion for detecting fatigue cracks based on changes in strain amplitude are proposed. The method was applied to the fatigue test of a gearbox housing. During the fatigue test, the system raised an alarm when the SC strain meter reached the quantitative criterion, and a subsequent inspection found a fatigue crack shorter than 5 mm at the corresponding location of the SC strain meter. The test result shows that the method can provide accurate test data for strength and life analysis.
Detection of Orbital Debris Collision Risks for the Automated Transfer Vehicle
NASA Technical Reports Server (NTRS)
Peret, L.; Legendre, P.; Delavault, S.; Martin, T.
2007-01-01
In this paper, we present a general collision risk assessment method, which has been applied through numerical simulations to the Automated Transfer Vehicle (ATV) case. During ATV ascent towards the International Space Station, close approaches between the ATV and objects of the USSTRACOM catalog will be monitored through collision rosk assessment. Usually, collision risk assessment relies on an exclusion volume or a probability threshold method. Probability methods are more effective than exclusion volumes but require accurate covariance data. In this work, we propose to use a criterion defined by an adaptive exclusion area. This criterion does not require any probability calculation but is more effective than exclusion volume methods as demonstrated by our numerical experiments. The results of these studies, when confirmed and finalized, will be used for the ATV operations.
Flat-fielding of Solar Hα Observations Based on the Maximum Correntropy Criterion
NASA Astrophysics Data System (ADS)
Xu, Gao-Gui; Zheng, Sheng; Lin, Gang-Hua; Wang, Xiao-Fan
2016-08-01
The flat-field CCD calibration method of Kuhn et al. (KLL) is an efficient method for flat-fielding. However, since it depends on the minimum of the sum of squares error (SSE), its solution is sensitive to noise, especially non-Gaussian noise. In this paper, a new algorithm is proposed to determine the flat field. The idea is to change the criterion of gain estimate from SSE to the maximum correntropy. The result of a test on simulated data demonstrates that our method has a higher accuracy and a faster convergence than KLL’s and Chae’s. It has been found that the method effectively suppresses noise, especially in the case of typical non-Gaussian noise. And the computing time of our algorithm is the shortest.
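The change of criterion from SSE to maximum correntropy can be illustrated with a deliberately simplified example: estimating a single constant gain under impulsive (non-Gaussian) noise. This is not the KLL or proposed flat-field algorithm itself; the data, kernel width and grid search are assumptions made only to show why the correntropy criterion resists outliers.

```python
import numpy as np

rng = np.random.default_rng(0)
true_gain = 1.3
flat = rng.uniform(0.8, 1.2, 2000)                    # "scene" values
obs = true_gain * flat + rng.normal(0, 0.02, 2000)    # observations with Gaussian noise
obs[:40] += rng.uniform(3.0, 8.0, 40)                 # positive spikes: non-Gaussian outliers

# Least-squares (SSE) estimate of the gain: pulled upward by the outliers.
g_sse = np.sum(obs * flat) / np.sum(flat**2)

# Maximum-correntropy estimate: maximize the mean Gaussian kernel of the residuals.
def correntropy(g, sigma=0.05):
    e = obs - g * flat
    return np.mean(np.exp(-e**2 / (2 * sigma**2)))

grid = np.linspace(0.5, 2.0, 1501)
g_mcc = grid[np.argmax([correntropy(g) for g in grid])]
# g_mcc stays close to true_gain, while g_sse is biased by the spikes.
```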
Brittle failure of rock: A review and general linear criterion
NASA Astrophysics Data System (ADS)
Labuz, Joseph F.; Zeng, Feitao; Makhnenko, Roman; Li, Yuan
2018-07-01
A failure criterion typically is phenomenological since few models exist to theoretically derive the mathematical function. Indeed, a successful failure criterion is a generalization of experimental data obtained from strength tests on specimens subjected to known stress states. For isotropic rock that exhibits a pressure dependence on strength, a popular failure criterion is a linear equation in major and minor principal stresses, independent of the intermediate principal stress. A general linear failure criterion called Paul-Mohr-Coulomb (PMC) contains all three principal stresses with three material constants: friction angles for axisymmetric compression ϕc and extension ϕe and isotropic tensile strength V0. PMC provides a framework to describe a nonlinear failure surface by a set of planes "hugging" the curved surface. Brittle failure of rock is reviewed and multiaxial test methods are summarized. Equations are presented to implement PMC for fitting strength data and determining the three material parameters. A piecewise linear approximation to a nonlinear failure surface is illustrated by fitting two planes with six material parameters to form either a 6- to 12-sided pyramid or a 6- to 12- to 6-sided pyramid. The particular nature of the failure surface is dictated by the experimental data.
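As a hedged sketch of the algebraic form implied by the abstract (the exact parameter mapping should be taken from the paper itself), a general linear criterion in all three principal stresses is a plane in principal-stress space:

```latex
% General linear (Paul-Mohr-Coulomb type) failure criterion, compression positive:
\[
  A\,\sigma_{I} + B\,\sigma_{II} + C\,\sigma_{III} = 1 ,
\]
% The three material constants A, B, C are fixed by evaluating the plane at
% axisymmetric compression (friction angle phi_c), axisymmetric extension
% (friction angle phi_e), and the isotropic tensile state on the hydrostatic
% axis set by V_0; the explicit expressions are given in the paper.
```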
Upgrades to the REA method for producing probabilistic climate change projections
NASA Astrophysics Data System (ADS)
Xu, Ying; Gao, Xuejie; Giorgi, Filippo
2010-05-01
We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
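A toy sketch of performance-based ensemble weighting across multiple metrics is given below. The particular reliability-factor formula (bias capped against a natural-variability scale and combined multiplicatively over metrics) is an illustrative assumption in the spirit of REA-like schemes, not the exact upgraded REA formulation.

```python
import numpy as np

# Toy set-up: 5 climate models, 3 performance metrics (e.g. biases in mean
# temperature, mean precipitation and interannual variability), and one
# projected regional change per model.
bias = np.array([[0.5, 1.2, 0.8],
                 [0.3, 0.4, 0.5],
                 [1.5, 2.0, 1.1],
                 [0.7, 0.6, 0.9],
                 [0.2, 0.9, 0.4]])
change = np.array([2.1, 1.8, 3.0, 2.4, 1.9])   # projected change per model

eps = np.array([1.0, 1.0, 1.0])                # natural-variability scales per metric
factors = np.minimum(1.0, eps / bias)          # smaller bias -> larger factor, capped at 1
weights = np.prod(factors, axis=1)             # combine metrics multiplicatively
weights /= weights.sum()

rea_mean = np.sum(weights * change)            # performance-weighted change
simple_mean = change.mean()                    # unweighted ensemble average
```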
Retrieving relevant factors with exploratory SEM and principal-covariate regression: A comparison.
Vervloet, Marlies; Van den Noortgate, Wim; Ceulemans, Eva
2018-02-12
Behavioral researchers often linearly regress a criterion on multiple predictors, aiming to gain insight into the relations between the criterion and predictors. Obtaining this insight from the ordinary least squares (OLS) regression solution may be troublesome, because OLS regression weights show only the effect of a predictor on top of the effects of other predictors. Moreover, when the number of predictors grows larger, it becomes likely that the predictors will be highly collinear, which makes the regression weights' estimates unstable (i.e., the "bouncing beta" problem). Among other procedures, dimension-reduction-based methods have been proposed for dealing with these problems. These methods yield insight into the data by reducing the predictors to a smaller number of summarizing variables and regressing the criterion on these summarizing variables. Two promising methods are principal-covariate regression (PCovR) and exploratory structural equation modeling (ESEM). Both simultaneously optimize reduction and prediction, but they are based on different frameworks. The resulting solutions have not yet been compared; it is thus unclear what the strengths and weaknesses are of both methods. In this article, we focus on the extents to which PCovR and ESEM are able to extract the factors that truly underlie the predictor scores and can predict a single criterion. The results of two simulation studies showed that for a typical behavioral dataset, ESEM (using the BIC for model selection) in this regard is successful more often than PCovR. Yet, in 93% of the datasets PCovR performed equally well, and in the case of 48 predictors, 100 observations, and large differences in the strengths of the factors, PCovR even outperformed ESEM.
Soo, Jhy-Charm; Lee, Eun Gyung; Lee, Larry A.; Kashon, Michael L.; Harper, Martin
2015-01-01
Lee et al. (Evaluation of pump pulsation in respirable size-selective sampling: part I. Pulsation measurements. Ann Occup Hyg 2014a;58:60–73) introduced an approach to measure pump pulsation (PP) using a real-world sampling train, while the European Standards (EN) (EN 1232-1997 and EN 12919-1999) suggest measuring PP using a resistor in place of the sampler. The goal of this study is to characterize PP according to both EN methods and to determine the relationship of PP between the published method (Lee et al., 2014a) and the EN methods. Additional test parameters were investigated to determine whether the test conditions suggested by the EN methods were appropriate for measuring pulsations. Experiments were conducted using a factorial combination of personal sampling pumps (six medium- and two high-volumetric flow rate pumps), back pressures (six medium- and seven high-flow rate pumps), resistors (two types), tubing lengths between a pump and resistor (60 and 90 cm), and different flow rates (2 and 2.5 l min−1 for the medium- and 4.4, 10, and 11.2 l min−1 for the high-flow rate pumps). The selection of sampling pumps and the ranges of back pressure were based on measurements obtained in the previous study (Lee et al., 2014a). Among six medium-flow rate pumps, only the Gilian5000 and the Apex IS conformed to the 10% criterion specified in EN 1232-1997. Although the AirChek XR5000 exceeded the 10% limit, the average PP (10.9%) was close to the criterion. One high-flow rate pump, the Legacy (PP = 8.1%), conformed to the 10% criterion in EN 12919-1999, while the Elite12 did not (PP = 18.3%). Conducting supplemental tests with additional test parameters beyond those used in the two subject EN standards did not strengthen the characterization of PPs. For the selected test conditions, a linear regression model [PPEN = 0.014 + 0.375 × PPNIOSH (adjusted R2 = 0.871)] was developed to determine the PP relationship between the published method (Lee et al., 2014a) and the EN methods. The 25% PP criterion recommended by Lee et al. (2014a), average value derived from repetitive measurements, corresponds to 11% PPEN. The 10% pass/fail criterion in the EN Standards is not based on extensive laboratory evaluation and would unreasonably exclude at least one pump (i.e. AirChek XR5000 in this study) and, therefore, the more accurate criterion of average 11% from repetitive measurements should be substituted. This study suggests that users can measure PP using either a real-world sampling train or a resistor setup and obtain equivalent findings by applying the model herein derived. The findings of this study will be delivered to the consensus committees to be considered when those standards, including the EN 1232-1997, EN 12919-1999, and ISO 13137-2013, are revised. PMID:25053700
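The linear relationship reported above can be applied directly; the small sketch below (function name is mine) reproduces the mapping of the 25% criterion to roughly 11% in EN terms.

```python
def pp_en_from_pp_niosh(pp_niosh):
    """Convert pump pulsation measured with the real-world sampling train
    (Lee et al., 2014a) to the EN resistor-based value, using the regression
    reported in the abstract: PP_EN = 0.014 + 0.375 * PP_NIOSH."""
    return 0.014 + 0.375 * pp_niosh

# The 25% criterion of Lee et al. corresponds to about 11% in EN terms.
print(pp_en_from_pp_niosh(0.25))   # ~0.108, i.e. roughly 11%
```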
NASA Astrophysics Data System (ADS)
Kou, Jiaqing; Le Clainche, Soledad; Zhang, Weiwei
2018-01-01
This study proposes an improvement in the performance of reduced-order models (ROMs) based on dynamic mode decomposition to model the flow dynamics of the attractor from a transient solution. By combining higher order dynamic mode decomposition (HODMD) with an efficient mode selection criterion, the HODMD with criterion (HODMDc) ROM is able to identify dominant flow patterns with high accuracy. This helps us to develop a more parsimonious ROM structure, allowing better predictions of the attractor dynamics. The method is tested in the solution of a NACA0012 airfoil buffeting in a transonic flow, and its good performance in both the reconstruction of the original solution and the prediction of the permanent dynamics is shown. In addition, the robustness of the method has been successfully tested using different types of parameters, indicating that the proposed ROM approach is a tool promising for using in both numerical simulations and experimental data.
NASA Technical Reports Server (NTRS)
Bosi, F.; Pellegrino, S.
2017-01-01
A molecular formulation of the onset of plasticity is proposed to assess temperature and strain rate effects in anisotropic semi-crystalline rubbery films. The presented plane stress criterion is based on the strain rate-temperature superposition principle and the cooperative theory of yielding, where some parameters are assumed to be material constants, while others are considered to depend on specific modes of deformation. An orthotropic yield function is developed for a linear low density polyethylene thin film. Uniaxial and biaxial inflation experiments were carried out to determine the yield stress of the membrane via a strain recovery method. It is shown that the 3% offset method predicts the uniaxial elastoplastic transition with good accuracy. Both the tensile yield points along the two principal directions of the film and the biaxial yield stresses are found to obey the superposition principle. The proposed yield criterion is compared against experimental measurements, showing excellent agreement over a wide range of deformation rates and temperatures.
Predictability of Seasonal Rainfall over the Greater Horn of Africa
NASA Astrophysics Data System (ADS)
Ngaina, J. N.
2016-12-01
The El Nino-Southern Oscillation (ENSO) is a primary mode of climate variability in the Greater Horn of Africa (GHA). The expected impacts of climate variability and change on water, agriculture, and food resources in GHA underscore the importance of reliable and accurate seasonal climate predictions. The study evaluated different model selection criteria, which included the coefficient of determination (R2), Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Fisher information approximation (FIA). A forecast scheme based on the optimal model was developed to predict the October-November-December (OND) and March-April-May (MAM) rainfall. The predictability of GHA rainfall based on ENSO was quantified using composite analysis, correlations and contingency tables. A test for field significance considering the properties of finiteness and interdependence of the spatial grid was applied to avoid correlations arising by chance. The study identified FIA as the optimal model selection criterion. However, the complex model selection criteria (FIA followed by BIC) performed better than the simpler approaches (R2 and AIC). Notably, operational seasonal rainfall predictions over the GHA make use of simple model selection procedures, e.g. R2. Rainfall is modestly predictable based on ENSO during the OND and MAM seasons. El Nino typically leads to wetter conditions during OND and drier conditions during MAM. The correlations of ENSO indices with rainfall are statistically significant for the OND and MAM seasons. Analysis based on contingency tables shows higher predictability of OND rainfall, with the use of ENSO indices derived from the Pacific and Indian Ocean sea surfaces showing significant improvement during the OND season. The predictability based on ENSO for OND rainfall is robust on a decadal scale compared to MAM. An ENSO-based scheme built on an optimal model selection criterion can thus provide skillful rainfall predictions over GHA. This study concludes that the negative phase of ENSO (La Niña) leads to dry conditions while the positive phase of ENSO (El Niño) is associated with enhanced wet conditions.
Sun, Min; Wong, David; Kronenfeld, Barry
2016-01-01
Despite conceptual and technological advancements in cartography over the decades, choropleth map design and classification fail to address a fundamental issue: estimates that are statistically indistinguishable may be assigned to different classes on maps, or vice versa. Recently, the class separability concept was introduced as a map classification criterion to evaluate the likelihood that estimates in two classes are statistically different. Unfortunately, choropleth maps created according to the separability criterion usually have highly unbalanced classes. To produce reasonably separable but more balanced classes, we propose a heuristic classification approach that considers not just the class separability criterion but also other classification criteria such as evenness and intra-class variability. A geovisual-analytic package was developed to support the heuristic mapping process, to evaluate the trade-offs between relevant criteria, and to select the most preferable classification. Class break values can be adjusted to improve the performance of a classification. PMID:28286426
Development of a percutaneous penetration predictive model by SR-FTIR.
Jungman, E; Laugel, C; Rutledge, D N; Dumas, P; Baillet-Guffroy, A
2013-01-30
This work focused on developing a new evaluation criterion of percutaneous penetration, in complement to Log Pow and MW and based on high spatial resolution Fourier transformed infrared (FTIR) microspectroscopy with a synchrotron source (SR-FTIR). Classic Franz cell experiments were run and after 22 h molecule distribution in skin was determined either by HPLC or by SR-FTIR. HPLC data served as reference. HPLC and SR-FTIR results were compared and a new predictive criterion based from SR-FTIR results, named S(index), was determined using a multi-block data analysis technique (ComDim). A predictive cartography of the distribution of molecules in the skin was built and compared to OECD predictive cartography. This new criterion S(index) and the cartography using SR-FTIR/HPLC results provides relevant information for risk analysis regarding prediction of percutaneous penetration and could be used to build a new mathematical model. Copyright © 2012 Elsevier B.V. All rights reserved.
Brookes, V J; Hernández-Jover, M; Neslo, R; Cowled, B; Holyoake, P; Ward, M P
2014-01-01
We describe stakeholder preference modelling using a combination of new and recently developed techniques to elicit criterion weights to incorporate into a multi-criteria decision analysis framework to prioritise exotic diseases for the pig industry in Australia. Australian pig producers were requested to rank disease scenarios comprising nine criteria in an online questionnaire. Parallel coordinate plots were used to visualise stakeholder preferences, which aided identification of two diverse groups of stakeholders - one group prioritised diseases with impacts on livestock, and the other group placed more importance on diseases with zoonotic impacts. Probabilistic inversion was used to derive weights for the criteria to reflect the values of each of these groups, modelling their choice using a weighted sum value function. Validation of weights against stakeholders' rankings for scenarios based on real diseases showed that the elicited criterion weights for the group who prioritised diseases with livestock impacts were a good reflection of their values, indicating that the producers were able to consistently infer impacts from the disease information in the scenarios presented to them. The highest weighted criteria for this group were attack rate and length of clinical disease in pigs, and market loss to the pig industry. The values of the stakeholders who prioritised zoonotic diseases were less well reflected by validation, indicating either that the criteria were inadequate to consistently describe zoonotic impacts, the weighted sum model did not describe stakeholder choice, or that preference modelling for zoonotic diseases should be undertaken separately from livestock diseases. Limitations of this study included sampling bias, as the group participating were not necessarily representative of all pig producers in Australia, and response bias within this group. The method used to elicit criterion weights in this study ensured value trade-offs between a range of potential impacts, and that the weights were implicitly related to the scale of measurement of disease criteria. Validation of the results of the criterion weights against real diseases - a step rarely used in MCDA - added scientific rigour to the process. The study demonstrated that these are useful techniques for elicitation of criterion weights for disease prioritisation by stakeholders who are not disease experts. Preference modelling for zoonotic diseases needs further characterisation in this context. Copyright © 2013 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, X; Gao, H; Schuemann, J
2015-06-15
Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy, and prior error estimates and stopping criteria are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of the dose deposited in the voxel by a sufficiently large number of source particles; according to the central limit theorem, this sample, as the mean value of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users have the freedom to specify the confidence probability and the region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
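A simplified sketch of a t-based stopping rule on batch means of deposited dose is given below. It is only a sketch under stated assumptions: it stops when the one-sided t confidence half-width falls below a tolerance, which is a simplified stand-in for the hypothesis set-up, confidence handling and region-of-interest logic described in the abstract; the batch sizes and tolerance are invented for illustration.

```python
import numpy as np
from scipy import stats

def should_stop(batch_means, tol, confidence=0.95):
    """Stop when the one-sided t-based confidence half-width of the mean dose
    in a voxel falls below the error tolerance `tol`. `batch_means` are mean
    doses from independent batches of source particles, which by the central
    limit theorem are approximately normal."""
    n = len(batch_means)
    if n < 2:
        return False
    sem = np.std(batch_means, ddof=1) / np.sqrt(n)
    half_width = stats.t.ppf(confidence, df=n - 1) * sem
    return half_width < tol

# Toy usage: accumulate batches until the criterion is met.
rng = np.random.default_rng(0)
batches = []
while not should_stop(batches, tol=0.01):
    batches.append(rng.normal(1.0, 0.05, 10_000).mean())  # fake batch of doses
n_batches_needed = len(batches)
```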
Mixture Rasch model for guessing group identification
NASA Astrophysics Data System (ADS)
Siow, Hoo Leong; Mahdi, Rasidah; Siew, Eng Ling
2013-04-01
Several alternative dichotomous Item Response Theory (IRT) models have been introduced to account for the guessing effect in multiple-choice assessment. The guessing effect in these models has been considered to be item-related. In the most classic case, pseudo-guessing in the three-parameter logistic IRT model is modeled to be the same for all subjects but may vary across items. This is not realistic, because subjects can guess worse or better than the pseudo-guessing parameter implies. A derivative of the three-parameter logistic IRT model improves the situation by incorporating ability in guessing; however, it does not model non-monotone functions. This paper proposes to study guessing from a subject-related aspect, namely guessing test-taking behavior. A mixture Rasch model is employed to detect latent groups. A hybrid of the mixture Rasch and 3-parameter logistic IRT models is proposed to model behavior-based guessing from the subjects' ways of responding to the items, where the guessing subjects are assumed to simply choose a response at random. An information criterion is proposed to identify the behavior-based guessing group. Results show that the proposed model selection criterion provides a promising method for identifying the guessing group modeled by the hybrid model.
Finding regions of interest in pathological images: an attentional model approach
NASA Astrophysics Data System (ADS)
Gómez, Francisco; Villalón, Julio; Gutierrez, Ricardo; Romero, Eduardo
2009-02-01
This paper introduces an automated method for finding diagnostic regions-of-interest (RoIs) in histopathological images. This method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, which consists in a coarse search for tissue structures at a "low zoom" to separate the image into relevant regions.1 The pathologist's cognitive performance depends on inherent image visual cues - bottom-up information - and on acquired clinical medicine knowledge - top-down mechanisms -. Our pathologist's visual attention model integrates the latter two components. The selected bottom-up information includes local low level features such as intensity, color, orientation and texture information. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm, inspired by psychological grouping theories. The algorithm parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating a unique index for each of the low level characteristics inside the region. Relevancy is estimated as a simple average of these indexes. Finally, a binary decision rule defines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually-weighted evaluation criterion, finding a quality gain of 3dB when comparing to a classical bottom-up model of attention.
One-way ANOVA based on interval information
NASA Astrophysics Data System (ADS)
Hesamian, Gholamreza
2016-08-01
This paper deals with extending one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity-of-interval-variances assumption. Moreover, the least significant difference (LSD) method for investigating multiple comparisons of interval means is developed for the case when the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, this decision-making method yields degrees to which the interval hypotheses are accepted or rejected. An applied example will be used to show the performance of this method.
Lee, Donghyun; Lee, Hojun; Choi, Munkee
2016-02-11
Internet search query data reflect the attitudes of users and can be used to measure past orientation. Examinations of past orientation often highlight certain predispositions of attitude, many of which can be suicide risk factors. To investigate the relationship between past orientation and suicide rate by examining Google search queries. We measured past orientation using Google search query data by comparing the search volumes of the past year and those of the future year, across the 50 US states and the District of Columbia during the period from 2004 to 2012. We constructed a panel dataset with independent variables as control variables; we then undertook an analysis using multiple ordinary least squares regression and methods that leverage the Akaike information criterion and the Bayesian information criterion. It was found that past orientation had a positive relationship with the suicide rate (P ≤ .001) and that it improves the goodness-of-fit of the model of the suicide rate. The unemployment rate (P ≤ .001 in Models 3 and 4), the Gini coefficient (P ≤ .001), and the population growth rate (P ≤ .001) had positive relationships with the suicide rate, whereas the gross state product (P ≤ .001) showed a negative relationship with the suicide rate. We empirically identified a positive relationship between the suicide rate and past orientation, which was measured using big data-driven Google search queries.
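A minimal sketch of comparing nested OLS models by AIC and BIC with statsmodels is shown below. The synthetic state-year panel and the coefficient values are invented stand-ins for the real Google/state data; only the mechanics of the criterion comparison are illustrated.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic state-year panel standing in for the real data.
rng = np.random.default_rng(0)
n = 51 * 9
df = pd.DataFrame({
    "past_orientation": rng.normal(size=n),
    "unemployment": rng.normal(size=n),
    "gini": rng.normal(size=n),
})
df["suicide_rate"] = (10 + 0.8 * df["past_orientation"] + 0.5 * df["unemployment"]
                      + 0.3 * df["gini"] + rng.normal(scale=1.0, size=n))

base = sm.OLS(df["suicide_rate"],
              sm.add_constant(df[["unemployment", "gini"]])).fit()
full = sm.OLS(df["suicide_rate"],
              sm.add_constant(df[["unemployment", "gini", "past_orientation"]])).fit()

# Lower AIC/BIC for `full` indicates that adding past orientation improves fit.
print(base.aic, full.aic, base.bic, full.bic)
```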
Fong, Ted C T; Ho, Rainbow T H
2015-01-01
The aim of this study was to reexamine the dimensionality of the widely used 9-item Utrecht Work Engagement Scale using the maximum likelihood (ML) approach and Bayesian structural equation modeling (BSEM) approach. Three measurement models (1-factor, 3-factor, and bi-factor models) were evaluated in two split samples of 1,112 health-care workers using confirmatory factor analysis and BSEM, which specified small-variance informative priors for cross-loadings and residual covariances. Model fit and comparisons were evaluated by posterior predictive p-value (PPP), deviance information criterion, and Bayesian information criterion (BIC). None of the three ML-based models showed an adequate fit to the data. The use of informative priors for cross-loadings did not improve the PPP for the models. The 1-factor BSEM model with approximately zero residual covariances displayed a good fit (PPP>0.10) to both samples and a substantially lower BIC than its 3-factor and bi-factor counterparts. The BSEM results demonstrate empirical support for the 1-factor model as a parsimonious and reasonable representation of work engagement.
NASA Astrophysics Data System (ADS)
Darmon, David
2018-03-01
In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
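The sketch below selects a delay-embedding dimension by minimizing an out-of-sample negative log predictive likelihood. It is a simplified stand-in for the paper's procedure: a k-nearest-neighbour predictor with a Gaussian plug-in likelihood replaces the nonparametric estimator, and the noisy logistic map is an invented toy system.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def delay_matrix(x, p):
    """Row t is (x[t], ..., x[t+p-1]); the prediction target is x[t+p]."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

def neg_log_pred_likelihood(x, p, k=5, train_frac=0.7):
    X, y = delay_matrix(x, p)
    n_train = int(train_frac * len(y))
    model = KNeighborsRegressor(n_neighbors=k).fit(X[:n_train], y[:n_train])
    sigma = (y[:n_train] - model.predict(X[:n_train])).std() + 1e-12
    resid_test = y[n_train:] - model.predict(X[n_train:])
    # Gaussian plug-in predictive density evaluated on held-out points.
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + resid_test**2 / (2 * sigma**2))

# Noisy logistic map as a toy stochastic system; pick p with the smallest NLPL.
rng = np.random.default_rng(3)
x = np.empty(2000); x[0] = 0.4
for t in range(1999):
    x[t + 1] = np.clip(3.9 * x[t] * (1 - x[t]) + rng.normal(0, 0.01), 0, 1)
scores = {p: neg_log_pred_likelihood(x, p) for p in range(1, 7)}
best_p = min(scores, key=scores.get)
```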
Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey
1995-01-01
An efficient method is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present method requires minimal analysis effort and permits rapid estimation of optimized truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. Taguchi methods are used to efficiently identify key points in the set of Pareto-optimal truss designs. Key points identified using Taguchi methods are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-optimal designs. The resulting Pareto-optimal design curve is used to predict frequency and mass for optimized trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.
Atmospheric correction for inland water based on Gordon model
NASA Astrophysics Data System (ADS)
Li, Yunmei; Wang, Haijun; Huang, Jiazhu
2008-04-01
Remote sensing techniques are widely used in water quality monitoring since they can acquire radiation information over a whole area at the same time. However, more than 80% of the radiance detected by sensors at the top of the atmosphere is contributed by the atmosphere rather than directly by the water body. The water-leaving radiance information is seriously confounded by atmospheric molecular and aerosol scattering and absorption, and a slight bias in the evaluation of the atmospheric influence can induce large errors in water quality evaluation. To retrieve water composition accurately, the water and atmospheric signals must first be separated. In this paper, we studied atmospheric correction methods for inland waters such as Taihu Lake. A Landsat-5 TM image was corrected based on the Gordon atmospheric correction model, and two kinds of data were used to calculate Rayleigh scattering, aerosol scattering and radiative transmission above Taihu Lake; meanwhile, the influences of ozone and white caps were corrected. One kind of data was synchronous meteorological data, and the other was a synchronous MODIS image. Finally, remote sensing reflectance was retrieved from the TM image. The effects of the different methods were analyzed using in situ measured water surface spectra. The results indicate that measured and estimated remote sensing reflectance were close for both methods. Compared to the method using the MODIS image, the method using synchronous meteorological data is more accurate, and its bias is close to the inland water error criterion accepted for water quality inversion. This shows that the method is suitable for atmospheric correction of TM images over Taihu Lake.
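For reference, the basic radiative decomposition underlying Gordon-type atmospheric correction can be sketched as below (notation mine; interaction and glint terms are omitted, and ozone and white-cap effects are treated as corrections, as in the abstract):

```latex
% Top-of-atmosphere radiance split into Rayleigh, aerosol and water-leaving parts:
\[
  L_{t}(\lambda) \;=\; L_{r}(\lambda) + L_{a}(\lambda) + t(\lambda)\,L_{w}(\lambda),
\]
% where L_t is the sensor-measured radiance, L_r the Rayleigh (molecular)
% scattering term, L_a the aerosol term, t the diffuse atmospheric
% transmittance, and L_w the water-leaving radiance from which the remote
% sensing reflectance is obtained.
```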
Blind equalization with criterion with memory nonlinearity
NASA Astrophysics Data System (ADS)
Chen, Yuanjie; Nikias, Chrysostomos L.; Proakis, John G.
1992-06-01
Blind equalization methods usually combat the linear distortion caused by a nonideal channel via a transversal filter, without resorting to the a priori known training sequences. We introduce a new criterion with memory nonlinearity (CRIMNO) for the blind equalization problem. The basic idea of this criterion is to augment the Godard [or constant modulus algorithm (CMA)] cost function with additional terms that penalize the autocorrelations of the equalizer outputs. Several variations of the CRIMNO algorithms are derived, with the variations dependent on (1) whether the empirical averages or the single point estimates are used to approximate the expectations, (2) whether the recent or the delayed equalizer coefficients are used, and (3) whether the weights applied to the autocorrelation terms are fixed or are allowed to adapt. Simulation experiments show that the CRIMNO algorithm, and especially its adaptive weight version, exhibits faster convergence speed than the Godard (or CMA) algorithm. Extensions of the CRIMNO criterion to accommodate the case of correlated inputs to the channel are also presented.
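Based only on the description above (the exact weighting and normalization in the paper may differ), the CRIMNO cost can be sketched as the Godard/CMA cost plus penalties on the output autocorrelations:

```latex
% Godard (p = 2, i.e. CMA) term plus memory-nonlinearity penalties on the
% equalizer-output autocorrelations at lags 1..M (w_k possibly adaptive):
\[
  J_{\mathrm{CRIMNO}}
  \;=\;
  \mathbb{E}\!\left[\bigl(|y(n)|^{2}-R_{2}\bigr)^{2}\right]
  \;+\;
  \sum_{k=1}^{M} w_{k}\,
  \bigl|\mathbb{E}\!\left[y(n)\,y^{*}(n-k)\right]\bigr|^{2},
\]
% where y(n) is the equalizer output and R_2 the Godard dispersion constant.
```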
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dutra, L. V.; Mascarenhas, N. D. A.; Mitsuo, Fernando Augusta, II
1984-01-01
A study area near Ribeirao Preto in Sao Paulo state was selected, where sugar cane predominates. Eight features were extracted from the 4 original bands of the LANDSAT image, using low-pass and high-pass filtering to obtain spatial features. There were 5 training sites used to acquire the necessary parameters. Two groups of four channels were selected from the 12 channels using the JM-distance and entropy criteria. The number of selected channels was defined by physical restrictions of the image analyzer and computational costs. The evaluation was performed by extracting the confusion matrices for training and test areas with a maximum likelihood classifier, and by defining performance indexes based on those matrices for each group of channels. Results show that, for spatial features and supervised classification, the entropy criterion is better in the sense that it allows a more accurate and generalized definition of class signatures. On the other hand, the JM-distance criterion strongly reduces the misclassification within training areas.
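The Jeffries–Matusita (JM) distance used for channel selection is commonly computed between Gaussian class signatures; a minimal sketch is given below. The toy means and covariances are illustrative, not the study's class statistics.

```python
import numpy as np

def jeffries_matusita(mean1, cov1, mean2, cov2):
    """JM distance between two Gaussian class signatures (range 0..2)."""
    m = np.asarray(mean1) - np.asarray(mean2)
    c = 0.5 * (np.asarray(cov1) + np.asarray(cov2))
    # Bhattacharyya distance for Gaussian classes.
    b = (0.125 * m @ np.linalg.solve(c, m)
         + 0.5 * np.log(np.linalg.det(c)
                        / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
    return 2.0 * (1.0 - np.exp(-b))

# Toy example with two classes in a 4-channel feature space: a larger JM value
# means the channel subset separates the training classes better.
mean_a, mean_b = np.zeros(4), np.array([1.0, 0.5, 0.2, 0.0])
cov_a, cov_b = np.eye(4), 1.5 * np.eye(4)
print(jeffries_matusita(mean_a, cov_a, mean_b, cov_b))
```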
Buendía, Mateo; Cibrián, Rosa M.; Salvador, Rosario; Laguía, Manuel; Martín, Antonio; Gomar, Francisco
2006-01-01
New noninvasive techniques, amongst them structured light methods, have been applied to study rachis deformities, providing a way to evaluate external back deformities in the three planes of space. These methods are aimed at reducing the number of radiographic examinations necessary to diagnose and follow-up patients with scoliosis. By projecting a grid over the patient’s back, the corresponding software for image treatment provides a topography of the back in a color or gray scale. Visual inspection of back topographic images using this method immediately provides information about back deformity, but it is important to determine quantifier variables of the deformity to establish diagnostic criteria. In this paper, two topographic variables [deformity in the axial plane index (DAPI) and posterior trunk symmetry index (POTSI)] that quantify deformity in two different planes are analyzed. Although other authors have reported the POTSI variable, the DAPI variable proposed in this paper is innovative. The upper normality limit of these variables in a nonpathological group was determined. These two variables have different and complementary diagnostic characteristics, therefore we devised a combined diagnostic criterion: cases with normal DAPI and POTSI (DAPI ≤ 3.9% and POTSI ≤ 27.5%) were diagnosed as nonpathologic, but cases with high DAPI or POTSI were diagnosed as pathologic. When we used this criterion to analyze all the cases in the sample (56 nonpathologic and 30 with idiopathic scoliosis), we obtained 76.6% sensitivity, 91% specificity, and a positive predictive value of 82%. The interobserver, intraobserver, and interassay variability were studied by determining the variation coefficient. There was good correlation between topographic variables (DAPI and POTSI) and clinical variables (Cobb’s angle and vertebral rotation angle). PMID:16609858
Robust estimation of the proportion of treatment effect explained by surrogate marker information.
Parast, Layla; McDermott, Mary M; Tian, Lu
2016-05-10
In randomized treatment studies where the primary outcome requires long follow-up of patients and/or expensive or invasive obtainment procedures, the availability of a surrogate marker that could be used to estimate the treatment effect and could potentially be observed earlier than the primary outcome would allow researchers to make conclusions regarding the treatment effect with less required follow-up time and resources. The Prentice criterion for a valid surrogate marker requires that a test for treatment effect on the surrogate marker also be a valid test for treatment effect on the primary outcome of interest. Based on this criterion, methods have been developed to define and estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on the surrogate marker. These methods aim to identify useful statistical surrogates that capture a large proportion of the treatment effect. However, current methods to estimate this proportion usually require restrictive model assumptions that may not hold in practice and thus may lead to biased estimates of this quantity. In this paper, we propose a nonparametric procedure to estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on a potential surrogate marker and extend this procedure to a setting with multiple surrogate markers. We compare our approach with previously proposed model-based approaches and propose a variance estimation procedure based on a perturbation-resampling method. Simulation studies demonstrate that the procedure performs well in finite samples and outperforms model-based procedures when the specified models are not correct. We illustrate our proposed procedure using a data set from a randomized study investigating a group-mediated cognitive behavioral intervention for peripheral artery disease participants. Copyright © 2015 John Wiley & Sons, Ltd.
Maximum likelihood-based analysis of single-molecule photon arrival trajectories
NASA Astrophysics Data System (ADS)
Hajdziona, Marta; Molski, Andrzej
2011-02-01
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 103 photons. When the intensity levels are well-separated and 104 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
Decision making by urgency gating: theory and experimental support.
Thura, David; Beauregard-Racine, Julie; Fradet, Charles-William; Cisek, Paul
2012-12-01
It is often suggested that decisions are made when accumulated sensory information reaches a fixed accuracy criterion. This is supported by many studies showing a gradual build up of neural activity to a threshold. However, the proposal that this build up is caused by sensory accumulation is challenged by findings that decisions are based on information from a time window much shorter than the build-up process. Here, we propose that in natural conditions where the environment can suddenly change, the policy that maximizes reward rate is to estimate evidence by accumulating only novel information and then compare the result to a decreasing accuracy criterion. We suggest that the brain approximates this policy by multiplying an estimate of sensory evidence with a motor-related urgency signal and that the latter is primarily responsible for neural activity build up. We support this hypothesis using human behavioral data from a modified random-dot motion task in which motion coherence changes during each trial.
Ghisi, Gabriela Lima de Melo; Dos Santos, Rafaella Zulianello; Bonin, Christiani Batista Decker; Roussenq, Suellen; Grace, Sherry L; Oh, Paul; Benetti, Magnus
2014-01-01
To translate, culturally adapt and psychometrically validate the Information Needs in Cardiac Rehabilitation (INCR) tool into Portuguese. The identification of information needs is considered the first step to improve knowledge that ultimately could improve health outcomes. The Portuguese version generated was tested in 300 cardiac rehabilitation (CR) patients (34% women; mean age = 61.3 ± 2.1 years old). Test-retest reliability was assessed using the intraclass correlation coefficient (ICC), internal consistency using Cronbach's alpha, and criterion validity with regard to patients' education and duration in CR. All 9 subscales were considered internally consistent (α > 0.7). Significant differences between mean total needs and educational level (p < 0.05) and duration in CR (p = 0.03) supported criterion validity. The overall mean (4.6 ± 0.4), as well as the means of the 9 subscales, were high (emergency/safety was the greatest need). The Portuguese INCR was demonstrated to have sufficient reliability, consistency and validity. Copyright © 2014 Elsevier Inc. All rights reserved.
MSPocket: an orientation-independent algorithm for the detection of ligand binding pockets.
Zhu, Hongbo; Pisabarro, M Teresa
2011-02-01
Identification of ligand binding pockets on proteins is crucial for the characterization of protein functions. It provides valuable information for protein-ligand docking and rational engineering of small molecules that regulate protein functions. Many current algorithms for predicting ligand binding pockets are based on a cubic grid representation of proteins and, thus, the results are often protein orientation dependent. We present the MSPocket program for detecting pockets on the solvent excluded surface of proteins. The core algorithm of the MSPocket approach does not use any cubic grid system to represent proteins and is therefore independent of protein orientations. We demonstrate that MSPocket is able to achieve an accuracy of 75% in predicting ligand binding pockets on a test dataset used for evaluating several existing methods. The accuracy is 92% if the top three predictions are considered. Comparison to one of the recently published best performing methods shows that MSPocket reaches similar performance with the additional feature of being protein orientation independent. Interestingly, some of the predictions are different, meaning that the two methods can be considered complementary and combined to achieve better prediction accuracy. MSPocket also provides a graphical user interface for interactive investigation of the predicted ligand binding pockets. In addition, we show that the overlap criterion is a better strategy for the evaluation of predicted ligand binding pockets than the single-point distance criterion. The MSPocket source code can be downloaded from http://appserver.biotec.tu-dresden.de/MSPocket/. MSPocket is also available as a PyMOL plugin with a graphical user interface.
A Hyperspectral Image Classification Method Using ISOMAP and RVM
NASA Astrophysics Data System (ADS)
Chang, H.; Wang, T.; Fang, H.; Su, Y.
2018-04-01
Classification is one of the most significant applications of hyperspectral image processing and of remote sensing in general. Although various algorithms have been proposed to implement and improve this application, traditional classification methods still have drawbacks, and further work is needed on aspects such as dimension reduction, data mining, and the rational use of spatial information. In this paper we use a widely adopted global manifold learning approach, isometric feature mapping (ISOMAP), to address the intrinsic nonlinearities of hyperspectral imagery for dimension reduction. Because Euclidean distance is ill-suited to spectral measurement, we substitute the spectral angle (SA) when constructing the neighbourhood graph. Relevance vector machines (RVM) are then used for classification instead of support vector machines (SVM) for reasons of simplicity, generalization, and sparsity, so that a probabilistic result is obtained rather than a less informative binary one. To exploit the spatial information of the hyperspectral image, we also form a spatial vector from the class ratios in the neighbourhood of each pixel. Finally, the probability results and the spatial factors are combined with a decision criterion to produce the final classification. To verify the proposed method, we carried out multiple experiments on standard hyperspectral images and compared against other methods; the results and several evaluation indexes illustrate the effectiveness of our approach.
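A minimal Python sketch of the pipeline described above follows, assuming scikit-learn's Isomap with a precomputed spectral-angle distance matrix. Because no RVM implementation ships with scikit-learn, a logistic regression stands in as the probabilistic classifier, and the spatial-vector step is omitted; the toy data are synthetic.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.linear_model import LogisticRegression

def spectral_angle_matrix(X):
    """Pairwise spectral angle distances between pixel spectra (rows of X)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    cos = np.clip(Xn @ Xn.T, -1.0, 1.0)
    return np.arccos(cos)

# toy data: 200 "pixels" with 50 spectral bands drawn from 3 class spectra
rng = np.random.default_rng(1)
labels = rng.integers(0, 3, 200)
X = rng.random((3, 50))[labels] + 0.05 * rng.standard_normal((200, 50))

D = spectral_angle_matrix(np.abs(X))
embedding = Isomap(n_neighbors=10, n_components=5,
                   metric="precomputed").fit_transform(D)

# probabilistic classifier on the low-dimensional features (stand-in for an RVM)
clf = LogisticRegression(max_iter=1000).fit(embedding, labels)
proba = clf.predict_proba(embedding)   # per-class probabilities per pixel
print(proba[:3].round(2))
```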
Ercanli, İlker; Kahriman, Aydın
2015-03-01
We assessed the effect of stand structural diversity, including the Shannon, improved Shannon, Simpson, McIntosh, Margalef, and Berger-Parker indices, on stand aboveground biomass (AGB) and developed statistical prediction models for the stand AGB values that include stand structural diversity indices and some stand attributes. The AGB prediction model including only stand attributes accounted for 85% of the total variance in AGB (R²) with an Akaike's information criterion (AIC) of 807.2407, Bayesian information criterion (BIC) of 809.5397, Schwarz Bayesian criterion (SBC) of 818.0426, and root mean square error (RMSE) of 38.529 Mg. After inclusion of the stand structural diversity indices into the model structure, considerable improvement was observed in statistical accuracy, with the model accounting for 97.5% of the total variance in AGB, with an AIC of 614.1819, BIC of 617.1242, SBC of 633.0853, and RMSE of 15.8153 Mg. The predictive fitting results indicate that some indices describing the stand structural diversity can be employed as significant independent variables to predict the AGB production of the Scotch pine stand. Further, including the stand diversity indices in the AGB prediction model alongside the stand attributes provided important predictive contributions in estimating the total variance in AGB.
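The diversity indices named in this abstract are straightforward to reproduce from abundance counts; the sketch below uses common textbook forms, which may differ in detail from the variants (e.g., the improved Shannon index) used by the authors.

```python
import numpy as np

def diversity_indices(counts):
    """Common structural diversity indices from abundance counts per species
    (or diameter class). Exact variants used in the paper may differ."""
    n = np.asarray(counts, dtype=float)
    N, S = n.sum(), (n > 0).sum()
    p = n[n > 0] / N
    return {
        "Shannon H'":    -np.sum(p * np.log(p)),
        "Simpson 1-D":   1.0 - np.sum(p ** 2),
        "Margalef":      (S - 1) / np.log(N),
        "Berger-Parker": n.max() / N,
        "McIntosh":      (N - np.sqrt(np.sum(n ** 2))) / (N - np.sqrt(N)),
    }

# toy abundances for five classes within a sample plot
print({k: round(v, 3) for k, v in diversity_indices([25, 14, 8, 3, 1]).items()})
```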
Soguero-Ruiz, Cristina; Hindberg, Kristian; Rojo-Alvarez, Jose Luis; Skrovseth, Stein Olav; Godtliebsen, Fred; Mortensen, Kim; Revhaug, Arthur; Lindsetmo, Rolv-Ole; Augestad, Knut Magne; Jenssen, Robert
2016-09-01
The free text in electronic health records (EHRs) conveys a huge amount of clinical information about health state and patient history. Despite a rapidly growing literature on the use of machine learning techniques for extracting this information, little effort has been invested in feature selection and the corresponding medical interpretation of the features. In this study, we focus on the task of early detection of anastomosis leakage (AL), a severe complication after elective surgery for colorectal cancer (CRC), using free text extracted from EHRs. We use a bag-of-words model to investigate the potential for feature selection strategies. The purpose is earlier detection and prediction of AL using data generated in the EHR before the actual complication occurs. Due to the high dimensionality of the data, we derive feature selection strategies using the robust support vector machine linear maximum margin classifier, by investigating: 1) a simple statistical criterion (a leave-one-out-based test); 2) a computationally intensive statistical criterion (bootstrap resampling); and 3) an advanced statistical criterion (kernel entropy). Results reveal discriminatory power for early detection of complications after CRC surgery (sensitivity 100%; specificity 72%). These results can be used to develop prediction models, based on EHR data, that can support surgeons and patients in the preoperative decision making phase.
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.
2002-05-01
Reliable evaluation of medical image processing is of major importance for routine applications. Nonetheless, evaluation is often omitted or methodologically defective when novel approaches or algorithms are introduced. Adopting terminology from medical diagnosis, we define the following criteria to classify reference standards: 1. Reliance, if the generation or capturing of test images for evaluation follows an exactly determined and reproducible protocol. 2. Equivalence, if the image material or relationships considered within an algorithmic reference standard equal real-life data with respect to structure, noise, or other parameters of importance. 3. Independence, if the reference standard relies on a different procedure than that to be evaluated, or on other images or image modalities than those used routinely. This criterion bans the simultaneous use of one image for both the training and the test phase. 4. Relevance, if the algorithm to be evaluated is self-reproducible. If random parameters or optimization strategies are applied, reliability of the algorithm must be shown before the reference standard is applied for evaluation. 5. Significance, if the number of reference standard images used for evaluation is sufficiently large to enable statistically founded analysis. We demand that a true gold standard must satisfy Criteria 1 to 3. Any standard satisfying only two criteria, i.e., Criterion 1 and Criterion 2 or Criterion 1 and Criterion 3, is referred to as a silver standard. All other standards are termed plastic standards. Before exhaustive evaluation based on gold or silver standards is performed, its relevance must be shown (Criterion 4) and sufficient tests must be carried out to provide a foundation for statistical analysis (Criterion 5). In this paper, examples are given for each class of reference standards.
Sohn, Jae Ho; Duran, Rafael; Zhao, Yan; Fleckenstein, Florian; Chapiro, Julius; Sahu, Sonia P.; Schernthaner, Rüdiger E.; Qian, Tianchen; Lee, Howard; Zhao, Li; Hamilton, James; Frangakis, Constantine; Lin, MingDe; Salem, Riad; Geschwind, Jean-Francois
2018-01-01
Background & Aims There is debate over the best way to stage hepatocellular carcinoma (HCC). We attempted to validate the prognostic and clinical utility of the recently developed Hong Kong Liver Cancer (HKLC) staging system, a hepatitis B-based model, and compared data with that from the Barcelona Clinic Liver Cancer (BCLC) staging system in a North American population who underwent intra-arterial therapy (IAT). Methods We performed a retrospective analysis of data from 1009 patients with HCC who underwent intra-arterial therapy from 2000 through 2014. Most patients had hepatitis C or unresectable tumors; all patients underwent IAT, with or without resection, transplantation, and/or systemic chemotherapy. We calculated HCC stage for each patient using 5-stage HKLC (HKLC-5) and 9-stage HKLC (HKLC-9) system classifications, as well as the BCLC system. Survival information was collected up until end of 2014 at which point living or unconfirmed patients were censored. We compared performance of the BCLC, HKLC-5, and HKLC-9 systems in predicting patient outcomes using Kaplan-Meier estimates, calibration plots, c-statistic, Akaike information criterion, and the likelihood ratio test. Results Median overall survival time, calculated from first IAT until date of death or censorship, for the entire cohort (all stages) was 9.8 months. The BCLC and HKLC staging systems predicted patient survival times with significance (P<.001). HKLC-5 and HKLC-9 each demonstrated good calibration. The HKLC-5 system outperformed the BCLC system in predicting patient survival times (HKLC c=0.71, Akaike information criterion=6242; BCLC c=0.64, Akaike information criterion=6320), reducing error in predicting survival time (HKLC reduced error by 14%, BCLC reduced error by 12%), and homogeneity (HKLC χ2=201; P<.001; BCLC χ2=119; P<.001) and monotonicity (HKLC linear trend χ2=193; P<.001; BCLC linear trend χ2=111; P<.001). Small proportions of patients with HCC of stages IV or V, according to the HKLC system, survived for 6 months and 4 months, respectively. Conclusion In a retrospective analysis of patients who underwent IAT for unresectable HCC, we found the HKLC-5 staging system to have the best combination of performances in survival separation, calibration, and discrimination; it consistently outperformed the BCLC system in predicting survival times of patients. The HKLC system identified patients with HCC of stages IV and V who are unlikely to benefit from IAT. PMID:27847278
Role of optimization criterion in static asymmetric analysis of lumbar spine load.
Daniel, Matej
2011-10-01
A common method for load estimation in biomechanics is inverse dynamics optimization, where the muscle activation pattern is found by minimizing or maximizing an optimization criterion. It has been shown that various optimization criteria predict remarkably similar muscle activation patterns and intra-articular contact forces during leg motion. The aim of this paper is to study the effect of the choice of optimization criterion on L4/L5 loading during static asymmetric loading. Upright standing with a weight held in one outstretched arm was taken as a representative position. A musculoskeletal model of the lumbar spine was created from CT images of the Visible Human Project. Several criteria were tested, based on the minimization of muscle forces, muscle stresses, and spinal load. All criteria predict the same level of lumbar spine loading (differences below 25%), except the criterion of minimum lumbar shear force, which predicts an unrealistically high spinal load and should not be considered further. The estimated spinal load and predicted muscle force activation pattern are in accordance with intradiscal pressure and EMG measurements. L4/L5 spinal loads of 1312 N, 1674 N, and 1993 N were predicted for hand-held masses of 2, 5, and 8 kg, respectively, using the criterion of minimum muscle stress cubed. Because the optimization criteria do not considerably affect the spinal load, their choice is not critical in further clinical or ergonomic studies, and a computationally simpler criterion can be used.
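The "minimum muscle stress cubed" criterion can be illustrated with a small static optimization, sketched below with SciPy; the moment arms, cross-sectional areas, and external moment are hypothetical values, not data from the study.

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical data: moment arms r_i [m] and cross-sectional areas A_i [m^2]
r = np.array([0.05, 0.06, 0.04, 0.055])      # muscle moment arms about L4/L5
A = np.array([6e-4, 9e-4, 5e-4, 7e-4])       # physiological cross-sectional areas
M_ext = 60.0                                  # external moment to balance [N m]

objective = lambda F: np.sum((F / A) ** 3)    # minimum muscle stress cubed criterion
constraints = {"type": "eq", "fun": lambda F: r @ F - M_ext}   # moment equilibrium
bounds = [(0.0, None)] * len(r)               # muscles can only pull

res = minimize(objective, x0=np.full(len(r), 100.0),
               bounds=bounds, constraints=constraints, method="SLSQP")
F = res.x
print("muscle forces [N]:", F.round(1))
print("axial joint load proxy [N] (sum of muscle forces):", F.sum().round(1))
```

In a full model the joint load would also include the external load and body-weight contributions; the sketch only shows how the criterion shapes the force-sharing solution.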
NASA Astrophysics Data System (ADS)
von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2016-04-01
Handling high-dimensional data sets, such as those that occur, e.g., in turbulent flows or in certain types of multiscale behaviour in the geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully, another that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I. Horenko. On identification of nonstationary factor models and its application to atmospherical data analysis. J. Atm. Sci., 67:1559-1574, 2010. [2] P. Metzner, L. Putzig and I. Horenko. Analysis of persistent non-stationary time series and applications. CAMCoS, 7:175-229, 2012. [3] M. Uhlmann. Generation of a temporally well-resolved sequence of snapshots of the flow-field in turbulent plane channel flow. URL: http://www-turbul.ifh.unikarlsruhe.de/uhlmann/reports/produce.pdf, 2000. [4] Th. von Larcher, A. Beck, R. Klein, I. Horenko, P. Metzner, M. Waidmann, D. Igdalov, G. Gassner and C.-D. Munz. Towards a Framework for the Stochastic Modelling of Subgrid Scale Fluxes for Large Eddy Simulation. Meteorol. Z., 24:313-342, 2015.
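As a concrete illustration of the Tensor-Train format mentioned above, the following sketch implements a basic TT-SVD compression of a dense array with NumPy; the truncation rule and the random test tensor are illustrative and unrelated to the channel-flow data.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a d-way array into Tensor-Train cores via sequential truncated SVDs.
    Truncation keeps singular values above eps times the largest one."""
    dims = tensor.shape
    d = len(dims)
    cores, rank = [], 1
    unfolding = tensor.reshape(rank * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(unfolding, full_matrices=False)
        r_new = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :r_new].reshape(rank, dims[k], r_new))
        unfolding = (s[:r_new, None] * Vt[:r_new]).reshape(r_new * dims[k + 1], -1)
        rank = r_new
    cores.append(unfolding.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full array (to check the error)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

X = np.random.default_rng(2).random((6, 7, 8, 9))
cores = tt_svd(X, eps=1e-12)
err = np.linalg.norm(tt_reconstruct(cores) - X) / np.linalg.norm(X)
print("relative reconstruction error:", err)
```

Raising eps trades accuracy for smaller TT ranks, which is the mechanism behind the compact storage schemes discussed in the abstract.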
Paul V. Ellefson; Calder M. Hibbard; Michael A. Kilgore; James E. Granskog
2005-01-01
This review looks at the Nation's legal, institutional, and economic capacity to promote forest conservation and sustainable resource management. It focuses on 20 indicators of Criterion Seven of the so-called Montreal Process and involves an extensive search and synthesis of information from a variety of sources. It identifies ways to fill information gaps and improve...
Investigation of limit state criteria for amorphous metals
NASA Astrophysics Data System (ADS)
Comanici, A. M.; Sandovici, A.; Barsanescu, P. D.
2016-08-01
Amorphous metals are metals with a non-crystalline structure whose properties closely resemble those of glass; a distinguishing feature of these materials, also known as metallic glasses, is their good electrical conductivity. Extending limit state criteria developed for other materials makes this class of alloy a natural choice for validating new criteria. Using a new criterion developed for biaxial and triaxial states of stress, we investigate the applicability of the mathematical model to these amorphous metals. For brittle materials in particular, it is extremely important to find a suitable fracture criterion. The Mohr-Coulomb criterion, which yields a linear failure envelope, is often used for very brittle materials, but for metallic glasses it is not consistent with experimental observations. For metallic glasses and other high-strength materials, Rui Tao Qu and Zhe Feng Zhang proposed modeling the failure envelope with an ellipse in σ-τ coordinates. In this paper that model is developed in principal stress space; a method for transforming σ-τ coordinates into principal stress coordinates is also proposed, and the theoretical results are consistent with the experimental ones.
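A toy evaluation of an elliptical σ-τ failure envelope of the kind attributed to Qu and Zhang is sketched below; the critical normal and shear strengths are hypothetical, and the quadratic form is only an illustrative stand-in for the exact criterion used in the paper.

```python
import numpy as np

def ellipse_failure_index(sigma, tau, sigma0, tau0):
    """Illustrative elliptical failure envelope in sigma-tau coordinates:
    the stress state is safe while the index is below 1 (sigma0, tau0 assumed)."""
    return (sigma / sigma0) ** 2 + (tau / tau0) ** 2

# sweep normal stress on a candidate plane and report the allowable shear stress
sigma0, tau0 = 2.0, 1.0    # hypothetical critical normal / shear strengths (GPa)
for sigma in np.linspace(-1.5, 1.5, 7):
    tau_allow = tau0 * np.sqrt(max(0.0, 1.0 - (sigma / sigma0) ** 2))
    print(f"sigma = {sigma:+.2f} GPa -> allowable tau = {tau_allow:.3f} GPa")
```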
Brugha, T S; Cragg, D
1990-07-01
During the 23 years since the original work of Holmes & Rahe, research into stressful life events on human subjects has tended towards the development of longer and more complex inventories. The List of Threatening Experiences (LTE) of Brugha et al., by virtue of its brevity, overcomes difficulties of clinical application. In a study of 50 psychiatric patients and informants, the questionnaire version of the list (LTE-Q) was shown to have high test-retest reliability, and good agreement with informant information. Concurrent validity, based on the criterion of independently rated adversity derived from a semistructured life events interview, making use of the Life Events and Difficulties Scales (LEDS) method developed by Brown & Harris, showed both high specificity and sensitivity. The LTE-Q is particularly recommended for use in psychiatric, psychological and social studies in which other intervening variables such as social support, coping, and cognitive variables are of interest, and resources do not allow for the use of extensive interview measures of stress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Q; Cao, R; Pei, X
2015-06-15
Purpose: Three-dimensional dose verification can detect errors introduced by the treatment planning system (TPS) or differences between planned and delivered dose distribution during the treatment. The aim of the study is to extend a previous in-house developed three-dimensional dose reconstruction model in a homogeneous phantom to situations in which tissue inhomogeneities are present. Methods: The method was based on the portal grey images from an electronic portal imaging device (EPID) and the relationship between beamlets and grey-scoring voxels at the position of the EPID. The relationship was expressed in the form of a grey response matrix that was quantified using thickness-dependent scatter kernels determined by a series of experiments. From the portal grey-value distribution measured by the EPID, the two-dimensional incident fluence distribution was reconstructed based on the grey response matrix using a fast iterative algorithm. The accuracy of this approach was verified using a four-field intensity-modulated radiotherapy (IMRT) plan for the treatment of lung cancer in an anthropomorphic phantom. Each field had between twenty and twenty-eight segments and was evaluated by comparing the reconstructed dose distribution with the measured dose. Results: The gamma-evaluation method was used with various evaluation criteria of dose difference and distance-to-agreement: 3%/3 mm and 2%/2 mm. The dose comparison for all irradiated fields showed a pass rate of 100% with the criterion of 3%/3 mm, and a pass rate higher than 92% with the criterion of 2%/2 mm. Conclusion: Our experimental results demonstrate that our method is capable of accurately reconstructing the three-dimensional dose distribution in the presence of inhomogeneities. Using the method, the combined planning and treatment delivery process is verified, offering an easy-to-use tool for the verification of complex treatments.
Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms
NASA Astrophysics Data System (ADS)
Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan
2010-12-01
This paper presents a novel and effective method for facial expression recognition covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as the learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA); it alleviates the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
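The RDA regularization that blends LDA and QDA can be sketched in a few lines; the version below follows the common covariance-blending form with shrinkage toward the identity, simplified relative to Friedman's original sample-size weighting and to the PSO-tuned parameters used in the paper.

```python
import numpy as np

def rda_fit(X, y, lam, gamma):
    """Regularized discriminant analysis (simplified Friedman-style blend).
    lam blends each class covariance toward the pooled covariance; gamma
    shrinks toward a multiple of the identity."""
    classes = np.unique(y)
    n, p = X.shape
    pooled = np.cov(X.T, bias=False)
    params = {}
    for c in classes:
        Xc = X[y == c]
        Sc = np.cov(Xc.T, bias=False)
        S_lam = (1 - lam) * Sc + lam * pooled
        S_reg = (1 - gamma) * S_lam + gamma * (np.trace(S_lam) / p) * np.eye(p)
        params[c] = (Xc.mean(axis=0), S_reg, len(Xc) / n)
    return params

def rda_predict(X, params):
    scores = []
    for c, (mu, S, prior) in params.items():
        Sinv = np.linalg.inv(S)
        d = X - mu
        # quadratic discriminant score (lower is better)
        q = np.einsum("ij,jk,ik->i", d, Sinv, d) + np.linalg.slogdet(S)[1] - 2 * np.log(prior)
        scores.append(q)
    return np.array(list(params))[np.argmin(scores, axis=0)]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(1.5, 1, (40, 5))])
y = np.array([0] * 40 + [1] * 40)
params = rda_fit(X, y, lam=0.5, gamma=0.1)
print("training accuracy:", (rda_predict(X, params) == y).mean())
```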
Data-Driven Learning of Q-Matrix
Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang
2013-01-01
The recent surge of interests in cognitive assessment has led to developments of novel statistical models for diagnostic classification. Central to many such models is the well-known Q-matrix, which specifies the item–attribute relationships. This article proposes a data-driven approach to identification of the Q-matrix and estimation of related model parameters. A key ingredient is a flexible T-matrix that relates the Q-matrix to response patterns. The flexibility of the T-matrix allows the construction of a natural criterion function as well as a computationally amenable algorithm. Simulations results are presented to demonstrate usefulness and applicability of the proposed method. Extension to handling of the Q-matrix with partial information is presented. The proposed method also provides a platform on which important statistical issues, such as hypothesis testing and model selection, may be formally addressed. PMID:23926363
Advanced Imaging Methods for Long-Baseline Optical Interferometry
NASA Astrophysics Data System (ADS)
Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.
2008-11-01
We address the data processing methods needed for imaging with a long baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion, while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired by radio astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called the "soft support constraint" that favors object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.
Crack propagation of brittle rock under high geostress
NASA Astrophysics Data System (ADS)
Liu, Ning; Chu, Weijiang; Chen, Pingzhi
2018-03-01
Based on fracture mechanics and numerical methods, the characteristics and failure criteria of wall-rock cracks, including initiation, propagation, and coalescence, are analyzed systematically under different conditions. To account for the interaction among cracks, a sliding multi-crack model is adopted to simulate the splitting failure of rock under axial compression. Reinforcement of the rock mass by bolts and shotcrete can effectively control crack propagation; both theoretical analysis and numerical simulation are used to study the mechanism of this control, and the optimal bolt installation angle is calculated. ANSYS is then used to simulate the crack-arrest effect of a bolt on a crack, and the influence of different factors on the stress intensity factor is analyzed. The method offers a more scientific and rational criterion for evaluating the splitting failure of underground engineering under high geostress.
Tight-binding study of stacking fault energies and the Rice criterion of ductility in the fcc metals
NASA Astrophysics Data System (ADS)
Mehl, Michael J.; Papaconstantopoulos, Dimitrios A.; Kioussis, Nicholas; Herbranson, M.
2000-02-01
We have used the Naval Research Laboratory (NRL) tight-binding (TB) method to calculate the generalized stacking fault energy and the Rice ductility criterion in the fcc metals Al, Cu, Rh, Pd, Ag, Ir, Pt, Au, and Pb. The method works well for all classes of metals, i.e., simple metals, noble metals, and transition metals. We compared our results with full potential linear-muffin-tin orbital and embedded atom method (EAM) calculations, as well as experiment, and found good agreement. This is impressive, since the NRL-TB approach only fits to first-principles full-potential linearized augmented plane-wave equations of state and band structures for cubic systems. Comparable accuracy with EAM potentials can be achieved only by fitting to the stacking fault energy.
Influence of the geomembrane on time-lapse ERT measurements for leachate injection monitoring.
Audebert, M; Clément, R; Grossin-Debattista, J; Günther, T; Touze-Foltz, N; Moreau, S
2014-04-01
Leachate recirculation is a key process in the operation of municipal waste landfills as bioreactors. To quantify the water content and to evaluate the leachate injection system, in situ methods are required to obtain spatially distributed information, usually electrical resistivity tomography (ERT). However, this method can produce spurious variations in the observations, depending on several parameters. This study investigates the impact of the geomembrane on ERT measurements. Indeed, the geomembrane tends to be ignored in the inversion process in most previously conducted studies. The presence of the geomembrane can change the boundary conditions of the inversion models, which classically assume infinite boundary conditions. Using a numerical modelling approach, the authors demonstrate that a minimum distance is required between the electrode line and the geomembrane for the classical inversion tools to be used under valid conditions. This distance is a function of the electrode line length (i.e. of the unit electrode spacing) used, the array type and the orientation of the electrode line. Moreover, this study shows that if this criterion on the minimum distance is not satisfied, it is possible to significantly improve the inversion process by introducing the complex geometry and the geomembrane location into the inversion tools. These results are finally validated on a field data set gathered on a small municipal solid waste landfill cell where this minimum distance criterion cannot be satisfied. Copyright © 2014 Elsevier Ltd. All rights reserved.
[Epidemiology aspects intestinal parasitosis among the population of Baku].
Khalafli, Kh N
2009-03-01
The aim of this study was to determine the prevalence of intestinal parasitosis in Baku and to evaluate its association with socio-economic and environmental factors. In total, 424 residents of Baku were investigated. Intestinal helminths and protozoa were detected by means of standard methods of investigation (A.A. Turdiev (1967), K. Kato and M. Miura (1954), and C. Graham (1941), in the modified variants of R.E. Cobanov et al. (1993)). Data were analyzed using Student's t criterion and Van der Waerden's X criterion. Overall, about 42.5 ± 2.4% of the investigated persons were infected; Ascaris, Trichocephalus, and Trichostrongylus were found in 19.1 ± 1.9% (p > 0.001). Some persons, mainly women, were infected with Taeniarhynchus saginatus (0.9 ± 0.4%, p > 0.001). In children, the frequency of accompanying diseases was 2.15-4.3 times higher (X = 5.19, p < 0.01) than among adults. Parasitic diseases are well known to be more common in communities with poor socio-economic conditions, and the socio-economic factors associated with intestinal parasites among residents were therefore examined. The investigation showed that current socio-economic conditions have contributed to the increase in intestinal parasitosis. If left untreated, parasitic infections may lead to serious complications. Therefore, public health care employees together with municipal and government officials should cooperate to improve these conditions, and the population should be informed about the signs, symptoms and prevention methods of parasitic diseases.
Presenting simulation results in a nested loop plot.
Rücker, Gerta; Schwarzer, Guido
2014-12-12
Statisticians investigate new methods in simulations to evaluate their properties for future real data applications. Results are often presented in a number of figures, e.g., Trellis plots. We had conducted a simulation study on six statistical methods for estimating the treatment effect in binary outcome meta-analyses, where selection bias (e.g., publication bias) was suspected because of apparent funnel plot asymmetry. We varied five simulation parameters: true treatment effect, extent of selection, event proportion in control group, heterogeneity parameter, and number of studies in meta-analysis. In combination, this yielded a total number of 768 scenarios. To present all results using Trellis plots, 12 figures were needed. Choosing bias as criterion of interest, we present a 'nested loop plot', a diagram type that aims to have all simulation results in one plot. The idea was to bring all scenarios into a lexicographical order and arrange them consecutively on the horizontal axis of a plot, whereas the treatment effect estimate is presented on the vertical axis. The plot illustrates how parameters simultaneously influenced the estimate. It can be combined with a Trellis plot in a so-called hybrid plot. Nested loop plots may also be applied to other criteria such as the variance of estimation. The nested loop plot, similar to a time series graph, summarizes all information about the results of a simulation study with respect to a chosen criterion in one picture and provides a suitable alternative or an addition to Trellis plots.
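A bare-bones version of such a nested loop plot can be produced as follows; the factors, levels, and bias values are placeholders, shown only to illustrate the lexicographic ordering of scenarios along the horizontal axis.

```python
import itertools
import numpy as np
import matplotlib.pyplot as plt

# hypothetical factorial design: parameter names and levels (outer to inner loop)
factors = {
    "true effect": [0.0, 0.5, 1.0],
    "selection":   ["none", "moderate", "strong"],
    "n studies":   [5, 10, 20],
}
scenarios = list(itertools.product(*factors.values()))   # lexicographic order

# stand-in results: bias of some estimator in each scenario (replace with real output)
rng = np.random.default_rng(4)
bias = rng.normal(0.0, 0.05, len(scenarios))

x = np.arange(len(scenarios))
plt.figure(figsize=(8, 3))
plt.step(x, bias, where="mid")            # one value per scenario, in nested-loop order
plt.axhline(0.0, color="grey", lw=0.8)
plt.xlabel("scenario (nested lexicographic order)")
plt.ylabel("bias")
plt.tight_layout()
plt.savefig("nested_loop_plot.png")
```

A full nested loop plot would also draw step functions for the factor levels themselves beneath the results, so the reader can see which parameter changes at each position; that decoration is omitted here for brevity.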
Reliability of unstable periodic orbit based control strategies in biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, Nagender; Singh, Harinder P.; Hasse, Maria
2015-04-15
Presence of recurrent and statistically significant unstable periodic orbits (UPOs) in time series obtained from biological systems is now routinely used as evidence for low dimensional chaos. Extracting accurate dynamical information from the detected UPO trajectories is vital for successful control strategies that either aim to stabilize the system near the fixed point or steer the system away from the periodic orbits. A hybrid UPO detection method from return maps that combines topological recurrence criterion, matrix fit algorithm, and stringent criterion for fixed point location gives accurate and statistically significant UPOs even in the presence of significant noise. Geometry of the return map, frequency of UPOs visiting the same trajectory, length of the data set, strength of the noise, and degree of nonstationarity affect the efficacy of the proposed method. Results suggest that establishing determinism from unambiguous UPO detection is often possible in short data sets with significant noise, but derived dynamical properties are rarely accurate and adequate for controlling the dynamics around these UPOs. A repeat chaos control experiment on epileptic hippocampal slices through more stringent control strategy and adaptive UPO tracking is reinterpreted in this context through simulation of similar control experiments on an analogous but stochastic computer model of epileptic brain slices. Reproduction of equivalent results suggests that far more stringent criteria are needed for linking apparent success of control in such experiments with possible determinism in the underlying dynamics.
Luiselli, J K
2000-07-01
A 3-year-old child with multiple medical disorders and chronic food refusal was treated successfully using a program that incorporated antecedent control procedures combined with positive reinforcement. The antecedent manipulations included visual cueing of a criterion number of self-feeding responses that were required during meals to receive reinforcement and a gradual increase in the imposed criterion (demand fading) that was based on improved frequency of oral consumption. As evaluated in a changing criterion design, the child learned to feed himself as an outcome of treatment. One year following intervention, he was consuming a variety of foods and had gained weight. Advantages of antecedent control methods for the treatment of chronic food refusal are discussed.
Soo, Jhy-Charm; Lee, Eun Gyung; Lee, Larry A; Kashon, Michael L; Harper, Martin
2014-10-01
Lee et al. (Evaluation of pump pulsation in respirable size-selective sampling: part I. Pulsation measurements. Ann Occup Hyg 2014a;58:60-73) introduced an approach to measure pump pulsation (PP) using a real-world sampling train, while the European Standards (EN) (EN 1232-1997 and EN 12919-1999) suggest measuring PP using a resistor in place of the sampler. The goal of this study is to characterize PP according to both EN methods and to determine the relationship of PP between the published method (Lee et al., 2014a) and the EN methods. Additional test parameters were investigated to determine whether the test conditions suggested by the EN methods were appropriate for measuring pulsations. Experiments were conducted using a factorial combination of personal sampling pumps (six medium- and two high-volumetric flow rate pumps), back pressures (six medium- and seven high-flow rate pumps), resistors (two types), tubing lengths between a pump and resistor (60 and 90 cm), and different flow rates (2 and 2.5 l min⁻¹ for the medium- and 4.4, 10, and 11.2 l min⁻¹ for the high-flow rate pumps). The selection of sampling pumps and the ranges of back pressure were based on measurements obtained in the previous study (Lee et al., 2014a). Among six medium-flow rate pumps, only the Gilian5000 and the Apex IS conformed to the 10% criterion specified in EN 1232-1997. Although the AirChek XR5000 exceeded the 10% limit, the average PP (10.9%) was close to the criterion. One high-flow rate pump, the Legacy (PP = 8.1%), conformed to the 10% criterion in EN 12919-1999, while the Elite12 did not (PP = 18.3%). Conducting supplemental tests with additional test parameters beyond those used in the two subject EN standards did not strengthen the characterization of PPs. For the selected test conditions, a linear regression model [PP_EN = 0.014 + 0.375 × PP_NIOSH (adjusted R² = 0.871)] was developed to determine the PP relationship between the published method (Lee et al., 2014a) and the EN methods. The 25% PP criterion recommended by Lee et al. (2014a), an average value derived from repetitive measurements, corresponds to 11% PP_EN. The 10% pass/fail criterion in the EN Standards is not based on extensive laboratory evaluation and would unreasonably exclude at least one pump (i.e. AirChek XR5000 in this study) and, therefore, the more accurate criterion of average 11% from repetitive measurements should be substituted. This study suggests that users can measure PP using either a real-world sampling train or a resistor setup and obtain equivalent findings by applying the model herein derived. The findings of this study will be delivered to the consensus committees to be considered when those standards, including the EN 1232-1997, EN 12919-1999, and ISO 13137-2013, are revised. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2014.
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
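For comparison with the brute-force enumeration mentioned above, the following sketch scores candidate ARMA orders by AIC and BIC using statsmodels; the simulated series and the order grid are illustrative stand-ins for the data sets analyzed in the article.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

# simulate a stationary ARMA(2,1) series as stand-in data
np.random.seed(5)
y = sm.tsa.arma_generate_sample(ar=[1, -0.6, 0.2], ma=[1, 0.4], nsample=500)

# brute-force enumeration over (p, q) orders, scored by AIC and BIC
best = {"aic": (np.inf, None), "bic": (np.inf, None)}
for p in range(4):
    for q in range(4):
        res = ARIMA(y, order=(p, 0, q)).fit()
        if res.aic < best["aic"][0]:
            best["aic"] = (res.aic, (p, q))
        if res.bic < best["bic"][0]:
            best["bic"] = (res.bic, (p, q))

print("best (p, q) by AIC:", best["aic"])
print("best (p, q) by BIC:", best["bic"])
```

The MINLP solvers discussed in the abstract replace this exhaustive grid with a direct search over the integer orders and real-valued coefficients simultaneously.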
NASA Astrophysics Data System (ADS)
Kartashev, A. L.; Vaulin, S. D.; Kartasheva, M. A.; Martynov, A. A.; Safonov, E. V.
2016-06-01
This article presents information about the main distinguishing features of microturbine power plants. The use of a Francis turbine in microturbine power plants with a rated power of 100 kW is justified. Initial analytical engineering calculations of the turbine (without using computational fluid dynamics) with appropriate calculation methods are considered. A parametric study of the nozzle blade and of the whole turbine stage using ANSYS CFX is described. The calculations determined the optimal geometry using the criterion of maximum efficiency at a given total pressure ratio. The calculation results are presented in graphical form, together with the velocity and pressure fields in the interblade channels of the nozzle unit and the impeller.
Chen, Jinsong; Liu, Lei; Shih, Ya-Chen T; Zhang, Daowen; Severini, Thomas A
2016-03-15
We propose a flexible model for correlated medical cost data with several appealing features. First, the mean function is partially linear. Second, the distributional form for the response is not specified. Third, the covariance structure of correlated medical costs has a semiparametric form. We use extended generalized estimating equations to simultaneously estimate all parameters of interest. B-splines are used to estimate unknown functions, and a modification to Akaike information criterion is proposed for selecting knots in spline bases. We apply the model to correlated medical costs in the Medical Expenditure Panel Survey dataset. Simulation studies are conducted to assess the performance of our method. Copyright © 2015 John Wiley & Sons, Ltd.
Modelling road accidents: An approach using structural time series
NASA Astrophysics Data System (ADS)
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
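A compact sketch of fitting and comparing structural time series specifications by AIC with statsmodels is shown below; the simulated monthly series stands in for the road-accident counts, which are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# simulated monthly counts (stand-in for the road-accident series in the abstract)
rng = np.random.default_rng(6)
idx = pd.date_range("2001-01", periods=144, freq="MS")
y = pd.Series(5000 + 10 * np.arange(144)
              + 300 * np.sin(2 * np.pi * np.arange(144) / 12)
              + rng.normal(0, 100, 144), index=idx)

# candidate structural specifications, compared by AIC
specs = {
    "local level": dict(level="local level"),
    "local level + seasonal": dict(level="local level", seasonal=12),
    "local linear trend + seasonal": dict(level="local linear trend", seasonal=12),
}
for name, kw in specs.items():
    res = sm.tsa.UnobservedComponents(y, **kw).fit(disp=False)
    print(f"{name:32s} AIC = {res.aic:10.1f}")
```

In the paper the chosen model would then be validated by out-of-sample prediction of the final year, as described in the abstract.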
NASA Technical Reports Server (NTRS)
Fisher, Kevin; Chang, Chein-I
2009-01-01
Progressive band selection (PBS) reduces spectral redundancy without significant loss of information, thereby reducing hyperspectral image data volume and processing time. Used onboard a spacecraft, it can also reduce image downlink time. PBS prioritizes an image's spectral bands according to priority scores that measure their significance to a specific application. Then it uses one of three methods to select an appropriate number of the most useful bands. Key challenges for PBS include selecting an appropriate criterion to generate band priority scores, and determining how many bands should be retained in the reduced image. The image's Virtual Dimensionality (VD), once computed, is a reasonable estimate of the latter. We describe the major design details of PBS and test PBS in a land classification experiment.
Brown, Heidi Wendell; Wise, Meg E.; Westenberg, Danielle; Schmuhl, Nicholas B.; Brezoczky, Kelly Lewis; Rogers, Rebecca G.; Constantine, Melissa L.
2017-01-01
Introduction and hypothesis Fewer than 30% of women with accidental bowel leakage (ABL) seek care, despite the existence of effective, minimally invasive therapies. We developed and validated a condition-specific instrument to assess barriers to care-seeking for ABL in women. Methods Adult women with ABL completed an electronic survey about condition severity, patient activation, previous care-seeking, and demographics. The Barriers to Care-seeking for Accidental Bowel Leakage (BCABL) instrument contained 42 potential items completed at baseline and again 2 weeks later. Paired t tests evaluated test–retest reliability. Factor analysis evaluated factor structure and guided item retention. Cronbach’s alpha evaluated internal consistency. Within and across factor item means generated a summary BCABL score used to evaluate scale validity with six external criterion measures. Results Among 1,677 click-throughs, 736 (44%) entered the survey; 95% of eligible female respondents (427 out of 458) provided complete data. Fifty-three percent of respondents had previously sought care for their ABL; median age was 62 years (range 27–89); mean Vaizey score was 12.8 (SD = 5.0), indicating moderate to severe ABL. Test–retest reliability was excellent for all items. Factor extraction via oblique rotation resulted in the final structure of 16 items in six domains, within which internal consistency was high. All six external criterion measures correlated significantly with BCABL score. Conclusions The BCABL questionnaire, with 16 items mapping to six domains, has excellent criterion validity and test–retest reliability when administered electronically in women with ABL. The BCABL can be used to identify care-seeking barriers for ABL in different populations, inform targeted interventions, and measure their effectiveness. PMID:28236039
NASA Astrophysics Data System (ADS)
Mallast, U.; Gloaguen, R.; Geyer, S.; Rödiger, T.; Siebert, C.
2011-08-01
In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially adequate in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxiliary information and finally evaluated in terms of hydro-geological significance. Using the example of the western catchment of the Dead Sea (Israel/Palestine), the orientation and location of the differentiated lineaments are compared to characteristics of known structural features. We demonstrate that a strong correlation between lineaments and structural features exists. Using Euclidean distances between lineaments and wells provides an assessment criterion to evaluate the hydraulic significance of detected lineaments. Based on this analysis, we suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus significant information on groundwater movements. To validate the flow-paths we compare them to existing results of groundwater models that are based on well data.
The transition prediction toolkit: LST, SIT, PSE, DNS, and LES
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Chang, Chau-Lyan; Ng, Lian L.
1992-01-01
The e^N method for predicting transition onset is an amplitude ratio criterion that is on the verge of full maturation for three-dimensional, compressible, real gas flows. Many of the components for a more sophisticated, absolute amplitude criterion are now emerging: receptivity theory, secondary instability theory, parabolized stability equations approaches, direct numerical simulation and large-eddy simulation. This paper will provide a description of each of these new theoretical tools and provide indications of their current status.
Simple proof of the quantum benchmark fidelity for continuous-variable quantum devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Namiki, Ryo
2011-04-15
An experimental success criterion for continuous-variable quantum teleportation and memory is to surpass the limit of the average fidelity achieved by classical measure-and-prepare schemes with respect to a Gaussian-distributed set of coherent states. We present an alternative proof of the classical limit based on the familiar notions of state-channel duality and partial transposition. The present method enables us to produce a quantum-domain criterion associated with a given set of measured fidelities.
Variable Selection with Prior Information for Generalized Linear Models via the Prior LASSO Method.
Jiang, Yuan; He, Yunxiao; Zhang, Heping
LASSO is a popular statistical tool often used in conjunction with generalized linear models that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, so much biological and biomedical data have been collected and they may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely, prior LASSO (pLASSO), to incorporate that prior information into penalized generalized linear models. The goal is achieved by adding in the LASSO criterion function an additional measure of the discrepancy between the prior information and the model. For linear regression, the whole solution path of the pLASSO estimator can be found with a procedure similar to the Least Angle Regression (LARS). Asymptotic theories and simulation results show that pLASSO provides significant improvement over LASSO when the prior information is relatively accurate. When the prior information is less reliable, pLASSO shows great robustness to the misspecification. We illustrate the application of pLASSO using a real data set from a genome-wide association study.
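A proximal-gradient sketch of a prior-informed LASSO objective is given below; the quadratic discrepancy between the coefficients and the prior guess is an assumed, illustrative choice and is not necessarily the discrepancy measure used in the pLASSO paper, and the LARS-style solution path is not implemented.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prior_lasso(X, y, beta_prior, lam, eta, n_iter=2000):
    """Proximal-gradient sketch of a prior-informed LASSO for linear regression:
    (1/2n)||y - X b||^2 + lam * ||b||_1 + (eta/2) * ||b - beta_prior||^2.
    The quadratic prior-discrepancy term is an illustrative assumption."""
    n, p = X.shape
    beta = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + eta)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = -X.T @ (y - X @ beta) / n + eta * (beta - beta_prior)
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(7)
X = rng.standard_normal((200, 50))
true_beta = np.zeros(50); true_beta[:5] = 2.0
y = X @ true_beta + rng.standard_normal(200)
prior = np.zeros(50); prior[:5] = 1.5      # roughly correct prior guess
print(prior_lasso(X, y, prior, lam=0.1, eta=0.5)[:8].round(2))
```

Setting eta to zero recovers ordinary LASSO, while large eta pulls the estimate toward the prior, which mirrors the trade-off described in the abstract.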
Dynamics of an HBV/HCV infection model with intracellular delay and cell proliferation
NASA Astrophysics Data System (ADS)
Zhang, Fengqin; Li, Jianquan; Zheng, Chongwu; Wang, Lin
2017-01-01
A new mathematical model of hepatitis B/C virus (HBV/HCV) infection which incorporates the proliferation of healthy hepatocyte cells and the latent period of infected hepatocyte cells is proposed and studied. The dynamics is analyzed via Pontryagin's method and a newly proposed alternative geometric stability switch criterion. Sharp conditions ensuring stability of the infection persistent equilibrium are derived by applying Pontryagin's method. Using the intracellular delay as the bifurcation parameter and applying an alternative geometric stability switch criterion, we show that the HBV/HCV infection model undergoes stability switches. Furthermore, numerical simulations illustrate that the intracellular delay can induce complex dynamics such as persistence bubbles and chaos.
Morphing continuum theory for turbulence: Theory, computation, and visualization.
Chen, James
2017-10-01
A high order morphing continuum theory (MCT) is introduced to model highly compressible turbulence. The theory is formulated under the rigorous framework of rational continuum mechanics. A set of linear constitutive equations and balance laws are deduced and presented from the Coleman-Noll procedure and Onsager's reciprocal relations. The governing equations are then arranged in conservation form and solved through the finite volume method with a second-order Lax-Friedrichs scheme for shock preservation. A numerical example of transonic flow over a three-dimensional bump is presented using MCT and the finite volume method. The comparison shows that MCT-based direct numerical simulation (DNS) provides a better prediction than Navier-Stokes (NS)-based DNS with less than 10% of the mesh number when compared with experiments. A MCT-based and frame-indifferent Q criterion is also derived to show the coherent eddy structure of the downstream turbulence in the numerical example. It should be emphasized that unlike the NS-based Q criterion, the MCT-based Q criterion is objective without the limitation of Galilean invariance.
Match-bounded String Rewriting Systems
NASA Technical Reports Server (NTRS)
Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes
2003-01-01
We introduce a new class of automated proof methods for the termination of rewriting systems on strings. The basis of all these methods is to show that rewriting preserves regular languages. To this end, letters are annotated with natural numbers, called match heights. If the minimal height of all positions in a redex is h, then every position in the reduct gets height h+1. In a match-bounded system, match heights are globally bounded. Using recent results on deleting systems, we prove that rewriting by a match-bounded system preserves regular languages. Hence it is decidable whether a given rewriting system has a given match bound. We also provide a sufficient criterion for the absence of a match bound. The problem of the existence of a match bound is still open. Match-boundedness for all strings can be used as an automated criterion for termination, since match-bounded systems are terminating. This criterion can be strengthened by requiring match-boundedness only for a restricted set of strings, for instance the set of right-hand sides of forward closures.
Building a maintenance policy through a multi-criterion decision-making model
NASA Astrophysics Data System (ADS)
Faghihinia, Elahe; Mollaverdi, Naser
2012-08-01
A major competitive advantage of production and service systems is establishing a proper maintenance policy. Therefore, maintenance managers should make maintenance decisions that best fit their systems. Multi-criterion decision-making methods can take into account a number of aspects associated with the competitiveness factors of a system. This paper presents a multi-criterion decision-aided maintenance model with the three criteria that most influence decision making: reliability, maintenance cost, and maintenance downtime. The Bayesian approach has been applied to address the shortage of maintenance failure data. The model therefore seeks the best compromise between these three criteria and establishes replacement intervals using the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE II), integrating the Bayesian approach with regard to the decision maker's preferences on the problem. Finally, the model has been illustrated using a numerical application, and PROMETHEE GAIA (the visual interactive module) has been used for visual realization and an illustrative sensitivity analysis. PROMETHEE II and PROMETHEE GAIA were run with Decision Lab software. A sensitivity analysis has been performed to verify the robustness of certain parameters of the model.
Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui
2017-01-01
A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.
A globally optimal k-anonymity method for the de-identification of health data.
El Emam, Khaled; Dankar, Fida Kamal; Issa, Romeo; Jonker, Elizabeth; Amyot, Daniel; Cogo, Elise; Corriveau, Jean-Pierre; Walker, Mark; Chowdhury, Sadrul; Vaillancourt, Regis; Roffey, Tyson; Bottomley, Jim
2009-01-01
Explicit patient consent requirements in privacy laws can have a negative impact on health research, leading to selection bias and reduced recruitment. Often legislative requirements to obtain consent are waived if the information collected or disclosed is de-identified. The authors developed and empirically evaluated a new globally optimal de-identification algorithm that satisfies the k-anonymity criterion and that is suitable for health datasets. The authors compared OLA (Optimal Lattice Anonymization) empirically to three existing k-anonymity algorithms, Datafly, Samarati, and Incognito, on six public, hospital, and registry datasets for different values of k and suppression limits. Three information loss metrics were used for the comparison: precision, discernability metric, and non-uniform entropy. Each algorithm's performance speed was also evaluated. The Datafly and Samarati algorithms had higher information loss than OLA and Incognito; OLA was consistently faster than Incognito in finding the globally optimal de-identification solution. For the de-identification of health datasets, OLA is an improvement on existing k-anonymity algorithms in terms of information loss and performance.
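A minimal sketch of the k-anonymity criterion itself is shown below; it only checks the criterion on a hypothetical toy dataset and does not perform OLA's lattice search over generalization levels.
from collections import Counter

def satisfies_k_anonymity(records, quasi_identifiers, k):
    """True if every equivalence class defined by the quasi-identifier
    columns contains at least k records (the k-anonymity criterion)."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(c >= k for c in counts.values())

# hypothetical generalized records
records = [
    {"age": "30-39", "zip": "021**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "021**", "diagnosis": "cold"},
    {"age": "40-49", "zip": "022**", "diagnosis": "flu"},
]
print(satisfies_k_anonymity(records, ["age", "zip"], k=2))  # False: one class has a single record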
Paiva, Carlos Eduardo; de Oliveira, Marco Antonio; Lucchetti, Giancarlo; Fregnani, José Humberto Tavares Guerreiro; Paiva, Bianca Sakamoto Ribeiro
2018-01-01
Objective: To evaluate the prevalence and possible factors associated with the development of burnout among medical students in the first years of undergraduate school. Method: A cross-sectional study was conducted at the Barretos School of Health Sciences, Dr. Paulo Prata. A total of 330 students in the first four years of medical undergraduate school were invited to participate by responding to the sociodemographic and Maslach Burnout Inventory-Student Survey (MBI-SS) questionnaires. The first-year group consisted of 150 students, followed by the second-, third-, and fourth-year groups, with 60 students each. Results: Data from 265 students who answered at least the sociodemographic questionnaire and the MBI-SS were analyzed (response rate = 80.3%). One (n = 1, 0.3%) potential participant viewed the Informed Consent Form but did not agree to participate in the study. A total of 187 students (187/265, 70.6%) presented high levels of emotional exhaustion, 140 (140/265, 52.8%) had high cynicism, and 129 (129/265, 48.7%) had low academic efficacy. The two-dimensional criterion indicated that 119 (44.9%) students experienced burnout. Based on the three-dimensional criterion, 70 students (26.4%) presented with burnout. The year with the highest frequency of affected students for both criteria was the first year (p = 0.001). Personal attributes were able to explain 11% (ΔR² = 0.11) of the variability of burnout under the two-dimensional criterion and 14.4% (R² = 0.144) under the three-dimensional criterion. Conclusion: This study showed a high prevalence of burnout among medical students in a private school using active teaching methodologies. In the first years of the course, students' personal attributes (optimism and self-perception of health) and school attributes (motivation and an exhausting study routine) were associated with higher levels of burnout. These findings reinforce the need to establish preventive measures focused on the personal attributes of first-year students, fostering better performance, motivation, optimism, and empathy in the subsequent stages of the course. PMID:29513668
Optimal sensors placement and spillover suppression
NASA Astrophysics Data System (ADS)
Hanis, Tomas; Hromcik, Martin
2012-04-01
A new approach to optimal placement of sensors (OSP) in mechanical structures is presented. In contrast to existing methods, the presented procedure enables a designer to seek a trade-off between the presence of desirable modes in captured measurements and the elimination of the influence of those mode shapes that are not of interest in a given situation. An efficient numerical algorithm is presented, developed from an existing routine based on analysis of the Fisher information matrix. We consider two requirements in the optimal sensor placement procedure. On top of the classical EFI approach, the sensor configuration should also minimize spillover of unwanted higher modes. We use the information approach to OSP, based on the effective independence (EFI) method, and modify the underlying criterion to meet both of our requirements—to maximize useful signals and minimize spillover of unwanted modes at the same time. Performance of our approach is demonstrated by means of examples, and a flexible Blended Wing Body (BWB) aircraft case study related to a running European-level FP7 research project 'ACFA 2020—Active Control for Flexible Aircraft'.
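A minimal sketch of the classical EFI selection step, built on the Fisher information matrix of the target mode shapes, is given below; the spillover-minimization modification proposed in the paper is not included, and the mode-shape matrix is randomly generated for illustration.
import numpy as np

def efi_select(Phi, n_sensors):
    """Classical effective independence (EFI) sensor selection (sketch).
    Phi: (n_candidates x n_modes) mode-shape matrix. Iteratively removes the
    candidate location contributing least to the Fisher information matrix
    Phi^T Phi until n_sensors locations remain."""
    idx = list(range(Phi.shape[0]))
    while len(idx) > n_sensors:
        P = Phi[idx]
        # effective independence distribution: diag(P (P^T P)^-1 P^T)
        ed = np.einsum('ij,jk,ik->i', P, np.linalg.inv(P.T @ P), P)
        idx.pop(int(np.argmin(ed)))     # drop the least informative location
    return idx

# toy usage with hypothetical mode shapes at 20 candidate locations
rng = np.random.default_rng(1)
Phi = rng.normal(size=(20, 3))
print(efi_select(Phi, n_sensors=6))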
Combining SVM and flame radiation to forecast BOF end-point
NASA Astrophysics Data System (ADS)
Wen, Hongyuan; Zhao, Qi; Xu, Lingfei; Zhou, Munchun; Chen, Yanru
2009-05-01
Because of the complex reactions in the basic oxygen furnace (BOF) for steelmaking, the main end-point control methods of steelmaking face difficulties that are hard to overcome. Aiming at these problems, a support vector machine (SVM) method for forecasting the BOF steelmaking end-point is presented based on flame radiation information. The basis is that the furnace flame reflects the carbon-oxygen reaction, which is the major reaction in the steelmaking furnace. The system can acquire spectrum and image data quickly in the adverse steelmaking environment. The structure of the SVM is similar to that of a multilayer feed-forward neural network, but the SVM model can overcome the inherent defects of the latter. The model is trained and used for forecasting with the SVM and appropriate variables of light and image characteristic information. The model training process follows the structural risk minimization (SRM) criterion, and the design parameters can be adjusted automatically according to the sampled data during training. Experimental results indicate that the prediction precision of the SVM model and the execution time both meet the requirements of online end-point judgment.
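A minimal sketch of an SVM classifier for end-point judgment is shown below; it uses scikit-learn with synthetic feature vectors standing in for the light and image characteristics, not the paper's actual data or feature set.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# hypothetical feature vectors extracted from flame spectra and images;
# labels mark whether the heat has reached its end-point (1) or not (0)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# an RBF-kernel SVM trained on standardized optical features
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X, y)
print(model.predict(X[:5]), model.score(X, y))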
Smith-Ryan, Abbie E; Blue, Malia N M; Trexler, Eric T; Hirsch, Katie R
2018-03-01
Measurement of body composition to assess health risk and prevention is expanding. Accurate portable techniques are needed to facilitate use in clinical settings. This study evaluated the accuracy and repeatability of a portable ultrasound (US) in comparison with a four-compartment criterion for per cent body fat (%Fat) in overweight/obese adults. Fifty-one participants (mean ± SD; age: 37·2 ± 11·3 years; BMI: 31·6 ± 5·2 kg m -2 ) were measured for %Fat using US (GE Logiq-e) and skinfolds. A subset of 36 participants completed a second day of the same measurements, to determine reliability. US and skinfold %Fat were calculated using the seven-site Jackson-Pollock equation. The Wang 4C model was used as the criterion method for %Fat. Compared to a gold standard criterion, US %Fat (36·4 ± 11·8%; P = 0·001; standard error of estimate [SEE] = 3·5%) was significantly higher than the criterion (33·0 ± 8·0%), but not different than skinfolds (35·3 ± 5·9%; P = 0·836; SEE = 4·5%). US resulted in good reliability, with no significant differences from Day 1 (39·95 ± 15·37%) to Day 2 (40·01 ± 15·42%). Relative consistency was 0·96, and standard error of measure was 0·94%. Although US overpredicted %Fat compared to the criterion, a moderate SEE for US is suggestive of a practical assessment tool in overweight individuals. %Fat differences reported from these field-based techniques are less than reported by other single-measurement laboratory methods and therefore may have utility in a clinical setting. This technique may also accurately track changes. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
Cysewski, Piotr; Przybyłek, Maciej
2017-09-30
A new theoretical screening procedure was proposed for the appropriate selection of potential cocrystal formers possessing the ability to enhance the dissolution rates of drugs. The procedure relies on a training set comprising 102 positive and 17 negative cases of cocrystals found in the literature. Although the only available data were of qualitative character, statistical analysis using binary classification made it possible to formulate quantitative criteria. Among the 3679 molecular descriptors considered, the relative value of the lipoaffinity index, expressed as the difference between the values calculated for the active compound and the excipient, was found to be the measure best suited for discriminating positive and negative cases. Assuming 5% precision, the applied classification criterion led to the inclusion of 70% of positive cases in the final prediction. Since the lipoaffinity index is a molecular descriptor computed using only 2D information about a chemical structure, its estimation is straightforward and computationally inexpensive. The inclusion of an additional criterion quantifying the cocrystallization probability leads to the conjunction criteria Hmix < -0.18 and ΔLA > 3.61, allowing for the identification of dissolution rate enhancers. The screening procedure was applied to finding the most promising coformers of drugs such as Iloperidone, Ritonavir, Carbamazepine and Ethenzamide. Copyright © 2017 Elsevier B.V. All rights reserved.
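A minimal sketch applying the two thresholds quoted in the abstract is shown below; the descriptor values passed in the example are hypothetical.
def dissolution_enhancer_candidate(h_mix, delta_lipoaffinity,
                                   h_cut=-0.18, la_cut=3.61):
    """Conjunction criterion from the abstract: flag a coformer as a likely
    dissolution-rate enhancer when the mixing-enthalpy term is below h_cut and
    the lipoaffinity-index difference (API minus excipient) exceeds la_cut."""
    return h_mix < h_cut and delta_lipoaffinity > la_cut

print(dissolution_enhancer_candidate(-0.25, 4.1))  # True
print(dissolution_enhancer_candidate(-0.10, 4.1))  # False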
Satisfying the Einstein–Podolsky–Rosen criterion with massive particles
Peise, J.; Kruse, I.; Lange, K.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Hammerer, K.; Santos, L.; Smerzi, A.; Klempt, C.
2015-01-01
In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, where a measurement of one subsystem seemingly allows for a prediction of the second subsystem beyond the Heisenberg uncertainty relation. Up to now, continuous-variable EPR correlations have only been created with photons, while the demonstration of such strongly correlated states with massive particles is still outstanding. Here we report on the creation of an EPR-correlated two-mode squeezed state in an ultracold atomic ensemble. The state shows an EPR entanglement parameter of 0.18(3), which is 2.4 s.d. below the threshold 1/4 of the EPR criterion. We also present a full tomographic reconstruction of the underlying many-particle quantum state. The state presents a resource for tests of quantum nonlocality and a wide variety of applications in the field of continuous-variable quantum information and metrology. PMID:26612105
Evaluation of deconvolution modelling applied to numerical combustion
NASA Astrophysics Data System (ADS)
Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît
2018-01-01
A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first one relies on subgrid scale interpolation of deconvolved profiles and the second uses parametric functions to describe small scales. Conducted tests analyse the ability of the method to capture the chemical filtered flame structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
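A minimal sketch of the Van Cittert iteration used by approximate deconvolution is shown below, applied to a toy one-dimensional flame-like profile with a Gaussian filter standing in for the LES filter; the filter width, iteration count and relaxation factor are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def van_cittert(filtered, filter_op, n_iter=5, beta=1.0):
    """Van Cittert iterative deconvolution (sketch):
    u_{k+1} = u_k + beta * (filtered - G(u_k)), with G the known filter."""
    u = filtered.copy()
    for _ in range(n_iter):
        u = u + beta * (filtered - filter_op(u))
    return u

# sharp progress-variable-like profile, filtered and then deconvolved
x = np.linspace(-5, 5, 200)
phi = 0.5 * (1 + np.tanh(2 * x))
G = lambda f: gaussian_filter1d(f, sigma=6)     # stand-in LES filter
phi_bar = G(phi)
phi_star = van_cittert(phi_bar, G)
print(float(np.abs(phi - phi_bar).max()), float(np.abs(phi - phi_star).max()))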
Classification VIA Information-Theoretic Fusion of Vector-Magnetic and Acoustic Sensor Data
2007-04-01
The operation in equation (10) may be viewed as a vector matched filter used to estimate B(t). Methods for choosing features to maximize the classification information in Y are described in Section 3.2. Maximum mutual information (MMI) features: we begin with a review of several desirable properties of features that maximize a mutual information (MMI) criterion, and then review a particular algorithm [2].
Gary D. Grossman; Robert E Ratajczak; J. Todd Petty; Mark D. Hunter; James T. Peterson; Gael Grenouillet
2006-01-01
We used strong inference with Akaike's Information Criterion (AIC) to assess the processes capable of explaining long-term (1984-1995) variation in the per capita rate of change of mottled sculpin (Cottus bairdi) populations in the Coweeta Creek drainage (USA). We sampled two fourth- and one fifth-order sites (BCA [uppermost], BCB, and CC [lowermost])...
Criterion Validity Evidence for the easyCBM© CCSS Math Measures: Grades 6-8. Technical Report #1402
ERIC Educational Resources Information Center
Anderson, Daniel; Rowley, Brock; Alonzo, Julie; Tindal, Gerald
2012-01-01
The easyCBM© CCSS Math tests were developed to help inform teachers' instructional decisions by providing relevant information on students' mathematical skills, relative to the Common Core State Standards (CCSS). This technical report describes a study to explore the validity of the easyCBM© CCSS Math tests by evaluating the relation between…
NASA Astrophysics Data System (ADS)
Lehmann, Rüdiger; Lösler, Michael
2017-12-01
Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the use of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution to the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as the AIC.
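A minimal sketch of such an AIC comparison is given below; the model names, log-likelihoods and parameter counts are hypothetical, and only the generic AIC formula is used.
import numpy as np

def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2 ln L_hat."""
    return 2 * n_params - 2 * log_likelihood

# hypothetical fitted log-likelihoods for the null (no deformation) model
# and two alternative deformation patterns with extra parameters
models = {"null": (-120.4, 3), "single-point shift": (-112.1, 4),
          "block movement": (-111.5, 6)}
scores = {name: aic(ll, k) for name, (ll, k) in models.items()}
best = min(scores, key=scores.get)
print(scores, "-> selected:", best)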
High blood Pressure in children and its correlation with three definitions of obesity in childhood
de Moraes, Leonardo Iezzi; Nicola, Thaís Coutinho; de Jesus, Julyanna Silva Araújo; Alves, Eduardo Roberty Badiani; Giovaninni, Nayara Paula Bernurdes; Marcato, Daniele Gasparini; Sampaio, Jéssica Dutra; Fuly, Jeanne Teixeira Bessa; Costalonga, Everlayny Fiorot
2014-01-01
Background: Several authors have correlated increased cardiovascular risk with nutritional status; however, different criteria exist for the classification of overweight and obesity in children. Objectives: To evaluate the performance of three nutritional classification criteria in children as definers of the presence of obesity and predictors of high blood pressure in schoolchildren. Methods: Eight hundred and seventeen children aged 6 to 13 years, enrolled in public schools in the municipality of Vila Velha (ES), were submitted to anthropometric evaluation and blood pressure measurement. Nutritional status was classified according to two international criteria (CDC/NCHS 2000 and IOTF 2000) and one Brazilian criterion (Conde e Monteiro 2006). Results: The prevalence of overweight was highest when the Conde e Monteiro criterion was used (27%) and lowest with the IOTF criterion (15%). High blood pressure was observed in 7.3% of children. A strong association was identified between the presence of overweight and the occurrence of high blood pressure, regardless of the criterion used (p < 0.001). The criterion showing the highest sensitivity in predicting elevated BP was Conde e Monteiro (44%), while the CDC criterion had the highest specificity (94%) and the greatest overall accuracy (63%). Conclusions: The prevalence of overweight in Brazilian children is higher when the Conde e Monteiro classification criterion is used, and lower when the IOTF criterion is used. The Brazilian classification criterion proved to be the most sensitive predictor of high BP risk in this sample. PMID:24676372
NASA Astrophysics Data System (ADS)
Golinko, I. M.; Kovrigo, Yu. M.; Kubrak, A. I.
2014-03-01
An express method for optimally tuning analog PI and PID controllers is considered. An integral quality criterion that also minimizes the control output is proposed for optimizing control systems. The suggested criterion differs from existing ones in that the control output applied to the technological process is taken into account in a correct manner, which makes it possible to minimize the expenditure of material and/or energy resources in controlling industrial equipment. With control organized in this manner, less wear and a longer service life of control devices are achieved. The unimodal nature of the proposed criterion for optimal controller tuning is demonstrated numerically using methods of optimization theory. A functional interrelation between the optimal controller parameters and the dynamic properties of the controlled plant is determined numerically for a single-loop control system. The results obtained from simulation of transients in a control system carried out using the proposed and existing functional dependences are compared with each other. The proposed calculation formulas differ from the existing ones in their simple structure and highly accurate search for the optimal controller tuning parameters. The obtained calculation formulas are recommended for use by automation specialists in the design and optimization of control systems.
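A minimal sketch of an integral tuning criterion of this general kind is given below; the weighted sum of squared error and squared control output, the first-order plant, and all parameter values are assumptions for illustration and do not reproduce the paper's exact criterion.
def pi_cost(kp, ki, K=2.0, T=5.0, w=0.1, dt=0.01, t_end=60.0, setpoint=1.0):
    """Integral criterion J = sum (e^2 + w*u^2) dt for a PI controller driving
    a first-order plant dy/dt = (K*u - y)/T, integrated by explicit Euler.
    The term w*u^2 mimics penalizing the control output (assumed form)."""
    y, integ, J = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - y
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (K * u - y) / T        # plant update
        J += (e * e + w * u * u) * dt    # accumulate criterion
    return J

# compare candidate tunings by their criterion values
print(pi_cost(kp=1.0, ki=0.2))
print(pi_cost(kp=2.0, ki=0.5))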
Time Series Decomposition into Oscillation Components and Phase Estimation.
Matsuda, Takeru; Komaki, Fumiyasu
2017-02-01
Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, Gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished with this model in a manner similar to the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and detecting the phase reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
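A minimal sketch of one stochastic oscillation component as a Gaussian linear state-space model (a damped rotation driven by noise) is shown below; the damping, noise level and sampling rate are illustrative, and the estimation/AIC step from the paper is not reproduced.
import numpy as np

def simulate_oscillator(n, freq, rho=0.99, sigma=0.1, fs=100.0, seed=0):
    """One oscillation component: x_t = rho * R(2*pi*freq/fs) x_{t-1} + noise,
    with the observation taken as the first state. The frequency effectively
    fluctuates because of the state noise."""
    rng = np.random.default_rng(seed)
    theta = 2 * np.pi * freq / fs
    R = rho * np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
    x, out = np.zeros(2), np.empty(n)
    for t in range(n):
        x = R @ x + rng.normal(scale=sigma, size=2)
        out[t] = x[0]
    return out

y = simulate_oscillator(1000, freq=10.0)   # a noisy ~10 Hz component
print(y[:5])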
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulation/estimations was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the Full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block diagonal FIM optimal designs when assuming true parameter values. However, the FO approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO Full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
Saha, Tulshi D; Chou, S Patricia; Grant, Bridget F
2006-07-01
Item response theory (IRT) was used to determine whether the DSM-IV diagnostic criteria for alcohol abuse and dependence are arrayed along a continuum of severity. Data came from a large nationally representative sample of the US population, 18 years and older. A two-parameter logistic IRT model was used to determine the severity and discrimination of each DSM-IV criterion. Differential criterion functioning (DCF) was also assessed across subgroups of the population defined by sex, age and race-ethnicity. All DSM-IV alcohol abuse and dependence criteria, except alcohol-related legal problems, formed a continuum of alcohol use disorder severity. Abuse and dependence criteria did not consistently tap the mildest or more severe end of the continuum, respectively, and several criteria were identified as potentially redundant. The drinking in larger amounts or for longer than intended dependence criterion had greater discrimination and lower severity than any other criterion. Although several criteria were found to function differentially between subgroups defined in terms of sex and age, there was evidence that the generalizability and validity of the criteria forming the continuum remained intact at the test score level. DSM-IV diagnostic criteria for alcohol abuse and dependence form a continuum of severity, calling into question the abuse-dependence distinction in the DSM-IV and the interpretation of abuse as a milder disorder than dependence. The criteria tapped the more severe end of the alcohol use disorder continuum, highlighting the need to identify other criteria capturing the mild to intermediate range of severity. The drinking larger amounts or longer than intended dependence criterion may be a bridging criterion between drinking patterns that incur risk of alcohol use disorder at the milder end of the continuum and tolerance, withdrawal, impaired control and serious social and occupational dysfunction at the more severe end of the alcohol use disorder continuum. Future IRT and other dimensional analyses hold great promise in informing revisions to categorical classifications and constructing new dimensional classifications of alcohol use disorders based on the DSM and the ICD.
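A minimal sketch of the two-parameter logistic IRT model is shown below; the discrimination and severity values are illustrative, not the estimates reported in the study.
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability of endorsing a criterion
    given latent severity theta, discrimination a and severity (difficulty) b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# a highly discriminating, low-severity criterion versus a less
# discriminating, high-severity one (hypothetical parameter values)
theta = np.linspace(-3, 3, 7)
print(irt_2pl(theta, a=2.5, b=-0.5))
print(irt_2pl(theta, a=1.0, b=1.5))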
Maximum likelihood-based analysis of single-molecule photon arrival trajectories.
Hajdziona, Marta; Molski, Andrzej
2011-02-07
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
Model selection with multiple regression on distance matrices leads to incorrect inferences.
Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H
2017-01-01
In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A
2016-03-15
The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
Monte Carlo simulations on marker grouping and ordering.
Wu, J; Jenkins, J; Zhu, J; McCarty, J; Watson, C
2003-08-01
Four global algorithms, maximum likelihood (ML), sum of adjacent LOD scores (SALOD), sum of adjacent recombinant fractions (SARF) and product of adjacent recombinant fractions (PARF), and one approximation algorithm, seriation (SER), were used to compare marker ordering efficiencies for correctly given linkage groups based on doubled haploid (DH) populations. The Monte Carlo simulation results indicated that the marker ordering powers of the five methods were almost identical. Correlation coefficients between grouping power and ordering power were greater than 0.99, indicating that all these methods for marker ordering were reliable. Therefore, the main problem for linkage analysis was how to improve the grouping power. Since the SER approach provided the advantage of speed without losing ordering power, this approach was used for detailed simulations. For more generality, multiple linkage groups were employed, and population size, linkage cutoff criterion, marker spacing pattern (even or uneven), and marker spacing distance (close or loose) were considered for obtaining acceptable grouping powers. Simulation results indicated that the grouping power was related to population size, marker spacing distance, and cutoff criterion. Generally, a large population size provided higher grouping power than a small population size, and closely linked markers provided higher grouping power than loosely linked markers. The cutoff criterion range for achieving acceptable grouping power and ordering power differed across cases; however, combining all situations in this study, a cutoff criterion ranging from 50 cM to 60 cM is recommended for achieving acceptable grouping power and ordering power.
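A minimal sketch of the SARF ordering criterion (sum of adjacent recombinant fractions, to be minimized over candidate orders) is shown below; the pairwise recombination-fraction matrix is hypothetical, and the exhaustive search is only feasible for a handful of markers.
import numpy as np
from itertools import permutations

def sarf(order, rf):
    """Sum of adjacent recombinant fractions for a candidate marker order."""
    return sum(rf[order[i], order[i + 1]] for i in range(len(order) - 1))

# hypothetical pairwise recombination fractions for four markers
rf = np.array([[0.00, 0.05, 0.12, 0.20],
               [0.05, 0.00, 0.08, 0.15],
               [0.12, 0.08, 0.00, 0.07],
               [0.20, 0.15, 0.07, 0.00]])
best = min(permutations(range(4)), key=lambda o: sarf(o, rf))
print(best, sarf(best, rf))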
2010-01-01
Background: Due to marginalization, trafficking violence, conflicts with the police and organic and social psychological problems associated with the drug, crack is one of the most devastating drugs currently in use. However, there is evidence that some users manage to stay alive and active while using crack cocaine for many years, despite the numerous adversities and risks involved with this behavior. In this context, the aim of the present study was to identify the strategies and tactics developed by crack users to deal with the risks associated with the culture of use by examining the survival strategies employed by long-term users. Method: A qualitative research method was used involving semi-structured, in-depth interviews. Twenty-eight crack users fulfilling a pre-defined enrollment criterion were interviewed. This criterion was defined as the long-term use of crack (i.e., at least four years). The sample was selected using information provided by key informants and distributed across eight different supply chains. The interviews were transcribed verbatim and analyzed via content analysis techniques using NVivo-8 software. Results: There was diversity in the sample with regard to economic and education levels. The average duration of crack use was 11.5 years. Respondents believed that the greatest risks of crack dependence were related to the drug's psychological effects (e.g., cravings and transient paranoid symptoms) and those arising from its illegality (e.g., clashes with the police and trafficking). Protection strategies focused on the control of the psychological effects, primarily through the consumption of alcohol and marijuana. To address the illegality of the drug, strategies were developed to deal with dealers and the police; these strategies were considered crucial for survival. Conclusions: The strategies developed by the respondents focused on trying to protect themselves. They proved generally effective, though they involved risks of triggering additional problems (e.g., other dependencies) in the long term. PMID:21050465
Rahman, Md Rejaur; Shi, Z H; Chongfa, Cai
2014-11-01
This study analysed regional environmental quality through the application of remote sensing, geographical information systems, and spatial multiple criteria decision analysis, and proposed a quantitative method for identifying the status of the regional environment of the study area. Using a spatial multi-criteria evaluation (SMCE) approach with expert knowledge, an integrated regional environmental quality index (REQI) was computed and classified into five levels of regional environmental quality: worse, poor, moderate, good, and very good. During the process, a set of spatial criteria was selected (here, 15 criteria), together with the degree of importance of each criterion for the sustainability of the regional environment. Integrated remote sensing and GIS techniques and models were applied to generate the necessary factor (criterion) maps for the SMCE approach. The ranking method, along with the expected value method, was used to standardize the factors, while an analytical hierarchy process (AHP) was applied to calculate factor weights. The entire process was executed in the integrated land and water information system (ILWIS) software tool that supports SMCE. The analysis showed that the overall regional environmental quality of the area was at a moderate level and was partly determined by elevation. Areas of worse and poor environmental quality indicated that the regional environmental status had declined in these parts of the county. The study also revealed that human activities, vegetation condition, soil erosion, topography, climate, and soil conditions have a serious influence on the regional environmental condition of the area. Considering the regional characteristics of environmental quality, priority, and practical needs for environmental restoration, the study area was further regionalized into four priority areas which may serve as base areas of decision making for the recovery, rebuilding, and protection of the environment.
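A minimal sketch of the weighted linear combination that underlies such an SMCE index is shown below; the criterion layers, AHP-style weights and the equal-interval classification into five levels are all illustrative assumptions.
import numpy as np

# hypothetical standardized criterion "maps" (values in [0, 1] on a small grid)
rng = np.random.default_rng(0)
criteria = {name: rng.random((4, 4)) for name in
            ["vegetation", "soil_erosion", "slope", "land_use"]}
# AHP-derived weights (illustrative; they must sum to 1)
weights = {"vegetation": 0.35, "soil_erosion": 0.30, "slope": 0.20, "land_use": 0.15}

# weighted linear combination -> regional environmental quality index per cell
reqi = sum(weights[k] * criteria[k] for k in criteria)
# classify into five quality levels by equal-interval slicing (illustrative)
levels = np.digitize(reqi, bins=[0.2, 0.4, 0.6, 0.8])  # 0 = worse ... 4 = very good
print(reqi.round(2))
print(levels)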
Claerhout, Helena; De Prins, Martine; Mesotten, Dieter; Van den Berghe, Greet; Mathieu, Chantal; Van Eldere, Johan; Vanstapel, Florent
2016-01-01
We verified the analytical performance of strip-based handheld glucose meters (GM) for prescription use, in a comparative split-sample protocol using blood gas samples from a surgical intensive care unit (ICU). The Freestyle Precision Pro (Abbott), StatStrip Connectivity Meter (Nova) and ACCU-CHEK Inform II (Roche) were evaluated for recovery/linearity and imprecision/repeatability. The GMs and the ABL90 (Radiometer) blood gas analyzer (BGA) were tested for relative accuracy against the comparator hexokinase glucose-6-phosphate-dehydrogenase (HK/G6PDH) assay on a Cobas c702 analyzer (Roche). Recovery of spiked glucose was linear up to 19.3 mmol/L (347 mg/dL) with a slope of 0.91-0.94 for all GMs. Repeatability, estimated by pooling duplicate measurements on samples below (n=9), within (n=51) or above (n=80) the 4.2-5.9 mM (74-106 mg/dL) range, was 4.2%, 4.0% and 3.6% for the Freestyle Precision Pro; 4.0%, 4.3% and 4.5% for the StatStrip Connectivity Meter; and 1.4%, 2.5% and 3.5% for the ACCU-CHEK Inform II. GMs were in agreement with the comparator method. The BGA outperformed the GMs, with a MARD of 3.9% compared to 6.5%, 5.8% and 4.4% for the FreeStyle, StatStrip and ACCU-CHEK, respectively. None (0%) of the BGA results deviated by more than the FDA 10% criterion, compared to 9.4%, 3.7% and 2.2% for the FreeStyle, StatStrip and ACCU-CHEK, respectively. For all GMs, icodextrin did not interfere. Variation in the putative influence factors hematocrit and O2 tension could not explain the observed differences with the comparator method. GMs quantified blood glucose in whole blood at about the 10% total error criterion proposed by the FDA for prescription use.
Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.
Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David
2018-07-01
To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05) and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
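A minimal sketch of the variance-inflation-factor step used for multicollinearity reduction is shown below; the genetic-algorithm search and BIC-based ordinal regression from the method are not reproduced, and the predictors and threshold are hypothetical.
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (columns = predictors)."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        Z1 = np.column_stack([np.ones(len(Z)), Z])
        beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
        resid = y - Z1 @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / max(1e-12, 1 - r2))
    return np.array(out)

def reduce_multicollinearity(X, names, threshold=5.0):
    """Iteratively drop the predictor with the largest VIF above the threshold."""
    X, names = X.copy(), list(names)
    while X.shape[1] > 1:
        v = vif(X)
        if v.max() < threshold:
            break
        j = int(np.argmax(v))
        X = np.delete(X, j, axis=1)
        names.pop(j)
    return X, names

# toy example with two nearly collinear predictors
rng = np.random.default_rng(0)
a = rng.normal(size=100)
b = a + 0.01 * rng.normal(size=100)
c = rng.normal(size=100)
X, kept = reduce_multicollinearity(np.column_stack([a, b, c]), ["a", "b", "c"])
print(kept)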
NASA Astrophysics Data System (ADS)
Perekhodtseva, E. V.
2012-04-01
The results of probability forecast methods for summer storm and hazard wind over the territories of Russia and Europe are presented in this paper. These methods use a hydrodynamic-statistical model of these phenomena. The statistical model was developed for recognition of situations involving these phenomena. For this purpose, samples of the values of atmospheric parameters (n = 40) for the presence and for the absence of storm and hazard wind were accumulated. The predictor space was compressed without loss of information by a special algorithm (k = 7); threshold wind speeds of 24 m/s and 29 m/s, together with the occurrence of tornadoes and strong squalls, distinguish storm wind from hazard wind. The probability forecast was evaluated using the Brier criterion. The evaluation was successful, with B = 0.37 for the European part of Russia. The application of the probability forecast of storm and hazard winds makes it possible to mitigate economic losses when the errors of the first and second kind in a categorical storm wind forecast are not small. A number of examples of the storm wind probability forecast are presented in this report.
Klußmann, André; Gebhardt, Hansjürgen; Rieger, Monika; Liebers, Falk; Steinberg, Ulf
2012-01-01
Upper extremity musculoskeletal symptoms and disorders are common in the working population. The economic and social impact of such disorders is considerable. Long-term, dynamic repetitive exposure of the hand-arm system during manual handling operations (MHO), alone or in combination with static and postural effort, is recognised as a cause of musculoskeletal symptoms and disorders. The assessment of these manual work tasks is crucial for estimating the health risks of exposed employees. For these work tasks, a new method for the assessment of working conditions was developed and a validation study was performed. The results suggest satisfactory criterion validity and moderate objectivity of the KIM-MHO draft 2007. The method was modified and evaluated again. It is planned to release a new version of KIM-MHO in spring 2012.
Open space suitability analysis for emergency shelter after an earthquake
NASA Astrophysics Data System (ADS)
Anhorn, J.; Khazai, B.
2014-06-01
In an emergency situation shelter space is crucial for people affected by natural hazards. Emergency planners in disaster relief and mass care can greatly benefit from a sound methodology that identifies suitable shelter areas and sites where shelter services need to be improved. A methodology to rank suitability of open spaces for contingency planning and placement of shelter in the immediate aftermath of a disaster is introduced. The Open Space Suitability Index (OSSI) uses the combination of two different measures: a qualitative evaluation criterion for the suitability and manageability of open spaces to be used as shelter sites, and a second quantitative criterion using a capacitated accessibility analysis based on network analysis. For the qualitative assessment, implementation issues, environmental considerations, and basic utility supply are the main categories to rank candidate shelter sites. Geographic Information System (GIS) is used to reveal spatial patterns of shelter demand. Advantages and limitations of this method are discussed on the basis of a case study in Kathmandu Metropolitan City (KMC). According to the results, out of 410 open spaces under investigation, 12.2% have to be considered not suitable (Category D and E) while 10.7% are Category A and 17.6% are Category B. Almost two third (59.5%) are fairly suitable (Category C).
Open space suitability analysis for emergency shelter after an earthquake
NASA Astrophysics Data System (ADS)
Anhorn, J.; Khazai, B.
2015-04-01
In an emergency situation shelter space is crucial for people affected by natural hazards. Emergency planners in disaster relief and mass care can greatly benefit from a sound methodology that identifies suitable shelter areas and sites where shelter services need to be improved. A methodology to rank suitability of open spaces for contingency planning and placement of shelter in the immediate aftermath of a disaster is introduced. The Open Space Suitability Index uses the combination of two different measures: a qualitative evaluation criterion for the suitability and manageability of open spaces to be used as shelter sites and another quantitative criterion using a capacitated accessibility analysis based on network analysis. For the qualitative assessment implementation issues, environmental considerations and basic utility supply are the main categories to rank candidate shelter sites. A geographic information system is used to reveal spatial patterns of shelter demand. Advantages and limitations of this method are discussed on the basis of an earthquake hazard case study in the Kathmandu Metropolitan City. According to the results, out of 410 open spaces under investigation, 12.2% have to be considered not suitable (Category D and E) while 10.7% are Category A and 17.6% are Category B. Almost two-thirds (59.55%) are fairly suitable (Category C).
A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.
Khelifi, Lazhar; Mignotte, Max
2017-08-01
Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations, in a manner that takes full advantage of the complementarity of each one. Previous relevant researches in this field have been impeded by the difficulty in identifying an appropriate single segmentation fusion criterion, providing the best possible, i.e., the more informative, result of fusion. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem, to obtain a final improved result of segmentation. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely, the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision making task based on the so-called "technique for order performance by similarity to ideal solution". Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
Bai, Jing; Yang, Wei; Wang, Song; Guan, Rui-Hong; Zhang, Hui; Fu, Jing-Jing; Wu, Wei; Yan, Kun
2016-07-01
The purpose of this study was to explore the diagnostic value of the arrival time difference between lesions and surrounding lung tissue on contrast-enhanced sonography of subpleural pulmonary lesions. A total of 110 patients with subpleural pulmonary lesions who underwent both conventional and contrast-enhanced sonography and had a definite diagnosis were enrolled. After contrast agent injection, the arrival times in the lesion, lung, and chest wall were recorded. The arrival time differences between various tissues were also calculated. Statistical analysis showed a significant difference in the lesion arrival time, the arrival time difference between the lesion and lung, and the arrival time difference between the chest wall and lesion (all P < .001) for benign and malignant lesions. Receiver operating characteristic curve analysis revealed that the optimal diagnostic criterion was the arrival time difference between the lesion and lung, and that the best cutoff point was 2.5 seconds (later arrival signified malignancy). This new diagnostic criterion showed superior diagnostic accuracy (97.1%) compared to conventional diagnostic criteria. The individualized diagnostic method based on an arrival time comparison using contrast-enhanced sonography had high diagnostic accuracy (97.1%) with good feasibility and could provide useful diagnostic information for subpleural pulmonary lesions.
James Howard; Rebecca Westby; Kenneth Skog
2010-01-01
This report provides a wide range of specific and statistical information on forest products markets in terms of production, trade, prices and consumption, employment, and other factors influencing forest sustainability.
Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P
2007-02-08
Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition to transversion ratio or codon frequencies. For single gene analysis, partitions might be defined according to protein tertiary structure, and for multiple gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. They have the disadvantage of requiring a priori knowledge for partitioning sites. We recommend: (i) selection of models by using backward elimination rather than AIC or AICc, (ii) use a stringent cut-off, e.g., p = 0.0001, and (iii) conduct sensitivity analysis of results. With thoughtful application, fixed-effect codon models should provide a useful tool for large scale multi-gene analyses.
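A minimal sketch of the likelihood-ratio comparison that underlies one backward-elimination step between nested fixed-effect models is shown below; the log-likelihoods and parameter difference are hypothetical, and only the p = 0.0001 cutoff is taken from the abstract.
from scipy.stats import chi2

def lrt_p_value(loglik_full, loglik_reduced, df_diff):
    """Likelihood-ratio test between nested fixed-effect models:
    statistic = 2*(lnL_full - lnL_reduced), chi-square with df_diff d.f."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return chi2.sf(stat, df_diff)

# hypothetical fits: the full model lets two partitions have separate selection
# parameters, the reduced model forces them to share one (one fewer parameter)
p = lrt_p_value(loglik_full=-10234.6, loglik_reduced=-10251.9, df_diff=1)
print(p, "keep separate parameters" if p < 1e-4 else "merge partitions")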
Baltzer, Pascal Andreas Thomas; Freiberg, Christian; Beger, Sebastian; Vag, Tibor; Dietzel, Matthias; Herzog, Aimee B; Gajda, Mieczyslaw; Camara, Oumar; Kaiser, Werner A
2009-09-01
Enhancement characteristics after administration of a contrast agent are regarded as a major criterion for differential diagnosis in magnetic resonance mammography (MRM). However, no consensus exists about the best measurement method to assess contrast enhancement kinetics. This systematic investigation was performed to compare visual estimation with manual region of interest (ROI) and computer-aided diagnosis (CAD) analysis for time curve measurements in MRM. A total of 329 patients undergoing surgery after MRM (1.5 T) were analyzed prospectively. Dynamic data were measured using visual estimation, ROI, and CAD methods and classified depending on initial signal increase and delayed enhancement. Pathology revealed 469 lesions (279 malignant, 190 benign). Kappa agreement between the methods ranged from 0.78 to 0.81. Diagnostic accuracies of 74.4% (visual), 75.7% (ROI), and 76.6% (CAD) were found, without statistically significant differences. According to our results, curve type measurements are useful as a diagnostic criterion in breast lesions irrespective of the method used.
Qin, Zong; Ji, Chuangang; Wang, Kai; Liu, Sheng
2012-10-08
In this paper, the conditions for uniform lighting generated by a light-emitting diode (LED) array were systematically studied. To take the human vision effect into consideration, the contrast sensitivity function (CSF) was newly adopted as the critical criterion for uniform lighting, instead of the conventionally used Sparrow's criterion (SC). Through the CSF method, design parameters including system thickness, LED pitch, the LED's spatial radiation distribution, and the viewing condition can be combined analytically. For a specific LED array lighting system (LALS) with a foursquare LED arrangement, different types of LEDs (Lambertian and batwing type), and a given viewing condition, optimum system thicknesses and LED pitches were calculated and compared with those obtained through the SC method. Results show that the CSF method yields more appropriate optimum parameters than the SC method. Additionally, an abnormal phenomenon, in which uniformity varies non-monotonically with the structural parameters in an LALS with non-Lambertian LEDs, was found and analyzed. Based on this analysis, a design method for an LALS that offers better practicability, lower cost, and a more attractive appearance was summarized.
Mesh refinement in finite element analysis by minimization of the stiffness matrix trace
NASA Technical Reports Server (NTRS)
Kittur, Madan G.; Huston, Ronald L.
1989-01-01
Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the generated mesh is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value such that the error is below a predetermined tolerance. A-posteriori methods use error indicators, developed using interpolation and approximation theory, for mesh refinement. Others use criteria such as strain energy density variation and stress contours to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a-priori methods available until now use geometrical parameters, for example the element aspect ratio, and are therefore not adaptive by nature. An adaptive a-priori method is developed here. The criterion is that minimization of the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a-posteriori methods of grid refinement, fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The resulting mesh is shown to have a uniform distribution of stiffness among the nodes and elements which, as a consequence, leads to a uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.
Local Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
Methodological considerations of the GRADE method.
Malmivaara, Antti
2015-02-01
The GRADE method (Grading of Recommendations, Assessment, Development, and Evaluation) provides a tool for rating the quality of evidence in systematic reviews and clinical guidelines. This article aims to analyse conceptually how well grounded the GRADE method is, and to suggest improvements. The eight criteria for rating the quality of evidence proposed by GRADE are analysed here in terms of each criterion's potential to provide valid information for grading evidence. Secondly, the GRADE method of allocating weights to and summarizing the values of the criteria is considered. It is concluded that three GRADE criteria have an appropriate conceptual basis to be used as indicators of confidence in research evidence in systematic reviews: the internal validity of a study, the consistency of the findings, and publication bias. In network meta-analyses, the indirectness of evidence may also be considered. It is proposed here that the grade for the internal validity of a study could in some instances justifiably decrease the overall grade by three grades (e.g. from high to very low), instead of the decrease of at most two grades suggested by the GRADE method.
Active learning methods for interactive image retrieval.
Gosselin, Philippe Henri; Cord, Matthieu
2008-07-01
Active learning methods have been considered with increased interest in the statistical learning community. Although initially developed within a classification framework, they are now being extended in many directions to handle multimedia applications. This paper provides algorithms within a statistical framework to extend active learning for online content-based image retrieval (CBIR). The classification framework is presented, with experiments comparing several powerful classification techniques in this information retrieval context. Focusing on interactive methods, the active learning strategy is then described. The limitations of this approach for CBIR are emphasized before presenting our new active selection process, RETIN. First, as any active method is sensitive to the boundary estimation between classes, the RETIN strategy carries out a boundary correction to make the retrieval process more robust. Second, the criterion of generalization error used to optimize the active learning selection is modified to better represent the CBIR objective of database ranking. Third, a batch processing of images is proposed. Our strategy leads to a fast and efficient active learning scheme for retrieving sets of online images (query concept). Experiments on large databases show that the RETIN method performs well in comparison with several other active strategies.
Systematic strategies for the third industrial accident prevention plan in Korea.
Kang, Young-sig; Yang, Sung-hwan; Kim, Tae-gu; Kim, Day-sung
2012-01-01
To minimize industrial accidents, it is critical to evaluate a firm's priorities regarding prevention factors and strategies, since such evaluation provides decisive information for preventing industrial accidents and maintaining safety management. This paper therefore proposes the evaluation of priorities through statistical testing of prevention factors with a cause analysis in a cause-and-effect model. A priority matrix criterion is proposed to apply the ranking and to ensure the objectivity of the questionnaire results. The paper uses the regression analysis (RA) method, the exponential smoothing method (ESM), the double exponential smoothing method (DESM), the autoregressive integrated moving average (ARIMA) model, and the proposed analytical function method (PAFM) to analyze trends in accident data and obtain accurate predictions. The questionnaire results of workers and managers in manufacturing and construction companies with fewer than 300 employees, located in the central Korean metropolitan areas where fatal accidents have occurred, were standardized. Finally, a strategy for constructing safety management for the third industrial accident prevention plan is provided, together with a forecasting method for occupational accident rates and fatality rates for occupational accidents per 10,000 people.
Discrete time modeling and stability analysis of TCP Vegas
NASA Astrophysics Data System (ADS)
You, Byungyong; Koo, Kyungmo; Lee, Jin S.
2007-12-01
This paper presents an analysis method for a TCP Vegas network model with a single link and a single source. Previous papers have shown global stability for several network models, but those models are not dual problems in which dynamics exist in both the sources and the links, as they do in TCP Vegas. Other papers have studied TCP Vegas as a dual problem but did not fully derive an asymptotic stability region. We therefore analyze TCP Vegas with Jury's criterion, which provides a necessary and sufficient condition. Using a discrete-time state-space model together with Jury's criterion, we find an asymptotic stability region for the TCP Vegas network model. The result is verified by ns-2 simulation, and comparison with other results shows that the method performs well.
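Jury's criterion is an algebraic test for whether all roots of a discrete-time characteristic polynomial lie strictly inside the unit circle. The sketch below does not reproduce the paper's TCP Vegas model or the tabular Jury procedure; it simply checks the equivalent root-magnitude condition numerically, with a hypothetical second-order polynomial as the example.

```python
import numpy as np

def is_schur_stable(coeffs) -> bool:
    """Return True if all roots of c[0]*z^n + c[1]*z^(n-1) + ... + c[n]
    lie strictly inside the unit circle, i.e. the condition that Jury's
    criterion verifies algebraically without computing the roots."""
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) < 1.0))

# Hypothetical characteristic polynomials of a discretized closed-loop system:
print(is_schur_stable([1.0, -1.3, 0.4]))  # True: roots 0.8 and 0.5 are inside
print(is_schur_stable([1.0, -2.1, 1.2]))  # False: complex roots with |z| > 1
```

Sweeping such a check over the model's parameters (e.g. gain and round-trip delay) is one way to trace out an asymptotic stability region.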
Optimization design of hydroturbine rotors according to the efficiency-strength criteria
NASA Astrophysics Data System (ADS)
Bannikov, D. V.; Yesipov, D. V.; Cherny, S. G.; Chirkov, D. V.
2010-12-01
The hydroturbine runner design procedure [1] is optimized using efficient methods for calculating the head loss in the entire flow-through part of the turbine and the deformation state of the blade. Energy losses are found by modelling the spatial turbulent flow and by engineering semi-empirical formulae. The deformation state is determined by solving the linear elasticity problem for the isolated blade under hydrodynamic pressure with the boundary element method. Using the proposed system, the design problem for a turbine runner with a capacity of 640 MW that provides a preset dependence of efficiency on the turbine operating mode (the efficiency criterion) is solved. The arising stresses do not exceed the critical value (the strength criterion).
Methods of comparing associative models and an application to retrospective revaluation.
Witnauer, James E; Hutchings, Ryan; Miller, Ralph R
2017-11-01
Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal the predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data. Copyright © 2017 Elsevier B.V. All rights reserved.
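To make the parameter-count penalty concrete, here is a minimal, generic sketch of a BIC comparison between two fitted models; it is not tied to any particular associative learning theory, and the log-likelihoods, parameter counts, and sample size are hypothetical.

```python
import math

def bic(log_likelihood: float, k: int, n: int) -> float:
    """Bayesian information criterion: k ln(n) - 2 ln(L_max); lower is better."""
    return k * math.log(n) - 2 * log_likelihood

# Two hypothetical models fit by maximum likelihood to the same n = 120
# observations after their free parameters were optimized (e.g. by hill climbing).
n_obs = 120
models = {"model_A": (-310.2, 3), "model_B": (-305.8, 6)}
scores = {name: bic(logL, k, n_obs) for name, (logL, k) in models.items()}
best = min(scores, key=scores.get)
print(scores, "-> preferred:", best)
```

In this toy comparison, the better raw fit of model_B does not compensate for its three extra parameters, so BIC prefers model_A.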
Functional feature embedded space mapping of fMRI data.
Hu, Jin; Tian, Jie; Yang, Lei
2006-01-01
We propose a new method for fMRI data analysis called Functional Feature Embedded Space Mapping (FFESM). Our work mainly focuses on experimental designs with periodic stimuli, which can be described by a number of Fourier coefficients in the frequency domain. The nonlinear dimension reduction technique Isomap is applied, for the first time, to the high-dimensional features obtained from the frequency domain of the fMRI data. Finally, the presence of activated time series is identified by a clustering method in which the information-theoretic minimum description length (MDL) criterion is used to estimate the number of clusters. The feasibility of the algorithm is demonstrated on real human experiments. Although we focus on analyzing periodic fMRI data, the approach can be extended to non-periodic (event-related) fMRI data by replacing the Fourier analysis with a wavelet analysis.
Enriching plausible new hypothesis generation in PubMed.
Baek, Seung Han; Lee, Dahee; Kim, Minjoo; Lee, Jong Ho; Song, Min
2017-01-01
Most earlier studies in the field of literature-based discovery have adopted Swanson's ABC model, which links pieces of knowledge entailed in disjoint literatures. However, their practicability remains an open issue, since most of these studies did not deal with the context surrounding the discovered associations and were usually not accompanied by clinical confirmation. In this study, we propose a method that expands and elaborates an existing hypothesis using advanced text mining techniques for capturing context. We extend the ABC model to allow for multiple B terms with various biological types. Using the proposed method, we were able to concretize a specific, metabolite-related hypothesis with abundant contextual information. Starting from the relationship between lactosylceramide and arterial stiffness, the hypothesis was extended to suggest a potential pathway consisting of lactosylceramide, nitric oxide, malondialdehyde, and arterial stiffness. An evaluation by domain experts showed that it is clinically valid. The proposed method is designed to provide plausible candidates for the concretized hypothesis, based on extracted heterogeneous entities and detailed relation information, along with a reliable ranking criterion. Statistical tests conducted collaboratively with biomedical experts support the validity and practical usefulness of the method, unlike previous studies. Applied to other cases, the proposed method should help biologists support existing hypotheses and readily trace the logical process within them.
Organizational Productivity Measurement: The Development and Evaluation of an Integrated Approach.
1987-07-01
measurement and aggregation strategy also has applications in management information systems, performance appraisal, and other situations where multiple... larger organizational units. The basic measurement and aggregation strategy also has applications in management information systems, criterion... much has been written on the subject of organizational productivity, there is little consensus concerning its definition (Tuttle, 1983). Such a lack
Development of a high-performance noise-reduction filter for tomographic reconstruction
NASA Astrophysics Data System (ADS)
Kao, Chien-Min; Pan, Xiaochuan
2001-07-01
We propose a new noise-reduction method for tomographic reconstruction. The method incorporates a priori information on the source image, allowing the derivation of the energy spectrum of its ideal sinogram. In combination with the energy spectrum of the Poisson noise in the measured sinogram, we are able to derive a Wiener-like filter for effective suppression of the sinogram noise. The filtered backprojection (FBP) algorithm, with a ramp filter, is then applied to the filtered sinogram to produce tomographic images. The resulting filter has a closed-form expression in frequency space and contains a single user-adjustable regularization parameter. The proposed method is hence simple to implement and easy to use. In contrast to the ad hoc apodizing windows, such as Hanning and Butterworth filters, that are commonly used in conventional FBP reconstruction, the proposed filter is theoretically more rigorous, as it is derived from an optimization criterion subject to a known class of source image intensity distributions.
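The core of such an approach is a frequency-domain filter of Wiener form, H = S / (S + beta * N), applied to the data before ramp-filtered backprojection. The following is a minimal 1-D sketch under simplifying assumptions; the signal and noise spectra below are synthetic placeholders, not the spectra derived in the paper, and beta stands in for the single regularization parameter mentioned above.

```python
import numpy as np

def wiener_like_filter(noisy, signal_psd, noise_psd, beta=1.0):
    """Apply a Wiener-type filter H = S / (S + beta * N) in the frequency domain."""
    spectrum = np.fft.rfft(noisy)
    H = signal_psd / (signal_psd + beta * noise_psd)
    return np.fft.irfft(H * spectrum, n=len(noisy))

# Synthetic 1-D example: smooth signal plus noise, with assumed power spectra.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 256)
clean = np.sin(3 * x) + 0.5 * np.cos(7 * x)
noisy = clean + 0.3 * rng.standard_normal(x.size)
freqs = np.fft.rfftfreq(x.size)
signal_psd = 1.0 / (1.0 + (freqs / 0.05) ** 2)   # assumed low-pass signal spectrum
noise_psd = np.full_like(freqs, 0.3 ** 2)        # assumed flat (white) noise spectrum
denoised = wiener_like_filter(noisy, signal_psd, noise_psd, beta=1.0)
print(float(np.std(noisy - clean)), float(np.std(denoised - clean)))
```

In the tomographic setting the same filtering step would be applied row by row to the sinogram before reconstruction.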
NASA Astrophysics Data System (ADS)
Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi
2017-10-01
When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
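To make the role of the smoothness constraint concrete, here is a minimal generic sketch of a regularized least-squares inversion, min ||G m - d||^2 + alpha^2 ||R m||^2, using a classic second-difference (Laplacian-style) operator R. It illustrates only the LSC-type formulation that the paper builds on; the adaptive weighting of the ASC, the fault geometry, and the regularization-parameter selection methods compared in the paper are not reproduced, and all data below are hypothetical.

```python
import numpy as np

def second_difference_matrix(n: int) -> np.ndarray:
    """1-D Laplacian-style smoothing operator for n model cells."""
    R = np.zeros((n - 2, n))
    for i in range(n - 2):
        R[i, i:i + 3] = [1.0, -2.0, 1.0]
    return R

def regularized_inversion(G, d, alpha):
    """Solve the normal equations (G^T G + alpha^2 R^T R) m = G^T d."""
    R = second_difference_matrix(G.shape[1])
    A = G.T @ G + alpha**2 * (R.T @ R)
    return np.linalg.solve(A, G.T @ d)

# Hypothetical toy problem: 40 observations, 20 slip cells, smooth true model.
rng = np.random.default_rng(1)
G = rng.standard_normal((40, 20))
m_true = np.sin(np.linspace(0, np.pi, 20))
d = G @ m_true + 0.05 * rng.standard_normal(40)
for alpha in (0.01, 1.0, 100.0):  # choosing alpha is the key selection problem
    m_hat = regularized_inversion(G, d, alpha)
    print(alpha, float(np.linalg.norm(m_hat - m_true)))
```

The loop over alpha shows why a principled selection rule (Helmert variance components, generalized cross-validation, or the mean squared error criterion) matters: both under- and over-regularization degrade the recovered model.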
Statistical analysis on the signals monitoring multiphase flow patterns in pipeline-riser system
NASA Astrophysics Data System (ADS)
Ye, Jing; Guo, Liejin
2013-07-01
Signals monitoring petroleum transmission pipelines in the offshore oil industry usually contain abundant information about the multiphase flow that is relevant to flow assurance, which includes avoiding the most undesirable flow patterns. Extracting reliable features from these signals is therefore an alternative way to examine potential risks to the oil platform. This paper focuses on characterizing multiphase flow patterns in the pipeline-riser system often found in the offshore oil industry and on finding an objective criterion to describe the transition between flow patterns. A statistical analysis of the pressure signal at the riser top is proposed, instead of the usual prediction methods based on inlet and outlet flow conditions, which cannot easily be determined in most situations. In addition, a machine learning method (least-squares support vector machine) is used to classify the different flow patterns automatically. Experimental results from a small-scale loop show that the proposed method is effective for analyzing multiphase flow patterns.
Corner-point criterion for assessing nonlinear image processing imagers
NASA Astrophysics Data System (ADS)
Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory
2017-10-01
Range performance modeling of optronic imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize these processing stages, which have adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in correctly perceiving the direction of the CP formed by one minority-value pixel among the majority value of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, which takes the role of Ground Truth (GT). After spatial registration between the degraded image and the original one, the degradation is measured statistically by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR over the region of interest. The paper defines this CP criterion and presents the evaluation techniques developed, such as measuring the number of CPs resolved on the target, and the CP transformation and its inverse, which make it possible to reconstruct an image of the perceived CPs. The criterion is then compared with the standard Johnson criterion in the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered by proposing an analysis scheme that combines two methods: a CP measurement for the highly non-linear part (imaging) with real-signature test targets, and conventional methods for the more linear part (displaying). An application to color imaging is proposed, with a discussion of the choice of working color space depending on the type of image enhancement processing used.
1982-01-01
apparent coincidence that the same normalization should do for time and uncertainty with Kenneth Arrow, Michael Boskin, Frank Hahn, Hugh Rose, Amartya Sen, and John Wise at various times, and the possible relationship between the structure of a criterion function and an information tree such as that
Using macroinvertebrate response to inform sediment criteria development in mountain streams
The phrase biologically-based sediment criterion indicates that biological data is used to develop regional sediment criteria that will protect and maintain self-sustaining populations of native sediment-sensitive biota. To develop biologically-based sediment criteria we must qua...
Code of Federal Regulations, 2012 CFR
2012-07-01
... education community. (2) [Reserved] (Authority: 20 U.S.C. 1124(b)) [47 FR 14122, Apr. 1, 1982, as amended at... language at the undergraduate level. (b) The Secretary reviews each application for information that shows...
Code of Federal Regulations, 2013 CFR
2013-07-01
... education community. (2) [Reserved] (Authority: 20 U.S.C. 1124(b)) [47 FR 14122, Apr. 1, 1982, as amended at... language at the undergraduate level. (b) The Secretary reviews each application for information that shows...
Code of Federal Regulations, 2014 CFR
2014-07-01
... education community. (2) [Reserved] (Authority: 20 U.S.C. 1124(b)) [47 FR 14122, Apr. 1, 1982, as amended at... language at the undergraduate level. (b) The Secretary reviews each application for information that shows...
Code of Federal Regulations, 2011 CFR
2011-07-01
... education community. (2) [Reserved] (Authority: 20 U.S.C. 1124(b)) [47 FR 14122, Apr. 1, 1982, as amended at... language at the undergraduate level. (b) The Secretary reviews each application for information that shows...
Nowakowska, Marzena
2017-04-01
The development of a Bayesian logistic regression model classifying road accident severity is discussed. Previously exploited informative priors (method of moments, maximum likelihood estimation, and two-stage Bayesian updating), along with the original idea of a Boot prior, are investigated for the case in which no expert opinion is available. In addition, two possible approaches to updating the priors, in the form of unbalanced and balanced training data sets, are presented. The obtained Bayesian logistic models are assessed on the basis of a deviance information criterion (DIC), highest probability density (HPD) intervals, and coefficients of variation estimated for the model parameters. The verification of model accuracy is based on sensitivity, specificity, and the harmonic mean of sensitivity and specificity, all calculated from a test data set. The models obtained from the balanced training data set have better classification quality than those obtained from the unbalanced training data set. The two-stage Bayesian updating prior model and the Boot prior model, both identified with the use of the balanced training data set, outperform the non-informative, method of moments, and maximum likelihood estimation prior models. It is important to note that one should be careful when interpreting the parameters, since different priors can lead to different models. Copyright © 2017 Elsevier Ltd. All rights reserved.
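For reference, DIC is computed from posterior draws as DIC = Dbar + pD, where Dbar is the mean deviance over the draws, pD = Dbar - D(theta_bar) is the effective number of parameters, and D = -2 log-likelihood. The sketch below is a minimal, generic illustration for a logistic regression; the data and "posterior draws" are simulated placeholders (a real analysis would take the draws from an MCMC sampler, and most samplers report DIC directly).

```python
import numpy as np

def logistic_deviance(beta, X, y):
    """Deviance D(beta) = -2 * log-likelihood of a logistic regression model."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    return -2.0 * loglik

def dic(posterior_draws, X, y):
    """DIC = mean deviance + pD, with pD = mean deviance - deviance at posterior mean."""
    deviances = np.array([logistic_deviance(b, X, y) for b in posterior_draws])
    d_bar = deviances.mean()
    d_at_mean = logistic_deviance(posterior_draws.mean(axis=0), X, y)
    return d_bar + (d_bar - d_at_mean)

# Hypothetical data and posterior draws for a 3-coefficient model.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.standard_normal((200, 2))])
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + X[:, 1] - X[:, 2]))))
draws = rng.normal(loc=[0.5, 1.0, -1.0], scale=0.1, size=(500, 3))
print(round(dic(draws, X, y), 1))
```

Lower DIC indicates a better trade-off between fit and effective model complexity, which is how the prior specifications above are ranked.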
Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul
2016-01-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms–Supervised Principal Components, Regularization, and Boosting—can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach—or perhaps because of them–SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
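As a rough sketch of the regularization approach described above, the example below builds a criterion-keyed "scale" by fitting an L1-penalized logistic regression with the penalty strength chosen by cross-validation, so that expected prediction error rather than within-sample likelihood drives item retention. The item matrix and outcome are simulated placeholders, and scikit-learn's LogisticRegressionCV merely stands in for whichever SLT implementation the authors used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

# Simulated stand-in for a large personality item pool (200 items, 1000 people),
# of which only a handful truly relate to the binary outcome (e.g. mortality).
rng = np.random.default_rng(3)
X = rng.standard_normal((1000, 200))
true_beta = np.zeros(200)
true_beta[:5] = 0.8
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_beta))))

# L1 penalty with strength selected by 5-fold cross-validation: the model that
# minimizes estimated prediction error keeps few items, the criterion-keying goal.
model = LogisticRegressionCV(Cs=10, cv=5, penalty="l1", solver="liblinear",
                             scoring="neg_log_loss").fit(X, y)
selected_items = np.flatnonzero(model.coef_.ravel() != 0)
print(len(selected_items), "items retained; first few:", selected_items[:10])
```

The retained items and their coefficients would then serve as the keyed scale, with predictive validity checked in a held-out sample rather than in the data used for selection.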
Zimmerman, Mark; Dalrymple, Kristy; Chelminski, Iwona; Young, Diane; Galione, Janine N
2010-11-01
In DSM-IV, the diagnosis of social anxiety disorder (SAD) and specific phobia in adults requires that the person recognize that his or her fear of the phobic situation is excessive or unreasonable (criterion C). The DSM-5 Anxiety Disorders Work Group has proposed replacing this criterion because some patients with clinically significant phobic fears do not recognize the irrationality of their fears. In the present report from the Rhode Island Methods to Improve Diagnostic Assessment and Services project we determined the number of individuals who were not diagnosed with SAD and specific phobia because they did not recognize the excessiveness or irrationality of their fear. We interviewed 3,000 psychiatric outpatients and 1,800 candidates for bariatric surgery with a modified version of the Structured Clinical Interview for DSM-IV. In the SAD and specific phobia modules we suspended the skip-out that curtails the modules if criterion C is not met. Patients who met all DSM-IV criteria for SAD or specific phobia except criterion C were considered to have "modified" SAD or specific phobia. The lifetime rates of DSM-IV SAD and specific phobia were 30.5 and 11.8% in psychiatric patients and 11.7 and 10.2% in bariatric surgery candidates, respectively. Less than 1% of the patients in both samples were diagnosed with modified SAD or specific phobia. Few patients were excluded from a phobia diagnosis because of criterion C. We suggest that in DSM-5 this criterion be eliminated from the SAD and specific phobia criteria sets. © 2010 Wiley-Liss, Inc.
The role of Criterion A2 in the DSM-IV diagnosis of post-traumatic stress disorder
Karam, Elie George; Andrews, Gavin; Bromet, Evelyn; Petukhova, Maria; Ruscio, Ayelet Meron; Salamoun, Mariana; Sampson, Nancy; Stein, Dan J.; Alonso, Jordi; Andrade, Laura Helena; Angermeyer, Matthias; Demyttenaere, Koen; de Girolamo, Giovanni; de Graaf, Ron; Florescu, Silvia; Gureje, Oye; Kaminer, Debra; Kotov, Roman; Lee, Sing; Lepine, Jean Pierre; Mora, Maria Elena Medina; Browne, Mark A. Oakley; Posada-Villa, José; Sagar, Rajesh; Shalev, Arieh Y.; Takeshima, Tadashi; Tomov, Toma; Kessler, Ronald C.
2011-01-01
Background Controversy exists about the utility of DSM-IV post-traumatic stress disorder (PTSD) Criterion A2: that exposure to a potentially traumatic experience (PTE; PTSD Criterion A1) is accompanied by intense fear, helplessness, or horror. Methods Lifetime DSM-IV PTSD was assessed with the Composite International Diagnostic Interview in community surveys of 52,826 respondents across 21 countries in the World Mental Health Surveys. Results 37.6% of 28,490 representative PTEs reported by respondents met Criterion A2, a proportion higher than the proportions meeting other criteria (B-F; 5.4-9.6%). Conditional prevalence of meeting all other criteria for a diagnosis of PTSD given a PTE was significantly higher in the presence (9.7%) than absence (0.1%) of A2. However, as only 1.4% of respondents who met all other criteria failed A2, the estimated prevalence of PTSD increased only slightly (from 3.64% to 3.69%) when A2 was not required for diagnosis. PTSD cases with and without Criterion A2 did not differ in persistence or in predicted consequences (subsequent suicidal ideation or secondary disorders). Furthermore, as A2 was by far the most commonly reported symptom of PTSD, initial assessment of A2 would be much less efficient than screening other criteria in quickly ruling out a large proportion of non-cases. Conclusion Removal of A2 from the DSM-IV criterion set would reduce the complexity of diagnosing PTSD while not substantially increasing the number of people who qualify for diagnosis. A2 should consequently be reconceptualized as a risk factor for PTSD rather than as a diagnostic requirement. PMID:20599189
Laukkanen, Sanna; Kangas, Annika; Kangas, Jyrki
2002-02-01
Voting theory has a lot in common with utility theory, and especially with group decision-making. An expected-utility-maximising strategy exists in voting situations, as well as in decision-making situations. It is therefore natural to utilise the achievements of voting theory in group decision-making as well. Most voting systems are based on a single criterion or on holistic preference information about the decision alternatives. However, a voting scheme called multicriteria approval has been developed specifically for decision-making situations with multiple criteria. This study considers voting theory from the group decision support point of view and compares it with some other methods applied for similar purposes in natural resource management. A case study is presented in which the approval voting approach is introduced into natural resources planning and tested in a forestry group decision-making process. Applying the multicriteria approval method was found to be a potential approach for handling several challenges typical of forestry group decision support. These challenges include (i) utilising ordinal information in the evaluation of decision alternatives, (ii) being readily understandable to, and treating equally, all stakeholders with different levels of knowledge of the subject considered, (iii) fast and cheap acquisition of preference information from several stakeholders, and (iv) dealing with multiple criteria.
Mobile medical apps for patient education: a graded review of available dermatology apps.
Masud, Aisha; Shafi, Shahram; Rao, Babar K
2018-02-01
The utilization of mobile applications (apps) as educational resources for patients highlights the need for an objective method of evaluating the quality of health care-related mobile apps. In this study, a quantified rubric was developed to objectively grade publicly available dermatology mobile apps with the primary focus of patient education. The rubric included 5 criteria thought to be most important in evaluating the adequacy of these apps in relaying health information to patients: educational objectives, content, accuracy, design, and conflict of interest. A 4-point scale was applied to each criterion. The use of this objective rubric could have implications in the evaluation and recommendation of mobile health care apps as a vital educational resource for patients.
Labrecque, Michel; Drouin, Jean; Latulippe, Louis
1987-01-01
The physicians on staff at the Family Medicine Unit of the Medical Centre of Laval University evaluated the quality of medical treatment by a method of control involving objective criteria. This study is based on 88 entries in the medical records of patients who were seen for the dispensing of oral contraceptives. The information contained in these entries was compared to criteria published in the 1985 Canadian Report on Oral Contraceptives. On average, each record contained 60%-80% of the criteria, depending on the type of visit. For each criterion analysed separately, the proportion of entries corresponding to the norm varies between 6% and 95%. Overall, the quality of the entries is good. The standard to be attained is correspondence with the recommendations set out in the 1985 PMID:21263877
A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection
Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B
2015-01-01
Summary We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
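The central idea is to choose the penalty from a permutation null distribution rather than from CV or an information criterion. The following is a rough approximation of that idea for linear regression with standardized predictors; it is a sketch in the spirit of the procedure, not a reimplementation of the published algorithm, and all names, the quantile choice, and the data are illustrative.

```python
import numpy as np

def permutation_lambda(X, y, n_perm=100, quantile=0.5, seed=0):
    """Approximate permutation-based penalty selection for the LASSO:
    for each permutation of y, record the smallest penalty that keeps the
    LASSO solution empty (lambda_max = max |X^T y_perm| / n), then take a
    quantile of those values as the selected penalty."""
    rng = np.random.default_rng(seed)
    n = len(y)
    lambdas = []
    for _ in range(n_perm):
        y_perm = rng.permutation(y)
        lambdas.append(np.max(np.abs(X.T @ (y_perm - y_perm.mean()))) / n)
    return float(np.quantile(lambdas, quantile))

# Hypothetical standardized design with a few true signals.
rng = np.random.default_rng(4)
X = rng.standard_normal((300, 50))
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = X[:, :3] @ np.array([1.0, -1.0, 0.5]) + rng.standard_normal(300)
print(permutation_lambda(X, y))
```

Because permuting y breaks any real association with X, a penalty at this level tends to exclude noise variables while remaining far cheaper to compute than repeated cross-validated LASSO fits.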
Cai, Qianqian; Turner, Brett D; Sheng, Daichao; Sloan, Scott
2018-03-01
The kinetics of fluoride sorption by calcite in the presence of metal ions (Co, Mn, Cd and Ba) have been investigated and modelled using the intra-particle diffusion (IPD), pseudo-second order (PSO), and the Hill 4 and Hill 5 kinetic models. Model comparison using the Akaike Information Criterion (AIC), the Schwarz Bayesian Information Criterion (BIC) and the Bayes Factor allows direct comparison of model results irrespective of the number of model parameters. Information criterion results indicate "very strong" evidence that the Hill 5 model was the best fitting model for all observed data, owing to its ability to fit sigmoidal data, with confidence contour analysis showing that the model parameters were well constrained by the data. Kinetic results were used to determine the thickness of a calcite permeable reactive barrier required to achieve up to 99.9% fluoride removal at a groundwater flow of 0.1 m day⁻¹. Fluoride removal half-life (t0.5) values were found to increase in the order Ba ≈ stonedust (a 99% pure natural calcite) < Cd < Co < Mn. A barrier width of 0.97 ± 0.02 m was found to be required for the fluoride/calcite (stonedust) only system when using no factor of safety, whilst in the presence of Mn and Co the width increased to 2.76 ± 0.28 and 19.83 ± 0.37 m, respectively. In comparison, the PSO model predicted a required barrier thickness of ∼46.0, 62.6 and 50.3 m, respectively, for the fluoride/calcite, Mn and Co systems under the same conditions. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
[Examination of the criterion validity of the MMPI-2 Depression, Anxiety, and Anger Content scales].
Uluç, Sait
2008-01-01
Examination of the psychometric properties and content areas of the content scales of the revised MMPI (MMPI-2 [Minnesota Multiphasic Personality Inventory-2]) is required. In this study the criterion-related validity of the MMPI-2 Depression, Anxiety, and Anger Content scales was examined using the following conceptually relevant scales: the Beck Depression Inventory (BDI), the Beck Anxiety Inventory (BAI), and the State-Trait Anger Scale (STAS). The MMPI-2 Depression, Anxiety, and Anger Content scales, together with the BDI, BAI, and STAS, were administered to a sample of 196 students at Middle East Technical University (n = 196; 122 female, 74 male). Regression analyses were performed to determine whether these conceptually relevant scales contributed significantly beyond the content scales. The MMPI-2 Depression Content Scale was compared to the BDI, the MMPI-2 Anxiety Content Scale was compared to the BAI, and the MMPI-2 Anger Content Scale was compared to the STAS. The internal consistencies of the MMPI-2 Depression Content Scale (alpha = 0.82), the MMPI-2 Anxiety Content Scale (alpha = 0.73), and the MMPI-2 Anger Content Scale (alpha = 0.72) were obtained. Criterion validity of the 3 analyzed content scales was demonstrated for both males and females. The findings indicated that (1) the MMPI-2 Depression Content Scale provides information about the general level of depression, (2) the MMPI-2 Anxiety Content Scale assesses subjective anxiety rather than somatic anxiety, and (3) the MMPI-2 Anger Content Scale may provide information about the potential to act out. The findings also provide further evidence that the 3 conceptually relevant scales aid in the interpretation of MMPI-2 scores by contributing additional information beyond the clinical scales.
The Validation of a Case-Based, Cumulative Assessment and Progressions Examination
Coker, Adeola O.; Copeland, Jeffrey T.; Gottlieb, Helmut B.; Horlen, Cheryl; Smith, Helen E.; Urteaga, Elizabeth M.; Ramsinghani, Sushma; Zertuche, Alejandra; Maize, David
2016-01-01
Objective. To assess content and criterion validity, as well as reliability, of an internally developed, case-based, cumulative, high-stakes third-year Annual Student Assessment and Progression Examination (P3 ASAP Exam). Methods. Content validity was assessed through the writing-reviewing process. Criterion validity was assessed by comparing student scores on the P3 ASAP Exam with the nationally validated Pharmacy Curriculum Outcomes Assessment (PCOA). Reliability was assessed with a psychometric analysis comparing student performance over four years. Results. The P3 ASAP Exam showed content validity through its representation of didactic courses and professional outcomes. Similar scores on the P3 ASAP Exam and the PCOA, assessed with the Pearson correlation coefficient, established criterion validity. Consistent student performance since 2012, evaluated with the Kuder-Richardson coefficient (KR-20), reflected the reliability of the examination. Conclusion. Pharmacy schools can implement internally developed, high-stakes, cumulative progression examinations that are valid and reliable by using a robust writing-reviewing process and psychometric analyses. PMID:26941435
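For reference, the KR-20 coefficient mentioned above is computed from dichotomously scored items as KR-20 = (k / (k - 1)) * (1 - sum(p_i * q_i) / sigma^2), where p_i is the proportion answering item i correctly, q_i = 1 - p_i, and sigma^2 is the variance of total scores. A minimal sketch on hypothetical response data follows; the exam data themselves are of course not reproduced.

```python
import numpy as np

def kr20(scores: np.ndarray) -> float:
    """Kuder-Richardson 20 reliability for a binary (0/1) examinee-by-item matrix."""
    k = scores.shape[1]                     # number of items
    p = scores.mean(axis=0)                 # proportion correct per item
    item_var_sum = float(np.sum(p * (1 - p)))
    total_var = float(scores.sum(axis=1).var())  # population variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical responses: 150 examinees, 60 items, driven by a latent ability.
rng = np.random.default_rng(6)
ability = rng.normal(size=(150, 1))
difficulty = rng.normal(size=(1, 60))
scores = (rng.random((150, 60)) < 1 / (1 + np.exp(-(ability - difficulty)))).astype(int)
print(round(kr20(scores), 3))
```

Values near or above roughly 0.8 are conventionally read as adequate internal consistency for a high-stakes examination of this length.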
An Optimal Partial Differential Equations-based Stopping Criterion for Medical Image Denoising.
Khanian, Maryam; Feizi, Awat; Davari, Ali
2014-01-01
Improving the quality of medical images in pre- and post-surgery operations is necessary for beginning and speeding up the recovery process. Partial differential equation-based models have become a powerful and well-known tool in different areas of image processing, such as denoising, multiscale image analysis, edge detection, and other fields of image processing and computer vision. In this paper, an algorithm for medical image denoising using an anisotropic diffusion filter with a convenient stopping criterion is presented. In this regard, the paper introduces two strategies: utilizing the efficient explicit method, owing to its advantages, together with a software technique that effectively solves the anisotropic diffusion filter, which is otherwise mathematically unstable; and proposing an automatic stopping criterion that, unlike other stopping criteria, takes only the input image into consideration, while also offering good denoised-image quality, ease of use, and short run time. Various medical images are examined to confirm the claim.
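The anisotropic diffusion filter referred to above is commonly implemented with the explicit Perona-Malik update, which is stable only for small time steps. The sketch below is a minimal, generic illustration with a simple relative-change stopping rule standing in for a more principled criterion; it does not reproduce the stopping criterion proposed in the paper, and all parameter values and the test image are hypothetical.

```python
import numpy as np

def anisotropic_diffusion(img, kappa=0.5, step=0.2, max_iters=200, tol=1e-4):
    """Explicit Perona-Malik diffusion with exponential conductance.
    Iteration stops when the relative change between successive images falls
    below `tol` (a placeholder for a more principled stopping criterion)."""
    u = img.astype(float).copy()
    for _ in range(max_iters):
        # Nearest-neighbour differences with zero-flux boundaries (edge padding).
        p = np.pad(u, 1, mode="edge")
        dn = p[:-2, 1:-1] - u
        ds = p[2:, 1:-1] - u
        de = p[1:-1, 2:] - u
        dw = p[1:-1, :-2] - u
        # Conductance is small across strong edges, so edges are preserved.
        c = lambda d: np.exp(-(d / kappa) ** 2)
        u_new = u + step * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
        if np.linalg.norm(u_new - u) / (np.linalg.norm(u) + 1e-12) < tol:
            return u_new
        u = u_new
    return u

# Hypothetical noisy test image: a bright square on a dark background.
rng = np.random.default_rng(7)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
noisy = img + 0.2 * rng.standard_normal(img.shape)
print(float(np.std(noisy - img)), float(np.std(anisotropic_diffusion(noisy) - img)))
```

Stopping too late blurs edges and stopping too early leaves noise, which is exactly the trade-off an automatic stopping criterion is meant to manage.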
A mesh gradient technique for numerical optimization
NASA Technical Reports Server (NTRS)
Willis, E. A., Jr.
1973-01-01
A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory is considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data are used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.