Science.gov

Sample records for 10-fold cross-validation approach

  1. Comprehensive Assessment of Emotional Disturbance: A Cross-Validation Approach

    ERIC Educational Resources Information Center

    Fisher, Emily S.; Doyon, Katie E.; Saldana, Enrique; Allen, Megan Redding

    2007-01-01

    Assessing a student for emotional disturbance is a serious and complex task given the stigma of the label and the ambiguities of the federal definition. One way that school psychologists can be more confident in their assessment results is to cross validate data from different sources using the RIOT approach (Review, Interview, Observe, Test).…

  2. Cross-Validation.

    ERIC Educational Resources Information Center

    Langmuir, Charles R.

    1954-01-01

    Cross-validation in relation to choosing the best tests and selecting the best items in tests is discussed. Cross-validation demonstrates whether a decision derived from one set of data remains truly effective when it is applied to another independent, but relevant, sample of people. Cross-validation is particularly important after…

  3. Cross-Validation, Shrinkage, and Multiple Regression.

    ERIC Educational Resources Information Center

    Hynes, Kevin

    One aspect of multiple regression--the shrinkage of the multiple correlation coefficient on cross-validation--is reviewed. The paper consists of four sections. In section one, the distinction between a fixed and a random multiple regression model is made explicit. In section two, the cross-validation paradigm and an explanation for the occurrence…

  4. Cross-Validation Without Doing Cross-Validation in Genome-Enabled Prediction

    PubMed Central

    Gianola, Daniel; Schön, Chris-Carolin

    2016-01-01

    Cross-validation of methods is an essential component of genome-enabled prediction of complex traits. We develop formulae for computing the predictions that would be obtained when one or several cases are removed in the training process, to become members of testing sets, but by running the model using all observations only once. Prediction methods to which the developments apply include least squares, best linear unbiased prediction (BLUP) of markers or genomic BLUP, reproducing kernel Hilbert spaces regression with single or multiple kernel matrices, and any member of a suite of linear regression methods known as the “Bayesian alphabet.” The approach used for Bayesian models is based on importance sampling of posterior draws. Proof of concept is provided by applying the formulae to a wheat data set of 599 inbred lines genotyped for 1279 markers, with grain yield as the target trait. The data set was used to evaluate predictive mean-squared error, the impact of alternative layouts on maximum likelihood estimates of regularization parameters, model complexity, and residual degrees of freedom stemming from various strengths of regularization, as well as two forms of importance sampling. Our results will facilitate carrying out extensive cross-validation without model retraining for most machines employed in genome-assisted prediction of quantitative traits. PMID:27489209

  5. Cross-validation pitfalls when selecting and assessing regression and classification models

    PubMed Central

    2014-01-01

    Background: We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications, including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing, which enables routine use of previously infeasible approaches.

    Methods: We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. For variable selection and parameter tuning, we define two algorithms (repeated grid-search cross-validation and double cross-validation) and provide arguments for using the repeated grid-search in the general case.

    Results: We show results of our algorithms on seven QSAR datasets. The variation in prediction performance that results from choosing different splits of the dataset in V-fold cross-validation needs to be taken into account when selecting and assessing classification and regression models.

    Conclusions: We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error. PMID:24678909
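
    As a concrete illustration of the two algorithms described above, the following sketch wires scikit-learn's grid search to repeated V-fold splits for tuning and then nests that tuner inside an outer repeated V-fold loop for assessment. The dataset, parameter grid and SVR learner are illustrative stand-ins, not the paper's QSAR setup.

      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.model_selection import GridSearchCV, RepeatedKFold, cross_val_score
      from sklearn.svm import SVR

      # Illustrative data in place of a QSAR dataset.
      X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)
      grid = {"C": [0.1, 1.0, 10.0], "epsilon": [0.01, 0.1]}

      # Tuning: grid search scored by repeated V-fold CV (V=5, 5 repeats) to damp
      # the split-to-split variance the paper warns about.
      tuner = GridSearchCV(SVR(kernel="rbf"), grid,
                           cv=RepeatedKFold(n_splits=5, n_repeats=5, random_state=1),
                           scoring="neg_mean_squared_error")

      # Assessment: repeated nested CV -- the whole tuning procedure sits inside an
      # outer loop, so the reported error is not biased by the parameter search.
      outer = RepeatedKFold(n_splits=5, n_repeats=5, random_state=2)
      nested_mse = -cross_val_score(tuner, X, y, cv=outer,
                                    scoring="neg_mean_squared_error")
      print(f"nested CV MSE: {nested_mse.mean():.1f} +/- {nested_mse.std():.1f}")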

  6. Cross-validated detection of crack initiation in aerospace materials

    NASA Astrophysics Data System (ADS)

    Vanniamparambil, Prashanth A.; Cuadra, Jefferson; Guclu, Utku; Bartoli, Ivan; Kontsos, Antonios

    2014-03-01

    A cross-validated nondestructive evaluation approach was employed to detect in situ the onset of damage in an aluminum alloy compact tension specimen. The approach consisted of the coordinated use of acoustic emission as the primary method, combined with the infrared thermography and digital image correlation methods. Tensile loads were applied while the specimen was continuously monitored using the nondestructive approach. Crack initiation was witnessed visually and was confirmed by the characteristic load drop accompanying the ductile fracture process. The full-field deformation map provided by the nondestructive approach validated the formation of a pronounced plasticity zone near the crack tip. At the time of crack initiation, a burst in the temperature field ahead of the crack tip as well as a sudden increase in the acoustic recordings were observed. Although such experiments have been attempted and reported before in the literature, the presented approach provides for the first time a cross-validated nondestructive dataset that can be used for quantitative analyses of the crack initiation information content. It further allows future development of automated procedures for real-time identification of damage precursors, including the rarely explored crack incubation stage in fatigue conditions.

  7. The RCRAS and legal insanity: a cross-validation study.

    PubMed

    Rogers, R; Seman, W; Wasyliw, O E

    1983-07-01

    Examined the RCRAS as an empirically based approach to insanity evaluations. Previous research has been encouraging with regard to the RCRAS' interrater reliability and construct validity. The present study, with a larger data base (N = 111), sought to cross-validate these findings. Results from five forensic centers established satisfactory reliability for the RCRAS (mean kappa r = .80 for decision variables for criminal responsibility) and differentiating patterns for four of the five scales between sane and insane patient-defendants. Results further suggested that the RCRAS was generalizable across age, sex, criminal behavior, and location of the forensic evaluation. These findings were discussed with respect to the potential clinical utility of the RCRAS.

  8. Cross validation in LASSO and its acceleration

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Kabashima, Yoshiyuki

    2016-05-01

    We investigate leave-one-out cross validation (CV) as a determinator of the weight of the penalty term in the least absolute shrinkage and selection operator (LASSO). First, on the basis of the message passing algorithm and a perturbative discussion assuming that the number of observations is sufficiently large, we provide simple formulas for approximately assessing two types of CV errors, which enable us to significantly reduce the necessary cost of computation. These formulas also provide a simple connection of the CV errors to the residual sums of squares between the reconstructed and the given measurements. Second, on the basis of this finding, we analytically evaluate the CV errors when the design matrix is given as a simple random matrix in the large size limit by using the replica method. Finally, these results are compared with those of numerical simulations on finite-size systems and are confirmed to be correct. We also apply the simple formulas of the first type of CV error to an actual dataset of the supernovae.
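
    The flavor of such one-fit approximations can be conveyed with a generic approximate leave-one-out formula from the broader literature, not the authors' message-passing derivation: fit the LASSO once, then rescale each residual by the leverage of the hat matrix restricted to the active set. The data and penalty value below are illustrative.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      n, p = 200, 50
      X = rng.standard_normal((n, p))
      beta = np.zeros(p); beta[:5] = 2.0
      y = X @ beta + 0.5 * rng.standard_normal(n)

      fit = Lasso(alpha=0.1, fit_intercept=False).fit(X, y)
      resid = y - fit.predict(X)

      # Hat matrix restricted to the active (nonzero-coefficient) predictors.
      active = np.flatnonzero(fit.coef_)
      XA = X[:, active]
      H = XA @ np.linalg.solve(XA.T @ XA, XA.T)

      # Approximate LOO residuals: one fit instead of n refits.
      loo_resid = resid / (1.0 - np.diag(H))
      print("approximate LOO CV error:", np.mean(loo_resid ** 2))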

  9. Cross-validation of component models: a critical look at current methods.

    PubMed

    Bro, R; Kjeldahl, K; Smilde, A K; Kiers, H A L

    2008-03-01

    In regression, cross-validation is an effective and popular approach that is used to decide, for example, the number of underlying features, and to estimate the average prediction error. The basic principle of cross-validation is to leave out part of the data, build a model, and then predict the left-out samples. While such an approach can also be envisioned for component models such as principal component analysis (PCA), most current implementations do not comply with the essential requirement that the predictions should be independent of the entity being predicted. Further, these methods have not been properly reviewed in the literature. In this paper, we review the most commonly used generic PCA cross-validation schemes and assess how well they work in various scenarios.
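
    To make the independence requirement concrete, the sketch below implements a naive row-wise PCA cross-validation scheme of the kind the paper critiques: loadings are estimated on training rows, but each held-out row's own values are used to compute its scores, so the reconstruction error keeps improving as components are added and overfitting goes undetected. The data are synthetic.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(0)
      X = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 8))
      X -= X.mean(axis=0)

      for n_comp in range(1, 6):
          press = 0.0
          for train, test in KFold(n_splits=5).split(X):
              pca = PCA(n_components=n_comp).fit(X[train])
              scores = pca.transform(X[test])      # uses the held-out values!
              recon = pca.inverse_transform(scores)
              press += np.sum((X[test] - recon) ** 2)
          print(n_comp, round(press, 1))           # PRESS tends to fall monotonically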

  10. A cross-validation package driving Netica with python

    USGS Publications Warehouse

    Fienen, Michael N.; Plant, Nathaniel G.

    2014-01-01

    Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross-validation is a technique to avoid the overfitting that results from overly complex BNs; overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented, due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and read, rebuild, and learn BNs from data. Insights gained from cross-validation and implications for prediction versus description are illustrated with a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than the supporting data allow, and that overfitting incurs computational costs as well as reducing prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).

  11. A K-fold Averaging Cross-validation Procedure

    PubMed Central

    Jung, Yoonsuh; Hu, Jianhua

    2015-01-01

    Cross-validation-type methods have been widely used to facilitate model estimation and variable selection. In this work, we suggest a new K-fold cross-validation procedure to select a candidate ‘optimal’ model from each hold-out fold and average the K candidate ‘optimal’ models to obtain the ultimate model. Due to the averaging effect, the variance of the proposed estimates can be significantly reduced. This new procedure results in more stable and efficient parameter estimation than the classical K-fold cross-validation procedure. In addition, we show the asymptotic equivalence between the proposed and classical cross-validation procedures in the linear regression setting. We also demonstrate the broad applicability of the proposed procedure via two examples of parameter sparsity regularization and quantile smoothing splines modeling. We illustrate the promise of the proposed method through simulations and a real data example.
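
    A minimal sketch of the averaging idea, assuming a lasso learner and a small alpha grid (both illustrative; the procedure itself is stated more generally): select one candidate 'optimal' model per hold-out fold, then average the K coefficient vectors to obtain the ultimate model.

      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(1)
      n, p = 150, 20
      X = rng.standard_normal((n, p))
      y = X[:, 0] - 2 * X[:, 1] + rng.standard_normal(n)

      alphas = np.logspace(-3, 0, 20)
      coefs, intercepts = [], []
      for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
          # Candidate 'optimal' model for this fold: best alpha on the hold-out set.
          fits = [Lasso(alpha=a).fit(X[train], y[train]) for a in alphas]
          errs = [np.mean((y[test] - f.predict(X[test])) ** 2) for f in fits]
          best = fits[int(np.argmin(errs))]
          coefs.append(best.coef_); intercepts.append(best.intercept_)

      # The ultimate model: the average of the K candidates.
      beta_bar, b0_bar = np.mean(coefs, axis=0), np.mean(intercepts)
      print(np.round(beta_bar[:4], 2), round(b0_bar, 2))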

  12. A Cross-Validation Study of the Posttraumatic Growth Inventory

    ERIC Educational Resources Information Center

    Sheikh, Alia I.; Marotta, Sylvia A.

    2005-01-01

    This article is a cross-validation of R. G. Tedeschi and L. G. Calhoun's (1996) original study of the development of the Posttraumatic Growth Inventory (PTGI). It describes several psychometric properties of scores on the PTGI in a sample of middle- to old-aged adults with a history of cardiovascular disease. The results did not support the…

  13. The Cross Validation of the Attitudes toward Mainstreaming Scale (ATMS).

    ERIC Educational Resources Information Center

    Berryman, Joan D.; Neal, W. R. Jr.

    1980-01-01

    Reliability and factorial validity of the Attitudes Toward Mainstreaming Scale was supported in a cross-validation study with teachers. Three factors emerged: learning capability, general mainstreaming, and traditional limiting disabilities. Factor intercorrelations varied from .42 to .55; correlations between total scores and individual factors…

  14. The Cross-Validational Accuracy of Sample Regressions.

    ERIC Educational Resources Information Center

    Rozeboom, William W.

    1981-01-01

    Browne's definitive but complex formulas for the cross-validational accuracy of an OLS-estimated regression equation in the random-effects sampling model are here reworked to achieve greater perspicuity and extended to include the fixed-effects sampling model. (Author)

  15. Fit-for-purpose bioanalytical cross-validation for LC-MS/MS assays in clinical studies.

    PubMed

    Xu, Xiaohui; Ji, Qin C; Jemal, Mohammed; Gleason, Carol; Shen, Jim X; Stouffer, Bruce; Arnold, Mark E

    2013-01-01

    The paradigm shift of globalized research and conducting clinical studies at different geographic locations worldwide to access broader patient populations has resulted in an increased need to correlate bioanalytical results generated in multiple laboratories, often across national borders. Cross-validations of bioanalytical methods are often implemented to demonstrate the equivalency of the bioanalytical results. Regulatory agencies, such as the US FDA and European Medicines Agency, have included the requirement of cross-validations in their respective bioanalytical validation guidance and guidelines. While those documents provide high-level expectations, the detailed implementation is at the discretion of each individual organization. At Bristol-Myers Squibb, we practice a fit-for-purpose approach to conducting cross-validations for small-molecule bioanalytical methods using LC-MS/MS. A step-by-step proposal on the overall strategy, procedures and technical details for conducting a successful cross-validation is presented herein. A case study utilizing the proposed cross-validation approach to rule out method variability as the potential cause for the high variance observed in PK studies is also presented. PMID:23256474

  16. A leave-one-out cross-validation SAS macro for the identification of markers associated with survival.

    PubMed

    Rushing, Christel; Bulusu, Anuradha; Hurwitz, Herbert I; Nixon, Andrew B; Pang, Herbert

    2015-02-01

    A proper internal validation is necessary for the development of a reliable and reproducible prognostic model for external validation. Variable selection is an important step for building prognostic models. However, not many existing approaches couple the ability to specify the number of covariates in the model with a cross-validation algorithm. We describe a user-friendly SAS macro that implements a score selection method and a leave-one-out cross-validation approach. We discuss the method and applications behind this algorithm, as well as details of the SAS macro.
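
    The macro itself is SAS, but its loop structure can be sketched in Python: a fixed number of covariates is selected by a univariate score inside each leave-one-out split, so selection never sees the held-out case. A logistic model stands in for the survival model here purely to keep the sketch self-contained.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import LeaveOneOut
      from sklearn.pipeline import make_pipeline

      X, y = make_classification(n_samples=80, n_features=200, n_informative=5,
                                 random_state=0)
      # Specify the number of covariates (here 5 'markers') kept in the model.
      model = make_pipeline(SelectKBest(f_classif, k=5),
                            LogisticRegression(max_iter=1000))

      correct = 0
      for train, test in LeaveOneOut().split(X):
          model.fit(X[train], y[train])            # selection redone in every fold
          correct += int(model.predict(X[test])[0] == y[test][0])
      print("LOO accuracy:", correct / len(y))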

  17. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    SciTech Connect

    Pražnikar, Jure; Turk, Dušan

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation: they utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in ML refinement by simulating the model errors via random displacement of atomic coordinates. It is called ML free-kick refinement, as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach to the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps, and may use a smaller portion of the data for the test set for the calculation of R_free, or may leave the test set out completely.

  18. A Large-Scale Empirical Evaluation of Cross-Validation and External Test Set Validation in (Q)SAR.

    PubMed

    Gütlein, Martin; Helma, Christoph; Karwath, Andreas; Kramer, Stefan

    2013-06-01

    (Q)SAR model validation is essential to ensure the quality of inferred models and to indicate future model predictivity on unseen compounds. Proper validation is also one of the requirements of regulatory authorities in order to accept the (Q)SAR model and to approve its use in real world scenarios as an alternative testing method. However, at the same time, the question of how to validate a (Q)SAR model, in particular whether to employ variants of cross-validation or external test set validation, is still under discussion. In this paper, we empirically compare k-fold cross-validation with external test set validation. To this end we introduce a workflow that realistically simulates the common problem setting of building predictive models for relatively small datasets. The workflow allows the built and validated models to be applied to large amounts of unseen data, and the performance of the different validation approaches to be compared. The experimental results indicate that cross-validation produces better-performing (Q)SAR models than external test set validation and reduces the variance of the results, while at the same time underestimating the performance on unseen compounds. The experimental results reported in this paper suggest that, contrary to current conception in the community, cross-validation may play a significant role in evaluating the predictivity of (Q)SAR models.

  19. Cross-validation analysis of bias models in Bayesian multi-model projections of climate

    NASA Astrophysics Data System (ADS)

    Huttunen, J. M. J.; Räisänen, J.; Nissinen, A.; Lipponen, A.; Kolehmainen, V.

    2016-05-01

    Climate change projections are commonly based on multi-model ensembles of climate simulations. In this paper we consider the choice of bias models in Bayesian multimodel predictions. Buser et al. (Clim Res 44(2-3):227-241, 2010a) introduced a hybrid bias model which combines commonly used constant bias and constant relation bias assumptions. The hybrid model includes a weighting parameter which balances these bias models. In this study, we use a cross-validation approach to study which bias model or bias parameter leads to, in a specific sense, optimal climate change projections. The analysis is carried out for summer and winter season means of 2 m-temperatures spatially averaged over the IPCC SREX regions, using 19 model runs from the CMIP5 data set. The cross-validation approach is applied to calculate optimal bias parameters (in the specific sense) for projecting the temperature change from the control period (1961-2005) to the scenario period (2046-2090). The results are compared to the results of the Buser et al. (Clim Res 44(2-3):227-241, 2010a) method which includes the bias parameter as one of the unknown parameters to be estimated from the data.

  20. Cross-validating the Berlin Affective Word List.

    PubMed

    Võ, Melissa L H; Jacobs, Arthur M; Conrad, Markus

    2006-11-01

    We introduce the Berlin Affective Word List (BAWL) in order to provide researchers with a German database containing both emotional valence and imageability ratings for more than 2,200 German words. The BAWL was cross-validated using a forced choice valence decision task in which two distinct valence categories (negative or positive) had to be assigned to a highly controlled selection of 360 words according to varying emotional content (negative, neutral, or positive). The reaction time (RT) results corroborated the valence categories: Words that had been rated as "neutral" in the norms yielded maximum RTs. The BAWL is intended to help researchers create stimulus materials for a wide range of experiments dealing with the emotional processing of words. PMID:17393831

  1. Cross-validating a bidimensional mathematics anxiety scale.

    PubMed

    Bai, Haiyan

    2011-03-01

    The psychometric properties of a 14-item bidimensional Mathematics Anxiety Scale-Revised (MAS-R) were empirically cross-validated with two independent samples consisting of 647 secondary school students. An exploratory factor analysis on the scale yielded strong construct validity with a clear two-factor structure. The results from a confirmatory factor analysis indicated an excellent model-fit (χ² = 98.32, df = 62; normed fit index = .92, comparative fit index = .97; root mean square error of approximation = .04). The internal consistency (.85), test-retest reliability (.71), interfactor correlation (.26, p < .001), and positive discrimination power indicated that MAS-R is a psychometrically reliable and valid instrument for measuring mathematics anxiety. Math anxiety, as measured by MAS-R, correlated negatively with student achievement scores (r = -.38), suggesting that MAS-R may be a useful tool for classroom teachers and other educational personnel tasked with identifying students at risk of reduced math achievement because of anxiety.

  2. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where the effectiveness of the resulting predictive equation is subject to shrinkage, it is especially important to evaluate the replicability of results. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…

  3. Splenectomy Causes 10-Fold Increased Risk of Portal Venous System Thrombosis in Liver Cirrhosis Patients

    PubMed Central

    Qi, Xingshun; Han, Guohong; Ye, Chun; Zhang, Yongguo; Dai, Junna; Peng, Ying; Deng, Han; Li, Jing; Hou, Feifei; Ning, Zheng; Zhao, Jiancheng; Zhang, Xintong; Wang, Ran; Guo, Xiaozhong

    2016-01-01

    Background: Portal venous system thrombosis (PVST) is a life-threatening complication of liver cirrhosis. We conducted a retrospective study to comprehensively analyze the prevalence and risk factors of PVST in liver cirrhosis.

    Material/Methods: All cirrhotic patients without malignancy admitted between June 2012 and December 2013 were eligible if they underwent contrast-enhanced CT or MRI scans. Independent predictors of PVST in liver cirrhosis were calculated in multivariate analyses. Subgroup analyses were performed according to the severity of PVST (any PVST, main portal vein [MPV] thrombosis >50%, and clinically significant PVST) and splenectomy. Odds ratios (ORs) and 95% confidence intervals (CIs) were reported.

    Results: Overall, 113 cirrhotic patients were enrolled. The prevalence of PVST was 16.8% (19/113). Splenectomy (any PVST: OR=11.494, 95%CI=2.152–61.395; MPV thrombosis >50%: OR=29.987, 95%CI=3.247–276.949; clinically significant PVST: OR=40.415, 95%CI=3.895–419.295) and higher hemoglobin (any PVST: OR=0.974, 95%CI=0.953–0.996; MPV thrombosis >50%: OR=0.936, 95%CI=0.895–0.980; clinically significant PVST: OR=0.935, 95%CI=0.891–0.982) were the independent predictors of PVST. The prevalence of PVST was 13.3% (14/105) after excluding splenectomy. Higher hemoglobin was the only independent predictor of MPV thrombosis >50% (OR=0.952, 95%CI=0.909–0.997); no independent predictors of any PVST or clinically significant PVST were identified in multivariate analyses. Additionally, PVST patients who underwent splenectomy had a significantly higher proportion of clinically significant PVST but a lower MELD score than those who did not undergo splenectomy. In all analyses, the in-hospital mortality was not significantly different between cirrhotic patients with and without PVST.

    Conclusions: Splenectomy may increase the risk of PVST in liver cirrhosis by at least 10-fold, independent of the severity of liver dysfunction. PMID:27432511

  4. Comparison of cross-validation and bootstrap aggregating for building a seasonal streamflow forecast model

    NASA Astrophysics Data System (ADS)

    Schick, Simon; Rössler, Ole; Weingartner, Rolf

    2016-10-01

    Based on a hindcast experiment for the period 1982-2013 in 66 sub-catchments of the Swiss Rhine, the present study compares two approaches to building a regression model for seasonal streamflow forecasting. The first approach selects a single "best guess" model, which is tested by leave-one-out cross-validation. The second approach implements the idea of bootstrap aggregating, where bootstrap replicates are employed to select several models, and out-of-bag predictions provide model testing. The target value is mean streamflow for durations of 30, 60 and 90 days, starting with the 1st and 16th day of every month. Compared to the best guess model, bootstrap aggregating reduces the mean squared error of the streamflow forecast by seven percent on average. Thus, if resampling is in any case part of the model building procedure, bootstrap aggregating seems to be a useful strategy in statistical seasonal streamflow forecasting. Since the improved accuracy comes at the cost of a less interpretable model, the approach might be best suited for pure prediction tasks, e.g. as in operational applications.
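
    A toy version of the comparison, assuming ordinary least squares as the regression model (the study's actual predictors are not reproduced here): one "best guess" model tested by leave-one-out cross-validation versus bootstrap aggregating, where each case is predicted by averaging only the bootstrap models that did not see it (out-of-bag prediction).

      import numpy as np

      rng = np.random.default_rng(0)
      n = 120
      X = rng.standard_normal((n, 3))
      y = X @ np.array([1.0, 0.5, -0.3]) + 0.8 * rng.standard_normal(n)
      Xd = np.column_stack([np.ones(n), X])        # design matrix with intercept

      def ols(Xm, ym):
          return np.linalg.lstsq(Xm, ym, rcond=None)[0]

      # (a) Best-guess model, tested by leave-one-out cross-validation.
      loo_err = [(y[i] - Xd[i] @ ols(np.delete(Xd, i, 0), np.delete(y, i))) ** 2
                 for i in range(n)]

      # (b) Bootstrap aggregating with out-of-bag model testing.
      oob_pred, oob_cnt = np.zeros(n), np.zeros(n)
      for _ in range(200):
          boot = rng.integers(0, n, n)             # one bootstrap replicate
          beta = ols(Xd[boot], y[boot])
          out = np.setdiff1d(np.arange(n), boot)   # cases left out of this sample
          oob_pred[out] += Xd[out] @ beta
          oob_cnt[out] += 1
      ok = oob_cnt > 0
      bag_err = (y[ok] - oob_pred[ok] / oob_cnt[ok]) ** 2
      print("LOO MSE:", np.mean(loo_err), " bagging OOB MSE:", np.mean(bag_err))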

  5. A Test and Cross-Validation of the Revised Two-Factor Study Process Questionnaire Factor Structure among Western University Students

    ERIC Educational Resources Information Center

    Immekus, Jason C.; Imbrie, P. K.

    2010-01-01

    The Revised Two-Factor Study Process Questionnaire (R-SPQ-2F) is a measure of university students' approach to learning. Original evaluation of the scale's psychometric properties was based on a sample of Hong Kong university students' scores. The purpose of this study was to test and cross-validate the R-SPQ-2F factor structure, based on separate…

  6. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    PubMed

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and its comparison with the existing methods, including the Akaike information criterion (AIC), Bayesian information criterion (BIC) or Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from the studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
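
    scikit-learn has no MCP penalty, so the sketch below substitutes an L1-penalized logistic regression; it is meant only to show the CV-AUC criterion itself, i.e. choosing the tuning parameter that maximizes the out-of-fold AUC rather than an information criterion.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import StratifiedKFold, cross_val_score

      X, y = make_classification(n_samples=300, n_features=100, n_informative=8,
                                 weights=[0.7, 0.3], random_state=0)
      cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

      best_C, best_auc = None, -np.inf
      for C in np.logspace(-2, 2, 9):              # tuning-parameter path
          clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
          auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
          if auc > best_auc:
              best_C, best_auc = C, auc
      print(f"selected C = {best_C:g}, CV-AUC = {best_auc:.3f}")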

  7. The generalized cross-validation method applied to geophysical linear traveltime tomography

    NASA Astrophysics Data System (ADS)

    Bassrei, A.; Oliveira, N. P.

    2009-12-01

    The oil industry is the major user of applied geophysics methods for subsurface imaging. Among different methods, the so-called seismic (or exploration seismology) methods are the most important. Tomography was originally developed for medical imaging and was introduced in exploration seismology in the 1980s. There are two main classes of geophysical tomography: those that use only the traveltimes between sources and receivers, which is a kinematic approach, and those that use the wave amplitude itself, which is a dynamic approach. Tomography is a kind of inverse problem, and since inverse problems are usually ill-posed, it is necessary to use some method to reduce their deficiencies. These difficulties of the inverse procedure are associated with the fact that the involved matrix is ill-conditioned. To compensate for this shortcoming, it is appropriate to use some technique of regularization. In this work we make use of regularization with derivative matrices, also called smoothing. There is a crucial problem in regularization, which is the selection of the regularization parameter lambda. We use generalized cross-validation (GCV) as a tool for the selection of lambda. GCV chooses the regularization parameter associated with the best average prediction for all possible omissions of one datum, corresponding to the minimizer of the GCV function. GCV is used for an application in traveltime tomography, where the objective is to obtain the 2-D velocity distribution from the measured values of the traveltimes between sources and receivers. We present results with synthetic data, using a geological model that simulates different features, like a fault and a reservoir. The results using GCV are very good, including those contaminated with noise, and also using different regularization orders, attesting to the feasibility of this technique.
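
    A minimal numerical sketch of GCV for a linear inverse problem, using zeroth-order (ridge) regularization for brevity where the paper uses derivative matrices: evaluate the GCV function on a grid of lambda values and take the minimizer. The forward operator and data are synthetic.

      import numpy as np

      rng = np.random.default_rng(0)
      m, n = 80, 40
      G = rng.standard_normal((m, n))              # forward operator (e.g. ray paths)
      x_true = np.sin(np.linspace(0, 3 * np.pi, n))
      d = G @ x_true + 0.5 * rng.standard_normal(m)  # noisy traveltimes

      def gcv(lam):
          # Influence matrix A(lam) = G (G^T G + lam I)^(-1) G^T; a derivative
          # matrix D would replace the identity for higher regularization orders.
          A = G @ np.linalg.solve(G.T @ G + lam * np.eye(n), G.T)
          r = (np.eye(m) - A) @ d
          return m * (r @ r) / np.trace(np.eye(m) - A) ** 2

      lams = np.logspace(-4, 4, 50)
      lam_best = lams[int(np.argmin([gcv(l) for l in lams]))]
      print("GCV-selected lambda:", lam_best)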

  8. Cross-Validation of the Risk Matrix 2000 Sexual and Violent Scales

    ERIC Educational Resources Information Center

    Craig, Leam A.; Beech, Anthony; Browne, Kevin D.

    2006-01-01

    The predictive accuracy of the newly developed actuarial risk measures Risk Matrix 2000 Sexual/Violence (RMS, RMV) were cross validated and compared with two risk assessment measures (SVR-20 and Static-99) in a sample of sexual (n = 85) and nonsex violent (n = 46) offenders. The sexual offense reconviction rate for the sex offender group was 18%…

  9. The Employability of Psychologists in Academic Settings: A Cross-Validation.

    ERIC Educational Resources Information Center

    Quereshi, M. Y.

    1983-01-01

    Analyzed the curriculum vitae (CV) of 117 applicants for the position of assistant professor of psychology to yield four cross-validated factors. Comparisons of the results with those of four years ago indicated considerable stability of the factors. Scholarly publications remain an important factor. (JAC)

  10. A Cross-Validation Study of Police Recruit Performance as Predicted by the IPI and MMPI.

    ERIC Educational Resources Information Center

    Shusman, Elizabeth J.; And Others

    Validation and cross-validation studies were conducted using the Minnesota Multiphasic Personality Inventory (MMPI) and Inwald Personality Inventory (IPI) to predict job performance for 698 urban male police officers who completed a six-month training academy. Job performance criteria evaluated included absence, lateness, derelictions, negative…

  11. Cross-Validating Chinese Language Mental Health Recovery Measures in Hong Kong

    ERIC Educational Resources Information Center

    Bola, John; Chan, Tiffany Hill Ching; Chen, Eric HY; Ng, Roger

    2016-01-01

    Objectives: Promoting recovery in mental health services is hampered by a shortage of reliable and valid measures, particularly in Hong Kong. We seek to cross validate two Chinese language measures of recovery and one of recovery-promoting environments. Method: A cross-sectional survey of people recovering from early episode psychosis (n = 121)…

  12. Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2009-01-01

    In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference…

  13. A New Symptom Model for Autism Cross-Validated in an Independent Sample

    ERIC Educational Resources Information Center

    Boomsma, A.; Van Lang, N. D. J.; De Jonge, M. V.; De Bildt, A. A.; Van Engeland, H.; Minderaa, R. B.

    2008-01-01

    Background: Results from several studies indicated that a symptom model other than the DSM triad might better describe symptom domains of autism. The present study focused on a) investigating the stability of a new symptom model for autism by cross-validating it in an independent sample and b) examining the invariance of the model regarding three…

  14. Cross-Validation of FITNESSGRAM® Health-Related Fitness Standards in Hungarian Youth

    ERIC Educational Resources Information Center

    Laurson, Kelly R.; Saint-Maurice, Pedro F.; Karsai, István; Csányi, Tamás

    2015-01-01

    Purpose: The purpose of this study was to cross-validate FITNESSGRAM® aerobic and body composition standards in a representative sample of Hungarian youth. Method: A nationally representative sample (N = 405) of Hungarian adolescents from the Hungarian National Youth Fitness Study (ages 12-18.9 years) participated in an aerobic capacity assessment…

  15. Validity Evidence in Scale Development: The Application of Cross Validation and Classification-Sequencing Validation

    ERIC Educational Resources Information Center

    Acar, Tülin

    2014-01-01

    In literature, it has been observed that many enhanced criteria are limited by factor analysis techniques. Besides examinations of statistical structure and/or psychological structure, such validity studies as cross validation and classification-sequencing studies should be performed frequently. The purpose of this study is to examine cross…

  16. Learning Disabilities Found in Association with French Immersion Programming: A Cross Validation.

    ERIC Educational Resources Information Center

    Trites, R. L.; Price, M. A.

    In the first study of this series, it was found that children who have difficulty in primary French immersion are distinct from children having a primary reading disability, minimal brain dysfunction, hyperactivity or primary emotional disturbance. The present study was undertaken in order to cross-validate the findings of the first study, to…

  17. Reliable Digit Span: A Systematic Review and Cross-Validation Study

    ERIC Educational Resources Information Center

    Schroeder, Ryan W.; Twumasi-Ankrah, Philip; Baade, Lyle E.; Marshall, Paul S.

    2012-01-01

    Reliable Digit Span (RDS) is a heavily researched symptom validity test with a recent literature review yielding more than 20 studies ranging in dates from 1994 to 2011. Unfortunately, limitations within some of the research minimize clinical generalizability. This systematic review and cross-validation study was conducted to address these…

  18. Validity and Cross-Validity of Metric and Nonmetric Multiple Regression.

    ERIC Educational Resources Information Center

    MacCallum, Robert C.; And Others

    1979-01-01

    Questions are raised concerning differences between traditional metric multiple regression, which assumes all variables to be measured on interval scales, and nonmetric multiple regression. The ordinal model is generally superior in fitting derivation samples but the metric technique fits better than the nonmetric in cross-validation samples.…

  19. Simulating California Reservoir Operation Using the Classification and Regression Tree Algorithm Combined with a Shuffled Cross-Validation Scheme

    NASA Astrophysics Data System (ADS)

    Yang, T.; Gao, X.; Sorooshian, S.; Li, X.

    2015-12-01

    The controlled outflows from a reservoir or dam are highly dependent on the decisions made by the reservoir operators, instead of a natural hydrological process. Differences exist between the natural upstream inflows to reservoirs, and the controlled outflows from reservoirs that supply the downstream users. With the decision maker's awareness of changing climate, reservoir management requires adaptable means to incorporate more information into decision making, such as the consideration of policy and regulation, environmental constraints, dry/wet conditions, etc. In this paper, a reservoir outflow simulation model is presented, which incorporates one of the well-developed data-mining models (Classification and Regression Tree) to predict the complicated human-controlled reservoir outflows and extract the reservoir operation patterns. A shuffled cross-validation approach is further implemented to improve the model's predictive performance. An application study of 9 major reservoirs in California is carried out, and the simulated results from different decision-tree approaches, including the original CART and Random Forest, are compared with observations. The statistical measurements show that CART combined with the shuffled cross-validation scheme gives a better predictive performance than the other two methods, especially in simulating the peak flows. The results for simulated controlled outflow, storage changes and storage trajectories also show that the proposed model is able to consistently and reasonably predict the human-controlled reservoir operation decisions. In addition, we found that the operations in Trinity Lake, Oroville Lake and Shasta Lake are greatly influenced by policy and regulation, while low-elevation reservoirs are more sensitive to inflow amount than others.

  20. Cross-Validation of easyCBM Reading Cut Scores in Oregon: 2009-2010. Technical Report #1108

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    This technical report presents results from a cross-validation study designed to identify optimal cut scores when using easyCBM[R] reading tests in Oregon. The cross-validation study analyzes data from the 2009-2010 academic year for easyCBM[R] reading measures. A sample of approximately 2,000 students per grade, randomly split into two groups of…

  1. Methodology Review: Estimation of Population Validity and Cross-Validity, and the Use of Equal Weights in Prediction.

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.

    1997-01-01

    This review finds that formula-based procedures can be used in place of empirical validation for estimating population validity or in place of empirical cross-validation for estimating population cross-validity. Discusses conditions under which the equal weights procedure is a viable alternative. (SLD)

  2. Reliable Digit Span: a systematic review and cross-validation study.

    PubMed

    Schroeder, Ryan W; Twumasi-Ankrah, Philip; Baade, Lyle E; Marshall, Paul S

    2012-03-01

    Reliable Digit Span (RDS) is a heavily researched symptom validity test with a recent literature review yielding more than 20 studies ranging in dates from 1994 to 2011. Unfortunately, limitations within some of the research minimize clinical generalizability. This systematic review and cross-validation study was conducted to address these limitations, thus increasing the measure's clinical utility. Sensitivity and specificity rates were calculated for the ≤6 and ≤7 cutoffs when data were globally combined and divided by clinical groups. The cross-validation of specific diagnostic groups was consistent with the data reported in the literature. Overall, caution should be used when utilizing the ≤7 cutoff in all clinical groups and when utilizing the ≤6 cutoff in the following groups: cerebrovascular accident, severe memory disorders, mental retardation, borderline intellectual functioning, and English as a second language. Additional limitations and cautions are provided.

  3. How to avoid mismodelling in GLM-based fMRI data analysis: cross-validated Bayesian model selection.

    PubMed

    Soch, Joram; Haynes, John-Dylan; Allefeld, Carsten

    2016-11-01

    Voxel-wise general linear models (GLMs) are a standard approach for analyzing functional magnetic resonance imaging (fMRI) data. An advantage of GLMs is that they are flexible and can be adapted to the requirements of many different data sets. However, the specification of first-level GLMs leaves the researcher with many degrees of freedom, which is problematic given recent efforts to ensure robust and reproducible fMRI data analysis. Formal model comparisons that allow a systematic assessment of GLMs are only rarely performed. On the one hand, too simple models may underfit data and leave real effects undiscovered. On the other hand, too complex models might overfit data and also reduce statistical power. Here we present a systematic approach termed cross-validated Bayesian model selection (cvBMS) that allows one to decide which GLM best describes a given fMRI data set. Importantly, our approach allows for non-nested model comparison, i.e. comparing more than two models that do not just differ by adding one or more regressors. It also allows for spatially heterogeneous modelling, i.e. using different models for different parts of the brain. We validate our method using simulated data and demonstrate potential applications to empirical data. The increased use of model comparison and model selection should increase the reliability of GLM results and the reproducibility of fMRI studies.
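
    The sketch below is a much-simplified stand-in for cvBMS: it scores two competing design matrices for a single synthetic "voxel" time series by cross-validated Gaussian log-likelihood. The actual method computes a cross-validated Bayesian log model evidence per voxel; only the model-comparison logic is conveyed here.

      import numpy as np
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(0)
      T = 200
      box = (np.arange(T) % 40 < 20).astype(float)   # toy task regressor
      drift = np.linspace(-1, 1, T)
      y = 2.0 * box + 0.5 * drift + rng.standard_normal(T)   # synthetic voxel

      designs = {"task only": np.column_stack([np.ones(T), box]),
                 "task + drift": np.column_stack([np.ones(T), box, drift])}

      for name, Xd in designs.items():
          ll = 0.0
          for tr, te in KFold(n_splits=5).split(y):   # contiguous blocks in time
              beta = np.linalg.lstsq(Xd[tr], y[tr], rcond=None)[0]
              s2 = np.mean((y[tr] - Xd[tr] @ beta) ** 2)
              r = y[te] - Xd[te] @ beta
              ll += -0.5 * np.sum(np.log(2 * np.pi * s2) + r ** 2 / s2)
          print(f"{name}: CV log-likelihood = {ll:.1f}")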

  4. Using cross-validation to evaluate predictive accuracy of survival risk classifiers based on high-dimensional data.

    PubMed

    Simon, Richard M; Subramanian, Jyothi; Li, Ming-Chung; Menezes, Supriya

    2011-05-01

    Developments in whole genome biotechnology have stimulated statistical focus on prediction methods. We review here methodology for classifying patients into survival risk groups and for using cross-validation to evaluate such classifications. Measures of discrimination for survival risk models include separation of survival curves, time-dependent ROC curves and Harrell's concordance index. For high-dimensional data applications, however, computing these measures as re-substitution statistics on the same data used for model development results in highly biased estimates. Most developments in methodology for survival risk modeling with high-dimensional data have utilized separate test data sets for model evaluation. Cross-validation has sometimes been used for optimization of tuning parameters. In many applications, however, the data available are too limited for effective division into training and test sets and consequently authors have often either reported re-substitution statistics or analyzed their data using binary classification methods in order to utilize familiar cross-validation. In this article we have tried to indicate how to utilize cross-validation for the evaluation of survival risk models; specifically how to compute cross-validated estimates of survival distributions for predicted risk groups and how to compute cross-validated time-dependent ROC curves. We have also discussed evaluation of the statistical significance of a survival risk model and evaluation of whether high-dimensional genomic data adds predictive accuracy to a model based on standard covariates alone.

  5. Variational cross-validation of slow dynamical modes in molecular kinetics

    PubMed Central

    Pande, Vijay S.

    2015-01-01

    Markov state models are a widely used method for approximating the eigenspectrum of the molecular dynamics propagator, yielding insight into the long-timescale statistical kinetics and slow dynamical modes of biomolecular systems. However, the lack of a unified theoretical framework for choosing between alternative models has hampered progress, especially for non-experts applying these methods to novel biological systems. Here, we consider cross-validation with a new objective function for estimators of these slow dynamical modes, a generalized matrix Rayleigh quotient (GMRQ), which measures the ability of a rank-m projection operator to capture the slow subspace of the system. It is shown that a variational theorem bounds the GMRQ from above by the sum of the first m eigenvalues of the system’s propagator, but that this bound can be violated when the requisite matrix elements are estimated subject to statistical uncertainty. This overfitting can be detected and avoided through cross-validation. These results make it possible to construct Markov state models for protein dynamics in a way that appropriately captures the tradeoff between systematic and statistical errors. PMID:25833563
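
    Assuming the GMRQ takes the generalized Rayleigh-quotient form trace[(V^T C V)(V^T S V)^(-1)], with C a time-lagged correlation matrix and S an instantaneous covariance (a reading of the framework above, not a verbatim reproduction of the paper's estimators), cross-validation amounts to fitting the projection V on training data and scoring it on test-data matrices:

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(0)
      phis = np.array([0.999, 0.99, 0.9, 0.5])     # toy slow/fast AR(1) modes
      traj = np.zeros((5000, 4))
      for t in range(4999):
          traj[t + 1] = phis * traj[t] + rng.standard_normal(4)
      traj -= traj.mean(axis=0)
      lag, half = 10, 2500

      def c_and_s(x):
          c = x[:-lag].T @ x[lag:] / (len(x) - lag)
          return 0.5 * (c + c.T), x.T @ x / len(x) # symmetrized C(tau), S

      C_tr, S_tr = c_and_s(traj[:half])            # training segment
      C_te, S_te = c_and_s(traj[half:])            # test segment

      # Projection estimated on training data: top-m generalized eigenvectors.
      vals, vecs = eigh(C_tr, S_tr)
      V = vecs[:, np.argsort(vals)[::-1][:2]]      # rank-2 projection

      gmrq_test = np.trace(np.linalg.solve(V.T @ S_te @ V, V.T @ C_te @ V))
      print("test GMRQ:", round(gmrq_test, 3))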

  6. Simulating California reservoir operation using the classification and regression-tree algorithm combined with a shuffled cross-validation scheme

    NASA Astrophysics Data System (ADS)

    Yang, Tiantian; Gao, Xiaogang; Sorooshian, Soroosh; Li, Xin

    2016-03-01

    The controlled outflows from a reservoir or dam are highly dependent on the decisions made by the reservoir operators, instead of a natural hydrological process. Differences exist between the natural upstream inflows to reservoirs and the controlled outflows from reservoirs that supply the downstream users. With the decision maker's awareness of changing climate, reservoir management requires adaptable means to incorporate more information into decision making, such as water delivery requirement, environmental constraints, dry/wet conditions, etc. In this paper, a robust reservoir outflow simulation model is presented, which incorporates one of the well-developed data-mining models (Classification and Regression Tree) to predict the complicated human-controlled reservoir outflows and extract the reservoir operation patterns. A shuffled cross-validation approach is further implemented to improve CART's predictive performance. An application study of nine major reservoirs in California is carried out. Results produced by the enhanced CART, the original CART, and random forest are compared with observations. The statistical measurements show that the enhanced CART and random forest outperform the CART control run in general, and the enhanced CART algorithm gives a better predictive performance than random forest in simulating the peak flows. The results also show that the proposed model is able to consistently and reasonably predict the expert release decisions. Experiments indicate that the release operation in the Oroville Lake is significantly dominated by SWP allocation amount, and that reservoirs with low elevation are more sensitive to inflow amount than others.

  7. Cross Validation for Selection of Cortical Interaction Models From Scalp EEG or MEG

    PubMed Central

    Cheung, Bing Leung Patrick; Nowak, Robert; Lee, Hyong Chol; van Drongelen, Wim; Van Veen, Barry D.

    2012-01-01

    A cross-validation (CV) method based on a state-space framework is introduced for comparing the fidelity of different cortical interaction models to the measured scalp electroencephalogram (EEG) or magnetoencephalography (MEG) data being modeled. A state equation models the cortical interaction dynamics, and an observation equation represents the scalp measurement of cortical activity and noise. The measured data are partitioned into training and test sets. The training set is used to estimate model parameters, and the model quality is evaluated by computing test data innovations for the estimated model. Two CV metrics, normalized mean squared error and log-likelihood, are estimated by averaging over different training/test partitions of the data. The effectiveness of this method of model selection is illustrated by comparing two linear modeling methods and two nonlinear modeling methods on simulated EEG data derived using both known dynamic systems and measured electrocorticography data from an epilepsy patient. PMID:22084038

  8. Error criteria for cross validation in the context of chaotic time series prediction.

    PubMed

    Lim, Teck Por; Puthusserypady, Sadasivan

    2006-03-01

    The prediction of a chaotic time series over a long horizon is commonly done by iterating one-step-ahead prediction. Prediction can be implemented using machine learning methods, such as radial basis function networks. Typically, cross validation is used to select prediction models based on mean squared error. The bias-variance dilemma dictates that there is an inevitable tradeoff between bias and variance. However, invariants of chaotic systems are unchanged by linear transformations; thus, the bias component may be irrelevant to model selection in the context of chaotic time series prediction. Hence, the use of error variance for model selection, instead of mean squared error, is examined. Clipping is introduced, as a simple way to stabilize iterated predictions. It is shown that using the error variance for model selection, in combination with clipping, may result in better models.
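
    Clipping can be illustrated on the logistic map with a deliberately imperfect nearest-neighbour one-step predictor (both illustrative choices): each iterated prediction is bounded to the range seen in training, which keeps the iteration from wandering off the attractor.

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.empty(600); x[0] = 0.3
      for t in range(599):                         # chaotic logistic map
          x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
      train, horizon = x[:500], 50
      lo, hi = train.min(), train.max()            # range observed in training

      def one_step(v):                             # crude 1-NN one-step predictor
          i = int(np.argmin(np.abs(train[:-1] - v)))
          return train[i + 1] + 0.01 * rng.standard_normal()   # model error

      v, preds = train[-1], []
      for _ in range(horizon):
          v = np.clip(one_step(v), lo, hi)         # clipping stabilizes iteration
          preds.append(v)
      print(np.round(preds[:5], 3))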

  9. A cross-validation of two differing measures of hypnotic depth.

    PubMed

    Pekala, Ronald J; Maurer, Ronald L

    2013-01-01

    Several sets of regression analyses were completed, attempting to predict 2 measures of hypnotic depth: the self-reported hypnotic depth score and hypnoidal state score from variables of the Phenomenology of Consciousness Inventory: Hypnotic Assessment Procedure (PCI-HAP). When attempting to predict self-reported hypnotic depth, an R of .78 with Study 1 participants shrank to an r of .72 with Study 2 participants, suggesting mild shrinkage for this more attributional measure of hypnotic depth. Attempting to predict hypnoidal state (an estimate of trance) using the same procedure, yielded an R of .56, that upon cross-validation shrank to an r of .48. These and other results suggest that, although there is some variance in common, the self-reported hypnotic depth score appears to be tapping a different construct from the hypnoidal state score.

  10. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.

  11. Cross-validation of the factor structure of the Aberrant Behavior Checklist for persons with mental retardation.

    PubMed

    Bihm, E M; Poindexter, A R

    1991-09-01

    The original factor structure of the Aberrant Behavior Checklist was cross-validated with an American sample of 470 persons with moderate to profound mental retardation, including nonambulatory individuals. The results of the factor analysis with varimax rotation essentially replicated previous findings, suggesting that the original five factors (Irritability, Lethargy, Stereotypic Behavior, Hyperactivity, and Inappropriate Speech) could be cross-validated by factor loadings of individual items. The original five scales continue to show high internal consistency. These factors are easily interpretable and should continue to provide valuable research and clinical information.

  12. Cross-validation and hypothesis testing in neuroimaging: An irenic comment on the exchange between Friston and Lindquist et al.

    PubMed

    Reiss, Philip T

    2015-08-01

    The "ten ironic rules for statistical reviewers" presented by Friston (2012) prompted a rebuttal by Lindquist et al. (2013), which was followed by a rejoinder by Friston (2013). A key issue left unresolved in this discussion is the use of cross-validation to test the significance of predictive analyses. This note discusses the role that cross-validation-based and related hypothesis tests have come to play in modern data analyses, in neuroimaging and other fields. It is shown that such tests need not be suboptimal and can fill otherwise-unmet inferential needs.

  13. Cross-validation of the PAI Negative Distortion Scale for feigned mental disorders: a research report.

    PubMed

    Rogers, Richard; Gillard, Nathan D; Wooley, Chelsea N; Kelsey, Katherine R

    2013-02-01

    A major strength of the Personality Assessment Inventory (PAI) is its systematic assessment of response styles, including feigned mental disorders. Recently, Mogge, Lepage, Bell, and Ragatz developed and provided the initial validation for the Negative Distortion Scale (NDS). Using rare symptoms as its detection strategy for feigning, the usefulness of NDS was examined via a known-groups comparison. The current study sought to cross-validate the NDS by implementing a between-subjects simulation design. Simulators were asked to feign total disability in an effort to secure unwarranted compensation from their insurance company. Even in an inpatient sample with severe Axis I disorders and concomitant impairment, the NDS proved effective as a rare-symptom strategy with low levels of item endorsement that remained mostly stable across genders. For construct validity, the NDS was moderately correlated with the Structured Interview of Reported Symptoms-Second Edition and other PAI feigning scales. For discriminant validity, it yielded a very large effect size (d = 1.81), surpassing the standard PAI feigning indicators. Utility estimates appeared to be promising for both ruling-out (low probability of feigning) and ruling-in (high probability of feigning) determinations at different base rates. Like earlier research, the data supported the creation of well-defined groups with indeterminate scores (i.e., the cut score ± 1 SEM) removed to avoid high rates of misclassifications for this narrow band.

  14. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    SciTech Connect

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards; New, Joshua Ryan

    2013-01-01

    Kernel methods have difficulty scaling to large modern data sets. The scalability issues stem from the computational and memory requirements of working with a large kernel matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the scalability of the solvers. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model and the overall computational complexity associated with tuning hyperparameters are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real-world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
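
    For contrast with the O(n log n) approximation described above, here is a minimal sketch of the exact l-fold cross-validation baseline for kernel ridge regression, which requires an O(n^3) solve per fold (synthetic data; this is not the authors' circulant-matrix method):

        import numpy as np

        def rbf_kernel(X, Z, gamma):
            d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def lfold_cv_krr(X, y, lam, gamma, l=10):
            """Exact l-fold CV mean squared error for kernel ridge regression."""
            n = len(y)
            folds = np.array_split(np.random.permutation(n), l)
            errs = []
            for te in folds:
                tr = np.setdiff1d(np.arange(n), te)
                K = rbf_kernel(X[tr], X[tr], gamma)
                alpha = np.linalg.solve(K + lam * np.eye(len(tr)), y[tr])  # the O(n^3) step
                pred = rbf_kernel(X[te], X[tr], gamma) @ alpha
                errs.append(np.mean((pred - y[te]) ** 2))
            return np.mean(errs)

        X = np.random.randn(200, 3)
        y = np.sin(X[:, 0]) + 0.1 * np.random.randn(200)
        print(lfold_cv_krr(X, y, lam=1e-2, gamma=0.5))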

  15. Sound quality indicators for urban places in Paris cross-validated by Milan data.

    PubMed

    Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre

    2015-10-01

    A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources, and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross-validation of the quality models extracted from the Paris data was carried out by conducting the same survey in Milan. The proposed general sound quality model is correlated with the actual perceived sound quality (72%). Another model, without visual amenity and familiarity, is 58% correlated with perceived sound quality. To improve the sound quality indicator, a site classification was performed with Kohonen's artificial neural network (self-organizing map) algorithm, and seven class-specific models were developed. These specific models attach more importance to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments as assessed by Italian participants.

  16. Enhancement of light propagation depth in skin: cross-validation of mathematical modeling methods.

    PubMed

    Kwon, Kiwoon; Son, Taeyoon; Lee, Kyoung-Joung; Jung, Byungjo

    2009-07-01

    Various techniques to enhance light propagation in skin have been studied in low-level laser therapy. In this study, three mathematical modeling methods for five selected techniques were implemented so that we could understand the mechanisms that enhance light propagation in skin. The five techniques included the increasing of the power and diameter of a laser beam, the application of a hyperosmotic chemical agent (HCA), and the whole and partial compression of the skin surface. The photon density profile of the five techniques was solved with three mathematical modeling methods: the finite element method (FEM), the Monte Carlo method (MCM), and the analytic solution method (ASM). We cross-validated the three mathematical modeling results by comparing photon density profiles and analyzing modeling error. The mathematical modeling results verified that the penetration depth of light can be enhanced if incident beam power and diameter, amount of HCA, or whole and partial skin compression is increased. In this study, light with wavelengths of 377 nm, 577 nm, and 633 nm was used.

  17. Automatic extraction of mutations from Medline and cross-validation with OMIM.

    PubMed

    Rebholz-Schuhmann, Dietrich; Marcel, Stephane; Albert, Sylvie; Tolle, Ralf; Casari, Georg; Kirsch, Harald

    2004-01-01

    Mutations help us to understand the molecular origins of diseases. Researchers, therefore, both publish and seek disease-relevant mutations in public databases and in the scientific literature, e.g. Medline. The retrieval tends to be time-consuming and incomplete. Automated screening of the literature is more efficient. We developed extraction methods (called MEMA) that scan Medline abstracts for mutations. MEMA identified 24,351 singleton mutations in conjunction with a HUGO gene name out of 16,728 abstracts. From a sample of 100 abstracts we estimated the recall for the identification of mutation-gene pairs at 35%, at a precision of 93%. Recall for mutation detection alone was >67%, with a precision of >96%. This shows that our system produces reliable data. The subset consisting of protein sequence mutations (PSMs) from MEMA was compared to the entries in OMIM (20,503 versus 6699 entries, respectively). We found 1826 PSM-gene pairs to be common to both datasets (cross-validated). This is 27% of all PSM-gene pairs in OMIM and 91% of those pairs from OMIM which co-occur in at least one Medline abstract. We conclude that Medline covers a large portion of the mutations known to OMIM. Another large portion could be artificially produced mutations from mutagenesis experiments. Access to the database of extracted mutation-gene pairs is available through the web pages of the EBI (refer to http://www.ebi.ac.uk/rebholz/index.html).
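
    The reported figures follow directly from counts of true and false positives; as a minimal worked example (illustrative counts only, chosen to reproduce roughly 35% recall at 93% precision):

        def precision_recall(tp, fp, fn):
            precision = tp / (tp + fp)
            recall = tp / (tp + fn)
            return precision, recall

        # E.g., 35 true mutation-gene pairs found out of 100, with about 3 false hits:
        print(precision_recall(tp=35, fp=3, fn=65))  # -> (approx. 0.92, 0.35)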

  18. Cross-validation of a Shortened Battery for the Assessment of Dysexecutive Disorders in Alzheimer Disease.

    PubMed

    Godefroy, Olivier; Martinaud, Olivier; Verny, Marc; Mosca, Chrystèle; Lenoir, Hermine; Bretault, Eric; Devendeville, Agnès; Diouf, Momar; Pere, Jean-Jacques; Bakchine, Serge; Delabrousse-Mayoux, Jean-Philippe; Roussel, Martine

    2016-01-01

    The frequency of executive disorders in mild-to-moderate Alzheimer disease (AD) has been demonstrated by the application of a comprehensive battery. The present study analyzed data from 2 recent multicenter studies based on the same executive battery. The objective was to derive a shortened battery by using the GREFEX population as a training dataset and by cross-validating the results in the REFLEX population. A total of 102 AD patients from the GREFEX study (MMSE=23.2±2.9) and 72 patients from the REFLEX study (MMSE=20.8±3.5) were included. Tests were selected and receiver operating characteristic curves were generated relative to the performance of 780 controls from the GREFEX study. Stepwise logistic regression identified 3 cognitive tests (Six Elements Task, categorical fluency, and Trail Making Test B errors) and behavioral disorders globally referred to as global hypoactivity (all P=0.0001). This shortened battery was as accurate as the entire GREFEX battery in diagnosing dysexecutive disorders in both the training group and the validation group. A bootstrap procedure confirmed the stability of the AUC. A shortened battery based on 3 cognitive tests and 3 behavioral domains provides high diagnostic accuracy for executive disorders in mild-to-moderate AD.
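
    A minimal sketch of the kind of analysis described, i.e. a logistic model over a few selected tests scored by ROC AUC (scikit-learn, synthetic data; the predictors are illustrative stand-ins, not the GREFEX variables):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 300
        X = rng.normal(size=(n, 4))  # stand-ins for 3 cognitive tests plus a behavioral rating
        y = (X @ np.array([1.2, 0.8, 0.9, 1.0]) + rng.normal(size=n) > 0).astype(int)

        model = LogisticRegression(max_iter=1000).fit(X, y)
        scores = model.predict_proba(X)[:, 1]
        print("AUC:", round(roc_auc_score(y, scores), 3))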

  19. Cross-Validation of the Factor Structure of the Aberrant Behavior Checklist for Persons with Mental Retardation.

    ERIC Educational Resources Information Center

    Bihm, Elson M.; Poindexter, Ann R.

    1991-01-01

    The original factor structure of the Aberrant Behavior Checklist was cross-validated with a U.S. sample of 470 persons with moderate to profound mental retardation (27 percent nonambulatory). Results replicated previous findings, suggesting that the original five factors (irritability, lethargy, stereotypic behavior, hyperactivity, and…

  20. How Nonrecidivism Affects Predictive Accuracy: Evidence from a Cross-Validation of the Ontario Domestic Assault Risk Assessment (ODARA)

    ERIC Educational Resources Information Center

    Hilton, N. Zoe; Harris, Grant T.

    2009-01-01

    Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…

  1. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.

    2008-01-01

    Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

  2. A Cross-Validation of Paulson's Discriminant Function-Derived Scales for Identifying "At Risk" Child-Abusive Parents.

    ERIC Educational Resources Information Center

    Beal, Don; And Others

    1984-01-01

    When the six scales were cross-validated on an independent sample from the population of child-abusing parents, significant shrinkage in the accuracy of prediction was found. The use of the special subscales for identifying "at risk" parents in prenatal clinics, pediatric clinics, and mental health centers as originally suggested by Paulson and…

  3. Estimating the Coefficient of Cross-validity in Multiple Regression: A Comparison of Analytical and Empirical Methods.

    ERIC Educational Resources Information Center

    Kromrey, Jeffrey D.; Hines, Constance V.

    1996-01-01

    The accuracy of three analytical formulas for shrinkage estimation and four empirical techniques was investigated in a Monte Carlo study of the coefficient of cross-validity in multiple regression. Substantial statistical bias was evident for all techniques except the formula of M. W. Browne (1975) and multicross-validation. (SLD)

  4. Cross-Validational Studies of the Personality Correlates of the A-B Therapist "Type" Distinction Among Professionals and Nonprofessionals

    ERIC Educational Resources Information Center

    Berzins, Juris I.; And Others

    1972-01-01

    Research with the "A-B therapist type" variable has included many analogue studies in which A and B undergraduates have been assumed to be personologically similar to A and B professionals. The present study cross-validated the personality correlates of A-B status across five new samples. (Author)

  5. Short Forms of the Wechsler Memory Scale--Revised: Cross-Validation and Derivation of a Two-Subtest Form.

    ERIC Educational Resources Information Center

    van den Broek, Anneke; Golden, Charles J.; Loonstra, Ann; Ghinglia, Katheryne; Goldstein, Diane

    1998-01-01

    Indicated excellent cross-validations with correlation of 0.99 for past formulas (J. L. Woodard and B. N. Axelrod, 1995; B. N. Axelrod et al, 1996) for estimating the Wechsler Memory Scale- Revised General Memory and Delayed Recall Indexes. Over 85% of the estimated scores were within 10 points of actual scores. Age, education, diagnosis, and IQ…

  6. Cross-validation of a composite pain scale for preschool children within 24 hours of surgery.

    PubMed

    Suraseranivongse, S; Santawat, U; Kraiprasit, K; Petcharatana, S; Prakkamodom, S; Muntraporn, N

    2001-09-01

    This study was designed to cross-validate a composite measure of the pain scales CHEOPS (Children's Hospital of Eastern Ontario Pain Scale), OPS (Objective Pain Scale, simplified for parent use by replacing blood pressure measurement with observation of body language or posture), TPPPS (Toddler Preschool Postoperative Pain Scale) and FLACC (Face, Legs, Activity, Cry, Consolability) in 167 Thai children aged 1-5.5 yr. The pain scales were translated and tested for content, construct and concurrent validity, including inter-rater and intra-rater reliabilities. Discriminative validity in immediate and persistent pain for the age groups < or =3 and >3 yr were also studied. The children's behaviour was videotaped before and after surgery, before analgesia had been given in the post-anaesthesia care unit (PACU), and on the ward. Four observers then rated pain behaviour from rearranged videotapes. The decision to treat pain was based on routine practice and was made by a researcher unaware of the rating procedure. All tools had acceptable content validity and excellent inter-rater and intra-rater reliabilities (intraclass correlation >0.9 and >0.8 respectively). Construct validity was determined by the ability to differentiate the group with no pain before surgery and a high pain level after surgery, before analgesia (P<0.001). The positive correlations among all scales in the PACU and on the ward (r=0.621-0.827, P<0.0001) supported concurrent validity. Use of the kappa statistic indicated that CHEOPS yielded the best agreement with the routine decision to treat pain. The younger and older age groups both yielded very good agreement in the PACU but only moderate agreement on the ward. On the basis of data from this study, we recommend CHEOPS as a valid, reliable and practical tool. PMID:11517123
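
    The agreement statistic used above (kappa) can be reproduced with standard tools; a minimal sketch on made-up binary decisions (1 = treat pain, 0 = do not treat), not the study's data:

        from sklearn.metrics import cohen_kappa_score

        cheops_decision = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
        routine_decision = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
        print(cohen_kappa_score(cheops_decision, routine_decision))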

  7. Airborne environmental endotoxin: a cross-validation of sampling and analysis techniques.

    PubMed Central

    Walters, M; Milton, D; Larsson, L; Ford, T

    1994-01-01

    A standard method for measurement of airborne environmental endotoxin was developed and field tested in a fiberglass insulation-manufacturing facility. This method involved sampling with a capillary-pore membrane filter, extraction in buffer using a sonication bath, and analysis by the kinetic-Limulus assay with resistant-parallel-line estimation (KLARE). Cross-validation of the extraction and assay method was performed by comparison with methanolysis of samples followed by 3-hydroxy fatty acid (3-OHFA) analysis by gas chromatography-mass spectrometry. Direct methanolysis of filter samples and methanolysis of buffer extracts of the filters yielded similar 3-OHFA content (P = 0.72); the average difference was 2.1%. Analysis of buffer extracts for endotoxin content by the KLARE method and by gas chromatography-mass spectrometry for 3-OHFA content produced similar results (P = 0.23); the average difference was 0.88%. The source of endotoxin was gram-negative bacteria growing in recycled washwater used to clean the insulation-manufacturing equipment. The endotoxin and bacteria become airborne during spray-cleaning operations. The types of 3-OHFAs in bacteria cultured from the washwater, present in the washwater, and in the air were similar. Virtually all of the bacteria cultured from air and water were gram negative, composed mostly of two species: Deleya aesta and Acinetobacter johnsonii. Airborne countable bacteria correlated well with endotoxin (r2 = 0.64). Replicate sampling showed that results with the standard sampling, extraction, and Limulus assay by the KLARE method were highly reproducible (95% confidence interval for endotoxin measurement ±0.28 log10). These results demonstrate the accuracy, precision, and sensitivity of the standard procedure proposed for airborne environmental endotoxin. PMID:8161191

  8. Long-term Cross-validation of Everolimus Therapeutic Drug Monitoring Assays: The Zortracker Study

    PubMed Central

    Schniedewind, B; Niederlechner, S; Galinkin, JL; Johnson-Davis, KL; Christians, U; Meyer, EJ

    2015-01-01

    Background This ongoing academic collaboration was initiated to provide support for setting up, validating, and maintaining everolimus therapeutic drug monitoring (TDM) assays and to study long-term inter-laboratory performance. Methods This study was based on EDTA whole blood samples collected from transplant patients treated with everolimus in a prospective clinical trial. Samples were handled under controlled conditions during collection and storage, and were shipped on dry ice to minimize freeze-thaw cycles. For more than 1.5 years, participating laboratories received a set of 3 blinded samples on a monthly basis. Among others, these samples included individual patient samples, patient sample pools to assess long-term performance, and patient sample pools enriched with isolated everolimus metabolites. Results The results between LC-MS/MS and the everolimus Quantitative Microsphere System (QMS, Thermo Fisher) assay were comparable. The monthly inter-laboratory variability (CV%) for cross-validation samples ranged from 6.5-23.2% (average 14.8%) for LC-MS/MS and 4.2-26.4% (average 11.1%) for laboratories using the QMS assay. A blinded long-term pool sample was sent to the laboratories for 13 months. The result was 5.31 ± 0.86 ng/mL (range 2.9-7.8 ng/mL) for the LC-MS/MS and 5.20 ± 0.54 ng/mL (range 4.0-6.8 ng/mL) for the QMS laboratories. Conclusions Enrichment of patient sample pools with 5-25 ng/mL of purified everolimus metabolites (46-hydroxy everolimus and 39-O-desmethyl everolimus) did not affect the results of either LC-MS/MS or QMS assays. Both LC-MS/MS and QMS assays gave similar results and showed similar performance, albeit with a trend towards higher inter-laboratory variability among laboratories using LC-MS/MS than the QMS assay. PMID:25970506

  9. Cross-validation of species distribution models: removing spatial sorting bias and calibration with a null model.

    PubMed

    Hijmans, Robert J

    2012-03-01

    Species distribution models are usually evaluated with cross-validation. In this procedure evaluation statistics are computed from model predictions for sites of presence and absence that were not used to train (fit) the model. Using data for 226 species, from six regions, and two species distribution modeling algorithms (Bioclim and MaxEnt), I show that this procedure is highly sensitive to "spatial sorting bias": the difference between the geographic distance from testing-presence to training-presence sites and the geographic distance from testing-absence (or testing-background) to training-presence sites. I propose the use of pairwise distance sampling to remove this bias, and the use of a null model that only considers the geographic distance to training sites to calibrate cross-validation results for remaining bias. Model evaluation results (AUC) were strongly inflated: the null model performed better than MaxEnt for 45% and better than Bioclim for 67% of the species. Spatial sorting bias and area under the receiver-operator curve (AUC) values increased when using partitioned presence data and random-absence data instead of independently obtained presence-absence testing data from systematic surveys. Pairwise distance sampling removed spatial sorting bias, yielding null models with an AUC close to 0.5, such that AUC was the same as null model calibrated AUC (cAUC). This adjustment strongly decreased AUC values and changed the ranking among species. Cross-validation results for different species are only comparable after removal of spatial sorting bias and/or calibration with an appropriate null model.
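
    A minimal sketch of the spatial sorting bias measure described above, computed as the ratio of mean nearest-neighbor distances (synthetic coordinates; a value well below 1 indicates bias):

        import numpy as np

        def mean_nearest_distance(test_pts, train_pts):
            """Mean distance from each test point to its nearest training-presence point."""
            d = np.linalg.norm(test_pts[:, None, :] - train_pts[None, :, :], axis=-1)
            return d.min(axis=1).mean()

        rng = np.random.default_rng(0)
        train_presence = rng.uniform(0, 100, size=(50, 2))
        test_presence = train_presence[:20] + rng.normal(0, 2, size=(20, 2))  # clustered nearby
        test_absence = rng.uniform(0, 100, size=(20, 2))  # scattered background sites

        dp = mean_nearest_distance(test_presence, train_presence)
        da = mean_nearest_distance(test_absence, train_presence)
        print("SSB (dp/da):", round(dp / da, 2))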

  10. HR-MAS NMR Tissue Metabolomic Signatures Cross-Validated by Mass Spectrometry Distinguish Bladder Cancer from Benign Disease

    PubMed Central

    Tripathi, Pratima; Somashekar, Bagganahalli S; Ponnusamy, M.; Gursky, Amy; Dailey, Stephen; Kunju, Priya; Lee, Cheryl T.; Chinnaiyan, Arul M.; Rajendiran, Thekkelnaycke M.; Ramamoorthy, Ayyalusamy

    2013-01-01

    Effective diagnosis and surveillance of bladder cancer (BCa) is currently challenged by detection methods with poor sensitivity, particularly for low-grade tumors, resulting in unnecessary invasive procedures and economic burden. We performed HR-MAS NMR-based global metabolomic profiling, applied unsupervised principal component analysis (PCA) and hierarchical clustering to the NMR dataset of bladder-derived tissues, and identified metabolic signatures that differentiate BCa from benign disease. A partial least-squares discriminant analysis (PLS-DA) model (leave-one-out cross-validation) was used as a diagnostic model to distinguish benign and BCa tissues. Receiver operating characteristic curves generated either from the PC1 loadings of the PCA or from the predicted Y-values resulted in an area under the curve of 0.97. Relative quantification of more than fifteen tissue metabolites derived from HR-MAS NMR showed significant differences (P < 0.001) between benign and BCa samples. Notably, striking metabolic signatures were observed even for early stage BCa tissues (Ta-T1), demonstrating the sensitivity of the approach in detecting BCa. With the goal of cross-validating the metabolic signatures derived from HR-MAS NMR, we used the same tissue samples to analyze eight metabolites through gas chromatography-mass spectrometry (GC-MS)-targeted analysis, which complements the HR-MAS NMR-derived metabolomic information. Cross-validation through GC-MS clearly demonstrates the utility of this straightforward, non-destructive, and rapid HR-MAS NMR technique for clinical diagnosis of BCa with even greater sensitivity. In addition to its utility as a diagnostic tool, these studies will lead to a better understanding of aberrant metabolic pathways in cancer as well as the design and implementation of personalized cancer therapy through metabolic modulation. PMID:23731241
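
    A minimal sketch of a PLS-DA model evaluated with leave-one-out cross-validation (scikit-learn's PLSRegression serving as the usual stand-in for PLS-DA; synthetic metabolite data, not the study's):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import LeaveOneOut
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 15))          # 15 metabolite intensities per tissue
        y = np.r_[np.zeros(20), np.ones(20)]   # 0 = benign, 1 = BCa (illustrative labels)
        X[y == 1] += 0.8                       # inject class separation

        preds = np.empty_like(y)
        for tr, te in LeaveOneOut().split(X):
            pls = PLSRegression(n_components=2).fit(X[tr], y[tr])
            preds[te] = pls.predict(X[te]).ravel()
        print("LOO-CV AUC:", round(roc_auc_score(y, preds), 2))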

  11. Development and cross-validation of prediction equations for estimating resting energy expenditure in severely obese Caucasian children and adolescents.

    PubMed

    Lazzer, Stefano; Agosti, Fiorenza; De Col, Alessandra; Sartorio, Alessandro

    2006-11-01

    The objectives of the present study were to develop and cross-validate new equations for predicting resting energy expenditure (REE) in severely obese children and adolescents, and to determine the accuracy of the new equations using the Bland-Altman method. The subjects of the study were 574 obese Caucasian children and adolescents (mean BMI z-score 3.3). REE was determined by indirect calorimetry and body composition by bioelectrical impedance analysis. Equations were derived by stepwise multiple regression analysis using a calibration cohort of 287 subjects and were cross-validated in the remaining 287 subjects. Two new specific equations based on anthropometric parameters were generated: (1) REE=(Sex x 892.68)-(Age x 115.93)+(Weight x 54.96)+(Stature x 1816.23)+1484.50 (R(2) 0.66; se 1028.97 kJ); (2) REE=(Sex x 909.12)-(Age x 107.48)+(fat-free mass x 68.39)+(fat mass x 55.19)+3631.23 (R(2) 0.66; se 1034.28 kJ). In the cross-validation group, mean predicted REE values were not significantly different from mean measured REE for all children and adolescents, as well as for boys and for girls (difference <2%), and the limits of agreement (±2 sd) were +2.06 and -1.77 MJ/d (NS). The new prediction equations allow an accurate estimation of REE in groups of severely obese children and adolescents. These equations might be useful for health care professionals and researchers when estimating REE in severely obese children and adolescents. PMID:17092390
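
    Equation (1) can be applied directly; the sketch below uses the published coefficients, while the coding of sex and the units are assumptions not stated in the abstract (sex = 1 for boys, 0 for girls; age in years; weight in kg; stature in metres; REE in kJ/day):

        def ree_equation_1(sex, age, weight_kg, stature_m):
            """Equation (1); coefficient values taken from the abstract."""
            return (sex * 892.68) - (age * 115.93) + (weight_kg * 54.96) \
                   + (stature_m * 1816.23) + 1484.50

        print(ree_equation_1(sex=1, age=12, weight_kg=75, stature_m=1.55))  # kJ/day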

  12. Rule-Out and Rule-In scales for the M test for malingering: a cross-validation.

    PubMed

    Smith, G P; Borum, R; Schinka, J A

    1993-01-01

    Previous research found the M test to have limited utility for the screening of malingering. Subsequently, Rogers et al. attempted to improve the test's discriminative ability by developing an alternative scoring procedure: the Rule-In and Rule-Out scales. These scales showed promising results as a brief screener for malingering, with hit rates as high as 95 percent. The present study cross-validated their proposed decision rules but found lower rates of classification accuracy. The most conservative decision rule (i.e., to maximize detection of malingerers) identified only 72.7 percent of the malingerers, with a false positive rate of 50.8 percent.

  13. Bayesian cross-validation for model evaluation and selection, with application to the North American Breeding Bird Survey

    USGS Publications Warehouse

    Link, William; Sauer, John

    2016-01-01

    The analysis of ecological data has changed in two important ways over the last fifteen years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion (BPIC) and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion (WAIC). We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using 3 large data sets from the North American Breeding Bird Survey.
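
    For reference, WAIC can be computed directly from a matrix of pointwise posterior log-likelihoods; a minimal sketch of the standard formula (illustrative draws, not Breeding Bird Survey data):

        import numpy as np

        def waic(log_lik):
            """WAIC from an (S draws x n observations) matrix of log-likelihoods."""
            lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))  # log pointwise predictive density
            p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))         # effective number of parameters
            return -2 * (lppd - p_waic)

        rng = np.random.default_rng(0)
        log_lik = -0.5 * rng.normal(1.0, 0.1, size=(1000, 50)) ** 2  # illustrative posterior draws
        print(waic(log_lik))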

  14. Cross-validation of the reduced form of the Food Craving Questionnaire-Trait using confirmatory factor analysis

    PubMed Central

    Iani, Luca; Barbaranelli, Claudio; Lombardo, Caterina

    2015-01-01

    Objective: The Food Craving Questionnaire-Trait (FCQ-T) is commonly used to assess habitual food cravings among individuals. Previous studies have shown that a brief version of this instrument (FCQ-T-r) has good reliability and validity. This article is the first to use confirmatory factor analysis to examine the psychometric properties of the FCQ-T-r in a cross-validation study. Method: Habitual food cravings, as well as emotion regulation strategies, affective states, and disordered eating behaviors, were investigated in two independent samples of non-clinical adult volunteers (Sample 1: N = 368; Sample 2: N = 246). Confirmatory factor analyses were conducted to simultaneously test model fit statistics and the dimensionality of the instrument. FCQ-T-r reliability was assessed by computing the composite reliability coefficient. Results: The analysis supported the unidimensional structure of the scale, and fit indices were acceptable for both samples. The FCQ-T-r showed excellent reliability and moderate to high correlations with negative affect and disordered eating. Conclusion: Our results indicate that FCQ-T-r scores can be reliably used to assess habitual cravings in an Italian non-clinical sample of adults. The robustness of these results was tested by cross-validating the model using two independent samples. Further research is required to expand on these findings, particularly in children and adolescents. PMID:25918510

  15. Bayesian cross-validation for model evaluation and selection, with application to the North American Breeding Bird Survey

    USGS Publications Warehouse

    Link, William; Sauer, John R.

    2016-01-01

    The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion. We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.

  16. Radioactive quality evaluation and cross validation of data from the HJ-1A/B satellites' CCD sensors.

    PubMed

    Zhang, Xin; Zhao, Xiang; Liu, Guodong; Kang, Qian; Wu, Donghai

    2013-01-01

    Data from multiple sensors are frequently used in Earth science to gain a more complete understanding of spatial information changes. Higher quality and mutual consistency are prerequisites when multiple sensors are used jointly. The HJ-1A/B satellites were successfully launched on 6 September 2008. There are four charge-coupled device (CCD) sensors with uniform spatial resolution and spectral range onboard the HJ-1A/B satellites. Whether these data are mutually consistent is a major issue to resolve before they are used. This research aims to evaluate the data consistency and radioactive quality of the four CCDs. First, images of urban, desert, lake, and ocean scenes are chosen as the objects of evaluation. Second, objective evaluation variables, such as mean, variance, and angular second moment, are used to characterize image performance. Finally, a cross-validation method is used to assess the correlation of the data from the four HJ-1A/B CCDs with data gathered from the moderate resolution imaging spectro-radiometer (MODIS). The results show that the image quality of the HJ-1A/B CCDs is stable and that the digital number distribution of the CCD data is relatively low. In the cross-validation with MODIS, the root mean square errors of bands 1, 2 and 3 range from 0.055 to 0.065, and for band 4 it is 0.101. The data from the HJ-1A/B CCDs show good mutual consistency.

  17. Specific binding of gibberellic acid by cytokinin-specific binding proteins: a new aspect of plant hormone-binding proteins with the PR-10 fold.

    PubMed

    Ruszkowski, Milosz; Sliwiak, Joanna; Ciesielska, Agnieszka; Barciszewski, Jakub; Sikorski, Michal; Jaskolski, Mariusz

    2014-07-01

    Pathogenesis-related proteins of class 10 (PR-10) are a family of plant proteins sharing the same fold, characterized by a large hydrophobic cavity that allows them to bind various ligands, such as phytohormones. A subfamily with only ~20% sequence identity but a conserved canonical PR-10 fold has previously been recognized as Cytokinin-Specific Binding Proteins (CSBPs), although structurally the binding mode of trans-zeatin (a cytokinin phytohormone) was found to be quite diverse. Here, it is shown that two CSBP orthologues from Medicago truncatula and Vigna radiata bind gibberellic acid (GA3), an entirely different phytohormone, in a conserved and highly specific manner. In both cases a single GA3 molecule is found in the internal cavity of the protein. The structural data derived from high-resolution crystal structures are corroborated by isothermal titration calorimetry (ITC), which reveals a much stronger interaction with GA3 than with trans-zeatin, and a pH-dependent binding profile. In conclusion, it is postulated that the CSBP subfamily of plant PR-10 proteins should more properly be linked with general phytohormone-binding properties and termed phytohormone-binding proteins (PhBP).

  18. The copper-mobilizing-potential of dissolved organic matter in soils varies 10-fold depending on soil incubation and extraction procedures.

    PubMed

    Amery, Fien; Degryse, Fien; Degeling, Wim; Smolders, Erik; Merckx, Roel

    2007-04-01

    Copper is mobilized in soil by dissolved organic matter (DOM) but the role of DOM quality in this process is unclear. A one-step resin-exchange method was developed to measure the Cu-Mobilizing-Potential (CuMP) of DOM at pCu 11.3 and pH 7.0, representing background values. The CuMP of DOM was measured in soil solutions of 13 uncontaminated soils with different DOM extraction methods. The CuMP, expressed per unit dissolved organic carbon (DOC), varied 10-fold and followed the order water extracts > 0.01 M CaCl2 extracts > pore water. Soil solutions, obtained from soils that were stored air-dry for a long time or were subjected to drying-wetting cycles, had elevated DOC concentration, but the DOM had a low CuMP. Prolonged soil incubations decreased the DOC concentration and increased the CuMP, suggesting that most of the initially elevated DOM is less humified and has lower Cu affinity than DOM remaining after incubation. A significant positive correlation between the specific UV-absorption of DOM (indicating aromaticity) and CuMP was found for all DOM samples (R(2) = 0.58). It is concluded that the DOC concentration in soil is an insufficient predictor for the Cu mobilization and that DOM samples isolated from air-dried soils are distinct from those of soils kept moist. PMID:17438775

  19. Validating clinical terminology structures: integration and cross-validation of Read Thesaurus and GALEN.

    PubMed Central

    Rogers, J. E.; Price, C.; Rector, A. L.; Solomon, W. D.; Smejko, N.

    1998-01-01

    A European pre-standard and an intermediate representation facilitated exchange of two independently authored compositional knowledge bases: one formal and automatically classified, the other manually classified. The exchange highlights different strengths and weaknesses in each approach, and offers a mechanism for partial, mutual quality assurance. PMID:9929338

  20. Bayes and Empirical Bayes Shrinkage Estimation of Regression Coefficients: A Cross-Validation Study.

    ERIC Educational Resources Information Center

    Nebebe, Fassil; Stroud, T. W. F.

    1988-01-01

    Bayesian and empirical Bayes approaches to shrinkage estimation of regression coefficients and uses of these in prediction (i.e., analyzing intelligence test data of children with learning problems) are investigated. The two methods are consistently better at predicting response variables than are either least squares or least absolute deviations.…

  1. The joint WAIS-III and WMS-III factor structure: development and cross-validation of a six-factor model of cognitive functioning.

    PubMed

    Tulsky, David S; Price, Larry R

    2003-06-01

    During the standardization of the Wechsler Adult Intelligence Scale (3rd ed.; WAIS-III) and the Wechsler Memory Scale (3rd ed.; WMS-III) the participants in the normative study completed both scales. This "co-norming" methodology set the stage for full integration of the 2 tests and the development of an expanded structure of cognitive functioning. Until now, however, the WAIS-III and WMS-III had not been examined together in a factor analytic study. This article presents a series of confirmatory factor analyses to determine the joint WAIS-III and WMS-III factor structure. Using a structural equation modeling approach, a 6-factor model that included verbal, perceptual, processing speed, working memory, auditory memory, and visual memory constructs provided the best model fit to the data. Allowing select subtests to load simultaneously on 2 factors improved model fit and indicated that some subtests are multifaceted. The results were then replicated in a large cross-validation sample (N = 858).

  2. Evidence cross-validation and Bayesian inference of MAST plasma equilibria

    NASA Astrophysics Data System (ADS)

    von Nessi, G. T.; Hole, M. J.; Svensson, J.; Appel, L.

    2012-01-01

    In this paper, current profiles for plasma discharges on the Mega-Ampere Spherical Tokamak (MAST) are directly calculated from pickup coil, flux loop, and motional Stark effect observations via methods based on the statistical theory of Bayesian analysis. By representing the toroidal plasma current as a series of axisymmetric current beams with rectangular cross-section and inferring the current in each of these beams, flux-surface geometry and q-profiles are subsequently calculated by elementary application of the Biot-Savart law. The use of this plasma model in the context of Bayesian analysis was pioneered by Svensson and Werner on the Joint European Torus [Svensson and Werner, Plasma Phys. Controlled Fusion 50(8), 085002 (2008)]. In this framework, linear forward models are used to generate diagnostic predictions, and the probability distribution for the currents in the collection of plasma beams is subsequently calculated directly via application of Bayes' formula. In this work, we introduce a new diagnostic technique to identify and remove outlier observations associated with diagnostics falling out of calibration or suffering from an unidentified malfunction. These modifications enable good agreement between the Bayesian inference of the last-closed flux-surface and other corroborating data, such as that from force balance considerations using EFIT++ [Appel et al., "A unified approach to equilibrium reconstruction," Proceedings of the 33rd EPS Conference on Plasma Physics (Rome, Italy, 2006)]. In addition, this analysis also yields errors on the plasma current profile and flux-surface geometry, as well as directly predicting the Shafranov shift of the plasma core.

  3. An improved systematic approach to predicting transcription factor target genes using support vector machine.

    PubMed

    Cui, Song; Youn, Eunseog; Lee, Joohyun; Maas, Stephan J

    2014-01-01

    Biological prediction of transcription factor binding sites and their corresponding transcription factor target genes (TFTGs) makes a great contribution to understanding gene regulatory networks. However, these approaches are based on laborious and time-consuming biological experiments. Numerous computational approaches have shown great potential to circumvent laborious biological methods. However, the majority of these algorithms provide limited performance and fail to consider the structural properties of the datasets. We propose a refined systematic computational approach for predicting TFTGs. Based on previous work on identifying auxin response factor target genes from Arabidopsis thaliana co-expression data, we adopted a novel reverse-complementary distance-sensitive n-gram profile algorithm. This algorithm converts each upstream sub-sequence into a high-dimensional vector data point and transforms the prediction task into a classification problem using a support vector machine-based classifier. Our approach showed significant improvement compared to other computational methods based on the area under the receiver operating characteristic curve using 10-fold cross-validation. In addition, in light of the highly skewed structure of the dataset, we also evaluated other metrics and their associated curves, such as precision-recall curves and cost curves, which provided highly satisfactory results.
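
    A minimal sketch of the evaluation loop described above, i.e. an SVM scored by ROC AUC under 10-fold cross-validation (scikit-learn; random vectors stand in for the n-gram profile features):

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 64))   # stand-ins for n-gram profile vectors
        y = rng.integers(0, 2, size=200)
        X[y == 1, :8] += 0.7             # weak signal in a few dimensions

        auc = cross_val_score(SVC(kernel="rbf"), X, y, cv=10, scoring="roc_auc")
        print("10-fold CV AUC: %.2f +/- %.2f" % (auc.mean(), auc.std()))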

  4. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors (aligned with the EURO-CORDEX experiment), and 3) pseudo-reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated, and publicly distributed through the VALUE validation portal, allowing for a comprehensive, community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a Europe-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contributions to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data
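
    The fold layout of Experiment 1 (consecutive 6-year blocks over 1979-2008) can be reproduced with an unshuffled K-fold split; a minimal sketch:

        import numpy as np
        from sklearn.model_selection import KFold

        years = np.arange(1979, 2009)  # 30 years -> five consecutive 6-year folds
        for k, (train, test) in enumerate(KFold(n_splits=5, shuffle=False).split(years)):
            print("fold %d: held-out years %d-%d" % (k + 1, years[test][0], years[test][-1]))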

  5. Cross-Validation of a Recently Published Equation Predicting Energy Expenditure to Run or Walk a Mile in Normal-Weight and Overweight Adults

    ERIC Educational Resources Information Center

    Morris, Cody E.; Owens, Scott G.; Waddell, Dwight E.; Bass, Martha A.; Bentley, John P.; Loftin, Mark

    2014-01-01

    An equation published by Loftin, Waddell, Robinson, and Owens (2010) was cross-validated using ten normal-weight walkers, ten overweight walkers, and ten distance runners. Energy expenditure was measured at preferred walking (normal-weight walker and overweight walkers) or running pace (distance runners) for 5 min and corrected to a mile. Energy…

  6. Cross-validation of generalised body composition equations with diverse young men and women: the Training Intervention and Genetics of Exercise Response (TIGER) Study

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Generalised skinfold equations developed in the 1970s are commonly used to estimate laboratory-measured percentage fat (BF%). The equations were developed on predominately white individuals using Siri's two-component percentage fat equation (BF%-GEN). We cross-validated the Jackson-Pollock (JP) gene...

  7. Accuracy of Population Validity and Cross-Validity Estimation: An Empirical Comparison of Formula-Based, Traditional Empirical, and Equal Weights Procedures.

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.

    1999-01-01

    Performed an empirical Monte Carlo study using predictor and criterion data from 84,808 U.S. Air Force enlistees. Compared formula-based, traditional empirical, and equal-weights procedures. Discusses issues for basic research on validation and cross-validation. (SLD)

  8. Predicting Chinese Children and Youth's Energy Expenditure Using ActiGraph Accelerometers: A Calibration and Cross-Validation Study

    ERIC Educational Resources Information Center

    Zhu, Zheng; Chen, Peijie; Zhuang, Jie

    2013-01-01

    Purpose: The purpose of this study was to develop and cross-validate an equation based on ActiGraph accelerometer GT3X output to predict children and youth's energy expenditure (EE) of physical activity (PA). Method: Participants were 367 Chinese children and youth (179 boys and 188 girls, aged 9 to 17 years old) who wore 1 ActiGraph GT3X…

  9. Quantification of rainfall prediction uncertainties using a cross-validation based technique. Methodology description and experimental validation.

    NASA Astrophysics Data System (ADS)

    Fraga, Ignacio; Cea, Luis; Puertas, Jerónimo; Salsón, Santiago; Petazzi, Alberto

    2016-04-01

    In this paper we present a new methodology to compute rainfall fields, including the quantification of prediction uncertainties, using raingauge network data. The proposed methodology comprises two steps. First, the ordinary kriging technique is used to determine the estimated rainfall depth at every point of the study area. Then multiple equiprobable error fields, which comprise both interpolation and measurement uncertainties, are added to the kriged field, resulting in multiple rainfall predictions. To compute these error fields, the standard deviation of the kriging estimation is first determined following the cross-validation based procedure described in Delrieu et al. (2014). Then, the standard deviation field is sampled using non-conditioned Gaussian random fields. The proposed methodology was applied to study 7 rain events in a 60x60 km area of the west coast of Galicia, in the northwest of Spain. Due to its location at the junction between tropical and polar regions, the study area suffers from frequent intense rainfalls characterized by great variability in both space and time. Rainfall data from the tipping bucket raingauge network operated by MeteoGalicia were used to estimate the rainfall fields using the proposed methodology. The obtained predictions were then validated using rainfall data from 3 additional rain gauges installed within the CAPRI project (Probabilistic flood prediction with high resolution hydrologic models from radar rainfall estimates, funded by the Spanish Ministry of Economy and Competitiveness, reference CGL2013-46245-R). Results show that both the mean hyetographs and the peak intensities are correctly predicted. The computed hyetographs present a good fit to the experimental data and most of the measured values fall within the 95% confidence intervals. Also, most of the experimental values outside the confidence bounds correspond to time periods of low rainfall depths, where the inaccuracy of the measuring devices
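
    A minimal sketch of the two-step idea (a kriged field plus equiprobable error fields); here Gaussian-filtered white noise is a simple stand-in for proper non-conditioned Gaussian random field simulation, and all fields are synthetic:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        krig = rng.gamma(2.0, 2.0, size=(60, 60))  # stand-in kriged rainfall field (mm)
        sigma = 0.5 + 0.2 * rng.random((60, 60))   # stand-in kriging standard deviation field

        fields = []
        for _ in range(100):
            noise = gaussian_filter(rng.normal(size=(60, 60)), sigma=5)  # spatially correlated
            noise /= noise.std()                                         # rescale to unit variance
            fields.append(np.clip(krig + sigma * noise, 0, None))        # one equiprobable field
        low, high = np.percentile(fields, [2.5, 97.5], axis=0)           # 95% confidence bounds
        print(low.mean(), high.mean())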

  10. A semi-mechanism approach based on MRI and proteomics for prediction of conversion from mild cognitive impairment to Alzheimer’s disease

    PubMed Central

    Liu, Haochen; Zhou, Xiaoting; Jiang, Hao; He, Hua; Liu, Xiaoquan; Weiner, Michael W.; Aisen, Paul; Petersen, Ronald; Jack, Clifford R.; Jagust, William; Trojanowki, John Q.; Toga, Arthur W.; Beckett, Laurel; Green, Robert C.; Saykin, Andrew J.; Morris, John; Shaw, Leslie M.; Khachaturian, Zaven; Sorensen, Greg; Carrillo, Maria; Kuller, Lew; Raichle, Marc; Paul, Steven; Davies, Peter; Fillit, Howard; Hefti, Franz; Holtzman, Davie; Mesulam, M. Marcel; Potter, William; Snyder, Peter; Montine, Tom; Thomas, Ronald G.; Donohue, Michael; Walter, Sarah; Sather, Tamie; Jiminez, Gus; Balasubramanian, Archana B.; Mason, Jennifer; Sim, Iris; Harvey, Danielle; Bernstein, Matthew; Fox, Nick; Thompson, Paul; Schuff, Norbert; DeCArli, Charles; Borowski, Bret; Gunter, Jeff; Senjem, Matt; Vemuri, Prashanthi; Jones, David; Kantarci, Kejal; Ward, Chad; Koeppe, Robert A.; Foster, Norm; Reiman, Eric M.; Chen, Kewei; Mathis, Chet; Landau, Susan; Cairns, Nigel J.; Householder, Erin; Taylor-Reinwald, Lisa; Lee, Virginia; Korecka, Magdalena; Figurski, Michal; Crawford, Karen; Neu, Scott; Foroud, Tatiana M.; Potkin, Steven; Shen, Li; Faber, Kelley; Kim, Sungeun; Nho, Kwangsik; Thal, Lean; Frank, Richard; Hsiao, John; Kaye, Jeffrey; Quinn, Joseph; Silbert, Lisa; Lind, Betty; Carter, Raina; Dolen, Sara; Ances, Beau; Carroll, Maria; Creech, Mary L.; Franklin, Erin; Mintun, Mark A.; Schneider, Stacy; Oliver, Angela; Schneider, Lon S.; Pawluczyk, Sonia; Beccera, Mauricio; Teodoro, Liberty; Spann, Bryan M.; Brewer, James; Vanderswag, Helen; Fleisher, Adam; Marson, Daniel; Griffith, Randall; Clark, David; Geldmacher, David; Brockington, John; Roberson, Erik; Love, Marissa Natelson; Heidebrink, Judith L.; Lord, Joanne L.; Mason, Sara S.; Albers, Colleen S.; Knopman, David; Johnson, Kris; Grossman, Hillel; Mitsis, Effie; Shah, Raj C.; deToledo-Morrell, Leyla; Doody, Rachelle S.; Villanueva-Meyer, Javier; Chowdhury, Munir; Rountree, Susan; Dang, Mimi; Duara, Ranjan; Varon, Daniel; Greig, Maria T.; Roberts, Peggy; Stern, Yaakov; Honig, Lawrence S.; Bell, Karen L.; Albert, Marilyn; Onyike, Chiadi; D’Agostino II, Daniel; Kielb, Stephanie; Galvin, James E.; Cerbone, Brittany; Michel, Christina A.; Pogorelec, Dana M.; Rusinek, Henry; de Leon, Mony J.; Glodzik, Lidia; De Santi, Susan; Womack, Kyle; Mathews, Dana; Quiceno, Mary; Doraiswamy, P. Murali; Petrella, Jeffrey R.; Borges-Neto, Salvador; Wong, Terence Z.; Coleman, Edward; Levey, Allan I.; Lah, James J.; Cella, Janet S.; Burns, Jeffrey M.; Swerdlow, Russell H.; Brooks, William M.; Arnold, Steven E.; Karlawish, Jason H.; Wolk, David; Clark, Christopher M.; Apostolova, Liana; Tingus, Kathleen; Woo, Ellen; Silverman, Daniel H.S.; Lu, Po H.; Bartzokis, George; Smith, Charles D.; Jicha, Greg; Hardy, Peter; Sinha, Partha; Oates, Elizabeth; Conrad, Gary; Graff-Radford, Neill R; Parfitt, Francine; Kendall, Tracy; Johnson, Heather; Lopez, Oscar L.; Oakley, MaryAnn; Simpson, Donna M.; Farlow, Martin R.; Hake, Ann Marie; Matthews, Brandy R.; Brosch, Jared R.; Herring, Scott; Hunt, Cynthia; Porsteinsson, Anton P.; Goldstein, Bonnie S.; Martin, Kim; Makino, Kelly M.; Ismail, M. 
Saleem; Brand, Connie; Mulnard, Ruth A.; Thai, Gaby; Mc-Adams-Ortiz, Catherine; van Dyck, Christopher H.; Carson, Richard E.; MacAvoy, Martha G.; Varma, Pradeep; Chertkow, Howard; Bergman, Howard; Hosein, Chris; Black, Sandra; Stefanovic, Bojana; Caldwell, Curtis; Robin Hsiung, Ging-Yuek; Feldman, Howard; Mudge, Benita; Assaly, Michele; Finger, Elizabeth; Pasternack, Stephen; Rachisky, Irina; Trost, Dick; Kertesz, Andrew; Bernick, Charles; Munic, Donna; Lipowski, Kristine; Weintraub, MASandra; Bonakdarpour, Borna; Kerwin, Diana; Wu, Chuang-Kuo; Johnson, Nancy; Sadowsky, Carl; Villena, Teresa; Turner, Raymond Scott; Johnson, Kathleen; Reynolds, Brigid; Sperling, Reisa A.; Johnson, Keith A.; Marshall, Gad; Yesavage, Jerome; Taylor, Joy L.; Lane, Barton; Rosen, Allyson; Tinklenberg, Jared; Sabbagh, Marwan N.; Belden, Christine M.; Jacobson, Sandra A.; Sirrel, Sherye A.; Kowall, Neil; Killiany, Ronald; Budson, Andrew E.; Norbash, Alexander; Johnson, Patricia Lynn; Obisesan, Thomas O.; Wolday, Saba; Allard, Joanne; Lerner, Alan; Ogrocki, Paula; Tatsuoka, Curtis; Fatica, Parianne; Fletcher, Evan; Maillard, Pauline; Olichney, John; Carmichael, Owen; Kittur, Smita; Borrie, Michael; Lee, T-Y; Bartha, Rob; Johnson, Sterling; Asthana, Sanjay; Carlsson, Cynthia M.; Preda, Adrian; Nguyen, Dana; Tariot, Pierre; Burke, Anna; Trncic, Nadira; Fleisher, Adam; Reeder, Stephanie; Bates, Vernice; Capote, Horacio; Rainka, Michelle; Scharre, Douglas W.; Kataki, Maria; Adeli, Anahita; Zimmerman, Earl A.; Celmins, Dzintra; Brown, Alice D.; Pearlson, Godfrey D.; Blank, Karen; Anderson, Karen; Flashman, Laura A.; Seltzer, Marc; Hynes, Mary L.; Santulli, Robert B.; Sink, Kaycee M.; Gordineer, Leslie; Williamson, Jeff D.; Garg, Pradeep; Watkins, Franklin; Ott, Brian R.; Querfurth, Henry; Tremont, Geoffrey; Salloway, Stephen; Malloy, Paul; Correia, Stephen; Rosen, Howard J.; Miller, Bruce L.; Perry, David; Mintzer, Jacobo; Spicer, Kenneth; Bachman, David; Finger, Elizabether; Pasternak, Stephen; Rachinsky, Irina; Rogers, John; Drost, Dick; Pomara, Nunzio; Hernando, Raymundo; Sarrael, Antero; Schultz, Susan K.; Boles Ponto, Laura L.; Shim, Hyungsub; Smith, Karen Ekstam; Relkin, Norman; Chaing, Gloria; Lin, Michael; Ravdin, Lisa; Smith, Amanda; Raj, Balebail Ashok; Fargher, Kristin

    2016-01-01

    Mild cognitive impairment (MCI) is a precursor phase of Alzheimer’s disease (AD). As current treatments may be effective only at the early stages of AD, it is important to track MCI patients who will convert to AD. The aim of this study is to develop a high performance semi-mechanism based approach to predict the conversion from MCI to AD and improve our understanding of MCI-to-AD conversion mechanism. First, analysis of variance (ANOVA) test and lasso regression are employed to identify the markers related to the conversion. Then the Bayesian network based on selected markers is established to predict MCI-to-AD conversion. The structure of Bayesian network suggests that the conversion may start with fibrin clot formation, verbal memory impairment, eating pattern changing and hyperinsulinemia. The Bayesian network achieves a high 10-fold cross-validated prediction performance with 96% accuracy, 95% sensitivity, 65% specificity, area under the receiver operating characteristic curve of 0.82 on data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. The semi-mechanism based approach provides not only high prediction performance but also clues of mechanism for MCI-to-AD conversion. PMID:27273250

  11. A semi-mechanism approach based on MRI and proteomics for prediction of conversion from mild cognitive impairment to Alzheimer's disease.

    PubMed

    Liu, Haochen; Zhou, Xiaoting; Jiang, Hao; He, Hua; Liu, Xiaoquan

    2016-06-07

    Mild cognitive impairment (MCI) is a precursor phase of Alzheimer's disease (AD). As current treatments may be effective only at the early stages of AD, it is important to track MCI patients who will convert to AD. The aim of this study is to develop a high performance semi-mechanism based approach to predict the conversion from MCI to AD and improve our understanding of MCI-to-AD conversion mechanism. First, analysis of variance (ANOVA) test and lasso regression are employed to identify the markers related to the conversion. Then the Bayesian network based on selected markers is established to predict MCI-to-AD conversion. The structure of Bayesian network suggests that the conversion may start with fibrin clot formation, verbal memory impairment, eating pattern changing and hyperinsulinemia. The Bayesian network achieves a high 10-fold cross-validated prediction performance with 96% accuracy, 95% sensitivity, 65% specificity, area under the receiver operating characteristic curve of 0.82 on data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The semi-mechanism based approach provides not only high prediction performance but also clues of mechanism for MCI-to-AD conversion.
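
    A minimal sketch of the marker-selection and 10-fold evaluation steps (scikit-learn, synthetic markers; a plain logistic classifier stands in for the Bayesian network, which is not reproduced here):

        import numpy as np
        from sklearn.linear_model import LassoCV, LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(150, 80))  # illustrative MRI and proteomic markers
        y = (X[:, :5].sum(axis=1) + rng.normal(size=150) > 0).astype(int)  # converters

        lasso = LassoCV(cv=10).fit(X, y)         # lasso screening of candidate markers
        selected = np.flatnonzero(lasso.coef_)   # markers with nonzero coefficients
        clf = LogisticRegression(max_iter=1000)  # stand-in for the Bayesian network
        acc = cross_val_score(clf, X[:, selected], y, cv=10, scoring="accuracy")
        print("selected markers:", selected.size, "10-fold CV accuracy: %.2f" % acc.mean())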

  13. The German Bight: Preparing for Sentinel-3 with a Cross-Validation of SAR and PLRM CryoSat-2 Altimeter Data

    NASA Astrophysics Data System (ADS)

    Fenoglio-Marc, L.; Buchhaupt, C.; Dinardo, S.; Scharroo, R.; Benveniste, J.; Becker, M.

    2015-12-01

    As preparatory work for Sentinel-3, we retrieve three geophysical parameters from CryoSat-2 data in our validation region in the North Sea: sea surface height (SSH), significant wave height (SWH), and wind speed at 10 m height (U10). The CryoSat-2 SAR echoes are processed with a coherent and an incoherent processing scheme to generate SAR and PLRM data, respectively. We derive precision and accuracy at 1 Hz in the open ocean, at distances larger than 10 kilometres from the coast. A cross-validation of the SAR and PLRM altimeter data is performed to investigate the differences between the products. Look-Up Tables (LUT) are applied in both schemes to correct for approximations made in the two retracking procedures. Additionally, a numerical retracker is used for PLRM. The results are validated against in-situ and model data. The analysis is performed over a period of four years, from July 2010 to May 2014. The regional cross-validation analysis confirms the good consistency between PLRM and SAR data. With the LUT, the agreement of the significant wave heights improves by 10%.

  14. Cross-validation of the osmotic pressure based on Pitzer model with air humidity osmometry at high concentration of ammonium sulfate solutions.

    PubMed

    Wang, Xiao-Lan; Zhan, Ting-Ting; Zhan, Xian-Cheng; Tan, Xiao-Ying; Qu, Xiao-You; Wang, Xin-Yue; Li, Cheng-Rong

    2014-01-01

    The osmotic pressure of ammonium sulfate solutions has been measured by the well-established freezing point osmometry in dilute solutions and, as we recently reported, by air humidity osmometry over a much wider range of concentrations. Air humidity osmometry cross-validated the theoretical calculations of osmotic pressure based on the Pitzer model at high concentrations by two one-sided tests (TOST) of equivalence with multiple-testing corrections, where no other experimental method could serve as a reference for comparison. Although stricter equivalence criteria were established between the measurements of freezing point osmometry and the calculations based on the Pitzer model at low concentration, air humidity osmometry is the only currently available osmometry applicable to high concentrations and serves as an economical addition to standard osmometry.
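
    A hedged sketch of a paired two one-sided tests (TOST) equivalence check of the kind described above; the equivalence margin `delta` and the data are illustrative, not the paper's values:

```python
import numpy as np
from scipy import stats

def tost_paired(a, b, delta):
    """TOST: are paired measurements a and b equivalent within +/- delta?"""
    d = np.asarray(a, float) - np.asarray(b, float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    p_lower = stats.t.sf((d.mean() + delta) / se, df=n - 1)   # H0: diff <= -delta
    p_upper = stats.t.cdf((d.mean() - delta) / se, df=n - 1)  # H0: diff >= +delta
    return max(p_lower, p_upper)          # equivalence claimed if this < alpha

measured = np.array([10.1, 12.3, 14.8, 17.2])   # hypothetical osmometry values (MPa)
pitzer   = np.array([10.0, 12.5, 14.6, 17.4])   # hypothetical Pitzer-model values (MPa)
print(f"TOST p-value: {tost_paired(measured, pitzer, delta=0.5):.3f}")
```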

  15. Remote sensing and GIS-based landslide hazard analysis and cross-validation using multivariate logistic regression model on three test areas in Malaysia

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet

    2010-05-01

    This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis in the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map land cover and vegetation index, respectively. Maps of topography, soil type, lineaments, and land cover were constructed from the spatial datasets. Ten factors that influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, land cover, rainfall precipitation, and normalized difference vegetation index (NDVI), were extracted from the spatial database, and the logistic regression coefficient of each factor was computed. The landslide hazard was then analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also from the logistic regression coefficients calculated for each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification of the model, the results of the analyses were compared with the field-verified landslide locations. Among the three cases applying logistic regression coefficients within the same study area, Selangor based on the Selangor coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest (86%). Similarly, among the six cases cross-applying logistic regression coefficients from the other two areas, Selangor based on the Cameron coefficients showed the highest prediction accuracy (90%), whereas Penang based on the Selangor coefficients showed the lowest (79%). Qualitatively, the cross
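
    A minimal sketch of the cross-area scheme described above: logistic regression coefficients fitted in one region are applied to another and compared against the locally fitted model. The ten hazard factors and labels below are synthetic placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_area(n=1000):
    """Synthetic stand-in for one study area: ten hazard factors plus labels."""
    X = rng.normal(size=(n, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
    return X, y

X_selangor, y_selangor = make_area()
X_penang, y_penang = make_area()

local = LogisticRegression().fit(X_selangor, y_selangor)   # own coefficients
cross = LogisticRegression().fit(X_penang, y_penang)       # other area's coefficients
print("Selangor, own coefficients:   ", local.score(X_selangor, y_selangor))
print("Selangor, Penang coefficients:", cross.score(X_selangor, y_selangor))
```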

  16. Body fat measurement by bioelectrical impedance and air displacement plethysmography: a cross-validation study to design bioelectrical impedance equations in Mexican adults

    PubMed Central

    Macias, Nayeli; Alemán-Mateo, Heliodoro; Esparza-Romero, Julián; Valencia, Mauro E

    2007-01-01

    Background The study of body composition in specific populations by techniques such as bio-impedance analysis (BIA) requires validation based on standard reference methods. The aim of this study was to develop and cross-validate a predictive equation for bioelectrical impedance using air displacement plethysmography (ADP) as the standard method to measure body composition in Mexican adult men and women. Methods This study included 155 male and female subjects from northern Mexico, 20–50 years of age, from low, middle, and upper income levels. Body composition was measured by ADP. Body weight (BW, kg) and height (Ht, cm) were obtained by standard anthropometric techniques. Resistance, R (ohms), and reactance, Xc (ohms), were also measured. A random-split method was used to obtain two samples: one was used to derive the equation by the "all possible regressions" procedure, which was then cross-validated in the other sample to test predicted versus measured values of fat-free mass (FFM). Results and Discussion The final model was: FFM (kg) = 0.7374 × (Ht²/R) + 0.1763 × BW − 0.1773 × Age + 0.1198 × Xc − 2.4658. R² was 0.97; the square root of the mean square error (SRMSE) was 1.99 kg, and the pure error (PE) was 2.96. There was no difference between FFM predicted by the new equation (48.57 ± 10.9 kg) and that measured by ADP (48.43 ± 11.3 kg). The new equation did not differ from the line of identity, had a high R² and a low SRMSE, and showed no significant bias (0.87 ± 2.84 kg). Conclusion The new bioelectrical impedance equation based on the two-compartment model (2C) was accurate, precise, and free of bias. This equation can be used to assess body composition and nutritional status in populations similar in anthropometric and physical characteristics to this sample. PMID:17697388
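
    The published prediction equation transcribes directly into code; the subject values in the usage line are hypothetical:

```python
def ffm_kg(height_cm, resistance_ohm, weight_kg, age_yr, reactance_ohm):
    """Fat-free mass (kg) from the final BIA model reported above."""
    return (0.7374 * (height_cm ** 2 / resistance_ohm)
            + 0.1763 * weight_kg
            - 0.1773 * age_yr
            + 0.1198 * reactance_ohm
            - 2.4658)

# Hypothetical subject: 170 cm, R = 500 ohm, 70 kg, 35 yr, Xc = 60 ohm.
print(f"Predicted FFM: {ffm_kg(170, 500, 70, 35, 60):.1f} kg")
```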

  17. SILAC-Pulse Proteolysis: A Mass Spectrometry-Based Method for Discovery and Cross-Validation in Proteome-Wide Studies of Ligand Binding

    NASA Astrophysics Data System (ADS)

    Adhikari, Jagat; Fitzgerald, Michael C.

    2014-12-01

    Reported here is the use of stable isotope labeling with amino acids in cell culture (SILAC) and pulse proteolysis (PP) for detection and quantitation of protein-ligand binding interactions on the proteomic scale. The incorporation of SILAC into PP enables the PP technique to be used for the unbiased detection and quantitation of protein-ligand binding interactions in complex biological mixtures (e.g., cell lysates) without the need for prefractionation. The SILAC-PP technique is demonstrated in two proof-of-principle experiments using proteins in a yeast cell lysate and two test ligands: a well-characterized drug, cyclosporine A (CsA), and a non-hydrolyzable adenosine triphosphate (ATP) analogue, adenylyl imidodiphosphate (AMP-PNP). The well-known tight-binding interaction between CsA and cyclophilin A was successfully detected and quantified in replicate analyses, and a total of 33 proteins from a yeast cell lysate were found to have AMP-PNP-induced stability changes. In control experiments, the method's false positive rate of protein target discovery was found to be in the range of 2.1% to 3.6%. SILAC-PP and the previously reported stability of proteins from rates of oxidation (SPROX) technique both report on the same thermodynamic properties of proteins and protein-ligand complexes. However, they employ different probes and mass spectrometry-based readouts. This creates the opportunity to cross-validate SPROX results with SILAC-PP results, and vice versa. As part of this work, the SILAC-PP results obtained here were cross-validated with previously reported SPROX results on the same model systems to help differentiate true positives from false positives in the two experiments.

  18. Improved GRACE regional mass balance estimates of the Greenland ice sheet cross-validated with the input-output method

    NASA Astrophysics Data System (ADS)

    Xu, Zheng; Schrama, Ernst J. O.; van der Wal, Wouter; van den Broeke, Michiel; Enderlin, Ellyn M.

    2016-04-01

    In this study, we use satellite gravimetry data from the Gravity Recovery and Climate Experiment (GRACE) to estimate regional mass change of the Greenland ice sheet (GrIS) and neighboring glaciated regions using a least squares inversion approach. We also consider results from the input-output method (IOM). The IOM quantifies the difference between the mass input and output of the GrIS by studying the surface mass balance (SMB) and the ice discharge (D). We use the Regional Atmospheric Climate Model version 2.3 (RACMO2.3) to model the SMB and derive the ice discharge from 12 years of high-precision ice velocity and thickness surveys. We use a simulation model to quantify and correct for GRACE approximation errors in mass change between different subregions of the GrIS, and investigate the reliability of pre-1990s ice discharge estimates, which are based on the modeled runoff. We find that the difference between the IOM and our improved GRACE mass change estimates is reduced in terms of the long-term mass change when using a reference discharge derived from runoff estimates in several subareas. In most regions our GRACE and IOM solutions are consistent with other studies, but differences remain in the northwestern GrIS. We validate the GRACE mass balance in that region by considering several different GIA models and mass change estimates derived from data obtained by the Ice, Cloud and land Elevation Satellite (ICESat). We conclude that the approximated mass balance between GRACE and IOM is consistent in most GrIS regions. The difference in the northwest is likely due to underestimated uncertainties in the IOM solutions.

  19. Geostatistical validation and cross-validation of magnetometric measurements of soil pollution with Potentially Toxic Elements in problematic areas

    NASA Astrophysics Data System (ADS)

    Fabijańczyk, Piotr; Zawadzki, Jarosław

    2016-04-01

    Field magnetometry is a fast method that has previously been used effectively to assess potential soil pollution. One of the most popular devices used to measure soil magnetic susceptibility at the soil surface is the Bartington MS2D. A single reading of soil magnetic susceptibility with the MS2D is quick, but it is often affected by considerable errors related to the instrument or to environmental and lithogenic factors. Consequently, measured values of soil magnetic susceptibility usually have to be validated against more precise, but also much more expensive, chemical measurements. The goal of this study was to analyze methods for validating magnetometric measurements using chemical analyses of element concentrations in soil. Additionally, validation of surface measurements of soil magnetic susceptibility was performed using selected parameters of the distribution of magnetic susceptibility in the soil profile. Validation was performed using selected geostatistical measures of cross-correlation, and the geostatistical approach was compared with validation based on classical statistics. Measurements were performed at selected areas located in the Upper Silesian Industrial Area in Poland and in selected parts of Norway. In these areas, soil magnetic susceptibility was measured at the soil surface using an MS2D Bartington device and in the soil profile using an MS2C Bartington device. Additionally, soil samples were taken in order to perform chemical measurements. Acknowledgment: The research leading to these results has received funding from the Polish-Norwegian Research Programme operated by the National Centre for Research and Development under the Norwegian Financial Mechanism 2009-2014 in the frame of Project IMPACT - Contract No Pol-Nor/199338/45/2013.

  20. A multiscale decomposition approach to detect abnormal vasculature in the optic disc.

    PubMed

    Agurto, Carla; Yu, Honggang; Murray, Victor; Pattichis, Marios S; Nemeth, Sheila; Barriga, Simon; Soliz, Peter

    2015-07-01

    This paper presents a multiscale method to detect neovascularization in the optic disc (NVD) using fundus images. Our method is applied to a manually selected region of interest (ROI) containing the optic disc. All the vessels in the ROI are segmented by adaptively combining contrast enhancement methods with a vessel segmentation technique. Textural features extracted using multiscale amplitude-modulation frequency-modulation, morphological granulometry, and fractal dimension are used. A linear SVM is used to perform the classification, which is tested by means of 10-fold cross-validation. The performance is evaluated on 300 images, achieving an AUC of 0.93 with a maximum accuracy of 88%. PMID:25698545

  1. A Multiscale Decomposition Approach to Detect Abnormal Vasculature in the Optic Disc

    PubMed Central

    Agurto, Carla; Yu, Honggang; Murray, Victor; Pattichis, Marios S.; Nemeth, Sheila; Barriga, Simon; Soliz, Peter

    2015-01-01

    This paper presents a multiscale method to detect neovascularization in the optic disc (NVD) using fundus images. Our method is applied to a manually selected region of interest (ROI) containing the optic disc. All the vessels in the ROI are segmented by adaptively combining contrast enhancement methods with a vessel segmentation technique. Textural features extracted using multiscale amplitude-modulation frequency-modulation, morphological granulometry, and fractal dimension are used. A linear SVM is used to perform the classification, which is tested by means of 10-fold cross-validation. The performance is evaluated on 300 images, achieving an AUC of 0.93 with a maximum accuracy of 88%. PMID:25698545
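
    A minimal sketch of the evaluation protocol the two records above describe: a linear SVM scored by 10-fold cross-validation, reporting AUC and the maximum accuracy over decision thresholds. Features and labels are synthetic placeholders for the texture, granulometry, and fractal-dimension features:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))                        # 300 images, placeholder features
y = (X[:, 0] + rng.normal(size=300) > 0).astype(int)  # 1 = neovascularization present

svm = SVC(kernel="linear")
scores = cross_val_predict(svm, X, y, cv=StratifiedKFold(10),
                           method="decision_function")
print("10-fold CV AUC:", round(roc_auc_score(y, scores), 2))

# Maximum accuracy over all decision thresholds, as the abstract reports.
fpr, tpr, _ = roc_curve(y, scores)
n_pos, n_neg = y.sum(), len(y) - y.sum()
accuracy = (tpr * n_pos + (1 - fpr) * n_neg) / len(y)
print("Maximum accuracy:", round(accuracy.max(), 2))
```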

  2. Sub-millimeter Signal Detection by GPS: Cross Validation using GIPSY and GAMIT Solutions for the Yucca Mountain Network

    NASA Astrophysics Data System (ADS)

    Hill, E.; Bennett, R. A.; Blewitt, G.; Davis, J. L.; Wernicke, B. P.

    2002-12-01

    A continuous and densely spaced GPS network has been installed at Yucca Mountain, southern Nevada, as part of the BARGEN array. It was funded by the Department of Energy to characterize strain at the proposed nuclear waste repository. Each GPS antenna is deep-mounted into solid bedrock, and atmospheric effects in the desert climate of the region are relatively low, making this an ideal network to explore the potential precision of GPS. Because of the importance of obtaining an accurate and reliable set of velocity measurements at Yucca Mountain, two separate groups using entirely different methods have independently processed the GPS data from this network. The UNR group has utilized JPL's GIPSY-OASIS II, employing a precise point positioning technique, whereas the CfA group has used MIT's GAMIT software and a double-differencing approach. Comparison of the two sets of results for 28 stations and 2.8 years of data has revealed only small differences in horizontal velocity estimates, with formal errors for both groups less than 0.17 mm/yr and an RMS of residual velocity differences of 0.23 mm/yr. The two solutions are consistent with one another at the two-sigma level. Relative horizontal velocities at stations within 40 km of Yucca Mountain itself are on the order of <0.5 mm/yr, with a smooth pattern of NNW shear. In order to obtain negligible differences in results, both groups had to account for coseismic offsets caused by the 1999 Hector Mine earthquake. It was also necessary to perform ambiguity resolution in GIPSY; without ambiguity resolution, the GIPSY results were significantly different from those produced by GAMIT. The data were processed in GIPSY on a line-by-line basis, relative to a station in the center of the Yucca Mountain network, to produce a regionally-referenced solution free of common-mode signals. It was evident in both solutions that radome changes produce a measurable effect in the vertical component, giving an apparent vertical swell of

  3. A Cross-Validation Approach to Approximate Basis Function Selection of the Stall Flutter Response of a Rectangular Wing in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Vio, Gareth A.; Andrianne, Thomas; Abdul Razak, Norizham; Dimitriadis, Grigorios

    2012-01-01

    The stall flutter response of a rectangular wing in a low speed wind tunnel is modelled using a nonlinear difference equation description. Static and dynamic tests are used to select a suitable model structure and basis function. Bifurcation criteria such as the Hopf condition and vibration amplitude variation with airspeed were used to ensure the model was representative of experimentally measured stall flutter phenomena. Dynamic test data were used to estimate model parameters and estimate an approximate basis function.

  4. Calibration and Cross-Validation of the ActiGraph wGT3X+ Accelerometer for the Estimation of Physical Activity Intensity in Children with Intellectual Disabilities

    PubMed Central

    McGarty, Arlene M.; Penpraze, Victoria; Melville, Craig A.

    2016-01-01

    Background Valid objective measurement is integral to increasing our understanding of physical activity and sedentary behaviours. However, no population-specific cut points have been calibrated for children with intellectual disabilities. Therefore, this study aimed to calibrate and cross-validate the first population-specific accelerometer intensity cut points for children with intellectual disabilities. Methods Fifty children with intellectual disabilities were randomly assigned to the calibration (n = 36; boys = 28, 9.53±1.08yrs) or cross-validation (n = 14; boys = 9, 9.57±1.16yrs) group. Participants completed a semi-structured school-based activity session, which included various activities ranging from sedentary to vigorous intensity. Direct observation (SOFIT tool) was used to calibrate the ActiGraph wGT3X+, which participants wore on the right hip. Receiver Operating Characteristic curve analyses determined the optimal cut points for sedentary, moderate, and vigorous intensity activity for the vertical axis and vector magnitude. Classification agreement was investigated using sensitivity, specificity, total agreement, and Cohen’s kappa scores against the criterion measure of SOFIT. Results The optimal (AUC = .87−.94) vertical axis cut points (cpm) were ≤507 (sedentary), 1008−2300 (moderate), and ≥2301 (vigorous), which demonstrated high sensitivity (81−88%) and specificity (81−85%). The optimal (AUC = .86−.92) vector magnitude cut points (cpm) of ≤1863 (sedentary), 2610−4214 (moderate), and ≥4215 (vigorous) demonstrated comparable, albeit marginally lower, accuracy than the vertical axis cut points (sensitivity = 80−86%; specificity = 77−82%). Classification agreement ranged from moderate to almost perfect (κ = .51−.85) with high sensitivity and specificity, and confirmed the trend that accuracy increased with intensity, and vertical axis cut points provide higher classification agreement than vector magnitude cut points
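
    A hedged sketch of deriving an intensity cut point from ROC analysis as in the calibration above, choosing the counts-per-minute threshold that maximizes the Youden index (sensitivity + specificity − 1); the activity data below are simulated placeholders:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
cpm = np.concatenate([rng.normal(300, 150, 100),     # simulated sedentary counts/min
                      rng.normal(1800, 500, 100)])   # simulated active counts/min
is_active = np.repeat([0, 1], 100)                   # direct-observation labels

fpr, tpr, thresholds = roc_curve(is_active, cpm)
best = np.argmax(tpr - fpr)                          # maximize the Youden index
print(f"Cut point: {thresholds[best]:.0f} cpm "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```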

  5. FDDS: A Cross Validation Study.

    ERIC Educational Resources Information Center

    Sawyer, Judy Parsons

    The Family Drawing Depression Scale (FDDS) was created by Wright and McIntyre to provide a clear and reliable scoring method for the Kinetic Family Drawing as a procedure for detecting depression. A study was conducted to confirm the value of the FDDS as a systematic tool for interpreting family drawings with populations of depressed individuals.…

  6. Sediment transport patterns in the San Francisco Bay Coastal System from cross-validation of bedform asymmetry and modeled residual flux

    USGS Publications Warehouse

    Barnard, Patrick L.; Erikson, Li H.; Elias, Edwin P.L.; Dartnell, Peter

    2013-01-01

    The morphology of ~ 45,000 bedforms from 13 multibeam bathymetry surveys was used as a proxy for identifying net bedload sediment transport directions and pathways throughout the San Francisco Bay estuary and adjacent outer coast. The spatially-averaged shape asymmetry of the bedforms reveals distinct pathways of ebb and flood transport. Additionally, the region-wide, ebb-oriented asymmetry of 5% suggests net seaward-directed transport within the estuarine-coastal system, with significant seaward asymmetry at the mouth of San Francisco Bay (11%), through the northern reaches of the Bay (7-8%), and among the largest bedforms (21% for λ > 50 m). This general indication for the net transport of sand to the open coast strongly suggests that anthropogenic removal of sediment from the estuary, particularly along clearly defined seaward transport pathways, will limit the supply of sand to chronically eroding, open-coast beaches. The bedform asymmetry measurements significantly agree (up to ~ 76%) with modeled annual residual transport directions derived from a hydrodynamically-calibrated numerical model, and the orientation of adjacent, flow-sculpted seafloor features such as mega-flute structures, providing a comprehensive validation of the technique. The methods described in this paper to determine well-defined, cross-validated sediment transport pathways can be applied to estuarine-coastal systems globally where bedforms are present. The results can inform and improve regional sediment management practices to more efficiently utilize often limited sediment resources and mitigate current and future sediment supply-related impacts.

  7. Sediment transport patterns in the San Francisco Bay Coastal System from cross-validation of bedform asymmetry and modeled residual flux

    USGS Publications Warehouse

    Barnard, Patrick L.; Erikson, Li H.; Elias, Edwin P.L.; Dartnell, Peter; Barnard, P.L.; Jaffee, B.E.; Schoellhamer, D.H.

    2013-01-01

    The morphology of ~ 45,000 bedforms from 13 multibeam bathymetry surveys was used as a proxy for identifying net bedload sediment transport directions and pathways throughout the San Francisco Bay estuary and adjacent outer coast. The spatially-averaged shape asymmetry of the bedforms reveals distinct pathways of ebb and flood transport. Additionally, the region-wide, ebb-oriented asymmetry of 5% suggests net seaward-directed transport within the estuarine-coastal system, with significant seaward asymmetry at the mouth of San Francisco Bay (11%), through the northern reaches of the Bay (7–8%), and among the largest bedforms (21% for λ > 50 m). This general indication for the net transport of sand to the open coast strongly suggests that anthropogenic removal of sediment from the estuary, particularly along clearly defined seaward transport pathways, will limit the supply of sand to chronically eroding, open-coast beaches. The bedform asymmetry measurements significantly agree (up to ~ 76%) with modeled annual residual transport directions derived from a hydrodynamically-calibrated numerical model, and the orientation of adjacent, flow-sculpted seafloor features such as mega-flute structures, providing a comprehensive validation of the technique. The methods described in this paper to determine well-defined, cross-validated sediment transport pathways can be applied to estuarine-coastal systems globally where bedforms are present. The results can inform and improve regional sediment management practices to more efficiently utilize often limited sediment resources and mitigate current and future sediment supply-related impacts.

  8. Partial cross-validation of the Wechsler Memory Scale-Revised (WMS-R) General Memory-Attention/Concentration Malingering Index in a nonlitigating sample.

    PubMed

    Hilsabeck, Robin C; Thompson, Matthew D; Irby, James W; Adams, Russell L; Scott, James G; Gouvier, Wm Drew

    2003-01-01

    The Wechsler Memory Scale-Revised (WMS-R) malingering indices proposed by Mittenberg, Azrin, Millsaps, and Heilbronner [Psychol Assess 5 (1993) 34.] were partially cross-validated in a sample of 200 nonlitigants. Nine diagnostic categories were examined, including participants with traumatic brain injury (TBI), brain tumor, stroke/vascular, senile dementia of the Alzheimer's type (SDAT), epilepsy, depression/anxiety, medical problems, and no diagnosis. Results showed that the discriminant function using WMS-R subtests misclassified only 6.5% of the sample as malingering, with significantly higher misclassification rates for the SDAT and stroke/vascular groups. The General Memory Index-Attention/Concentration Index (GMI-ACI) difference score misclassified only 8.5% of the sample as malingering when a difference score of greater than 25 points was used as the cutoff criterion. No diagnostic group was significantly more likely to be misclassified. The results support the utility of the GMI-ACI difference score, as well as the WMS-R subtest discriminant function score, in detecting malingering.

  9. Cross-validation of the structure of a transiently formed and low populated FF domain folding intermediate determined by relaxation dispersion NMR and CS-Rosetta.

    PubMed

    Barette, Julia; Velyvis, Algirdas; Religa, Tomasz L; Korzhnev, Dmitry M; Kay, Lewis E

    2012-06-14

    We have recently reported the atomic resolution structure of a low populated and transiently formed on-pathway folding intermediate of the FF domain from human HYPA/FBP11 [Korzhnev, D. M.; Religa, T. L.; Banachewicz, W.; Fersht, A. R.; Kay, L. E. Science 2010, 329, 1312-1316]. The structure was determined on the basis of backbone chemical shift and bond vector orientation restraints of the invisible intermediate state, measured using relaxation dispersion nuclear magnetic resonance (NMR) spectroscopy, that were subsequently input into the database structure determination program CS-Rosetta. As a cross-validation of the structure so produced, we present here the solution structure of a mimic of the folding intermediate that is highly populated in solution, obtained from the wild-type domain by mutagenesis that destabilizes the native state. The relaxation dispersion/CS-Rosetta structures of the intermediate are within 2 Å of those of the mimic, with the nonnative interactions in the intermediate also observed in the mimic. This strongly confirms the structure of the FF domain folding intermediate, in particular, and validates the use of relaxation dispersion derived restraints in structural studies of invisible excited states, in general.

  10. A Novel Local Learning based Approach With Application to Breast Cancer Diagnosis

    SciTech Connect

    Xu, Songhua; Tourassi, Georgia

    2012-01-01

    The purpose of this study is to develop and evaluate a novel local learning-based approach for computer-assisted diagnosis of breast cancer. Our new local learning-based algorithm, which uses linear logistic regression as its base learner, is described. The algorithm performs a stochastic search, via a random walk, until the total allotted computing time is exhausted, identifying the most suitable population subdivision scheme and the corresponding individual base learners. The proposed local learning-based approach was applied to the prediction of breast cancer given 11 mammographic and clinical findings reported by physicians using the BI-RADS lexicon. Our database consisted of 850 patients with biopsy-confirmed diagnoses (290 malignant and 560 benign). We also compared the performance of our method with a collection of publicly available state-of-the-art machine learning methods. Predictive performance for all classifiers was evaluated using 10-fold cross-validation and receiver operating characteristic (ROC) analysis. Figure 1 reports the performance of 54 machine learning methods implemented in the machine learning toolkit Weka (version 3.0). We introduced a novel local learning-based classifier and compared it with an extensive list of other classifiers for the problem of breast cancer diagnosis. Our experiments show that the algorithm achieves superior prediction performance, outperforming a wide range of other well-established machine learning techniques. Our conclusion complements the existing understanding in the machine learning field that local learning may capture complicated, non-linear relationships exhibited by real-world datasets.

  11. Methane cross-validation between three Fourier Transform Spectrometers: SCISAT ACE-FTS, GOSAT TANSO-FTS, and ground-based FTS measurements in the Canadian high Arctic

    NASA Astrophysics Data System (ADS)

    Holl, G.; Walker, K. A.; Conway, S.; Saitoh, N.; Boone, C. D.; Strong, K.; Drummond, J. R.

    2015-12-01

    We present cross-validation of remote sensing measurements of methane profiles in the Canadian high Arctic. Accurate and precise measurements of methane are essential to understand quantitatively its role in the climate system and in global change. Here, we show a cross-validation between three datasets: two from spaceborne instruments and one from a ground-based instrument. All are Fourier Transform Spectrometers (FTSs). We consider the Canadian SCISAT Atmospheric Chemistry Experiment (ACE)-FTS, a solar occultation infrared spectrometer operating since 2004, and the thermal infrared band of the Japanese Greenhouse Gases Observing Satellite (GOSAT) Thermal And Near infrared Sensor for carbon Observation (TANSO)-FTS, a nadir/off-nadir scanning FTS instrument operating at solar and terrestrial infrared wavelengths, since 2009. The ground-based instrument is a Bruker 125HR Fourier Transform Infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environment Atmospheric Research Laboratory (PEARL) Ridge Lab at Eureka, Nunavut (80° N, 86° W) since 2006. For each pair of instruments, measurements are collocated within 500 km and 24 h. An additional criterion based on potential vorticity values was found not to significantly affect differences between measurements. Profiles are regridded to a common vertical grid for each comparison set. To account for differing vertical resolutions, ACE-FTS measurements are smoothed to the resolution of either PEARL-FTS or TANSO-FTS, and PEARL-FTS measurements are smoothed to the TANSO-FTS resolution. Differences for each pair are examined in terms of profile and partial columns. During the period considered, the number of collocations for each pair is large enough to obtain a good sample size (from several hundred to tens of thousands depending on pair and configuration). Considering full profiles, the degrees of freedom for signal (DOFS) are between 0.2 and 0.7 for TANSO-FTS and between 1.5 and 3

  12. Methane cross-validation between three Fourier transform spectrometers: SCISAT ACE-FTS, GOSAT TANSO-FTS, and ground-based FTS measurements in the Canadian high Arctic

    NASA Astrophysics Data System (ADS)

    Holl, Gerrit; Walker, Kaley A.; Conway, Stephanie; Saitoh, Naoko; Boone, Chris D.; Strong, Kimberly; Drummond, James R.

    2016-05-01

    We present cross-validation of remote sensing measurements of methane profiles in the Canadian high Arctic. Accurate and precise measurements of methane are essential to understand quantitatively its role in the climate system and in global change. Here, we show a cross-validation between three data sets: two from spaceborne instruments and one from a ground-based instrument. All are Fourier transform spectrometers (FTSs). We consider the Canadian SCISAT Atmospheric Chemistry Experiment (ACE)-FTS, a solar occultation infrared spectrometer operating since 2004, and the thermal infrared band of the Japanese Greenhouse Gases Observing Satellite (GOSAT) Thermal And Near infrared Sensor for carbon Observation (TANSO)-FTS, a nadir/off-nadir scanning FTS instrument operating at solar and terrestrial infrared wavelengths, since 2009. The ground-based instrument is a Bruker 125HR Fourier transform infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environment Atmospheric Research Laboratory (PEARL) Ridge Laboratory at Eureka, Nunavut (80° N, 86° W) since 2006. For each pair of instruments, measurements are collocated within 500 km and 24 h. An additional collocation criterion based on potential vorticity values was found not to significantly affect differences between measurements. Profiles are regridded to a common vertical grid for each comparison set. To account for differing vertical resolutions, ACE-FTS measurements are smoothed to the resolution of either PEARL-FTS or TANSO-FTS, and PEARL-FTS measurements are smoothed to the TANSO-FTS resolution. Differences for each pair are examined in terms of profile and partial columns. During the period considered, the number of collocations for each pair is large enough to obtain a good sample size (from several hundred to tens of thousands depending on pair and configuration). Considering full profiles, the degrees of freedom for signal (DOFS) are between 0.2 and 0.7 for TANSO-FTS and

  13. Duration of opioid antagonism by nalmefene and naloxone in the dog. A nonparametric pharmacodynamic comparison based on generalized cross-validated spline estimation.

    PubMed

    Wilhelm, J A; Veng-Pedersen, P; Zakszewski, T B; Osifchin, E; Waters, S J

    1995-10-01

    The opioid antagonist nalmefene was compared in its pharmacodynamic properties to the structurally similar antagonist naloxone in a 2 x 2 cross-over study with 8 dogs. Opioid-induced respiratory depression was produced for ca. 7 hours with a constant-rate intravenous infusion of 30 micrograms/kg/hr fentanyl and quantified using noninvasive transcutaneous pCO2 recordings. Upon reaching a pseudo-steady state of respiratory depression at 2 hours after the start of the fentanyl infusion, the animals received either nalmefene (12 micrograms/kg/hr) or naloxone (48 micrograms/kg/hr) for 30 minutes. The pharmacodynamic pCO2 responses produced by the combined agonist/antagonist regimen were fitted with a cubic spline function using a generalized cross-validation technique. Various quantities that describe the onset, duration, and relative potency of each antagonist were determined directly from the estimated response curves in a model-independent, nonparametric way. The two antagonists were compared in terms of these quantities using a statistical model that considers carry-over effects typically arising from a possible development of tolerance. The results indicate that nalmefene: 1. is approximately 4-fold more potent than naloxone, 2. has an onset of reversal as rapid as that of naloxone, and 3. has a significantly longer (2-fold) pharmacodynamic duration of action than naloxone. The mean time required for the agonist to regain 30% or 50% of its effect present at the start of the antagonist infusion was 66 and 112 minutes for nalmefene and 37 and 55 minutes for naloxone, respectively. Early, effective pharmacodynamic screening of new drug compounds is a valuable way of accelerating the drug discovery process and reducing escalating drug development costs. This study exemplifies a novel, endpoint-oriented pharmacodynamic comparison procedure that can be done expeditiously before starting the time-consuming development and validation of a drug level assay, and before engaging in

  14. Revealing latent value of clinically acquired CTs of traumatic brain injury through multi-atlas segmentation in a retrospective study of 1,003 with external cross-validation

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Kelly, Patrick D.; Asman, Andrew J.; Kang, Hakmook; Patel, Mayur B.; Landman, Bennett A.

    2015-03-01

    Medical imaging plays a key role in guiding treatment of traumatic brain injury (TBI) and in diagnosing intracranial hemorrhage; most commonly, rapid computed tomography (CT) imaging is performed. Outcomes for patients with TBI are variable and difficult to predict upon hospital admission. Quantitative outcome scales (e.g., the Marshall classification) have been proposed to grade TBI severity on CT, but such measures have had relatively low value in staging patients by prognosis. Herein, we examine a cohort of 1,003 subjects admitted for TBI and imaged clinically to identify potential prognostic metrics using a "big data" paradigm. For all patients, a brain scan was segmented with multi-atlas labeling, and intensity/volume/texture features were computed in a localized manner. In a 10-fold cross-validation approach, the explanatory value of the image-derived features is assessed for length of hospital stay (days), discharge disposition (five-point scale from death to return home), and the Rancho Los Amigos functional outcome score (Rancho Score). Image-derived features increased the predictive R² to 0.38 (from 0.18) for length of stay, to 0.51 (from 0.4) for discharge disposition, and to 0.31 (from 0.16) for Rancho Score (over models consisting only of non-imaging admission metrics, but including positive/negative radiological CT findings). This study demonstrates that high-volume retrospective analysis of clinical imaging data can reveal imaging signatures with prognostic value. These targets are suited for follow-up validation and represent targets for future feature selection efforts. Moreover, the increase in prognostic value would improve staging for intervention assessment and provide more reliable guidance for patients.

  15. Determination of snow avalanche return periods using a tree-ring based reconstruction in the French Alps: cross validation with the predictions of a statistical-dynamical model

    NASA Astrophysics Data System (ADS)

    Schläppy, Romain; Eckert, Nicolas; Jomelli, Vincent; Grancher, Delphine; Brunstein, Daniel; Stoffel, Markus; Naaim, Mohamed

    2013-04-01

    rare events, i.e. to the tail of the local runout distance distribution. Furthermore, a good agreement exists with the statistical-numerical model's prediction, i.e. a 10-40 m difference for return periods ranging between 10 and 300 years, which is rather small with regard to the uncertainty levels to be considered in avalanche probabilistic modeling and dendrochronological reconstructions. It is important to note that such a cross-validation on independent extreme predictions has never been undertaken before. It suggests that (i) dendrochronological reconstruction can provide valuable information for anticipating future extreme avalanche events in the context of risk management and, in turn, that (ii) the statistical-numerical model, when properly calibrated, can be used with reasonable confidence to refine these predictions, with, for instance, evaluation of pressure and flow-depth distributions at each position of the runout zone. A strong sensitivity to the determination of local avalanche and dendrological record frequencies is, however, highlighted, indicating that this is an essential step for an accurate probabilistic characterization of large-extent events.

  16. Cross-validation of a mass spectrometric-based method for the therapeutic drug monitoring of irinotecan: implementation of matrix-assisted laser desorption/ionization mass spectrometry in pharmacokinetic measurements.

    PubMed

    Calandra, Eleonora; Posocco, Bianca; Crotti, Sara; Marangon, Elena; Giodini, Luciana; Nitti, Donato; Toffoli, Giuseppe; Traldi, Pietro; Agostini, Marco

    2016-07-01

    Irinotecan is a widely used antineoplastic drug, mostly employed for the treatment of colorectal cancer. This drug is a feasible candidate for therapeutic drug monitoring due to the presence of a wide inter-individual variability in its pharmacokinetic and pharmacodynamic parameters. In order to determine the drug concentration during the administration protocol, we developed a quantitative MALDI-MS method using CHCA as the MALDI matrix. Here, we demonstrate that MALDI-TOF can be applied in a routine setting for therapeutic drug monitoring in humans, offering quick and accurate results. To reach this aim, we cross-validated, according to FDA and EMA guidelines, the MALDI-TOF method against a standard LC-MS/MS method, applying it to the quantification of 108 patients' plasma samples from a clinical trial. Standard curves for irinotecan were linear (R² ≥ 0.9842) over the concentration range between 300 and 10,000 ng/mL and showed good back-calculated accuracy and precision. Intra- and inter-day precision and accuracy, determined on three quality control levels, were always <12.8% and between 90.1% and 106.9%, respectively. The cross-validation procedure showed good reproducibility between the two methods, with percentage differences within 20% for more than 70% of the clinical samples analysed. PMID:27235158
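
    A hedged sketch of the between-method agreement check the record above reports: per-sample percentage differences between the candidate and reference assays, counted against a ±20% window. The concentrations are hypothetical:

```python
import numpy as np

lcms  = np.array([980, 2100, 4500, 7200])    # ng/mL by the reference LC-MS/MS method
maldi = np.array([1010, 1900, 4800, 6600])   # ng/mL by the candidate MALDI-TOF method

pct_diff = 100 * (maldi - lcms) / lcms
within = np.mean(np.abs(pct_diff) <= 20)
print(f"{100 * within:.0f}% of samples agree within +/-20%")
```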

  17. Cross-validated methods for promoter/transcription start site mapping in SL trans-spliced genes, established using the Ciona intestinalis troponin I gene

    PubMed Central

    Khare, Parul; Mortimer, Sandra I.; Cleto, Cynthia L.; Okamura, Kohji; Suzuki, Yutaka; Kusakabe, Takehiro; Nakai, Kenta; Meedel, Thomas H.; Hastings, Kenneth E. M.

    2011-01-01

    In conventionally-expressed eukaryotic genes, transcription start sites (TSSs) can be identified by mapping the mature mRNA 5′-terminal sequence onto the genome. However, this approach is not applicable to genes that undergo pre-mRNA 5′-leader trans-splicing (SL trans-splicing) because the original 5′-segment of the primary transcript is replaced by the spliced leader sequence during the trans-splicing reaction and is discarded. Thus TSS mapping for trans-spliced genes requires different approaches. We describe two such approaches and show that they generate precisely agreeing results for an SL trans-spliced gene encoding the muscle protein troponin I in the ascidian tunicate chordate Ciona intestinalis. One method is based on experimental deletion of trans-splice acceptor sites and the other is based on high-throughput mRNA 5′-RACE sequence analysis of natural RNA populations in order to detect minor transcripts containing the pre-mRNA’s original 5′-end. Both methods identified a single major troponin I TSS located ∼460 nt upstream of the trans-splice acceptor site. Further experimental analysis identified a functionally important TATA element 31 nt upstream of the start site. The two methods employed have complementary strengths and are broadly applicable to mapping promoters/TSSs for trans-spliced genes in tunicates and in trans-splicing organisms from other phyla. PMID:21109525

  18. Methane Cross-Validation Between Spaceborne Solar Occultation Observations from ACE-FTS, Spaceborne Nadir Sounding from Gosat, and Ground-Based Solar Absorption Measurements, at a High Arctic Site.

    NASA Astrophysics Data System (ADS)

    Holl, G.; Walker, K. A.; Conway, S. A.; Saitoh, N.; Boone, C. D.; Strong, K.; Drummond, J. R.

    2014-12-01

    We present cross-validation of remote sensing observations of methane profiles in the Canadian High Arctic. Methane is the third most important greenhouse gas on Earth, and second only to carbon dioxide in its contribution to anthropogenic global warming. Accurate and precise observations of methane are essential to understand quantitatively its role in the climate system and in global change. The Arctic is a particular region of concern, as melting permafrost and disappearing sea ice might lead to accelerated release of methane into the atmosphere. Global observations require spaceborne instruments, in particular in the Arctic, where surface measurements are sparse and expensive to perform. Satellite-based remote sensing is an underconstrained problem, and specific validation under Arctic circumstances is required. Here, we show a cross-validation between two spaceborne instruments and ground-based measurements, all Fourier Transform Spectrometers (FTS). We consider the Canadian SCISAT ACE-FTS, a solar occultation spectrometer operating since 2004, and the Japanese GOSAT TANSO-FTS, a nadir-pointing FTS operating at solar and terrestrial infrared wavelengths, since 2009. The ground-based instrument is a Bruker Fourier Transform Infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environmental and Atmospheric Research Laboratory (PEARL) at Eureka, Nunavut (80°N, 86°W) since 2006. Measurements are collocated considering temporal, spatial, and geophysical criteria and regridded to a common vertical grid. We perform smoothing on the higher-resolution instrument results to account for different vertical resolutions. Then, profiles of differences for each pair of instruments are examined. Any bias between instruments, or any accuracy that is worse than expected, needs to be understood prior to using the data. The results of the study will serve as a guideline on how to use the vertically resolved methane products from ACE and

  19. Cross-Validation of Suspended Sediment Concentrations Derived from Satellite Imagery and Numerical Modeling of the 1997 New Year's Flood on the Feather River, CA

    NASA Astrophysics Data System (ADS)

    Kilham, N. E.

    2009-12-01

    Image analysis was applied to assess suspended sediment concentrations (SSC) predicted by a numerical model of 2D hydraulics and sediment transport (Telemac-2D), coupled to a solver for the advection-diffusion equation (SISYPHE) and representing 18 days of flooding over 70 kilometers of the lower Feather-Yuba Rivers. Sisyphe treats the suspended load as a tracer, removed from the flow if the bed shear velocity u* is lower than an empirically derived threshold (u*d = 7.8 × 10⁻³ m s⁻¹). Agreement between model (D50 = 0.03 mm) and image-derived SSC (mg L⁻¹) suggests that image interpretation could prove to be a viable approach for verifying spatially-distributed models of floodplain sediment transport if imagery is acquired for a particular flood and at a sufficient spatial and radiometric resolution. However, remotely derived SSC represents the integrated concentration of suspended sediment at the water surface. Hence, comparing SSC magnitudes derived from imagery and numerical modeling requires that a relationship is first established between the total suspended load and the portion of this load suspended within the optical range of the sensor (e.g., Aalto, 1995). Using the optical depth (0.5 m) determined from radiative transfer modeling, surface SSC measured from a 1/14/97 Landsat TM5 image (30 m) were converted to depth-integrated SSC with the Rouse (1937) equation. Surface concentrations were derived using a look-up table for the sensor to convert endmember fractions obtained from a spectral mixture analysis of the image. A two-endmember model (2.0 and 203 mg L⁻¹) was used, with synthetic endmembers derived from optical and radiative transfer modeling and inversion of field spectra collected from the Sacramento and Feather Rivers and matched to measured SSC values. Remotely sensed SSC patterns were then compared to the Telemac results for the same day and time. Modeled concentrations are a function of both the rating curve boundary conditions, and the transport and
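
    A hedged sketch of the Rouse (1937) conversion mentioned above: the image-derived concentration at the optical depth anchors a Rouse profile, which is then depth-averaged. All parameter values below are illustrative, not those of the study:

```python
import numpy as np

def rouse_profile(z, h, a, c_a, ws, u_star, kappa=0.4):
    """Suspended sediment concentration at height z above the bed (Rouse, 1937)."""
    p = ws / (kappa * u_star)                      # Rouse number
    return c_a * (((h - z) / z) * (a / (h - a))) ** p

h, a = 5.0, 0.25            # flow depth and reference height (m), illustrative
c_surface = 150.0           # image-derived SSC (mg/L) at the optical depth, illustrative
ws, u_star = 0.002, 0.05    # settling velocity and shear velocity (m/s), illustrative

# Anchor the profile so it matches the image-derived value 0.5 m below the surface.
c_a = c_surface / rouse_profile(h - 0.5, h, a, 1.0, ws, u_star)

# Depth-average the anchored profile over a uniform grid from z = a to the surface.
z = np.linspace(a, h, 500)
depth_avg = rouse_profile(z, h, a, c_a, ws, u_star).mean()
print(f"Depth-averaged SSC: {depth_avg:.0f} mg/L")
```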

  20. In silico prediction of toxicity of non-congeneric industrial chemicals using ensemble learning based modeling approaches

    SciTech Connect

    Singh, Kunwar P.; Gupta, Shikha

    2014-03-15

    Ensemble learning-based decision treeboost (DTB) and decision tree forest (DTF) models are introduced in order to establish a quantitative structure–toxicity relationship (QSTR) for the prediction of the toxicity of 1450 diverse chemicals. Eight non-quantum-mechanical molecular descriptors were derived. Structural diversity of the chemicals was evaluated using the Tanimoto similarity index. DTB and DTF models, supplemented with stochastic gradient boosting and bagging algorithms, were constructed for classification and function optimization problems using the toxicity end-point in T. pyriformis. Special attention was drawn to the prediction ability and robustness of the models, investigated both in external and 10-fold cross-validation processes. On the complete data, optimal DTB and DTF models rendered accuracies of 98.90% and 98.83% in two-category and 98.14% and 98.14% in four-category toxicity classifications. Both models further yielded classification accuracies of 100% on external toxicity data for T. pyriformis. The constructed regression models (DTB and DTF) using five descriptors yielded correlation coefficients (R²) of 0.945 and 0.944 between the measured and predicted toxicities, with mean squared errors (MSEs) of 0.059 and 0.064 on the complete T. pyriformis data. The T. pyriformis regression models (DTB and DTF) applied to the external toxicity data sets yielded R² and MSE values of 0.637, 0.655; 0.534, 0.507 (marine bacteria) and 0.741, 0.691; 0.155, 0.173 (algae). The results suggest wide applicability of the inter-species models in predicting the toxicity of new chemicals for regulatory purposes. These approaches provide a useful strategy and robust tools for screening the ecotoxicological risk or environmental hazard potential of chemicals. - Graphical abstract: Importance of input variables in DTB and DTF classification models for (a) two-category, and (b) four-category toxicity intervals in T. pyriformis data. Generalization and predictive abilities of the
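
    A minimal sketch of the ensemble workflow the record above describes, substituting scikit-learn's gradient boosting and tree bagging for the DTB/DTF implementations; descriptors and toxicity classes are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(1450, 8))                                   # eight descriptors
y = (X[:, 0] + X[:, 1] + rng.normal(size=1450) > 0).astype(int)  # toxicity class

models = {
    "boosted trees (DTB analogue)": GradientBoostingClassifier(),
    "bagged trees (DTF analogue)": BaggingClassifier(DecisionTreeClassifier()),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10)
    print(f"{name}: 10-fold CV accuracy = {acc.mean():.3f}")
```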

  1. Cross-validation and evaluation of the performance of methods for the elemental analysis of forensic glass by μ-XRF, ICP-MS, and LA-ICP-MS.

    PubMed

    Trejos, Tatiana; Koons, Robert; Becker, Stefan; Berman, Ted; Buscaglia, JoAnn; Duecking, Marc; Eckert-Lumsdon, Tiffany; Ernst, Troy; Hanlon, Christopher; Heydon, Alex; Mooney, Kim; Nelson, Randall; Olsson, Kristine; Palenik, Christopher; Pollock, Edward Chip; Rudell, David; Ryland, Scott; Tarifa, Anamary; Valadez, Melissa; Weis, Peter; Almirall, Jose

    2013-06-01

    Elemental analysis of glass was conducted by 16 forensic science laboratories, providing a direct comparison between three analytical methods [micro-X-ray fluorescence spectroscopy (μ-XRF), solution analysis using inductively coupled plasma mass spectrometry (ICP-MS), and laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS)]. Interlaboratory studies using glass standard reference materials and other glass samples were designed to (a) evaluate the analytical performance between different laboratories using the same method, (b) evaluate the analytical performance of the different methods, (c) evaluate the capabilities of the methods to correctly associate glass that originated from the same source and to correctly discriminate glass samples that do not share the same source, and (d) standardize the methods of analysis and interpretation of results. Reference materials NIST 612, NIST 1831, FGS 1, and FGS 2 were employed to cross-validate these sensitive techniques and to optimize and standardize the analytical protocols. The resulting figures of merit for the ICP-MS methods include repeatability better than 5% RSD, reproducibility between laboratories better than 10% RSD, bias better than 10%, and limits of detection between 0.03 and 9 μg g⁻¹ for the majority of the elements monitored. The figures of merit for the μ-XRF methods include repeatability better than 11% RSD, reproducibility between laboratories after normalization of the data better than 16% RSD, and limits of detection between 5.8 and 7,400 μg g⁻¹. The results from this study also compare the analytical performance of different forensic science laboratories conducting elemental analysis of glass evidence fragments using the three analytical methods.

  2. Prediction of Biofilm Inhibiting Peptides: An In silico Approach.

    PubMed

    Gupta, Sudheer; Sharma, Ashok K; Jaiswal, Shubham K; Sharma, Vineet K

    2016-01-01

    Approximately 75% of microbial infections found in humans are caused by microbial biofilms. These biofilms are resistant to the host immune system and to most of the currently available antibiotics. Small peptides have been extensively studied for their role as anti-microbial peptides; however, only a limited number of studies have shown their potential as inhibitors of biofilm. Therefore, to develop a unique computational method aimed at the prediction of biofilm inhibiting peptides, the sequences of experimentally validated biofilm inhibiting peptides were used to extract sequence-based features and to identify unique sequence motifs. Biofilm inhibiting peptides were observed to be abundant in positively charged and aromatic amino acids, and also showed selective abundance of some dipeptides and sequence motifs. These individual sequence-based features were utilized to construct Support Vector Machine-based prediction models, and by additionally including sequence motif information, hybrid models were constructed. Using 10-fold cross-validation, the hybrid model displayed an accuracy and Matthews Correlation Coefficient (MCC) of 97.83% and 0.87, respectively. On the validation dataset, the hybrid model showed an accuracy and MCC value of 97.19% and 0.84, respectively. The validated model and other tools developed for the prediction of biofilm inhibiting peptides are available freely as a web server at http://metagenomics.iiserb.ac.in/biofin/ and http://metabiosys.iiserb.ac.in/biofin/. PMID:27379078

  3. Prediction of Biofilm Inhibiting Peptides: An In silico Approach

    PubMed Central

    Gupta, Sudheer; Sharma, Ashok K.; Jaiswal, Shubham K.; Sharma, Vineet K.

    2016-01-01

    Approximately 75% of microbial infections found in humans are caused by microbial biofilms. These biofilms are resistant to the host immune system and to most of the currently available antibiotics. Small peptides have been extensively studied for their role as anti-microbial peptides; however, only a limited number of studies have shown their potential as inhibitors of biofilm. Therefore, to develop a unique computational method aimed at the prediction of biofilm inhibiting peptides, the sequences of experimentally validated biofilm inhibiting peptides were used to extract sequence-based features and to identify unique sequence motifs. Biofilm inhibiting peptides were observed to be abundant in positively charged and aromatic amino acids, and also showed selective abundance of some dipeptides and sequence motifs. These individual sequence-based features were utilized to construct Support Vector Machine-based prediction models, and by additionally including sequence motif information, hybrid models were constructed. Using 10-fold cross-validation, the hybrid model displayed an accuracy and Matthews Correlation Coefficient (MCC) of 97.83% and 0.87, respectively. On the validation dataset, the hybrid model showed an accuracy and MCC value of 97.19% and 0.84, respectively. The validated model and other tools developed for the prediction of biofilm inhibiting peptides are available freely as a web server at http://metagenomics.iiserb.ac.in/biofin/ and http://metabiosys.iiserb.ac.in/biofin/. PMID:27379078
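
    A hedged sketch of the kind of sequence-feature pipeline described above: dipeptide-composition vectors feeding an SVM scored by 10-fold cross-validation. The peptide sequences and labels are toy placeholders (and are duplicated, so the score is optimistic), not the study's training data:

```python
from itertools import product
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AA, repeat=2)]

def dipeptide_composition(seq):
    """Normalized counts of all 400 dipeptides in a peptide sequence."""
    counts = dict.fromkeys(DIPEPTIDES, 0)
    for i in range(len(seq) - 1):
        counts[seq[i:i + 2]] += 1
    total = max(len(seq) - 1, 1)
    return [counts[dp] / total for dp in DIPEPTIDES]

seqs = ["KWKLFKKIEK", "ACDEFGHIKL", "RRWWRRWWRR", "GGSGGSGGSG"] * 10  # toy peptides
labels = [1, 0, 1, 0] * 10           # 1 = biofilm-inhibiting (toy labels)
X = np.array([dipeptide_composition(s) for s in seqs])
print("10-fold CV accuracy:", cross_val_score(SVC(), X, labels, cv=10).mean())
```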

  4. An Empirical Study of Univariate and Genetic Algorithm-Based Feature Selection in Binary Classification with Microarray Data

    PubMed Central

    Lecocke, Michael; Hess, Kenneth

    2007-01-01

    Background We consider both univariate- and multivariate-based feature selection for the problem of binary classification with microarray data. The idea is to determine whether the more sophisticated multivariate approach leads to better misclassification error rates because of the potential to consider jointly significant subsets of genes (but without overfitting the data). Methods We present an empirical study in which 10-fold cross-validation is applied externally to both a univariate-based and two multivariate- (genetic algorithm (GA)-) based feature selection processes. These procedures are applied with respect to three supervised learning algorithms and six published two-class microarray datasets. Results Considering all datasets and learning algorithms, the average 10-fold external cross-validation error rates for the univariate-, single-stage GA-, and two-stage GA-based processes are 14.2%, 14.6%, and 14.2%, respectively. We also find that the optimism bias estimates from the GA analyses were half those of the univariate approach, but the selection bias estimates from the GA analyses were 2.5 times those of the univariate results. Conclusions We find that the 10-fold external cross-validation misclassification error rates were very comparable. Further, we find that a two-stage GA approach did not demonstrate a significant advantage over a one-stage approach. We also find that the univariate approach had higher optimism bias and lower selection bias compared to both GA approaches. PMID:19458774
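
    A minimal sketch of "external" cross-validation as used above: the univariate gene screen is re-fitted inside each fold via a pipeline, so selection never sees the held-out arrays. The data are synthetic placeholders:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.normal(size=(80, 2000))                      # 80 arrays x 2000 genes, synthetic
y = (X[:, 0] + rng.normal(size=80) > 0).astype(int)  # two classes

# Wrapping the gene screen in the pipeline confines it to each training fold,
# so the misclassification estimate is "external" to feature selection.
pipe = make_pipeline(SelectKBest(f_classif, k=50), KNeighborsClassifier(3))
error = 1 - cross_val_score(pipe, X, y, cv=StratifiedKFold(10)).mean()
print(f"External 10-fold CV misclassification rate: {error:.3f}")
```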

  5. Sex estimation from the tarsal bones in a Portuguese sample: a machine learning approach.

    PubMed

    Navega, David; Vicente, Ricardo; Vieira, Duarte N; Ross, Ann H; Cunha, Eugénia

    2015-05-01

    Sex estimation is extremely important in the analysis of human remains as many of the subsequent biological parameters are sex specific (e.g., age at death, stature, and ancestry). When dealing with incomplete or fragmented remains, metric analysis of the tarsal bones of the feet has proven valuable. In this study, the utility of 18 width, length, and height tarsal measurements was assessed for sex-related variation in a Portuguese sample. A total of 300 males and females from the Coimbra Identified Skeletal Collection were used to develop sex prediction models based on statistical and machine learning algorithms such as discriminant function analysis, logistic regression, classification trees, and artificial neural networks. All models were evaluated using 10-fold cross-validation and an independent test sample composed of 60 males and females from the Identified Skeletal Collection of the 21st Century. Results showed that tarsal bone sex-related variation can be easily captured with a high degree of repeatability. A simple tree-based multivariate algorithm involving measurements from the calcaneus, talus, first and third cuneiforms, and cuboid resulted in 88.3% correct sex estimation on both the training and independent test sets. Traditional statistical classifiers such as discriminant function analysis were outperformed by machine learning techniques. The results obtained show that machine learning algorithms are an important tool that forensic practitioners should consider when developing new standards for sex estimation.

  6. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J.; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm's burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
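
    A minimal sketch of the outlier-removal idea, assuming a simple per-class z-score filter in place of the authors' multistage Z-test and univariate analysis, and comparing 10-fold cross-validation accuracy with and without the injected outliers:

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def remove_outliers(X, y, z_thresh=3.0):
          # drop training samples whose features deviate strongly from their class mean
          keep = np.ones(len(y), dtype=bool)
          for c in np.unique(y):
              idx = np.where(y == c)[0]
              Xc = X[idx]
              z = np.abs((Xc - Xc.mean(0)) / (Xc.std(0) + 1e-12))
              keep[idx] &= (z.max(axis=1) < z_thresh)
          return X[keep], y[keep]

      rng = np.random.default_rng(3)
      X = rng.normal(size=(300, 8))                   # toy multispectral features
      y = (X[:, 0] + X[:, 1] > 0).astype(int)         # toy tissue classes
      X[:15] += rng.normal(scale=8.0, size=(15, 8))   # corrupt some training samples

      Xc, yc = remove_outliers(X, y)
      for name, (A, b) in {"with outliers": (X, y), "cleaned": (Xc, yc)}.items():
          acc = cross_val_score(SVC(), A, b, cv=10).mean()
          print(name, "10-fold CV accuracy: %.3f" % acc)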

  9. puffMarker: A Multi-Sensor Approach for Pinpointing the Timing of First Lapse in Smoking Cessation

    PubMed Central

    Saleheen, Nazir; Ali, Amin Ahsan; Hossain, Syed Monowar; Sarker, Hillol; Chatterjee, Soujanya; Marlin, Benjamin; Ertin, Emre; al’Absi, Mustafa; Kumar, Santosh

    2015-01-01

    Recent research has demonstrated the feasibility of detecting smoking from wearable sensors, but performance on real-life smoking lapse detection is unknown. In this paper, we propose a new model and evaluate its performance on 61 newly abstinent smokers for detecting a first lapse. We use two wearable sensors: breathing pattern from respiration and arm movements from 6-axis inertial sensors worn on the wrists. In 10-fold cross-validation on 40 hours of training data from 6 daily smokers, our model achieves a recall rate of 96.9% at a false positive rate of 1.1%. When our model is applied to 3 days of post-quit data from 32 lapsers, it correctly pinpoints the timing of the first lapse in 28 participants. Only 2 false episodes are detected on 20 abstinent days of these participants. When tested on 84 abstinent days from 28 abstainers, the rate of false episodes is limited to 1/6 per day. PMID:26543927

  10. Cross-Validation of the Self-Motivation Inventory.

    ERIC Educational Resources Information Center

    Heiby, Elaine M.; And Others

    Because the literature suggests that aerobic exercise is associated with physical health and psychological well-being, there is a concern with discovering how to improve adherence to such exercise. There is growing evidence that self-motivation, as measured by the Dishman Self-Motivation Inventory (SMI), is a predictor of adherence to regular…

  11. Cross-validation of resting metabolic rate prediction equations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Background: Knowledge of the resting metabolic rate (RMR) is necessary for determining individual total energy requirements. Measurement of RMR is time consuming and requires specialized equipment. Prediction equations provide an easy method to estimate RMR; however, the accuracy of these equations...

  12. An L1 smoothing spline algorithm with cross validation

    NASA Astrophysics Data System (ADS)

    Bosworth, Ken W.; Lall, Upmanu

    1993-08-01

    We propose an algorithm for the computation of L1 (LAD) smoothing splines in the spaces W_M(D). We assume one is given data of the form y_i = f(t_i) + ε_i, i = 1, ..., N, with {t_i}_{i=1}^N ⊂ D, where the ε_i are errors with E(ε_i) = 0 and f is assumed to be in W_M. The LAD smoothing spline, for fixed smoothing parameter λ ≥ 0, is defined as the solution, s_λ, of the optimization problem: minimize over g the objective (1/N) Σ_{i=1}^N |y_i - g(t_i)| + λ J_M(g), where J_M(g) is the seminorm consisting of the sum of the squared L2 norms of the Mth partial derivatives of g. Such an LAD smoothing spline, s_λ, would be expected to give robust smoothed estimates of f in situations where the ε_i are from a distribution with heavy tails. The solution to such a problem is a "thin plate spline" of known form. An algorithm for computing s_λ is given which is based on considering a sequence of quadratic programming problems whose structure is guided by the optimality conditions for the above convex minimization problem, and which are solved readily, if a good initial point is available. The "data driven" selection of the smoothing parameter is achieved by minimizing a CV(λ) score. The combined LAD-CV smoothing spline algorithm is a continuation scheme in λ ↘ 0 taken on the above SQPs parametrized in λ, with the optimal smoothing parameter taken to be that value of λ at which the CV(λ) score first begins to increase. The feasibility of constructing the LAD-CV smoothing spline is illustrated by an application to a problem in environmental data interpretation.
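
    The data-driven choice of the smoothing parameter can be illustrated with ordinary (L2) smoothing splines, which SciPy provides. The sketch below scores a grid of smoothing factors by k-fold cross-validated absolute error and keeps the minimizer; this is a stand-in, since the paper's algorithm uses LAD splines and a continuation scheme that stops where the CV(λ) score first begins to increase.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(4)
      t = np.linspace(0, 1, 80)
      # heavy-tailed noise, the setting where LAD splines shine
      y = np.sin(2 * np.pi * t) + 0.2 * rng.standard_t(df=2, size=t.size)

      def cv_score(s, k=5):
          # k-fold cross-validated absolute-error score for smoothing factor s
          folds = np.array_split(rng.permutation(t.size), k)
          err = 0.0
          for hold in folds:
              mask = np.ones(t.size, bool)
              mask[hold] = False
              spl = UnivariateSpline(t[mask], y[mask], s=s)
              err += np.abs(y[hold] - spl(t[hold])).sum()
          return err / t.size

      grid = [0.5, 1, 2, 4, 8, 16]
      scores = [cv_score(s) for s in grid]
      print("CV-selected smoothing factor:", grid[int(np.argmin(scores))])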

  13. Evaluation and cross-validation of Environmental Models

    NASA Astrophysics Data System (ADS)

    Lemaire, Joseph

    Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a Commission of professional experts appointed by an established International Union or Association (e.g. IAGA for Geomagnetism and Aeronomy, . . . ) should have been able to study, document, evaluate and validate the best alternative models available at a given epoch. Examples will be given, indicating that different values for the Earth radius have been employed in different data processing laboratories, institutes or agencies, to process, analyse or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to the other, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic model which had been employed to compile B and L, and thus to build these environmental models. These are just some flagrant examples of inconsistencies and misuses identified so far; there are probably more of them to be uncovered by careful, independent examination and benchmarking. A useful precedent is the meter prototype, the standard unit of length, which was adopted on 20 May 1875 during the Diplomatic Conference of the Meter and deposited at the BIPM (Bureau International des Poids et Mesures). By the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken to prevent the wild, uncontrolled dissemination of pseudo Environmental Models and Standards. Indeed, unless validation tests have been performed, there is no guarantee, a priori, that all models on the market place have been built consistently with the same units system, and that they are based on identical definitions for the coordinate systems, etc. Therefore, preliminary analyses should be carried out under the control and authority of an established international professional Organization or Association, before any final political decision is made by ISO to select a specific Environmental Model, like for example IGRF and DGRF. Of course, Commissions responsible for checking the consistency of definitions, methods and algorithms for data processing might consider delegating specific tasks (e.g. bench-marking the technical tools, the calibration procedures, the methods of data analysis, and the software algorithms employed in building the different types of models, as well as their usage) to private, intergovernmental or international organizations/agencies (e.g.: NASA, ESA, AGU, EGU, COSPAR, . . . ); eventually, the latter should report conclusions to the Commission members appointed by IAGA or any established authority like IUGG.

  14. [Cross validity of the UCLA Loneliness Scale factorization].

    PubMed

    Borges, Africa; Prieto, Pedro; Ricchetti, Giacinto; Hernández-Jorge, Carmen; Rodríguez-Naveiras, Elena

    2008-11-01

    Loneliness is an unpleasant experience that takes place when a person's network of social relationships is significantly deficient in quality and quantity, and it is associated with negative feelings. Loneliness is a fundamental construct that provides information about several psychological processes, especially in the clinical setting. It is well known that this construct is related to isolation and emotional loneliness. One of the most well-known psychometric instruments to measure loneliness is the revised UCLA Loneliness Scale, which has been factorized in several populations. A controversial issue related to the UCLA Loneliness Scale is its factor structure, because the test was first created based on a unidimensional structure; however, subsequent research has proved that its structure may be bipolar or even multidimensional. In the present work, the UCLA Loneliness Scale was completed by two populations: Spanish and Italian undergraduate university students. Results show a multifactorial structure in both samples. This research presents a theoretically and analytically coherent bifactorial structure. PMID:18940104

  15. Translational approaches to anxiety: focus on genetics, fear extinction and brain imaging.

    PubMed

    Erhardt, Angelika; Spoormaker, Victor I

    2013-12-01

    Anxiety disorders are highly prevalent and debilitating psychiatric disorders. Owing to the complex aetiology of anxiety disorders, translational studies involving multiple approaches, including human and animal genetics, molecular, endocrinological and imaging studies, are needed to get a converging picture of function or dysfunction of anxiety-related circuits. An advantage in studying anxiety disorders is that the neural circuitry of fear is comparatively well understood, with striking analogies between animal and human models, and this article aims to provide a brief overview of current translational approaches to anxiety. Experimental models that involve similar tasks in animals and humans, such as fear conditioning and extinction, seem particularly promising and can be readily integrated with imaging, behavioural and physiological readouts. The cross-validation between animal and human genetics models is essential to examine the relevance of candidate genes, as well as their neural pathways, for anxiety disorders; a recent example of such cross-validation work is provided by preclinical and clinical work on TMEM132D, which has been identified as a candidate gene for panic disorder. Further integration of epigenetic data and gene × environment interactions is a promising approach, as highlighted by FKBP5 and PACAP in early life trauma and stress-related anxiety disorders. Finally, connecting genetic and epigenetic data with functionally relevant imaging readouts will allow a comparison of overlap and differences across species in mechanistic pathways from genes to brain functioning and behaviour.

  16. Validation of the modified Becker's split-window approach for retrieving land surface temperature from AVHRR

    NASA Astrophysics Data System (ADS)

    Quan, Weijun; Chen, Hongbin; Han, Xiuzhen; Ma, Zhiqiang

    2015-10-01

    To further verify the modified Becker's split-window approach for retrieving land surface temperature (LST) from long-term Advanced Very High Resolution Radiometer (AVHRR) data, a cross-validation and a radiance-based (R-based) validation are performed and examined in this paper. In the cross-validation, 3481 LST data pairs are extracted from the AVHRR LST product retrieved with the modified Becker's approach and compared with the Moderate Resolution Imaging Spectroradiometer (MODIS) LST product (MYD11A1) for the period 2002-2008, relative to the positions of 548 weather stations in China. The results show that in most cases, the AVHRR LST values are higher than the MYD11A1 values. When the AVHRR LSTs are adjusted with a linear regression, the values are close to the MYD11A1, showing a good linear relationship between the two datasets (R² = 0.91). In the R-based validation, a comparison is made between the AVHRR LST retrieved from the modified Becker's approach and the LST inverted from the Moderate Resolution Transmittance Model (MODTRAN) consolidated with observed temperature and humidity profiles at four radiosonde stations. The results show that the retrieved AVHRR LST deviates from the MODTRAN-inverted LST by -1.3 (-2.5) K when the total water vapor amount is less (larger) than 20 mm. This provides useful hints for further improvement of the accuracy and consistency of LST retrieval algorithms.
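
    A small sketch of the linear-regression adjustment step, using synthetic LST pairs in place of the AVHRR/MYD11A1 match-ups: the fit maps AVHRR values onto the MODIS scale, and the R² of the adjusted values is computed directly.

      import numpy as np

      rng = np.random.default_rng(5)
      modis = rng.uniform(260, 320, size=3481)                  # reference LSTs (K), toy values
      avhrr = modis + 3.0 + rng.normal(0, 2, modis.size)        # biased-high retrievals

      a, b = np.polyfit(avhrr, modis, 1)   # linear adjustment AVHRR -> MODIS scale
      adjusted = a * avhrr + b
      ss_res = np.sum((modis - adjusted) ** 2)
      ss_tot = np.sum((modis - modis.mean()) ** 2)
      print("R^2 after adjustment: %.2f" % (1 - ss_res / ss_tot))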

  17. A novel approach to CAD system for the detection of lung nodules in CT images.

    PubMed

    Javaid, Muzzamil; Javid, Moazzam; Rehman, Muhammad Zia Ur; Shah, Syed Irtiza Ali

    2016-10-01

    Detection of pulmonary nodules plays a significant role in the early-stage diagnosis of lung cancer, which improves an individual's chances of survival. In this paper, a computer aided nodule detection method is proposed for the segmentation and detection of challenging nodules like juxtavascular and juxtapleural nodules. Lungs are segmented from computed tomography (CT) images using intensity thresholding; a brief analysis of the CT image histogram is done to select a suitable threshold value for better segmentation results. Simple morphological closing is used to include juxtapleural nodules in the segmented lung regions. K-means clustering is applied for the initial detection and segmentation of potential nodules; shape-specific morphological opening is implemented to refine segmentation outcomes. These segmented potential nodules are then divided into six groups on the basis of their thickness and percentage connectivity with lung walls. Grouping not only helped in improving the system's efficiency but also reduced the computational time otherwise consumed in calculating and analyzing unnecessary features for all nodules. Different sets of 2D and 3D features are extracted from nodules in each group to eliminate false positives. Small nodules are differentiated from false positives (FPs) on the basis of their salient features; the sensitivity of the system for small nodules is 83.33%. An SVM classifier is used for the classification of large nodules, for which the sensitivity of the proposed system is 93.8% applying 10-fold cross-validation. A Receiver Operating Characteristic (ROC) curve is used for the analysis of the CAD system. The overall sensitivity of the system is 91.65% with 3.19 FPs per case, and the accuracy is 96.22%. The system took 3.8 seconds to analyze each image. PMID:27586486
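
    The headline numbers (overall sensitivity and false positives per case) follow from simple candidate-level counts. A sketch with toy candidate labels and hypothetical case identifiers:

      import numpy as np

      def nodule_detection_stats(y_true, y_pred, case_ids):
          # overall sensitivity and false positives per case for candidate nodules
          y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
          tp = np.sum((y_true == 1) & (y_pred == 1))
          fn = np.sum((y_true == 1) & (y_pred == 0))
          fp = np.sum((y_true == 0) & (y_pred == 1))
          sensitivity = tp / (tp + fn)
          fps_per_case = fp / len(set(case_ids))
          return sensitivity, fps_per_case

      # toy candidates: true label, prediction, originating CT case
      y_true = [1, 1, 0, 0, 1, 0, 0, 1]
      y_pred = [1, 0, 1, 0, 1, 0, 1, 1]
      cases  = [1, 1, 1, 2, 2, 3, 3, 3]
      print(nodule_detection_stats(y_true, y_pred, cases))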

  19. A novel fractal approach for predicting G-protein-coupled receptors and their subfamilies with support vector machines.

    PubMed

    Nie, Guoping; Li, Yong; Wang, Feichi; Wang, Siwen; Hu, Xuehai

    2015-01-01

    G-protein-coupled receptors (GPCRs) are seven-transmembrane proteins that regulate many important physiological processes, such as vision, neurotransmission and immune response. GPCR-related pathways are the targets of a large number of marketed drugs. Therefore, the design of a reliable computational model for predicting GPCRs from amino acid sequence has long been a significant biomedical problem. Chaos game representation (CGR) reveals the fractal patterns hidden in protein sequences, and fractal dimension (FD) is a concise mathematical descriptor of these highly irregular geometries. Here, in order to extract important features from GPCR protein sequences, the CGR algorithm, fractal dimension and amino acid composition (AAC) are employed to formulate the numerical features of protein samples. Four groups of features are considered, and each group is evaluated by a support vector machine (SVM) with a 10-fold cross-validation test. To test the performance of the present method, a new non-redundant dataset was built based on the latest GPCRDB database. Comparing the results of the numerical experiments, the group combining AAC and FD features gives the best result: the accuracy is 99.22% and the Matthews correlation coefficient (MCC) is 0.9845 for identifying GPCRs from non-GPCRs. Moreover, if a sequence is classified as a GPCR, it is passed to a second level, which classifies it into one of the five main subfamilies. At this level, the group combining AAC and FD features also achieves the best accuracy, 85.73%. Finally, the proposed predictor is also compared with existing methods and shows better performance.
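
    A compact sketch of the CGR and fractal-dimension features, assuming a regular-20-gon vertex layout for the amino acids and a box-counting estimate of FD; the toy sequence is for illustration only, and stable estimates require much longer sequences.

      import numpy as np

      AA = "ACDEFGHIKLMNPQRSTVWY"
      # vertices of a regular 20-gon, one per amino acid (an assumed CGR layout)
      VERTS = {a: (np.cos(2 * np.pi * i / 20), np.sin(2 * np.pi * i / 20))
               for i, a in enumerate(AA)}

      def cgr_points(seq):
          pts, p = [], np.zeros(2)
          for a in seq.upper():
              if a in VERTS:
                  p = (p + np.asarray(VERTS[a])) / 2.0   # midpoint iteration
                  pts.append(p.copy())
          return np.array(pts)

      def box_counting_dimension(pts, sizes=(2, 4, 8, 16, 32)):
          counts = []
          for n in sizes:
              # count occupied cells on an n x n grid over [-1, 1]^2
              idx = np.clip(np.floor((pts + 1.0) / 2.0 * n).astype(int), 0, n - 1)
              counts.append(len({tuple(i) for i in idx}))
          # slope of log(count) vs log(grid resolution) estimates the FD
          slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
          return slope

      seq = "MGAGASAEEKHSRELEKKLKEDAEKDARTVKLLLLG"  # toy sequence
      print("estimated FD:", box_counting_dimension(cgr_points(seq)))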

  1. A Bayesian Shrinkage Approach for AMMI Models.

    PubMed

    da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio

    2015-01-01

    Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criterion for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible intervals for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, with more of the GEI pattern retained in the first two components; the models selected were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and by leave-one-out cross-validation. The model chosen by the posterior distribution of the singular values was also similar to that produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible intervals for the AMMI biplot plus the choice of AMMI model based on direct posterior

  2. Hyperdimensional Computing Approach to Word Sense Disambiguation

    PubMed Central

    Berster, Bjoern-Toby; Goodwin, J Caleb; Cohen, Trevor

    2012-01-01

    Coping with the ambiguous meanings of words has long been a hurdle for information retrieval and natural language processing systems. This paper presents a new word sense disambiguation approach using high-dimensional binary vectors, which encode meanings of words based on the different contexts in which they occur. In our approach, a randomly constructed vector is assigned to each ambiguous term, and another to each sense of this term. In the context of a sense-annotated training set, a reversible vector transformation is used to combine these vectors, such that both the term and the sense assigned to a context in which the term occurs are encoded into vectors representing the surrounding terms in this context. When a new context is encountered, the information required to disambiguate this term is extracted from the trained semantic vectors for the terms in this context by reversing the vector transformation to recover the correct sense of the term. On repeated experiments using ten-fold cross-validation and a standard test set, we obtained results comparable to the best obtained in previous studies. These results demonstrate the potential of our methodology, and suggest directions for future research. PMID:23304389
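
    The reversible vector transformation can be illustrated with XOR binding, a standard hyperdimensional-computing operation that is its own inverse. The sketch below encodes a single sense-annotated context and recovers the sense by similarity; the full method additionally superposes vectors over all surrounding terms, which this toy omits.

      import numpy as np

      rng = np.random.default_rng(6)
      D = 10_000                      # dimensionality of the binary hypervectors

      def rand_hv():
          return rng.integers(0, 2, size=D, dtype=np.uint8)

      def bind(a, b):
          return np.bitwise_xor(a, b)  # XOR binding is its own inverse

      def hamming_sim(a, b):
          return 1.0 - np.count_nonzero(a != b) / D

      term = rand_hv()                 # vector for the ambiguous term
      senses = {"river_bank": rand_hv(), "money_bank": rand_hv()}

      # encode a training context: bind the term with its annotated sense
      context_memory = bind(term, senses["money_bank"])

      # disambiguate: reverse the binding and find the closest sense vector
      probe = bind(context_memory, term)   # recovers the sense vector exactly here
      best = max(senses, key=lambda s: hamming_sim(probe, senses[s]))
      print("recovered sense:", best)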

  3. A new approach to training back-propagation artificial neural networks: empirical evaluation on ten data sets from clinical studies.

    PubMed

    Ciampi, Antonio; Zhang, Fulin

    2002-05-15

    We present a new approach to training back-propagation artificial neural nets (BP-ANN) based on regularization and cross-validation and on initialization by a logistic regression (LR) model. The new approach is expected to produce a BP-ANN predictor at least as good as the LR-based one. We have applied the approach to ten data sets of biomedical interest and systematically compared BP-ANN and LR. In all data sets, taking deviance as criterion, the BP-ANN predictor outperforms the LR predictor used in the initialization, and in six cases the improvement is statistically significant. The other evaluation criteria used (C-index, MSE and error rate) yield variable results, but, on the whole, confirm that, in practical situations of clinical interest, proper training may significantly improve the predictive performance of a BP-ANN.
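
    A minimal numpy sketch of the initialization idea, under the assumption that the network includes a direct input-to-output (skip) path: the logistic regression solution seeds that path, the hidden layer starts with zero output weights, and the untrained net therefore reproduces the LR predictor exactly, so subsequent backpropagation can only improve on it.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      rng = np.random.default_rng(7)
      X = rng.normal(size=(200, 5))
      y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) + 0.3 > 0).astype(float)

      # Step 1: fit a logistic regression by simple gradient descent.
      w, b = np.zeros(5), 0.0
      for _ in range(2000):
          p = sigmoid(X @ w + b)
          w -= 0.1 * X.T @ (p - y) / len(y)
          b -= 0.1 * np.mean(p - y)

      # Step 2: build a one-hidden-layer net whose skip path reproduces the LR
      # predictor at initialization (hidden-to-output weights start at zero).
      H = 4
      W1 = rng.normal(scale=0.01, size=(5, H))
      b1 = np.zeros(H)
      V = np.zeros(H)                  # hidden contribution starts at 0
      p0 = sigmoid(X @ w + np.tanh(X @ W1 + b1) @ V + b)
      assert np.allclose(p0, sigmoid(X @ w + b))  # net starts at the LR solution
      # Backpropagation (with a deviance loss plus regularization, as in the
      # paper) would then train W1, b1, V, w and b jointly.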

  4. The H50Q mutation induces a 10-fold decrease in the solubility of α-synuclein.

    PubMed

    Porcari, Riccardo; Proukakis, Christos; Waudby, Christopher A; Bolognesi, Benedetta; Mangione, P Patrizia; Paton, Jack F S; Mullin, Stephen; Cabrita, Lisa D; Penco, Amanda; Relini, Annalisa; Verona, Guglielmo; Vendruscolo, Michele; Stoppini, Monica; Tartaglia, Gian Gaetano; Camilloni, Carlo; Christodoulou, John; Schapira, Anthony H V; Bellotti, Vittorio

    2015-01-23

    The mechanism by which α-synuclein converts from its intrinsically disordered monomeric state into the fibrillar cross-β aggregates characteristically present in Lewy bodies is largely unknown. The investigation of α-synuclein variants causative of familial forms of Parkinson disease can provide unique insights into the conditions that promote or inhibit aggregate formation. It has been shown recently that a newly identified pathogenic mutation of α-synuclein, H50Q, aggregates faster than the wild-type. We investigate here its aggregation propensity by using a sequence-based prediction algorithm, NMR chemical shift analysis of secondary structure populations in the monomeric state, and determination of thermodynamic stability of the fibrils. Our data show that the H50Q mutation induces only a small increment in polyproline II structure around the site of the mutation and a slight increase in the overall aggregation propensity. We also find, however, that the H50Q mutation strongly stabilizes α-synuclein fibrils by 5.0 ± 1.0 kJ mol(-1), thus increasing the supersaturation of monomeric α-synuclein within the cell, and strongly favors its aggregation process. We further show that wild-type α-synuclein can decelerate the aggregation kinetics of the H50Q variant in a dose-dependent manner when coaggregating with it. These last findings suggest that the precise balance of α-synuclein synthesized from the wild-type and mutant alleles may influence the natural history and heterogeneous clinical phenotype of Parkinson disease. PMID:25505181

  5. Prediction of 10-fold coordinated TiO2 and SiO2 structures at multimegabar pressures

    PubMed Central

    Lyle, Matthew J.; Pickard, Chris J.; Needs, Richard J.

    2015-01-01

    We predict by first-principles methods a phase transition in TiO2 at 6.5 Mbar from the Fe2P-type polymorph to a ten-coordinated structure with space group I4/mmm. This is the first report, to our knowledge, of the pressure-induced phase transition to the I4/mmm structure among all dioxide compounds. The I4/mmm structure was found to be up to 3.3% denser across all pressures investigated. Significant differences were found in the electronic properties of the two structures, and the metallization of TiO2 was calculated to occur concomitantly with the phase transition to I4/mmm. The implications of our findings were extended to SiO2, and an analogous Fe2P-type to I4/mmm transition was found to occur at 10 TPa. This is consistent with the lower-pressure phase transitions of TiO2, which are well-established models for the phase transitions in other AX2 compounds, including SiO2. As in TiO2, the transition to I4/mmm corresponds to the metallization of SiO2. This transformation is in the pressure range reached in the interiors of recently discovered extrasolar planets and calls for a reformulation of the equations of state used to model them. PMID:25991859

  8. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French Space Agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements, from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits, will be discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0 cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  9. A Predictive Approach to Network Reverse-Engineering

    NASA Astrophysics Data System (ADS)

    Wiggins, Chris

    2005-03-01

    A central challenge of systems biology is the "reverse engineering" of transcriptional networks: inferring which genes exert regulatory control over which other genes. Attempting such inference at the genomic scale has only recently become feasible, via data-intensive biological innovations such as DNA microarrays ("DNA chips") and the sequencing of whole genomes. In this talk we present a predictive approach to network reverse-engineering, in which we integrate DNA chip data and sequence data to build a model of the transcriptional network of the yeast S. cerevisiae capable of predicting the response of genes in unseen experiments. The technique can also be used to extract "motifs," sequence elements which act as binding sites for regulatory proteins. We validate by a number of approaches and present comparisons of theoretical predictions vs. experimental data, along with biological interpretations of the resulting model. En route, we will illustrate some basic notions in statistical learning theory (fitting vs. over-fitting; cross-validation; assessing statistical significance), highlighting ways in which physicists can make a unique contribution in data-driven approaches to reverse engineering.

  10. A single vs. multi-sensor approach to enhanced detection of smartphone placement.

    PubMed

    Guiry, John J; Karr, Chris J; van de Ven, Pepijn; Nelson, John; Begale, Mark

    2014-01-01

    In this paper, the authors evaluate the ability to detect on-body device placement of smartphones. A feasibility study is undertaken with N=5 participants to identify nine key locations, including in the hand, thigh and backpack, using a multitude of commonly available smartphone sensors. Sensors examined include the accelerometer, magnetometer, gyroscope, pressure and light sensors. Each sensor is examined independently, to identify the potential contributions it can offer, before a fused approach, using all sensors, is adopted. A total of 139 features are generated from these sensors, and used to train five machine learning algorithms, i.e. C4.5, CART, Naïve Bayes, Multilayer Perceptrons, and Support Vector Machines. Ten-fold cross validation is used to validate these models, achieving classification results as high as 99%. PMID:25570792
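
    A sketch of the comparison protocol with scikit-learn stand-ins for the five learners (scikit-learn has no exact C4.5 or Weka-style perceptron, so an entropy-based tree and MLPClassifier are used in their place) and random placeholder features:

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neural_network import MLPClassifier
      from sklearn.svm import SVC

      rng = np.random.default_rng(8)
      X = rng.normal(size=(500, 139))    # toy stand-in for the 139 sensor features
      y = rng.integers(0, 9, size=500)   # nine on-body placement classes

      models = {
          "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
          "CART": DecisionTreeClassifier(criterion="gini"),
          "Naive Bayes": GaussianNB(),
          "MLP": MLPClassifier(max_iter=500),
          "SVM": SVC(),
      }
      for name, m in models.items():
          acc = cross_val_score(m, X, y, cv=10).mean()   # 10-fold CV per learner
          print("%s: %.3f" % (name, acc))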

  12. Artificial neural network approach to modelling of metal contents in different types of chocolates.

    PubMed

    Podunavac-Kuzmanović, Sanja; Jevrić, Lidija; Švarc-Gajić, Jaroslava; Kovačević, Strahinja; Vasiljević, Ivana; Kecojević, Isidora; Ivanović, Evica

    2015-01-01

    The relationships between the contents of various metals in different types of chocolates were studied using a chemometric approach based on artificial neural networks (ANN). ANN modeling was performed in order to select significant models for predicting the metal contents; ANN equations that represent the content of one metal as a function of the contents of other metals were established. The statistical quality of the generated mathematical models was determined by standard statistical measures and cross-validation parameters. High agreement between experimental and predicted values, obtained in the validation procedure, indicated the good quality of the models. The obtained results indicate the possibility of predicting the metal contents in different types of chocolate. PMID:25830975

  13. Origin of aromatase inhibitory activity via proteochemometric modeling.

    PubMed

    Simeon, Saw; Spjuth, Ola; Lapins, Maris; Nabu, Sunanta; Anuwongcharoen, Nuttapat; Prachayasittikul, Virapong; Wikberg, Jarl E S; Nantasenamat, Chanin

    2016-01-01

    Aromatase, the rate-limiting enzyme that catalyzes the conversion of androgen to estrogen, plays an essential role in the development of estrogen-dependent breast cancer. Side effects due to aromatase inhibitors (AIs) necessitate the pursuit of novel inhibitor candidates with high selectivity, lower toxicity and increased potency. Designing a novel therapeutic agent against aromatase could be achieved computationally by means of ligand-based and structure-based methods. For over a decade, we have utilized both approaches to design potential AIs for which quantitative structure-activity relationships and molecular docking were used to explore inhibitory mechanisms of AIs towards aromatase. However, such approaches do not consider the effects that aromatase variants have on different AIs. In this study, proteochemometrics modeling was applied to analyze the interaction space between AIs and aromatase variants as a function of their substructural and amino acid features. Good predictive performance was achieved, as rigorously verified by 10-fold cross-validation, external validation, leave-one-compound-out cross-validation, leave-one-protein-out cross-validation and Y-scrambling tests. The investigations presented herein provide important insights into the mechanisms of aromatase inhibitory activity that could aid in the design of novel potent AIs as breast cancer therapeutic agents. PMID:27190705
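
    Of the validation tests listed, Y-scrambling is the easiest to illustrate: refitting on permuted responses should destroy the cross-validated performance of any model that has learned real structure. A sketch with a random forest and toy descriptors, not the authors' proteochemometric features:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(9)
      X = rng.normal(size=(150, 30))                        # toy compound x protein descriptors
      y = 2 * X[:, 0] - X[:, 3] + rng.normal(0, 0.5, 150)   # toy activity response

      model = RandomForestRegressor(n_estimators=100, random_state=0)
      true_q2 = cross_val_score(model, X, y, cv=10, scoring="r2").mean()

      # Y-scrambling: a real model should collapse toward zero on permuted y
      scrambled = [cross_val_score(model, X, rng.permutation(y),
                                   cv=10, scoring="r2").mean() for _ in range(5)]
      print("Q2 (true): %.2f, Q2 (scrambled): %s"
            % (true_q2, np.round(scrambled, 2)))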

  16. In-silico predictive mutagenicity model generation using supervised learning approaches

    PubMed Central

    2012-01-01

    Background Experimental screening of chemical compounds for biological activity is a time consuming and expensive practice. In silico predictive models permit inexpensive, rapid "virtual screening" to prioritize selection of compounds for experimental testing. Both experimental and in silico screening can be used to test compounds for desirable or undesirable properties. Prior work on prediction of mutagenicity has primarily involved identification of toxicophores rather than whole-molecule predictive models. In this work, we examined a range of in silico predictive classification models for prediction of mutagenic properties of compounds, including methods such as J48 and SMO which have not previously been widely applied in cheminformatics. Results The Bursi mutagenicity data set containing 4337 compounds (Set 1) and a Benchmark data set of 6512 compounds (Set 2) were taken as input data sets in this work. A third data set (Set 3) was prepared by joining the previous two sets. Classification algorithms including Naïve Bayes, Random Forest, J48 and SMO with 10-fold cross-validation and default parameters were used for model generation on these data sets. Models built using the combined data set performed better than those developed from the Benchmark data set. Significantly, Random Forest outperformed other classifiers for all the data sets, especially for Set 3, with 89.27% accuracy, 89% precision and an ROC area of 95.3%. To validate the developed models, two external data sets with mutagenicity data, AID1189 and AID1194, were tested, showing 62% accuracy with 67% precision and 65% ROC area, and 91% accuracy with 91% precision and 96.3% ROC area, respectively. A Random Forest model was used on approved drugs from DrugBank and metabolites from the Zinc Database with a true positive rate of almost 85%, showing the robustness of the model. Conclusion We have created a new mutagenicity benchmark data set with around 8,000 compounds. Our work shows that highly accurate predictive

  17. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding

    PubMed Central

    2013-01-01

    Background In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. Results The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. Conclusions The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the
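
    The indirect estimate used by Methods 1 to 4 is simply the predictive ability divided by the square root of an estimated heritability. A sketch with simulated breeding values, where h² = 0.5 by construction:

      import numpy as np

      def predictive_accuracy(y_obs, y_pred, h2):
          # indirect estimate: r(y_pred, y_obs) / sqrt(h2), dividing the
          # predictive ability (correlation with phenotypes) by sqrt(heritability)
          r = np.corrcoef(y_pred, y_obs)[0, 1]
          return r / np.sqrt(h2)

      rng = np.random.default_rng(10)
      g = rng.normal(size=300)                     # toy true breeding values
      y = g + rng.normal(scale=1.0, size=300)      # phenotype = genetic + residual
      y_hat = g + rng.normal(scale=0.5, size=300)  # toy cross-validated predictions
      print("estimated accuracy:", predictive_accuracy(y, y_hat, h2=0.5))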

  18. A novel logic-based approach for quantitative toxicology prediction.

    PubMed

    Amini, Ata; Muggleton, Stephen H; Lodhi, Huma; Sternberg, Michael J E

    2007-01-01

    There is a pressing need for accurate in silico methods to predict the toxicity of molecules that are being introduced into the environment or are being developed into new pharmaceuticals. Predictive toxicology is in the realm of structure activity relationships (SAR), and many approaches have been used to derive such SAR. Previous work has shown that inductive logic programming (ILP) is a powerful approach that circumvents several major difficulties, such as molecular superposition, faced by some other SAR methods. The ILP approach reasons with chemical substructures within a relational framework and yields chemically understandable rules. Here, we report a general new approach, support vector inductive logic programming (SVILP), which extends the essentially qualitative ILP-based SAR to quantitative modeling. First, ILP is used to learn rules, the predictions of which are then used within a novel kernel to derive a support-vector generalization model. For a highly heterogeneous dataset of 576 molecules with known fathead minnow fish toxicity, the cross-validated correlation coefficients (R²CV) from a chemical descriptor method (CHEM) and SVILP are 0.52 and 0.66, respectively. The ILP, CHEM, and SVILP approaches correctly predict 55, 58, and 73%, respectively, of toxic molecules. In a set of 165 unseen molecules, the R² values from the commercial software TOPKAT and SVILP are 0.26 and 0.57, respectively. In all calculations, SVILP showed significant improvements in comparison with the other methods. The SVILP approach has a major advantage in that it uses ILP automatically and consistently to derive rules, mostly novel, describing fragments that are toxicity alerts. The SVILP is a general machine-learning approach and has the potential of tackling many problems relevant to chemoinformatics including in silico drug design.

  19. Selecting Relevant Descriptors for Classification by Bayesian Estimates: A Comparison with Decision Trees and Support Vector Machines Approaches for Disparate Data Sets.

    PubMed

    Carbon-Mangels, Miriam; Hutter, Michael C

    2011-10-01

    Classification algorithms suffer from the curse of dimensionality, which leads to overfitting, particularly if the problem is over-determined. Therefore it is of particular interest to identify the most relevant descriptors to reduce the complexity. We applied Bayesian estimates to model the probability distribution of descriptor values used for binary classification using n-fold cross-validation. As a measure for the discriminative power of the classifiers, the symmetric form of the Kullback-Leibler divergence of their probability distributions was computed. We found that the most relevant descriptors possess a Gaussian-like distribution of their values, show the largest divergences, and therefore appear most often in the cross-validation scenario. The results were compared to those of the LASSO feature selection method applied to multiple decision trees and support vector machine approaches for data sets of substrates and nonsubstrates of three Cytochrome P450 isoenzymes, which comprise strongly unbalanced compound distributions. In contrast to decision trees and support vector machines, the performance of Bayesian estimates is less affected by unbalanced data sets. This strategy reveals those descriptors that allow a simple linear separation of the classes, whereas the superior accuracy of decision trees and support vector machines can be attributed to nonlinear separation, which is in turn more prone to overfitting.
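
    For one-dimensional Gaussian fits to the descriptor values of the two classes, the symmetric Kullback-Leibler divergence has a closed form. A sketch with toy descriptor values, where a larger divergence indicates a more discriminative descriptor:

      import numpy as np

      def symmetric_kl_gaussian(mu1, s1, mu2, s2):
          # symmetric Kullback-Leibler divergence of two 1-D Gaussians
          kl12 = np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5
          kl21 = np.log(s1 / s2) + (s2**2 + (mu1 - mu2)**2) / (2 * s1**2) - 0.5
          return kl12 + kl21

      # descriptor values for substrates vs. nonsubstrates (toy data)
      rng = np.random.default_rng(11)
      d_pos = rng.normal(2.5, 0.8, size=120)
      d_neg = rng.normal(0.5, 1.1, size=400)
      div = symmetric_kl_gaussian(d_pos.mean(), d_pos.std(),
                                  d_neg.mean(), d_neg.std())
      print("descriptor divergence: %.2f" % div)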

  20. Discovery of Potent Succinate-Ubiquinone Oxidoreductase Inhibitors via Pharmacophore-linked Fragment Virtual Screening Approach.

    PubMed

    Xiong, Li; Zhu, Xiao-Lei; Gao, Hua-Wei; Fu, Yu; Hu, Sheng-Quan; Jiang, Li-Na; Yang, Wen-Chao; Yang, Guang-Fu

    2016-06-22

    Succinate-ubiquinone oxidoreductase (SQR) is an attractive target for fungicide discovery. Herein, we report the discovery of novel SQR inhibitors using a pharmacophore-linked fragment virtual screening approach, a new drug design method developed in our laboratory. Among newly designed compounds, compound 9s was identified as the most potent inhibitor with a Ki value of 34 nM against porcine SQR, displaying approximately 10-fold higher potency than that of the commercial control penthiopyrad. Further inhibitory kinetics studies revealed that compound 9s is a noncompetitive inhibitor with respect to the substrate cytochrome c and DCIP. Interestingly, compounds 8a, 9h, 9j, and 9k exhibited good in vivo preventive effects against Rhizoctonia solani. The results obtained from molecular modeling showed that the orientation of the R(2) group had a significant effect on binding with the protein. PMID:27225833

  1. A novel approach for food intake detection using electroglottography

    PubMed Central

    Farooq, Muhammad; Fontana, Juan M; Sazonov, Edward

    2014-01-01

    Many methods for monitoring diet and food intake rely on subjects self-reporting their daily intake. These methods are subjective, potentially inaccurate and need to be replaced by more accurate and objective methods. This paper presents a novel approach that uses an Electroglottograph (EGG) device for an objective and automatic detection of food intake. Thirty subjects participated in a 4-visit experiment involving the consumption of meals with self-selected content. Variations in the electrical impedance across the larynx caused by the passage of food during swallowing were captured by the EGG device. To compare performance of the proposed method with a well-established acoustical method, a throat microphone was used for monitoring swallowing sounds. Both signals were segmented into non-overlapping epochs of 30 s and processed to extract wavelet features. Subject-independent classifiers were trained using Artificial Neural Networks, to identify periods of food intake from the wavelet features. Results from leave-one-out cross-validation showed an average per-epoch classification accuracy of 90.1% for the EGG-based method and 83.1% for the acoustic-based method, demonstrating the feasibility of using an EGG for food intake detection. PMID:24671094
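
    The subject-independent evaluation can be sketched with scikit-learn's leave-one-group-out splitter, with an MLP standing in for the paper's artificial neural network; the epoch features, labels, and subject assignments below are hypothetical placeholders for the wavelet features.

```python
# Sketch of "leave one subject out" evaluation for a subject-independent
# classifier; feature values are random placeholders for wavelet features.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(900, 24))           # wavelet features per 30 s epoch
y = rng.integers(0, 2, size=900)         # 1 = food intake epoch, 0 = no intake
subjects = np.repeat(np.arange(30), 30)  # 30 subjects, 30 epochs each

clf = make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0))
scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print(f"mean per-epoch accuracy: {scores.mean():.3f}")
```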

  2. A Multiscale Approach to InSAR Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Hetland, E. A.; Muse, P.; Simons, M.; Lin, N.; Dicaprio, C. J.

    2010-12-01

    We present a technique to constrain time-dependent deformation from repeated satellite-based InSAR observations of a given region. This approach, which we call MInTS (Multiscale InSAR Time Series analysis), relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. As opposed to single-pixel InSAR time series techniques, MInTS takes advantage of both spatial and temporal characteristics of the deformation field. We use a weighting scheme which accounts for the presence of localized holes due to decorrelation or unwrapping errors in any given interferogram. We represent time-dependent deformation using a dictionary of general basis functions, capable of detecting both steady and transient processes. The estimation is regularized using model-resolution-based smoothing, so as to capture rapid deformation where there are temporally dense radar acquisitions and to avoid oscillations during time periods devoid of acquisitions. MInTS also has the flexibility to explicitly parametrize known time-dependent processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). We use cross-validation to choose the regularization penalty parameter in the inversion for the time-dependent deformation field. We demonstrate MInTS using a set of 63 ERS-1/2 and 29 Envisat interferograms for Long Valley Caldera.
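
    The penalty-selection step generalizes beyond InSAR: choose the regularization weight that maximizes cross-validated predictive skill. Below is a minimal ridge-style sketch of that step, not the actual MInTS implementation; the design matrix G, coefficients, and data vector d are hypothetical.

```python
# Generic sketch of choosing a regularization penalty by cross-validation,
# in the spirit of the MInTS temporal inversion (a ridge stand-in, not MInTS).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(3)
G = rng.normal(size=(120, 60))            # design matrix of temporal basis functions
m_true = np.zeros(60)
m_true[::6] = 1.0                         # sparse "true" deformation coefficients
d = G @ m_true + rng.normal(scale=0.2, size=120)

penalties = np.logspace(-3, 2, 12)
cv = KFold(5, shuffle=True, random_state=0)
cv_scores = [cross_val_score(Ridge(alpha=a), G, d, cv=cv).mean() for a in penalties]
best_alpha = penalties[int(np.argmax(cv_scores))]  # penalty with best CV skill
```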

  3. Predicting dispersal distance in mammals: a trait-based approach.

    PubMed

    Whitmee, Sarah; Orme, C David L

    2013-01-01

    Dispersal is one of the principal mechanisms influencing ecological and evolutionary processes but quantitative empirical data are unfortunately scarce. As dispersal is likely to influence population responses to climate change, whether by adaptation or by migration, there is an urgent need to obtain estimates of dispersal distance. Cross-species correlative approaches identifying predictors of dispersal distance can provide much-needed insights into this data-scarce area. Here, we describe the compilation of a new data set of natal dispersal distances and use it to test life-history predictors of dispersal distance in mammals and examine the strength of the phylogenetic signal in dispersal distance. We find that both maximum and median dispersal distances have strong phylogenetic signals. No single model performs best in describing either maximum or median dispersal distances when phylogeny is taken into account but many models show high explanatory power, suggesting that dispersal distance per generation can be estimated for mammals with comparatively little data availability. Home range area, geographic range size and body mass are identified as the most important terms across models. Cross-validation of models supports the ability of these variables to predict dispersal distances, suggesting that models may be extended to species where dispersal distance is unknown.

  4. A multiscale approach to InSAR time series analysis

    NASA Astrophysics Data System (ADS)

    Simons, M.; Hetland, E. A.; Muse, P.; Lin, Y. N.; Dicaprio, C.; Rickerby, A.

    2008-12-01

    We describe a new technique to constrain time-dependent deformation from repeated satellite-based InSAR observations of a given region. This approach, which we call MInTS (Multiscale analysis of InSAR Time Series), relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. This approach also permits a consistent treatment of all data independent of the presence of localized holes in any given interferogram. In essence, MInTS allows one to consider all data at the same time (as opposed to one pixel at a time), thereby taking advantage of both spatial and temporal characteristics of the deformation field. In terms of the temporal representation, we have the flexibility to explicitly parametrize known processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). Our approach also allows the temporal parametrization to include a set of general functions (e.g., splines) in order to account for unexpected processes. We allow for various forms of model regularization, using a cross-validation approach to select penalty parameters. The multiscale analysis allows us to consider various contributions (e.g., orbit errors) that may affect specific scales but not others. The methods described here are all embarrassingly parallel and suitable for implementation on a cluster computer. We demonstrate the use of MInTS using a large suite of ERS-1/2 and Envisat interferograms for Long Valley Caldera, and validate our results by comparing with ground-based observations.

  5. The prosthesis evaluation questionnaire: reliability and cross-validation of the Turkish version

    PubMed Central

    Safer, Vildan Binay; Yavuzer, Gunes; Demir, Sibel Ozbudak; Yanikoglu, Inci; Guneri, Fulya Demircioglu

    2015-01-01

    [Purpose] Currently, there are a limited number of amputee-specific instruments with good psychometric properties for measuring prosthesis-related quality of life in Turkey. This study translated the Prosthetic Evaluation Questionnaire into Turkish and analyzed its construct validity and internal consistency. [Subjects and Methods] The Prosthetic Evaluation Questionnaire was adapted for use in Turkish by forward/backward translation. The final Turkish version of the questionnaire was administered to 90 unilateral amputee patients. A second evaluation was possible for 83 participants, within a median period of 28 days. [Results] Point estimates for the intraclass correlation coefficient ranged from 0.69 to 0.89 for all 9 Prosthetic Evaluation Questionnaire scales, indicating good test-retest reliability. Overall Cronbach’s alpha coefficients ranged from 0.64 to 0.92, except for the perceived response subscale (0.39). The ambulation subscale was correlated with the physical functioning subscale of the Short Form-36 (SF-36) (r = 0.48). The social burden subscale score of the Prosthetic Evaluation Questionnaire was correlated with the social functioning subscale of the SF-36 (r = 0.63). [Conclusion] The Turkish version of the Prosthetic Evaluation Questionnaire is a valid and reliable tool for the Turkish unilateral amputee population. PMID:26180296

  6. Categories of Counselors Behavior as Defined from Cross-Validated Factoral Descriptions.

    ERIC Educational Resources Information Center

    Zimmer, Jules M.; And Others

    The intent of the study was to explore and categorize counselor responses. Three separate filmed presentations were shown. Participating with the same client were Albert Ellis, Frederick Perls, and Carl Rogers. At the beginning of each counselor statement, a number was inserted in sequence and remained on the videotape until completion of that…

  7. Cross-Validation of the PAI Negative Distortion Scale for Feigned Mental Disorders: A Research Report

    ERIC Educational Resources Information Center

    Rogers, Richard; Gillard, Nathan D.; Wooley, Chelsea N.; Kelsey, Katherine R.

    2013-01-01

    A major strength of the Personality Assessment Inventory (PAI) is its systematic assessment of response styles, including feigned mental disorders. Recently, Mogge, Lepage, Bell, and Ragatz developed and provided the initial validation for the Negative Distortion Scale (NDS). Using rare symptoms as its detection strategy for feigning, the…

  8. Cross-validation of recent and longstanding resting metabolic rate prediction equations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Resting metabolic rate (RMR) measurement is time consuming and requires specialized equipment. Prediction equations provide an easy method to estimate RMR; however, their accuracy likely varies across individuals. Understanding the factors that influence predicted RMR accuracy at the individual lev...

  9. A Cross Validation of the Porter Needs Satisfaction Questionnaire for Educators.

    ERIC Educational Resources Information Center

    Pierson, Dorothy; And Others

    1985-01-01

    The construct validity and reliability of the Porter Needs Satisfaction Questionnaire (adapted) for educators were examined. Results did not support its use as suggested by Porter. Suggestions for its revision and alternate use are presented. (Author/GDC)

  10. Cross-Validation of a Psychological Test Battery to Detect Faked Insanity.

    ERIC Educational Resources Information Center

    Schretlen, David; And Others

    1992-01-01

    Eight predictor variables from the Minnesota Multiphasic Personality Inventory, the Bender Gestalt, and a Malingering Scale differentiated 20 prison inmates faking insanity from 40 nonfaking controls. A second experiment with 22 substance abusers faking insanity and 20 schizophrenics also supports the use of the test battery to detect faking. (SLD)

  11. Correlates of Achievement: Prediction and Cross-Validation for Intermediate Grade Levels.

    ERIC Educational Resources Information Center

    Marshall, Jon C.; Powers, Jerry M.

    A study was conducted to: (1) determine the simple and multiple correlation coefficients between selected educational/personal variables and academic achievement at intermediate grade levels as measured by the Iowa Tests of Basic Skills; (2) determine the multiple linear regression equations for predicting individual student achievement as…

  12. Cross-Validation of Mental Health Recovery Measures in a Hong Kong Chinese Sample

    ERIC Educational Resources Information Center

    Ye, Shengquan; Pan, Jia-Yan; Wong, Daniel Fu Keung; Bola, John Robert

    2013-01-01

    Objectives: The concept of recovery has begun shifting mental health service delivery from a medical perspective toward a client-centered recovery orientation. This shift is also beginning in Hong Kong, but its development is hampered by a dearth of available measures in Chinese. Method: This article translates two measures of recovery (mental…

  13. Cross-Validation of a PACER Prediction Equation for Assessing Aerobic Capacity in Hungarian Youth

    ERIC Educational Resources Information Center

    Saint-Maurice, Pedro F.; Welk, Gregory J.; Finn, Kevin J.; Kaj, Mónika

    2015-01-01

    Purpose: The purpose of this article was to evaluate the validity of the Progressive Aerobic Cardiovascular and Endurance Run (PACER) test in a sample of Hungarian youth. Method: Approximately 500 participants (aged 10-18 years old) were randomly selected across Hungary to complete both laboratory (maximal treadmill protocol) and field assessments…

  14. The African American Acculturation Scale II: Cross-Validation and Short Form.

    ERIC Educational Resources Information Center

    Landrine, Hope; Klonoff, Elizabeth A.

    1995-01-01

    Studied African American culture, using a new, shortened, 33-item African American Acculturation Scale (AAAS-33) to assess the scale's validity and reliability. Comparisons between the original form and AAAS-33 reveal high correlations, however, the longer form may be sensitive to some beliefs, practices, and attitudes not assessed by the short…

  15. "Hits" (Not "Discussion Posts") Predict Student Success in Online Courses: A Double Cross-Validation Study

    ERIC Educational Resources Information Center

    Ramos, Cheryl; Yudko, Errol

    2008-01-01

    The efficacy of individual components of an online course on positive course outcome was examined via stepwise multiple regression analysis. Outcome was measured as the student's total score on all exams given during the course. The predictors were page hits, discussion posts, and discussion reads. The vast majority of the variance of outcome was…

  16. Computational Steroidogenesis Model To Predict Biochemical Responses to Endocrine Active Chemicals: Model Development and Cross Validation

    EPA Science Inventory

    Steroids, which have an important role in a wide range of physiological processes, are synthesized primarily in the gonads and adrenal glands through a series of enzyme-mediated reactions. The activity of steroidogenic enzymes can be altered by a variety of endocrine active chem...

  17. Temporal filtering of event-related fMRI data using cross-validation.

    PubMed

    Ngan, S C; LaConte, S M; Hu, X

    2000-06-01

    To circumvent the problem of low signal-to-noise ratio (SNR) in event-related fMRI data, the fMRI experiment is typically designed to consist of repeated presentations of the stimulus and measurements of the response, allowing for subsequent averaging of the resulting data. Due to factors such as time limitation, subject motion, habituation, and fatigue, practical constraints on the number of repetitions exist. Thus, filtering is commonly applied to further improve the SNR of the averaged data. Here, a time-varying filter based on theoretical work by Nowak is employed. This filter operates under the stationary wavelet transform framework and is demonstrated to lead to good estimates of the true signals in simulated data. The utility of the filter is also shown using experimental data obtained with a visual-motor paradigm.
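
    The stationary-wavelet filtering step can be sketched with PyWavelets; soft thresholding of the detail coefficients is used here as a simple stand-in for the Nowak-style time-varying filter, and the event-related time course is simulated.

```python
# Sketch of filtering under the stationary wavelet transform (SWT) framework.
import numpy as np
import pywt

rng = np.random.default_rng(4)
t = np.linspace(0, 30, 256)                   # averaged event-related time course
signal = np.exp(-((t - 6) ** 2) / 4)          # idealized response shape
noisy = signal + rng.normal(scale=0.2, size=t.size)

level = 2
coeffs = pywt.swt(noisy, "db4", level=level)  # length must divide by 2**level
sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745      # noise from finest details
thresh = sigma * np.sqrt(2 * np.log(noisy.size))       # universal threshold
denoised_coeffs = [(cA, pywt.threshold(cD, thresh, mode="soft")) for cA, cD in coeffs]
denoised = pywt.iswt(denoised_coeffs, "db4")
```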

  18. Repeated holdout Cross-Validation of Model to Estimate Risk of Lyme Disease by Landscape Attributes

    EPA Science Inventory

    We previously modeled Lyme disease (LD) risk at the landscape scale; here we evaluate the model's overall goodness-of-fit using holdout validation. Landscapes were characterized within road-bounded analysis units (AU). Observed LD cases (obsLD) were ascertained per AU. Data were ...

  19. Cross-Validation of Levenson's Psychopathy Scale in a Sample of Federal Female Inmates

    ERIC Educational Resources Information Center

    Brinkley, Chad A.; Diamond, Pamela M.; Magaletta, Philip R.; Heigel, Caron P.

    2008-01-01

    Levenson, Kiehl, and Fitzpatrick's Self-Report Psychopathy Scale (LSRPS) is evaluated to determine the factor structure and concurrent validity of the instrument among 430 federal female inmates. Confirmatory factor analysis fails to validate the expected 2-factor structure. Subsequent exploratory factor analysis reveals a 3-factor structure…

  20. Cross-validation of strainmeter observations in Cascadia using GPS and tremor-derived slip distributions

    NASA Astrophysics Data System (ADS)

    Krogstad, R. D.; Schmidt, D. A.

    2013-12-01

    We address calibration and resolvability issues associated with strainmeter observations in Cascadia, with the ultimate goal of integrating strainmeters, GPS, and tremor observations of slow slip events. The distribution and propagation of episodic tremor and slow slip (ETS) events in Cascadia have primarily been characterized with observations from a broad network of GPS and seismic stations. Slip models spatially constrained by tremor are more heterogeneous than suggested by GPS. Geodetically derived slip distributions tend to be spatially and temporally smoothed and slightly offset from tremor distributions. These discrepancies may be real, or they may be a consequence of the resolution of GPS data or an artifact of the inversion methodology. Borehole strainmeters can potentially bridge the gap between GPS and seismic observations, given the greater sensitivity of the strainmeters. However, high noise values, the inclusion of non-tectonic artifacts, and difficulties in the calibration of the strainmeters have made deriving reliable information from strainmeters during slip events difficult. We examine the strainmeter time series of multiple stations for the 2010 to 2012 events in northern Washington. After accounting for nontectonic signals, such as atmospheric pressure, hydraulic loading and the curing of borehole grout, instrument drift and in situ calibrations using modeled earth tides account for a significant portion of the observational uncertainty of strain. We evaluate the strain observations of ETS events by predicting strain transients using synthetic forward slip models, GPS inversions, and slip models based on tremor distributions. To evaluate the magnitude of observed strain transients during slow slip events, we compare the strain observations with predicted strain transients derived from time-dependent GPS inversions. Preliminary results show that for well-behaved strainmeters (e.g. B003, B004, B005, etc.), the predicted strain is typically of similar duration and form, but may differ in amplitude by up to one order-of-magnitude. In an effort to reconcile the independent GPS, strainmeter, and seismic observations, we construct slip distributions using tremor occurrences as a proxy for localized slip on the plate interface. The magnitude of slip is then scaled by matching the predicted surface displacements derived from the tremor-based slip model with GPS observations of surface displacements. Once a slip model is obtained that satisfies the GPS and seismic data, the resultant strain predictions are evaluated in relation to the observed strain measurements. Preliminary results for the August 2012 event suggest that the observed strain at multiple stations occurs a couple days later than the strain predicted from the tremor-based slip model. Apart from the magnitude of strain change during an event, the sign of the strain change is also useful in constraining the along-dip extent and propagation of slow slip events. An instance where the sign of the observed strain differs from GPS-derived predictions likely indicates the slip distribution solution is either too narrow or too broad.

  1. Initial Factor Analysis and Cross-Validation of the Multicultural Teaching Competencies Inventory

    ERIC Educational Resources Information Center

    Prieto, Loreto R.

    2012-01-01

    The Multicultural Teaching Competencies Inventory (MTCI) contains items based on the tri-parte model of cultural competencies established by Sue and associates (Sue et al., 1992, 1982, 2003) that identify multicultural Awareness, Knowledge, and Skill as central characteristics of a culturally sensitive professional. The development and validation…

  2. The Bland-Altman Method Should Not Be Used in Regression Cross-Validation Studies

    ERIC Educational Resources Information Center

    O'Connor, Daniel P.; Mahar, Matthew T.; Laughlin, Mitzi S.; Jackson, Andrew S.

    2011-01-01

    The purpose of this study was to demonstrate the bias in the Bland-Altman (BA) limits of agreement method when it is used to validate regression models. Data from 1,158 men were used to develop three regression equations to estimate maximum oxygen uptake (R² = 0.40, 0.61, and 0.82, respectively). The equations were evaluated in a…

  3. A Cross-Validation Study of the Trauma Symptom Checklist: The Role of Mediating Variables.

    ERIC Educational Resources Information Center

    Gold, Steven R.; And Others

    1994-01-01

    Examines the responses to the Trauma Symptom Checklist (TSC) of college women (n=654) sexually abused as children, sexually assaulted as adults, sexually assaulted as children and adults, and nonabused. Results support the validity of the TSC as a measure of sexual abuse trauma and point to family patterns associated with prolonged symptomatology.…

  4. Cross-Validation of the Implementation Leadership Scale (ILS) in Child Welfare Service Organizations.

    PubMed

    Finn, Natalie K; Torres, Elisa M; Ehrhart, Mark G; Roesch, Scott C; Aarons, Gregory A

    2016-08-01

    The Implementation Leadership Scale (ILS) is a brief, pragmatic, and efficient measure that can be used for research or organizational development to assess leader behaviors and actions that actively support effective implementation of evidence-based practices (EBPs). The ILS was originally validated with mental health clinicians. This study validates the ILS factor structure with providers in community-based organizations (CBOs) providing child welfare services. Participants were 214 service providers working in 12 CBOs that provide child welfare services. All participants completed the ILS, reporting on their immediate supervisor. Confirmatory factor analyses were conducted to examine the factor structure of the ILS. Internal consistency reliability and measurement invariance were also examined. Confirmatory factor analyses showed acceptable fit to the hypothesized first- and second-order factor structure. Internal consistency reliability was strong and there was partial measurement invariance for the first-order factor structure when comparing child welfare and mental health samples. The results support the use of the ILS to assess leadership for implementation of EBPs in child welfare organizations.

  5. Cross-validation of spaceborne radar and ground polarimetric radar observations

    NASA Astrophysics Data System (ADS)

    Bolen, Steven Matthew

    There is great potential for spaceborne weather radar to make significant observations of the precipitating medium on global scales. The Tropical Rainfall Measuring Mission (TRMM) is the first mission dedicated to measuring rainfall in the tropics from space using radar. The Precipitation Radar (PR) is one of several instruments aboard the TRMM satellite, which operates in a nearly circular orbit at 350 km altitude and 35 degree inclination. The PR is a single-frequency Ku-band instrument designed to yield information about the vertical storm structure so as to gain insight into the intensity and distribution of rainfall. Attenuation effects on PR measurements, however, can be significant, reaching as high as 10-15 dB. This can seriously impair the accuracy of rain rate retrieval algorithms derived from PR returns. Direct inter-comparison of meteorological measurements between space and ground radar observations can be used to evaluate spaceborne processing algorithms. Though conceptually straightforward, this can be a challenging task. Differences in viewing aspects between space and earth point observations, propagation frequencies, resolution volume size, and time synchronization mismatch between measurements can all contribute to direct point-by-point inter-comparison errors. The problem is further complicated by spatial geometric distortions induced into the space-based observations by the movements and attitude perturbations of the spacecraft itself. A method is developed to align space and ground radar observations so that a point-by-point inter-comparison of measurements can be made. Ground-based polarimetric observations are used to estimate the attenuation of PR signal returns along individual PR beams, and a technique is formulated to determine the true PR return from ground radar (GR) measurements via theoretical modeling of specific attenuation (k) at the PR wavelength with ground-based S-band radar observations. The statistical behavior of the parameters of a three-parameter gamma raindrop size distribution (RSD) model is also presented, along with an analysis of the impact of the initial PR RSD model on rain rate estimates. Data are taken from the TExas and FLorida UNderflights (TEFLUN-B) and the TRMM Large-scale Biosphere Atmosphere (LBA) field campaigns. Data from the Kwajalein KPOL radar are also used to validate the algorithms developed.

  6. Cross-Validation of the Norwegian Teacher's Self-Efficacy Scale (NTSES)

    ERIC Educational Resources Information Center

    Avanzi, Lorenzo; Miglioretti, Massimo; Velasco, Veronica; Balducci, Cristian; Vecchio, Luca; Fraccaroli, Franco; Skaalvik, Einar M.

    2013-01-01

    The study assesses the psychometric properties of the Italian version of the Norwegian Teacher Self-Efficacy Scale--NTSES. Multiple group confirmatory factor analysis was used to explore the measurement invariance of the scale across two countries. Analyses performed on Italian and Norwegian samples confirmed a six-factor structure of the scale…

  7. The Severe Sexual Sadism Scale: Cross-Validation and Scale Properties

    ERIC Educational Resources Information Center

    Mokros, Andreas; Schilling, Frank; Eher, Reinhard; Nitschke, Joachim

    2012-01-01

    The Severe Sexual Sadism Scale (SSSS) is a screening device for the file-based assessment of forensically relevant sexual sadism. The SSSS consists of 11 dichotomous (yes/no) items that code behavioral indicators of severe sexual sadism within sexual offenses. Based on an Austrian sample of 105 sexual offenders, the present study replicated the…

  8. Cross Validation of Job Families Using an Expanded Data Set. USES Test Research Report No. 53.

    ERIC Educational Resources Information Center

    Swarthout, David

    The analyses of J. E. Hunter (1983) were replicated with an expanded data set. The Hunter study, the basis of the Validity Generalization system used by the United States Employment Service, contained 515 General Aptitude Test Battery validation studies. The data set in this study included these and additional studies to bring the data set to 755…

  9. Cross-validation of satellite products over France through their integration into a land surface model

    NASA Astrophysics Data System (ADS)

    Calvet, Jean-Christophe; Barbu, Alina; Carrer, Dominique; Meurey, Catherine

    2014-05-01

    Long (more than 30 years) time series of satellite-derived products over land are now available. They concern Essential Climate Variables (ECV) such as LAI, FAPAR, surface albedo, and soil moisture. The direct validation of such Climate Data Records (CDR) is not easy, as in situ observations are limited in space and time. Therefore, indirect validation has a key role. It consists in comparing the products with similar preexisting products derived from satellite observations or from land surface model (LSM) simulations. The most advanced indirect validation technique consists in integrating the products into a LSM using a data assimilation scheme. The obtained reanalysis accounts for the synergies of the various upstream products and provides statistics which can be used to monitor the quality of the assimilated observations. Meteo-France develops the ISBA-A-gs generic LSM able to represent the diurnal cycle of the surface fluxes together with the seasonal, interannual and decadal variability of the vegetation biomass. The LSM is embedded in the SURFEX modeling platform together with a simplified extended Kalman filter. These tools form a Land Data Assimilation System (LDAS). The current version of the LDAS assimilates SPOT-VGT LAI and ASCAT surface soil moisture (SSM) products over France (8km x 8km), and a passive monitoring of albedo, FAPAR and Land Surface temperature (LST) is performed (i.e., the simulated values are compared with the satellite products). The LDAS-France system is used in the European Copernicus Global Land Service (http://land.copernicus.eu/global/) to monitor the quality of upstream products. The LDAS generates statistics whose trends can be analyzed in order to detect possible drifts in the quality of the products: (1) for LAI and SSM, metrics derived from the active monitoring (i.e. assimilation) such as innovations (observations vs. model forecast), residuals (observations vs. analysis), and increments (analysis vs. model forecast) ; (2) for albedo, LST, and FAPAR, metrics derived from the passive monitoring such as the Pearson correlation coefficient, z-score, RMSD, SDD, mean bias. The results obtained over the 2007-2013 period are presented. The added value of computing a prognostic FAPAR is shown, as this quantity can be used to better estimate the LAI observation error used by the LDAS. In the near future, the LDAS will be upgraded in order to assimilate FAPAR and surface albedo, and it will be extended to a global scale. At the same time, the coupling to hydrological models (now in a testing phase) will be consolidated, and this will allow the use of in situ river discharge observations for the validation of the whole system.

  10. The Adolescent Religious Coping Scale: Development, Validation, and Cross-Validation

    ERIC Educational Resources Information Center

    Bjorck, Jeffrey P.; Braese, Robert W.; Tadie, Joseph T.; Gililland, David D.

    2010-01-01

    Research literature on adolescent coping is growing, but typically such studies have ignored religious coping strategies and their potential impact on functioning. To address this lack, we developed the Adolescent Religious Coping Scale and used its seven subscales to examine the relationship between religious coping and emotional functioning. A…

  11. A land use regression model for ambient ultrafine particles in Montreal, Canada: A comparison of linear regression and a machine learning approach.

    PubMed

    Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne

    2016-04-01

    Existing evidence suggests that ambient ultrafine particles (UFPs) (<0.1 µm) may contribute to acute cardiorespiratory morbidity. However, few studies have examined the long-term health effects of these pollutants owing in part to a need for exposure surfaces that can be applied in large population-based studies. To address this need, we developed a land use regression model for UFPs in Montreal, Canada using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development including standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares (KRLS)) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R² = 0.58 vs. 0.55) or a cross-validation procedure (R² = 0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. PMID:26720396
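
    A hedged sketch of the comparison: kernel ridge regression with an RBF kernel (a close relative of KRLS) versus multivariable linear regression, scored by cross-validated R². The covariates and response below are synthetic stand-ins for the mobile-monitoring data.

```python
# Compare a linear model with a kernel method by cross-validated R^2.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(414, 7))   # e.g., population density, temperature, wind speed
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + X @ rng.normal(scale=0.3, size=7)

cv = KFold(n_splits=10, shuffle=True, random_state=0)
r2_linear = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2").mean()
r2_kernel = cross_val_score(KernelRidge(kernel="rbf", alpha=0.1, gamma=0.1),
                            X, y, cv=cv, scoring="r2").mean()
print(f"linear R^2 = {r2_linear:.2f}, kernel R^2 = {r2_kernel:.2f}")
```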

  13. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    PubMed Central

    Yilmaz, Nihat; Inan, Onur

    2013-01-01

    This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines (SVMs) for classification. The purpose of this paper is to test the effect of eliminating unimportant and obsolete features of the datasets on the success of classification with the SVM classifier. The approach is applied to the diagnosis of liver diseases and diabetes, which are commonly observed conditions that reduce quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders, and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained using the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other reported results and seems very promising for pattern recognition applications. PMID:23983632
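
    The evaluation protocol is straightforward to sketch: an SVM scored by 10-fold cross-validation on a candidate feature subset, which is the fitness an ABC search would optimize. The data shapes and the feature mask below are hypothetical, and the bee-colony search itself is omitted.

```python
# SVM accuracy under 10-fold cross-validation for one candidate feature subset.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.normal(size=(155, 19))              # hepatitis-like dataset shape
y = rng.integers(0, 2, size=155)

selected = np.zeros(19, dtype=bool)         # stand-in for an ABC-selected mask
selected[[0, 2, 5, 7, 11, 16]] = True

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
accuracy = cross_val_score(clf, X[:, selected], y, cv=10).mean()
# An ABC search would propose many such masks and keep the best-scoring one.
```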

  14. Predicting lipase types by improved Chou's pseudo-amino acid composition.

    PubMed

    Zhang, Guang-Ya; Li, Hong-Chun; Gao, Jia-Qiang; Fang, Bai-Shan

    2008-01-01

    By proposing an improved Chou's pseudo-amino acid composition approach to extract the features of the sequences, a powerful predictor based on k-nearest neighbors was introduced to identify the types of lipases according to their sequences. To avoid redundancy and bias, demonstrations were performed on a dataset where none of the proteins has ≥25% sequence identity to any other. The overall success rate thus obtained by the 10-fold cross-validation test was over 90%, indicating that the improved Chou's pseudo-amino acid composition might be a useful tool for extracting the features of protein sequences, or at least can play a complementary role to many of the other existing approaches. PMID:19075826
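
    A simplified sketch of the predictor: plain amino acid composition (omitting the sequence-order terms that distinguish Chou's pseudo-amino acid composition) fed to a k-nearest-neighbor classifier under 10-fold cross-validation. The sequences and lipase-type labels are randomly generated placeholders.

```python
# Composition features + kNN, evaluated by 10-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    # Fraction of each amino acid; PseAAC would append sequence-order terms.
    return np.array([seq.count(a) / len(seq) for a in AMINO_ACIDS])

sequences = ["".join(np.random.default_rng(i).choice(list(AMINO_ACIDS), 120))
             for i in range(60)]            # hypothetical lipase sequences
labels = np.arange(60) % 3                  # hypothetical lipase types

X = np.vstack([composition(s) for s in sequences])
accuracy = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, labels, cv=10).mean()
```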

  15. An extensive cocktail approach for rapid risk assessment of in vitro CYP450 direct reversible inhibition by xenobiotic exposure.

    PubMed

    Spaggiari, Dany; Daali, Youssef; Rudaz, Serge

    2016-07-01

    Acute exposure to environmental factors strongly affects the metabolic activity of cytochrome P450 (P450). As a consequence, the risk of interaction could be increased, modifying the clinical outcomes of a medication. Because toxic agents cannot be administered to humans for ethical reasons, in vitro approaches are therefore essential to evaluate their impact on P450 activities. In this work, an extensive cocktail mixture was developed and validated for in vitro P450 inhibition studies using human liver microsomes (HLM). The cocktail comprised eleven P450-specific probe substrates to simultaneously assess the activities of the following isoforms: 1A2, 2A6, 2B6, 2C8, 2C9, 2C19, 2D6, 2E1, 2J2 and subfamily 3A. The high selectivity and sensitivity of the developed UHPLC-MS/MS method were critical for the success of this methodology, whose main advantages are: (i) the use of eleven probe substrates with minimized interactions, (ii) a low HLM concentration, (iii) fast incubation (5min) and (iv) the use of metabolic ratios as microsomal P450 activities markers. This cocktail approach was successfully validated by comparing the obtained IC50 values for model inhibitors with those generated with the conventional single probe methods. Accordingly, reliable inhibition values could be generated 10-fold faster using a 10-fold smaller amount of HLM compared to individual assays. This approach was applied to assess the P450 inhibition potential of widespread insecticides, namely, chlorpyrifos, fenitrothion, methylparathion and profenofos. In all cases, P450 2B6 was the most affected with IC50 values in the nanomolar range. For the first time, mixtures of these four insecticides incubated at low concentrations showed a cumulative inhibitory in vitro effect on P450 2B6. PMID:27105555
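
    Downstream of the cocktail incubations, each isoform's IC50 can be estimated by fitting a concentration-response curve to the measured activity markers. A minimal sketch with a four-parameter logistic model; the concentrations and activities below are hypothetical data points, not values from the study.

```python
# Fit a four-parameter logistic curve to % activity vs. inhibitor concentration.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])        # inhibitor, uM
activity = np.array([98, 95, 84, 60, 35, 15, 6], dtype=float)  # % of control

params, _ = curve_fit(logistic4, conc, activity, p0=[0, 100, 0.5, 1.0])
print(f"estimated IC50 = {params[2]:.2f} uM")
```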

  16. A new approach for prediction of tumor sensitivity to targeted drugs based on functional data

    PubMed Central

    2013-01-01

    Background The success of targeted anti-cancer drugs is frequently hindered by the lack of knowledge of the individual pathway of the patient and the extreme data requirements for estimating the personalized genetic network of the patient’s tumor. The prediction of tumor sensitivity to targeted drugs remains a major challenge in the design of optimal therapeutic strategies. Current sensitivity prediction approaches are primarily based on genetic characterizations of the tumor sample. We propose a novel sensitivity prediction approach based on functional perturbation data that incorporates drug-protein interaction information and sensitivities to a training set of drugs with known targets. Results We illustrate the high prediction accuracy of our framework on synthetic data generated from the Kyoto Encyclopedia of Genes and Genomes (KEGG) and on an experimental dataset of four canine osteosarcoma tumor cultures following application of 60 targeted small-molecule drugs. We achieve a low leave-one-out cross-validation error of <10% for the canine osteosarcoma tumor cultures using a drug screen consisting of 60 targeted drugs. Conclusions The proposed framework provides a unique input-output based methodology to model a cancer pathway and predict the effectiveness of targeted anti-cancer drugs. This framework can be developed as a viable approach for personalized cancer therapy. PMID:23890326

  17. Fully automated 3D prostate central gland segmentation in MR images: a LOGISMOS based approach

    NASA Astrophysics Data System (ADS)

    Yin, Yin; Fotin, Sergei V.; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Turkbey, Baris; Choyke, Peter

    2012-02-01

    One widely accepted classification divides the prostate into a central gland (CG) and a peripheral zone (PZ). In some clinical applications, separating the CG and PZ from the whole prostate is useful. For instance, in prostate cancer detection, radiologists want to know in which zone the cancer occurs. Another application is multiparametric MR tissue characterization. In prostate T2 MR images, the high intensity variation between CG and PZ makes automated differentiation of the two zones difficult. Previously, we developed an automated prostate boundary segmentation system, which was tested on large datasets and showed good performance. Using the results of the pre-segmented prostate boundary, in this paper we propose an automated CG segmentation algorithm based on Layered Optimal Graph Image Segmentation of Multiple Objects and Surfaces (LOGISMOS). The designed LOGISMOS model incorporates both shape and topology information during deformation. We generated graph costs by training classifiers and used a coarse-to-fine search. The LOGISMOS framework guarantees an optimal solution with respect to the cost function and shape constraints. A five-fold cross-validation approach was applied to a training dataset containing 261 images to optimize system performance and to compare with a voxel-classification-based reference approach. After the best parameter settings were found, the system was tested on a dataset containing another 261 images. The mean DSC of 0.81 for the test set indicates that our approach is promising for automated CG segmentation. Running time for the system is about 15 seconds.
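
    The reported mean DSC (Dice similarity coefficient) is computed per case as twice the overlap divided by the total volume of the two masks; a minimal sketch with hypothetical binary masks:

```python
# Dice similarity coefficient between an automated and a reference mask.
import numpy as np

def dice(seg, ref):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

rng = np.random.default_rng(7)
auto = rng.random((64, 64, 32)) > 0.5       # hypothetical automated CG mask
manual = rng.random((64, 64, 32)) > 0.5     # hypothetical reference mask
print(f"DSC = {dice(auto, manual):.2f}")
```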

  18. An Ensemble-of-Classifiers Based Approach for Early Diagnosis of Alzheimer's Disease: Classification Using Structural Features of Brain Images

    PubMed Central

    Farhan, Saima; Tauseef, Huma

    2014-01-01

    Structural brain imaging is playing a vital role in the identification of changes that occur in the brain in association with Alzheimer's disease (AD). This paper proposes an automated image-processing-based approach for the identification of AD from MRI of the brain. The proposed approach is novel in the sense that it achieves higher specificity/accuracy values despite using a smaller feature set than existing approaches. Moreover, the proposed approach is capable of identifying AD patients in early stages. The dataset consists of 85 age- and gender-matched individuals from the OASIS database. The selected features are the volumes of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) and the size of the hippocampus. Three different classification models (SVM, MLP, and J48) are used for the identification of patients and controls. In addition, an ensemble of classifiers based on majority voting is adopted to overcome the error caused by any independent base classifier. A ten-fold cross-validation strategy is applied for the evaluation of our scheme. Moreover, to evaluate the performance of the proposed approach, individual features and combinations of features are fed to the individual classifiers and the ensemble-based classifier. Using the size of the left hippocampus as the feature, the accuracy achieved with the ensemble of classifiers is 93.75%, with 100% specificity and 87.5% sensitivity. PMID:25276224
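
    The majority-voting ensemble can be sketched directly in scikit-learn, with a CART decision tree standing in for J48 (C4.5) and 10-fold cross-validation as in the paper; the feature matrix and diagnosis labels below are hypothetical.

```python
# Hard-voting ensemble of SVM, MLP, and a decision tree (J48/C4.5 stand-in).
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(8)
X = rng.normal(size=(85, 4))        # GM, WM, CSF volumes + hippocampus size
y = rng.integers(0, 2, size=85)     # AD patient vs. control

ensemble = VotingClassifier(
    estimators=[("svm", SVC()),
                ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
                ("tree", DecisionTreeClassifier(random_state=0))],
    voting="hard")                   # majority vote over the three models
accuracy = cross_val_score(ensemble, X, y, cv=10).mean()
```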

  19. Prediction of food protein allergenicity: a bioinformatic learning systems approach.

    PubMed

    Zorzet, Anna; Gustafsson, Mats; Hammerling, Ulf

    2002-01-01

    Food hypersensitivity is constantly increasing in Western societies, with a prevalence of about 1-2% in Europe and the USA. Among children, the incidence is even higher. Because of the introduction of foods derived from genetically modified crops onto the marketplace, the scientific community, regulatory bodies, and international associations have intensified discussions on risk assessment procedures to identify the potential food allergenicity of newly introduced proteins. In this work, we present a novel biocomputational methodology for the classification of amino acid sequences with regard to food allergenicity and non-allergenicity. The method relies on a computerised learning system trained using selected excerpts of amino acid sequences. One example of such a successful learning system is presented, consisting of feature extraction from sequence alignments performed with the FASTA3 algorithm (employing the BLOSUM50 substitution matrix) combined with the k-nearest-neighbour (kNN) classification algorithm. Briefly, the two features extracted are the alignment score and the alignment length, and the kNN algorithm assigns the pair of features extracted from an unknown sequence to the prevalent class among its k nearest neighbours in the available training (prototype) set. 91 food allergens from several specialised public repositories of food allergy and the SWALL database were identified, pre-processed, and stored, yielding one of the most extensively characterised repositories of allergenic sequences known today. All allergenic sequences were classified using a standard leave-one-out cross-validation procedure, yielding about 81% correctly classified allergens, and the classification of 367 non-allergens in an independent test set resulted in about 98% correct classifications. The biocomputational approach presented should be regarded as a significant extension and refinement of earlier attempts suggested for in silico food safety assessment. Our results show…
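
    The classifier stage reduces each query sequence to two features, alignment score and alignment length, and labels it by its k nearest neighbours. A sketch under leave-one-out cross-validation, with synthetic feature values in place of real FASTA3 alignments:

```python
# kNN on (alignment score, alignment length) features, leave-one-out CV.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(9)
# Synthetic stand-ins for FASTA3-derived features of 91 allergens, 367 non-allergens.
scores = np.concatenate([rng.normal(250, 60, 91), rng.normal(80, 40, 367)])
lengths = np.concatenate([rng.normal(180, 50, 91), rng.normal(90, 45, 367)])
X = np.column_stack([scores, lengths])
y = np.concatenate([np.ones(91), np.zeros(367)])   # allergen vs. non-allergen

clf = KNeighborsClassifier(n_neighbors=5)
accuracy = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
```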

  20. Future Performance Trend Indicators: A Current Value Approach to Human Resources Accounting. Report III. Multivariate Predictions of Organizational Performance Across Time.

    ERIC Educational Resources Information Center

    Pecorella, Patricia A.; Bowers, David G.

    Multiple regression in a double cross-validated design was used to predict two performance measures (total variable expense and absence rate) by multi-month period in five industrial firms. The regressions do cross-validate, and produce multiple coefficients which display both concurrent and predictive effects, peaking 18 months to two years…

  1. COMPUTING THERAPY FOR PRECISION MEDICINE: COLLABORATIVE FILTERING INTEGRATES AND PREDICTS MULTI-ENTITY INTERACTIONS

    PubMed Central

    REGENBOGEN, SAM; WILKINS, ANGELA D.; LICHTARGE, OLIVIER

    2015-01-01

    Biomedicine produces copious information it cannot fully exploit. Specifically, there is considerable need to integrate knowledge from disparate studies to discover connections across domains. Here, we used a Collaborative Filtering approach, inspired by online recommendation algorithms, in which non-negative matrix factorization (NMF) predicts interactions among chemicals, genes, and diseases only from pairwise information about their interactions. Our approach, applied to matrices derived from the Comparative Toxicogenomics Database, successfully recovered Chemical-Disease, Chemical-Gene, and Disease-Gene networks in 10-fold cross-validation experiments. Additionally, we could predict each of these interaction matrices from the other two. Integrating all three CTD interaction matrices with NMF led to good predictions of STRING, an independent, external network of protein-protein interactions. Finally, this approach could integrate the CTD and STRING interaction data to improve Chemical-Gene cross-validation performance significantly, and, in a time-stamped study, it predicted information added to CTD after a given date, using only data prior to that date. We conclude that collaborative filtering can integrate information across multiple types of biological entities, and that as a first step towards precision medicine it can compute drug repurposing hypotheses. PMID:26776170
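
    The core mechanism, NMF restricted to observed cells so that held-out cells can be scored, can be sketched with masked multiplicative updates; the interaction matrix, factorization rank, and train/test split below are hypothetical.

```python
# Masked non-negative matrix factorization: update W, H using only observed
# (training) cells, then score the masked (held-out) cells.
import numpy as np

rng = np.random.default_rng(10)
R = (rng.random((60, 80)) < 0.1).astype(float)   # hypothetical chemical-gene matrix
mask = rng.random(R.shape) < 0.9                 # 90% train, 10% held out

k, eps = 10, 1e-9
W, H = rng.random((R.shape[0], k)), rng.random((k, R.shape[1]))
for _ in range(200):                             # masked multiplicative updates
    WH = W @ H
    W *= ((mask * R) @ H.T) / ((mask * WH) @ H.T + eps)
    WH = W @ H
    H *= (W.T @ (mask * R)) / (W.T @ (mask * WH) + eps)

test_scores = (W @ H)[~mask]                     # predictions for held-out cells
test_truth = R[~mask]                            # compare, e.g., by ranking metrics
```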

  2. Automated Learning of Temporal Expressions.

    PubMed

    Redd, Douglas; Shaoa, YiJun; Yang, Jing; Divita, Guy; Zeng-Treitler, Qing

    2015-01-01

    Clinical notes contain important temporal information that is critical for making clinical diagnoses and treatment decisions, as well as for retrospective analyses. Manually created regular expressions are commonly used for the extraction of temporal information; however, this can be a time-consuming and brittle approach. We describe a novel algorithm for the automatic learning of regular expressions for recognizing temporal expressions. Five classes of temporal expressions are identified. Keywords specific to those classes are used to retrieve snippets of text representing the same keywords in context. Those snippets are used for Regular Expression Discovery Extraction (REDEx). The learned regular expressions are then evaluated using 10-fold cross-validation. Precision and recall are very high, above 0.95 for most classes.

  3. Supervised learning for neural manifold using spatiotemporal brain activity

    NASA Astrophysics Data System (ADS)

    Kuo, Po-Chih; Chen, Yong-Sheng; Chen, Li-Fen

    2015-12-01

    Objective. Determining the means by which perceived stimuli are compactly represented in the human brain is a difficult task. This study aimed to develop techniques for the construction of the neural manifold as a representation of visual stimuli. Approach. We propose a supervised locally linear embedding method to construct the embedded manifold from brain activity, taking into account similarities between corresponding stimuli. In our experiments, photographic portraits were used as visual stimuli and brain activity was calculated from magnetoencephalographic data using a source localization method. Main results. The results of 10 × 10-fold cross-validation revealed a strong correlation between manifolds of brain activity and the orientation of faces in the presented images, suggesting that high-level information related to image content can be revealed in the brain responses represented in the manifold. Significance. Our experiments demonstrate that the proposed method is applicable to investigation into the inherent patterns of brain activity.

  4. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; Liu, Ying

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.

  5. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    SciTech Connect

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; Liu, Ying

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
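
    One of the four SA approaches, standardized regression coefficients, is simple to sketch: standardize inputs and output, fit a linear model, and rank parameters by coefficient magnitude. The parameter samples and response below are synthetic, not Community Land Model output.

```python
# Standardized regression coefficients (SRC) as sensitivity indices.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(11)
params = rng.uniform(size=(500, 10))            # 10 sampled hydrologic parameters
runoff = 3 * params[:, 0] - 2 * params[:, 3] + rng.normal(scale=0.5, size=500)

Xs = (params - params.mean(0)) / params.std(0)  # standardize inputs and output
ys = (runoff - runoff.mean()) / runoff.std()
src = LinearRegression().fit(Xs, ys).coef_
ranking = np.argsort(np.abs(src))[::-1]         # most influential parameters first
```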

  6. Complementing boosted regression trees models of SOC stocks distributions with geostatistical approaches

    NASA Astrophysics Data System (ADS)

    martin, manuel; Lacarce, Eva; Meersmans, Jeroen; Orton, Thomas; Saby, Nicolas; Paroissien, Jean-Baptiste; Jolivet, Claudy; Boulonne, Line; Arrouays, Dominique

    2013-04-01

    Soil organic carbon (SOC) plays a major role in the global carbon budget. It can act as a source or a sink of atmospheric carbon, thereby possibly influencing the course of climate change. Improving the tools that model the spatial distributions of SOC stocks at national scales is a priority, both for monitoring changes in SOC and as an input for global carbon cycle studies. In this paper, we first considered several increasingly complex boosted regression trees (BRT), a convenient and efficient multiple regression model from the statistical learning field. Further, we considered a robust geostatistical approach coupled to the BRT models. Testing of the different approaches was performed on the dataset from the French Soil Monitoring Network, with a consistent cross-validation procedure. We showed that the BRT models, given their ease of use and predictive performance, could be preferred to geostatistical models for SOC mapping at the national scale and, where possible, combined with geostatistical models. This conclusion is valid provided that care is exercised in model fitting and validation, that the dataset does not allow for modeling local spatial autocorrelation, as is the case for many national systematic sampling schemes, and that good-quality data on the SOC drivers included in the models are available.
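
    A hedged sketch of the BRT component: gradient-boosted regression trees scored by cross-validation, whose residuals could then feed the coupled geostatistical (kriging) step, omitted here; the covariates and SOC values are synthetic.

```python
# Boosted regression trees for SOC, with cross-validated predictions whose
# residuals are candidates for a follow-up geostatistical (kriging) model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(12)
X = rng.normal(size=(2000, 6))        # e.g., climate, land use, soil covariates
soc = 2 + X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.5, size=2000)

brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
pred = cross_val_predict(brt, X, soc, cv=KFold(10, shuffle=True, random_state=0))
residuals = soc - pred                # input for residual kriging, if warranted
```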

  7. A prediction model for Atlantic named storm frequency using a year-by-year increment approach

    NASA Astrophysics Data System (ADS)

    Fan, K.

    2010-12-01

    This paper presents a year-by-year incremental approach to forecasting the Atlantic named storm frequency (ATSF) for the hurricane season (June 1- November 30). The year-by-year increase or decrease in the ATSF is first forecasted to yield a net ATSF prediction. Six key predictors for the year-by-year increment in the number of Atlantic named tropical storms have been identified that are available before May 1. The forecast model for the year-by-year increment of the ATSF is first established using a multi-linear regression method based on data taken from the years of 1965-1999, and the forecast model of the ATSF is then derived. The prediction model for the ATSF shows good prediction skill. Compared to the climatological average mean absolute error (MAE) of 4.1, the percentage improvement in the MAE is 29 % for the hindcast period of 2004-2009 and 46 % for the cross-validation test from 1985-2009 (26 yrs). This work demonstrates that the year-by-year incremental approach has the potential to improve operational forecasting skill for the ATSF.
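
    The incremental scheme regresses the year-over-year change in storm count on the pre-season predictors and adds the predicted change to last year's count. A minimal sketch with synthetic predictors standing in for the six May-1 predictors:

```python
# Year-by-year increment forecasting: model the change, not the level.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(13)
years = 45
predictors = rng.normal(size=(years, 6))        # six pre-season predictors per year
counts = np.clip(10 + np.cumsum(rng.integers(-3, 4, years)), 2, None)

increments = np.diff(counts)                    # year-over-year change in count
model = LinearRegression().fit(predictors[1:], increments)

new_predictors = rng.normal(size=(1, 6))        # the upcoming season's predictors
forecast = counts[-1] + model.predict(new_predictors)[0]   # net ATSF prediction
```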

  8. A Market-Basket Approach to Predict the Acute Aquatic Toxicity of Munitions and Energetic Materials.

    PubMed

    Burgoon, Lyle D

    2016-06-01

    An ongoing challenge in chemical production, including the production of insensitive munitions and energetics, is the ability to make predictions about potential environmental hazards early in the process. To address this challenge, a quantitative structure-activity relationship model was developed to predict the acute fathead minnow toxicity of insensitive munitions and energetic materials. Computational predictive toxicology models like this one may be used to identify and prioritize environmentally safer materials early in their development. The developed model is based on the Apriori market-basket/frequent-itemset mining approach, identifying probabilistic prediction rules from chemical atom-pairs and the lethality data for 57 compounds from a fathead minnow acute toxicity assay. Lethality data were discretized into four categories based on the Globally Harmonized System of Classification and Labelling of Chemicals. Apriori identified toxicophores for categories two and three. The model classified 32 of the 57 compounds correctly, with a fivefold cross-validation classification rate of 74%. A structure-based surrogate approach classified 48% of the remaining 25 chemicals correctly. This result is unsurprising, as these 25 chemicals were fairly unique within the larger set.
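
    To make the market-basket framing concrete, here is a small, self-contained frequent-itemset pass over hypothetical atom-pair "baskets"; the pair labels and support threshold are illustrative, not the paper's:

```python
from itertools import combinations
from collections import Counter

# Each compound is a "basket" of chemical atom-pair features (hypothetical labels).
baskets = [
    {"C:N", "N:O", "C:C"},
    {"C:N", "N:O"},
    {"C:C", "C:O"},
    {"C:N", "N:O", "C:O"},
]

min_support = 0.5  # an itemset must appear in at least half of the compounds

# Apriori level 1: frequent single items.
item_counts = Counter(item for b in baskets for item in b)
frequent1 = {i for i, c in item_counts.items() if c / len(baskets) >= min_support}

# Apriori level 2: candidate pairs are built only from frequent single items.
pair_counts = Counter()
for b in baskets:
    for pair in combinations(sorted(b & frequent1), 2):
        pair_counts[pair] += 1
frequent2 = {p: c / len(baskets) for p, c in pair_counts.items()
             if c / len(baskets) >= min_support}

print("frequent items:", frequent1)
print("frequent pairs:", frequent2)
```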

  9. A Systematic Approach to Predicting Spring Force for Sagittal Craniosynostosis Surgery.

    PubMed

    Zhang, Guangming; Tan, Hua; Qian, Xiaohua; Zhang, Jian; Li, King; David, Lisa R; Zhou, Xiaobo

    2016-05-01

    Spring-assisted surgery (SAS) can effectively treat scaphocephaly by reshaping crania with the appropriate spring force. However, it is difficult to accurately estimate spring force without considering the biomechanical properties of tissues. This study presents and validates a reliable system to accurately predict the spring force for sagittal craniosynostosis surgery. The authors randomly chose 23 patients who underwent SAS and had been followed for at least 2 years. An elastic model was designed to characterize the biomechanical behavior of calvarial bone tissue for each individual. After simulating the contact force at the accurate position of the skull strip with the springs, the finite element method was applied to calculate the stress at each tissue node based on the elastic model. A support vector regression approach was then used to model the relationships between the biomechanical properties generated from spring force, bone thickness, and the change of cephalic index after surgery. Therefore, for a new patient, the optimal spring force can be predicted based on the learned model with virtual spring simulation and a dynamic programming approach prior to SAS. Leave-one-out cross-validation was implemented to assess the accuracy of this prediction. The mean prediction accuracy of the model was 93.35%, demonstrating its great potential as a useful adjunct for preoperative planning.
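
    A schematic of the regression-plus-leave-one-out loop described above, using scikit-learn's SVR on invented features; the feature names and target are placeholders for the study's biomechanical quantities:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# 23 patients, as in the study; features are hypothetical biomechanical summaries.
X = rng.normal(size=(23, 3))     # e.g., bone thickness, FEM stress, cephalic-index change
y = 8.0 + 1.5 * X[:, 0] - 0.7 * X[:, 2] + rng.normal(0, 0.3, 23)  # mock spring force (N)

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))

# Leave-one-out: each patient in turn is held out and predicted from the rest.
errors = []
for train, test in LeaveOneOut().split(X):
    svr.fit(X[train], y[train])
    errors.append(abs(svr.predict(X[test])[0] - y[test][0]))

print(f"LOOCV mean absolute error: {np.mean(errors):.3f} N")
```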

  10. Denoising peptide tandem mass spectra for spectral libraries: a Bayesian approach.

    PubMed

    Shao, Wenguang; Lam, Henry

    2013-07-01

    With the rapid accumulation of data from shotgun proteomics experiments, it has become feasible to build comprehensive and high-quality spectral libraries of tandem mass spectra of peptides. A spectral library condenses experimental data into a retrievable format and can be used to aid peptide identification by spectral library searching. A key step in spectral library building is spectrum denoising, which is best accomplished by merging multiple replicates of the same peptide ion into a consensus spectrum. However, this approach cannot be applied to "singleton spectra," for which only one observed spectrum is available for the peptide ion. We developed a method, based on a Bayesian classifier, for denoising peptide tandem mass spectra. The classifier accounts for relationships between peaks, and can be trained on the fly from consensus spectra and immediately applied to denoise singleton spectra, without hard-coded knowledge about peptide fragmentation. A linear regression model was also trained to predict the number of useful "signal" peaks in a spectrum, thereby obviating the need for arbitrary thresholds for peak filtering. This Bayesian approach accumulates weak evidence systematically to boost the discrimination power between signal and noise peaks, and produces readily interpretable conditional probabilities that offer valuable insights into peptide fragmentation behaviors. By cross-validation, spectra denoised by this method were shown to retain more signal peaks, and to have higher spectral similarities to replicates, than those filtered by intensity only.
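
    As a toy version of the signal-versus-noise peak classifier, the following trains a naive Bayes model on simple per-peak features (relative intensity, and whether a complementary-ion partner exists); the features and data are invented for illustration and are much cruder than the paper's classifier:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(4)

# Mock training peaks from consensus spectra: [relative intensity, has complementary ion (0/1)]
n = 1000
is_signal = rng.random(n) < 0.4
intensity = np.where(is_signal, rng.beta(5, 2, n), rng.beta(2, 8, n))
partner = np.where(is_signal, rng.random(n) < 0.8, rng.random(n) < 0.2).astype(float)
X = np.column_stack([intensity, partner])

clf = GaussianNB().fit(X, is_signal)

# Denoise a "singleton" spectrum: keep peaks whose posterior P(signal) exceeds 0.5.
peaks = np.array([[0.9, 1.0], [0.05, 0.0], [0.4, 1.0]])
p_signal = clf.predict_proba(peaks)[:, 1]
keep = p_signal > 0.5
print(np.round(p_signal, 2), keep)
```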

  11. Neural Network Based Response Prediction of rTMS in Major Depressive Disorder Using QEEG Cordance

    PubMed Central

    Ozekes, Serhat; Gultekin, Selahattin; Tarhan, Nevzat; Hizli Sayar, Gokben; Bayram, Ali

    2015-01-01

    Objective The combination of repetitive transcranial magnetic stimulation (rTMS), a non-pharmacological form of therapy for treating major depressive disorder (MDD), and electroencephalography (EEG) is a valuable tool for investigating functional connectivity in the brain. This study aims to explore whether pre-treatment frontal quantitative EEG (QEEG) cordance is associated with response to rTMS treatment among MDD patients by using an artificial intelligence approach, the artificial neural network (ANN). Methods An ANN classification using pre-treatment cordance of frontal QEEG was carried out to identify responders and non-responders to rTMS treatment among 55 MDD subjects. The classification performance was evaluated using k-fold cross-validation. Results The ANN classification identified responders to rTMS treatment with a sensitivity of 93.33%, and its overall accuracy reached 89.09%. Area under the receiver operating characteristic (ROC) curve (AUC) values for responder detection using 6-, 8- and 10-fold cross-validation were 0.917, 0.823 and 0.894, respectively. Conclusion The ANN approach has potential utility as a clinical tool for administering rTMS therapy to a targeted group of subjects suffering from MDD. The methodology is particularly useful to the clinician because prediction is possible using EEG data collected before the treatment process is initiated. It is worth using feature selection algorithms to raise the sensitivity and accuracy values. PMID:25670947
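
    A compact sketch of the evaluation loop (not the authors' network): a small multilayer perceptron scored with stratified k-fold cross-validation for several k, reporting AUC, on synthetic cordance-like features:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(5)

# Mock cohort: 55 subjects, frontal QEEG cordance features, responder label.
X = rng.normal(size=(55, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 55) > 0).astype(int)

mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)

# AUC under 6-, 8- and 10-fold stratified cross-validation, as reported in the study.
for k in (6, 8, 10):
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    auc = cross_val_score(mlp, X, y, cv=cv, scoring="roc_auc")
    print(f"{k:2d}-fold CV AUC: {auc.mean():.3f}")
```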

  12. The Object-analogue approach for probabilistic forecasting

    NASA Astrophysics Data System (ADS)

    Frediani, M. E.; Hopson, T. M.; Anagnostou, E. N.; Hacker, J.

    2015-12-01

    The object-analogue is a new method to estimate forecast uncertainty and to derive probabilistic predictions of gridded forecast fields over larger regions rather than point locations. The method has been developed for improving the forecast of 10-meter wind speed over the northeast US, and it can be extended to other forecast variables, vertical levels, and other regions. The object-analogue approach combines the analog post-processing technique (Hopson 2005; Hamill 2006; Delle Monache 2011) with the Method for Object-based Diagnostic Evaluation (MODE) for forecast verification (Davis et al 2006a, b). MODE was originally used mainly to verify precipitation forecasts, representing features of a forecast field as objects. The analog technique is used to reduce the NWP systematic and random errors of a gridded forecast field. In this study we use MODE-derived objects to characterize wind field forecasts by attributes such as object area, centroid location, and intensity percentiles, and apply the analogue concept to these objects. The object-analogue method uses a database of objects derived from reforecasts and their respective reanalysis. Given a real-time forecast field, it searches the database and selects the top-ranked objects with the most similar set of attributes using the MODE fuzzy logic algorithm for object matching. The attribute probabilities obtained with the set of selected object-analogues are used to derive a multi-layer probabilistic prediction. The attribute probabilities are combined into three uncertainty layers that address the main concerns of most applications: location, area, and magnitude. The multi-layer uncertainty can be weighted and combined, or used independently, such that it provides a more accurate prediction adjusted according to the application of interest. In this study we present preliminary results of the object-analogue method. Using a database with one hundred storms we perform a leave-one-out cross-validation to
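
    The analogue search itself reduces to a nearest-neighbour query in object-attribute space. A minimal sketch with invented attributes (plain Euclidean matching stands in for the MODE fuzzy-logic algorithm):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)

# Reforecast object database: [area (km^2), centroid lat, centroid lon, 90th-pct wind (m/s)]
db = rng.normal(loc=[500, 42, -72, 18], scale=[150, 2, 3, 4], size=(100, 4))

# Attributes of the current real-time forecast object.
query = np.array([[560, 42.5, -71.0, 21.0]])

# Normalize attributes so each contributes comparably to the distance.
mu, sd = db.mean(axis=0), db.std(axis=0)
nn = NearestNeighbors(n_neighbors=10).fit((db - mu) / sd)
dist, idx = nn.kneighbors((query - mu) / sd)

analogues = db[idx[0]]                 # top-ranked object-analogues
print("area uncertainty (10th-90th pct):",
      np.percentile(analogues[:, 0], [10, 90]).round(1))
```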

  13. Extreme rainfall distribution mapping: Comparison of two approaches in West Africa

    NASA Astrophysics Data System (ADS)

    Panthou, G.; Vischel, T.; Lebel, T.; Blanchet, J.; Quantin, G.; Ali, A.

    2012-12-01

    In a world where populations are increasingly exposed to natural hazards, extreme rainfall mapping remains an important subject of research. Extreme rainfall maps are required for both flood risk management and civil engineering structure design, the challenge being to take into account the local information provided by point rainfall series as well as the necessity of some regional coherency. Such coherency is not guaranteed when extreme value distributions are fitted separately to individual maximum rainfall series. Two approaches based on extreme value theory (Block Maxima Analysis) are compared here, with an application to extreme rainfall mapping in West Africa. Annual daily rainfall maxima are extracted from rain-gauge series and modeled over the study region by GEV (Generalized Extreme Value) distributions. The two approaches are the following: (i) the Local Fit and Interpolation (LFI) approach, which consists of a spatial interpolation of the GEV distribution parameters estimated independently at each rain-gauge series; (ii) the Spatial Maximum Likelihood Estimation (SMLE) approach, which directly estimates the GEV distribution over the entire region by a single maximum likelihood fit using all measurements jointly, combined with spatial covariates. Five LFI and three SMLE methods are considered, using the information provided by 126 daily rainfall series covering the period 1950-1990. The methods are first evaluated in calibration. Then the predictive skills and the robustness are assessed through a cross-validation and an independent network validation process. The SMLE approach, especially when using the mean annual rainfall as covariate, appears to perform better for most of the scores computed. Using a reference series of 104 years of daily data recorded at Niamey (Niger), it is also shown that the SMLE approach has the capacity to deal more efficiently with the effect of local outliers by using the spatial information provided by nearby stations.
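
    For the LFI flavour of this comparison, the per-gauge step is just a GEV fit to annual maxima. A minimal sketch with synthetic maxima, where scipy's genextreme plays the role of the GEV fitting engine (step 2 of LFI, spatial interpolation of the fitted parameters, is omitted):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)

# Synthetic annual daily-rainfall maxima (mm) for three hypothetical gauges, 41 years each.
gauges = {
    g: genextreme.rvs(c=-0.1, loc=60 + 10 * i, scale=15, size=41,
                      random_state=int(rng.integers(2**31 - 1)))
    for i, g in enumerate(["gauge_A", "gauge_B", "gauge_C"])
}

# LFI step 1: fit a GEV independently at each gauge.
for name, maxima in gauges.items():
    c, loc, scale = genextreme.fit(maxima)
    rp100 = genextreme.ppf(1 - 1 / 100, c, loc, scale)  # 100-year return level
    print(f"{name}: shape={c:+.2f} loc={loc:.1f} scale={scale:.1f} "
          f"100-yr level={rp100:.1f} mm")
```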

  14. A practical and automated approach to large area forest disturbance mapping with remote sensing.

    PubMed

    Ozdogan, Mutlu

    2014-01-01

    In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using n-fold cross-validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of misclassification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for forest cover
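
    Step iii, filtering mislabeled training pixels, can be approximated by flagging samples whose cross-validated predictions disagree with their labels. A sketch with scikit-learn stand-ins and synthetic "pixels" (the features and noise rate are invented):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(8)

# Synthetic training pixels from a SWIR difference image, some mislabeled.
n = 600
X = rng.normal(size=(n, 4))
y_true = (X[:, 0] > 0).astype(int)            # 1 = disturbed forest, 0 = stable
y = y_true.copy()
flip = rng.random(n) < 0.10                   # 10% label noise from the auto-extraction
y[flip] ^= 1

# n-fold cross-validated predictions on the training set itself.
pred = cross_val_predict(RandomForestClassifier(n_estimators=200, random_state=0),
                         X, y, cv=10)

keep = pred == y                              # drop samples the CV ensemble disputes
print(f"kept {keep.sum()}/{n} training pixels; "
      f"{(~keep & flip).sum()} of {flip.sum()} injected errors removed")
```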

  15. A Multiscale Approach to InSAR Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Simons, M.; Hetland, E. A.; Muse, P.; Lin, Y.; Dicaprio, C. J.

    2009-12-01

    We describe progress in the development of MInTS (Multiscale analysis of InSAR Time Series), an approach to constructing self-consistent time-dependent deformation observations from repeated satellite-based InSAR images of a given region. MInTS relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. In essence, MInTS allows one to consider all data at the same time, as opposed to one pixel at a time, thereby taking advantage of both spatial and temporal characteristics of the deformation field. This approach also permits a consistent treatment of all data independent of the presence of localized holes due to unwrapping issues in any given interferogram. Specifically, the presence of holes is accounted for through a weighting scheme based on the extent of actual data versus the area of holes associated with any given wavelet. In terms of the temporal representation, we have the flexibility to explicitly parametrize known processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). Our approach also allows for the temporal parametrization to include a set of general functions in order to account for unexpected processes. We allow for various forms of model regularization, using a cross-validation approach to select penalty parameters. We also experiment with the use of sparsity-inducing regularization as a way to select from a large dictionary of time functions. The multiscale analysis allows us to consider various contributions (e.g., orbit errors) that may affect specific scales but not others. The methods described here are all embarrassingly parallel and suitable for implementation on a cluster computer. We demonstrate the use of MInTS using a large suite of ERS-1/2 and Envisat interferograms for Long Valley Caldera, and validate
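
    The penalty-selection step, choosing a regularization strength by cross-validation, can be illustrated with a generic ridge example; this stands in for MInTS's temporal inversion, not its actual wavelet machinery:

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(9)

# Mock temporal design matrix: columns = time functions (steps, transients, seasonal...).
n_epochs, n_funcs = 120, 30
G = rng.normal(size=(n_epochs, n_funcs))
m_true = np.zeros(n_funcs)
m_true[:4] = [3.0, -1.5, 0.8, 2.2]             # only a few functions truly contribute
d = G @ m_true + rng.normal(0, 0.5, n_epochs)  # wavelet-coefficient time series + noise

# Cross-validation over a grid of penalty parameters picks the regularization strength.
ridge = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(G, d)
print(f"selected penalty alpha = {ridge.alpha_:.3g}")
```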

  16. A Machine Learning Approach to Automated Structural Network Analysis: Application to Neonatal Encephalopathy

    PubMed Central

    Ziv, Etay; Tymofiyeva, Olga; Ferriero, Donna M.; Barkovich, A. James; Hess, Chris P.; Xu, Duan

    2013-01-01

    Neonatal encephalopathy represents a heterogeneous group of conditions associated with life-long developmental disabilities and neurological deficits. Clinical measures and current anatomic brain imaging remain inadequate predictors of outcome in children with neonatal encephalopathy. Some studies have suggested that brain development and, therefore, brain connectivity may be altered in the subgroup of patients who subsequently go on to develop clinically significant neurological abnormalities. Large-scale structural brain connectivity networks constructed using diffusion tractography have been posited to reflect organizational differences in white matter architecture at the mesoscale, and thus offer a unique tool for characterizing brain development in patients with neonatal encephalopathy. In this manuscript we use diffusion tractography to construct structural networks for a cohort of patients with neonatal encephalopathy. We systematically map these networks to a high-dimensional space and then apply standard machine learning algorithms to predict neurological outcome in the cohort. Using nested cross-validation we demonstrate high prediction accuracy that is both statistically significant and robust over a broad range of thresholds. Our algorithm offers a novel tool to evaluate neonates at risk for developing neurological deficit. The described approach can be applied to any brain pathology that affects structural connectivity. PMID:24282501
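
    Nested cross-validation, the evaluation scheme used here, separates hyperparameter tuning (inner loop) from performance estimation (outer loop). A generic sketch with synthetic network-feature vectors, not the cohort's connectivity data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

rng = np.random.default_rng(10)

# Synthetic high-dimensional connectivity features and neurological outcome labels.
X = rng.normal(size=(60, 300))
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 60) > 0).astype(int)

inner = GridSearchCV(SVC(kernel="linear"),
                     param_grid={"C": [0.01, 0.1, 1, 10]},
                     cv=StratifiedKFold(5, shuffle=True, random_state=0))

# The outer loop never sees the tuning decisions of the inner loop.
outer = StratifiedKFold(5, shuffle=True, random_state=1)
acc = cross_val_score(inner, X, y, cv=outer)
print(f"nested CV accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```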

  17. Vocal individuality cues in the African penguin (Spheniscus demersus): a source-filter theory approach.

    PubMed

    Favaro, Livio; Gamba, Marco; Alfieri, Chiara; Pessani, Daniela; McElligott, Alan G

    2015-01-01

    The African penguin is a nesting seabird endemic to southern Africa. In penguins of the genus Spheniscus vocalisations are important for social recognition. However, it is not clear which acoustic features of calls can encode individual identity information. We recorded contact calls and ecstatic display songs of 12 adult birds from a captive colony. For each vocalisation, we measured 31 spectral and temporal acoustic parameters related to both source and filter components of calls. For each parameter, we calculated the Potential of Individual Coding (PIC). The acoustic parameters showing PIC ≥ 1.1 were used to perform a stepwise cross-validated discriminant function analysis (DFA). The DFA correctly classified 66.1% of the contact calls and 62.5% of display songs to the correct individual. The DFA also resulted in the further selection of 10 acoustic features for contact calls and 9 for display songs that were important for vocal individuality. Our results suggest that studying the anatomical constraints that influence nesting penguin vocalisations from a source-filter perspective can lead to a much better understanding of the acoustic cues of individuality contained in their calls. This approach could be further extended to study and understand vocal communication in other bird species. PMID:26602001
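
    The cross-validated DFA step is essentially leave-one-out linear discriminant analysis. A sketch with made-up acoustic parameters for 12 "birds" (the group sizes and feature counts are illustrative):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(11)

# 12 individuals x 8 calls each, 10 acoustic parameters (source + filter features).
n_birds, n_calls, n_feat = 12, 8, 10
bird_means = rng.normal(scale=2.0, size=(n_birds, n_feat))
X = np.vstack([m + rng.normal(size=(n_calls, n_feat)) for m in bird_means])
y = np.repeat(np.arange(n_birds), n_calls)

# Leave-one-out cross-validated classification of calls to individuals.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"cross-validated DFA accuracy: {acc.mean():.1%}")
```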

  18. TransportTP: A two-phase classification approach for membrane transporter prediction and characterization

    PubMed Central

    2009-01-01

    Background Membrane transporters play crucial roles in living cells. Experimental characterization of transporters is costly and time-consuming. Current computational methods for transporter characterization still require extensive curation efforts, especially for eukaryotic organisms. We developed a novel genome-scale transporter prediction and characterization system called TransportTP that combined homology-based and machine learning methods in a two-phase classification approach. First, traditional homology methods were employed to predict novel transporters based on sequence similarity to known classified proteins in the Transporter Classification Database (TCDB). Second, machine learning methods were used to integrate a variety of features to refine the initial predictions. A set of rules based on transporter features was developed by machine learning using well-curated proteomes as guides. Results In a cross-validation using the yeast proteome for training and the proteomes of ten other organisms for testing, TransportTP achieved an equivalent recall and precision of 81.8%, based on TransportDB, a manually annotated transporter database. In an independent test using the Arabidopsis proteome for training and four recently sequenced plant proteomes for testing, it achieved a recall of 74.6% and a precision of 73.4%, according to our manual curation. Conclusions TransportTP is the most effective tool for eukaryotic transporter characterization to date. PMID:20003433

  19. Prediction of community prevalence of human onchocerciasis in the Amazonian onchocerciasis focus: Bayesian approach.

    PubMed Central

    Carabin, Hélène; Escalona, Marisela; Marshall, Clare; Vivas-Martínez, Sarai; Botto, Carlos; Joseph, Lawrence; Basáñez, María-Gloria

    2003-01-01

    OBJECTIVE: To develop a Bayesian hierarchical model for human onchocerciasis with which to explore the factors that influence prevalence of microfilariae in the Amazonian focus of onchocerciasis and predict the probability of any community being at least mesoendemic (>20% prevalence of microfilariae), and thus in need of priority ivermectin treatment. METHODS: Models were developed with data from 732 individuals aged > or =15 years who lived in 29 Yanomami communities along four rivers of the south Venezuelan Orinoco basin. The models' abilities to predict prevalences of microfilariae in communities were compared. The deviance information criterion, Bayesian P-values, and residual values were used to select the best model with an approximate cross-validation procedure. FINDINGS: A three-level model that acknowledged clustering of infection within communities performed best, with host age and sex included at the individual level, a river-dependent altitude effect at the community level, and additional clustering of communities along rivers. This model correctly classified 25/29 (86%) villages with respect to their need for priority ivermectin treatment. CONCLUSION: Bayesian methods are a flexible and useful approach for public health research and control planning. Our model acknowledges the clustering of infection within communities, allows investigation of links between individual- or community-specific characteristics and infection, incorporates additional uncertainty due to missing covariate data, and informs policy decisions by predicting the probability that a new community is at least mesoendemic. PMID:12973640

  20. An Approach for Predicting Essential Genes Using Multiple Homology Mapping and Machine Learning Algorithms

    PubMed Central

    Hua, Hong-Li; Zhang, Fa-Zhan; Labena, Abraham Alemayehu; Dong, Chuan; Jin, Yan-Ting

    2016-01-01

    Investigation of essential genes is significant for comprehending the minimal gene sets of cells and discovering potential drug targets. In this study, a novel approach based on multiple homology mapping and machine learning methods was introduced to predict essential genes. We focused on 25 bacteria which have characterized essential genes. The predictions yielded the highest area under the receiver operating characteristic (ROC) curve (AUC) of 0.9716 in a tenfold cross-validation test. Proper features were utilized to construct models to make predictions in distantly related bacteria. The accuracy of predictions was evaluated via the consistency of predictions and known essential genes of target species. The highest AUC of 0.9552 and an average AUC of 0.8314 were achieved when making predictions across organisms. An independent dataset from Synechococcus elongatus, which was released recently, was obtained for further assessment of the performance of our model. The AUC score of these predictions is 0.7855, which is higher than that of other methods. This research shows that features obtained by homology mapping alone can achieve results comparable to, or even better than, integrated features. Meanwhile, the work indicates that machine learning-based methods can assign more effective weight coefficients than empirical formulas based on biological knowledge. PMID:27660763

  1. A machine learning approach for classification of anatomical coverage in CT

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyong; Lo, Pechin; Ramakrishna, Bharath; Goldin, Johnathan; Brown, Matthew

    2016-03-01

    Automatic classification of the anatomical coverage of medical images is critical for big data mining and as a pre-processing step to automatically trigger specific computer-aided diagnosis systems. The traditional way of identifying scans through DICOM headers has various limitations due to manual entry of series descriptions and non-standardized naming conventions. In this study, we present a machine learning approach in which multiple binary classifiers were used to classify different anatomical coverages of CT scans. A one-vs-rest strategy was applied. For a given training set, a template scan was selected from the positive samples and all other scans were registered to it. Each registered scan was then evenly split into k × k × k non-overlapping blocks, and for each block the mean intensity was computed. This resulted in a 1 × k³ feature vector for each scan. The feature vectors were then used to train an SVM-based classifier. In this feasibility study, four classifiers were built to identify anatomic coverages of brain, chest, abdomen-pelvis, and chest-abdomen-pelvis CT scans. Each classifier was trained and tested using a set of 300 scans from different subjects, composed of 150 positive samples and 150 negative samples. The area under the ROC curve (AUC) of the testing set was measured to evaluate the performance in a two-fold cross-validation setting. Our results showed good classification performance with an average AUC of 0.96.
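
    The feature extraction is easy to reproduce: split a registered volume into k × k × k blocks and take each block's mean intensity. A sketch with a random array standing in for a registered CT scan (the volume shape and labels are invented):

```python
import numpy as np
from sklearn.svm import SVC

def block_mean_features(volume: np.ndarray, k: int = 4) -> np.ndarray:
    """Split a 3-D volume into k*k*k non-overlapping blocks; return mean intensities."""
    feats = []
    for block_idx in np.ndindex(k, k, k):
        slices = tuple(
            slice(i * s // k, (i + 1) * s // k)
            for i, s in zip(block_idx, volume.shape)
        )
        feats.append(volume[slices].mean())
    return np.asarray(feats)            # shape (k**3,)

# Mock "registered CT scans": positives vs negatives differ in mean intensity pattern.
rng = np.random.default_rng(12)
vols = [rng.normal(loc=(1 if i % 2 else 0), size=(32, 32, 16)) for i in range(20)]
X = np.stack([block_mean_features(v) for v in vols])
y = np.array([i % 2 for i in range(20)])

clf = SVC(kernel="linear").fit(X, y)
print("training accuracy:", clf.score(X, y))
```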

  2. Vocal individuality cues in the African penguin (Spheniscus demersus): a source-filter theory approach

    PubMed Central

    Favaro, Livio; Gamba, Marco; Alfieri, Chiara; Pessani, Daniela; McElligott, Alan G.

    2015-01-01

    The African penguin is a nesting seabird endemic to southern Africa. In penguins of the genus Spheniscus vocalisations are important for social recognition. However, it is not clear which acoustic features of calls can encode individual identity information. We recorded contact calls and ecstatic display songs of 12 adult birds from a captive colony. For each vocalisation, we measured 31 spectral and temporal acoustic parameters related to both source and filter components of calls. For each parameter, we calculated the Potential of Individual Coding (PIC). The acoustic parameters showing PIC ≥ 1.1 were used to perform a stepwise cross-validated discriminant function analysis (DFA). The DFA correctly classified 66.1% of the contact calls and 62.5% of display songs to the correct individual. The DFA also resulted in the further selection of 10 acoustic features for contact calls and 9 for display songs that were important for vocal individuality. Our results suggest that studying the anatomical constraints that influence nesting penguin vocalisations from a source-filter perspective, can lead to a much better understanding of the acoustic cues of individuality contained in their calls. This approach could be further extended to study and understand vocal communication in other bird species. PMID:26602001

  3. Neural network approach to quantum-chemistry data: Accurate prediction of density functional theory energies

    NASA Astrophysics Data System (ADS)

    Balabin, Roman M.; Lomakina, Ekaterina I.

    2009-08-01

    An artificial neural network (ANN) approach has been applied to estimate the density functional theory (DFT) energy with a large basis set using lower-level energy values and molecular descriptors. A total of 208 different molecules were used for ANN training, cross-validation, and testing with the BLYP, B3LYP, and BMK density functionals. Hartree-Fock results are reported for comparison. Furthermore, constitutional molecular descriptors (CD) and quantum-chemical molecular descriptors (QD) were used to build the calibration model. A neural network structure optimization, leading to four to five hidden neurons, was also carried out. The use of several low-level energy values was found to greatly reduce the prediction error. The expected error (mean absolute deviation) of the ANN approximation to DFT energies was 0.6 ± 0.2 kcal mol⁻¹. In addition, a comparison of the different density functionals and basis sets, and a comparison with multiple linear regression results, are also provided. The CDs were found to overcome limitations of the QDs. Furthermore, an effective ANN model for DFT/6-311G(3df,3pd) and DFT/6-311G(2df,2pd) energy estimation was developed, and benchmark results are provided.

  4. Vocal individuality cues in the African penguin (Spheniscus demersus): a source-filter theory approach.

    PubMed

    Favaro, Livio; Gamba, Marco; Alfieri, Chiara; Pessani, Daniela; McElligott, Alan G

    2015-01-01

    The African penguin is a nesting seabird endemic to southern Africa. In penguins of the genus Spheniscus vocalisations are important for social recognition. However, it is not clear which acoustic features of calls can encode individual identity information. We recorded contact calls and ecstatic display songs of 12 adult birds from a captive colony. For each vocalisation, we measured 31 spectral and temporal acoustic parameters related to both source and filter components of calls. For each parameter, we calculated the Potential of Individual Coding (PIC). The acoustic parameters showing PIC ≥ 1.1 were used to perform a stepwise cross-validated discriminant function analysis (DFA). The DFA correctly classified 66.1% of the contact calls and 62.5% of display songs to the correct individual. The DFA also resulted in the further selection of 10 acoustic features for contact calls and 9 for display songs that were important for vocal individuality. Our results suggest that studying the anatomical constraints that influence nesting penguin vocalisations from a source-filter perspective can lead to a much better understanding of the acoustic cues of individuality contained in their calls. This approach could be further extended to study and understand vocal communication in other bird species.

  5. An Approach for Predicting Essential Genes Using Multiple Homology Mapping and Machine Learning Algorithms

    PubMed Central

    Hua, Hong-Li; Zhang, Fa-Zhan; Labena, Abraham Alemayehu; Dong, Chuan; Jin, Yan-Ting

    2016-01-01

    Investigation of essential genes is significant for comprehending the minimal gene sets of cells and discovering potential drug targets. In this study, a novel approach based on multiple homology mapping and machine learning methods was introduced to predict essential genes. We focused on 25 bacteria which have characterized essential genes. The predictions yielded the highest area under the receiver operating characteristic (ROC) curve (AUC) of 0.9716 in a tenfold cross-validation test. Proper features were utilized to construct models to make predictions in distantly related bacteria. The accuracy of predictions was evaluated via the consistency of predictions and known essential genes of target species. The highest AUC of 0.9552 and an average AUC of 0.8314 were achieved when making predictions across organisms. An independent dataset from Synechococcus elongatus, which was released recently, was obtained for further assessment of the performance of our model. The AUC score of these predictions is 0.7855, which is higher than that of other methods. This research shows that features obtained by homology mapping alone can achieve results comparable to, or even better than, integrated features. Meanwhile, the work indicates that machine learning-based methods can assign more effective weight coefficients than empirical formulas based on biological knowledge.

  6. A Bayesian Approach to the Design and Analysis of Computer Experiments

    SciTech Connect

    Currin, C.

    1988-01-01

    We consider the problem of designing and analyzing experiments for prediction of the function y(t), t ∈ T, where y is evaluated by means of a computer code (typically by solving complicated equations that model a physical system), and T represents the domain of inputs to the code. We use a Bayesian approach, in which uncertainty about y is represented by a spatial stochastic process (random function); here we restrict attention to stationary Gaussian processes. The posterior mean function can be used as an interpolating function, with uncertainties given by the posterior standard deviations. Instead of completely specifying the prior process, we consider several families of priors, and suggest some cross-validational methods for choosing one that performs relatively well on the function at hand. As a design criterion, we use the expected reduction in the entropy of the random vector y(T*), where T* ⊂ T is a given finite set of "sites" (input configurations) at which predictions are to be made. We describe an exchange algorithm for constructing designs that are optimal with respect to this criterion. To demonstrate the use of these design and analysis methods, several examples are given, including one experiment on a computer model of a thermal energy storage device and another on an integrated circuit simulator.
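
    In modern terms this is Gaussian-process emulation of a computer code. A minimal sketch of the posterior-mean interpolator with uncertainty bands, with scikit-learn's GP standing in for the paper's stationary-prior machinery and a cheap function standing in for the expensive code:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Stand-in for the expensive "computer code", evaluated at a few design sites.
def code(t):
    return np.sin(3 * t) + 0.5 * t

T_design = np.array([[0.0], [0.4], [1.1], [1.7], [2.5], [3.0]])
y = code(T_design).ravel()

# Stationary Gaussian-process prior; the posterior mean interpolates the code runs.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-10)
gp.fit(T_design, y)

T_star = np.linspace(0, 3, 7).reshape(-1, 1)         # prediction "sites" T*
mean, std = gp.predict(T_star, return_std=True)
for t, m, s in zip(T_star.ravel(), mean, std):
    print(f"t={t:.2f}  posterior mean={m:+.3f}  +/- {s:.3f}")
```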

  7. A Metabolomic Approach to Target Compounds from the Asteraceae Family for Dual COX and LOX Inhibition

    PubMed Central

    Chagas-Paula, Daniela A.; Zhang, Tong; Da Costa, Fernando B.; Edrada-Ebel, RuAngelie

    2015-01-01

    The application of metabolomics in phytochemical analysis is an innovative strategy for targeting active compounds in a complex plant extract. Species of the Asteraceae family are well known to exhibit potent anti-inflammatory (AI) activity. Dual inhibition of the enzymes COX-1 and 5-LOX is essential for the treatment of several inflammatory diseases, but few such investigations of natural products have been reported in the literature. In this study, 57 leaf extracts (EtOH-H2O 7:3, v/v) from different genera and species of the Asteraceae family were tested against COX-1 and 5-LOX, while HPLC-ESI-HRMS analysis of the extracts indicated high diversity in their chemical compositions. Using O2PLS-DA (R2 > 0.92; VIP > 1 and positive Y-correlation values), the dual inhibition potential of low-abundance metabolites was determined. The O2PLS-DA results exhibited good validation values (cross-validation Q2 > 0.7 and external validation P2 > 0.6) with 0% false-positive predictions. The metabolomic approach determined biomarkers for the required biological activity and detected active compounds in the extracts displaying unique mechanisms of action. In addition, the PCA data also gave insights into the chemotaxonomy of the family Asteraceae across its diverse range of genera and tribes. PMID:26184333

  8. Predicting dissolved oxygen concentration using kernel regression modeling approaches with nonlinear hydro-chemical data.

    PubMed

    Singh, Kunwar P; Gupta, Shikha; Rai, Premanjali

    2014-05-01

    Kernel function-based regression models were constructed and applied to a nonlinear hydro-chemical dataset pertaining to surface water for predicting dissolved oxygen levels. Initial features were selected using a nonlinear approach. Nonlinearity in the data was tested using BDS statistics, which revealed that the data have a nonlinear structure. Kernel ridge regression, kernel principal component regression, kernel partial least squares regression, and support vector regression models were developed using the Gaussian kernel function, and their generalization and predictive abilities were compared in terms of several statistical parameters. Model parameters were optimized using a cross-validation procedure. The proposed kernel regression methods successfully captured the nonlinear features of the original data by transforming them into a high-dimensional feature space using the kernel function. The performance of all the kernel-based modeling methods used here was comparable in terms of both predictive and generalization abilities. Values of the performance criteria suggested the adequacy of the constructed models in fitting the nonlinear data and indicated their good predictive capabilities. PMID:24338099
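
    A condensed version of one of these models: Gaussian-kernel ridge regression with cross-validated hyperparameters, on synthetic hydro-chemical predictors (the variable names are illustrative, not the study's dataset):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV, KFold

rng = np.random.default_rng(13)

# Mock water-quality predictors (pH, temperature, BOD, conductivity) -> dissolved oxygen.
n = 300
X = rng.normal(size=(n, 4))
y = 8 - 1.2 * X[:, 2] + 0.6 * np.sin(2 * X[:, 0]) + rng.normal(0, 0.3, n)

# Gaussian (RBF) kernel ridge; regularization and kernel width tuned by cross-validation.
grid = GridSearchCV(
    KernelRidge(kernel="rbf"),
    param_grid={"alpha": [0.01, 0.1, 1.0], "gamma": [0.01, 0.1, 1.0]},
    cv=KFold(5, shuffle=True, random_state=0),
    scoring="neg_root_mean_squared_error",
)
grid.fit(X, y)
print("best params:", grid.best_params_, f"CV RMSE: {-grid.best_score_:.3f}")
```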

  9. PDP-CON: prediction of domain/linker residues in protein sequences using a consensus approach.

    PubMed

    Chatterjee, Piyali; Basu, Subhadip; Zubek, Julian; Kundu, Mahantapas; Nasipuri, Mita; Plewczynski, Dariusz

    2016-04-01

    The prediction of domain/linker residues in protein sequences is a crucial task in the functional classification of proteins, homology-based protein structure prediction, and high-throughput structural genomics. In this work, a novel consensus-based machine-learning technique was applied for residue-level prediction of the domain/linker annotations in protein sequences using ordered/disordered regions along protein chains and a set of physicochemical properties. Six different classifiers (decision tree, Gaussian naïve Bayes, linear discriminant analysis, support vector machine, random forest, and multilayer perceptron) were exhaustively explored for the residue-level prediction of domain/linker regions. The protein sequences from the curated CATH database were used for training and cross-validation experiments. Test results obtained by applying the developed PDP-CON tool to the mutually exclusive, independent proteins of the CASP-8, CASP-9, and CASP-10 databases are reported. An n-star quality consensus approach was used to combine the results yielded by different classifiers. The average PDP-CON accuracy and F-measure values for the CASP targets were found to be 0.86 and 0.91, respectively. The dataset, source code, and all supplementary materials for this work are available at https://cmaterju.org/cmaterbioinfo/ for noncommercial use.
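
    The consensus step can be sketched as voting over the same six classifier families; plain soft voting here is a simple stand-in for the paper's n-star quality consensus, and the residue features are synthetic:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(14)

# Mock residue-level features (physicochemical + order/disorder) and domain/linker labels.
X = rng.normal(size=(500, 12))
y = (X[:, 0] - X[:, 3] + rng.normal(0, 1, 500) > 0).astype(int)

consensus = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=5)),
        ("nb", GaussianNB()),
        ("lda", LinearDiscriminantAnalysis()),
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=1500)),
    ],
    voting="soft",          # average predicted probabilities across the six models
)
print("5-fold CV accuracy:", cross_val_score(consensus, X, y, cv=5).mean().round(3))
```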

  10. PancreApp: An Innovative Approach to Computational Individualization of Nutritional Therapy in Chronic Gastrointestinal Disorders.

    PubMed

    Stawiski, Konrad; Strzałka, Alicja; Puła, Anna; Bijakowski, Krzysztof

    2015-01-01

    Medical nutrition therapy has a pivotal role in the management of chronic gastrointestinal disorders, like chronic pancreatitis, inflammatory bowel diseases (Leśniowski-Crohn's disease and ulcerative colitis) or irritable bowel syndrome. The aim of this study is to develop, deploy and evaluate an interactive application for Windows and Android operating systems, which could serve as a digital diet diary and as an analysis and prediction tool both for the patient and the doctor. The software gathers details about the patient's diet and associated fettle in order to estimate fettle change after future meals, specifically for an individual patient. In this paper we describe the process of idea development and application design, feasibility assessment using a phone survey, a preliminary evaluation on 6 healthy individuals, and early results of a clinical trial, which is still ongoing. Results suggest that the applied approximative approach (Shepard's method of 6-dimensional metric interpolation) has the potential to predict fettle accurately, as shown by leave-one-out cross-validation (LOOCV). PMID:26262064
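
    Shepard's method is inverse-distance-weighted interpolation. A small sketch in the spirit of the app's 6-dimensional meal-to-fettle mapping, with the meal descriptors and scores invented:

```python
import numpy as np

def shepard_predict(X, y, q, power=2.0):
    """Inverse-distance-weighted (Shepard) interpolation of y at query point q."""
    d = np.linalg.norm(X - q, axis=1)
    if np.any(d == 0):                     # exact match in the diary
        return float(y[d == 0][0])
    w = 1.0 / d ** power
    return float(w @ y / w.sum())

rng = np.random.default_rng(15)

# Diary entries: 6 meal descriptors (e.g., fat, fiber, spice, ...) -> fettle score.
X = rng.uniform(0, 1, size=(40, 6))
y = 5 - 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.2, 40)

# Leave-one-out cross-validation of the interpolator, as in the paper.
errs = [abs(shepard_predict(np.delete(X, i, 0), np.delete(y, i), X[i]) - y[i])
        for i in range(len(y))]
print(f"LOOCV mean absolute error: {np.mean(errs):.3f}")
```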

  11. Identification of Real MicroRNA Precursors with a Pseudo Structure Status Composition Approach

    PubMed Central

    Liu, Bin; Fang, Longyun; Liu, Fule; Wang, Xiaolong; Chen, Junjie; Chou, Kuo-Chen

    2015-01-01

    Containing about 22 nucleotides, a micro RNA (abbreviated miRNA) is a small non-coding RNA molecule, functioning in transcriptional and post-transcriptional regulation of gene expression. The human genome may encode over 1000 miRNAs. Albeit poorly characterized, miRNAs are widely deemed important regulators of biological processes. Aberrant expression of miRNAs has been observed in many cancers and other disease states, indicating that they are deeply implicated in these diseases, particularly in carcinogenesis. Therefore, it is important for both basic research and miRNA-based therapy to discriminate real pre-miRNAs from false ones (such as hairpin sequences with similar stem-loops). Particularly, with the avalanche of RNA sequences generated in the postgenomic age, it is highly desirable to develop computational sequence-based methods in this regard. Here two new predictors, called “iMcRNA-PseSSC” and “iMcRNA-ExPseSSC”, were proposed for identifying human pre-microRNAs by incorporating the global or long-range structure-order information using a way quite similar to the pseudo amino acid composition approach. Rigorous cross-validations on a much larger and more stringent newly constructed benchmark dataset showed that the two new predictors (accessible at http://bioinformatics.hitsz.edu.cn/iMcRNA/) outperformed or were highly comparable with the best existing predictors in this area. PMID:25821974

  12. Evaluating fossil calibrations for dating phylogenies in light of rates of molecular evolution: a comparison of three approaches.

    PubMed

    Lukoschek, Vimoksalehi; Scott Keogh, J; Avise, John C

    2012-01-01

    Evolutionary and biogeographic studies increasingly rely on calibrated molecular clocks to date key events. Although there has been significant recent progress in development of the techniques used for molecular dating, many issues remain. In particular, controversies abound over the appropriate use and placement of fossils for calibrating molecular clocks. Several methods have been proposed for evaluating candidate fossils; however, few studies have compared the results obtained by different approaches. Moreover, no previous study has incorporated the effects of nucleotide saturation from different data types in the evaluation of candidate fossils. In order to address these issues, we compared three approaches for evaluating fossil calibrations: the single-fossil cross-validation method of Near, Meylan, and Shaffer (2005. Assessing concordance of fossil calibration points in molecular clock studies: an example using turtles. Am. Nat. 165:137-146), the empirical fossil coverage method of Marshall (2008. A simple method for bracketing absolute divergence times on molecular phylogenies using multiple fossil calibration points. Am. Nat. 171:726-742), and the Bayesian multicalibration method of Sanders and Lee (2007. Evaluating molecular clock calibrations using Bayesian analyses with soft and hard bounds. Biol. Lett. 3:275-279) and explicitly incorporate the effects of data type (nuclear vs. mitochondrial DNA) for identifying the most reliable or congruent fossil calibrations. We used advanced (Caenophidian) snakes as a case study; however, our results are applicable to any taxonomic group with multiple candidate fossils, provided appropriate taxon sampling and sufficient molecular sequence data are available. We found that data type strongly influenced which fossil calibrations were identified as outliers, regardless of which method was used. Despite the use of complex partitioned models of sequence evolution and multiple calibrations throughout the tree, saturation

  13. A glucose-sensing contact lens: a new approach to noninvasive continuous physiological glucose monitoring

    NASA Astrophysics Data System (ADS)

    Badugu, Ramachandram; Lakowicz, Joseph R.; Geddes, Chris D.

    2004-06-01

    We have developed a new technology for the non-invasive continuous monitoring of tear glucose using a daily-use, disposable contact lens embedded with sugar-sensing boronic acid-containing fluorophores. Our findings show that our approach may be suitable for the continuous monitoring of tear glucose levels in the range 50-500 μM, which track blood glucose levels that are typically ~5-10 fold higher. We initially tested the sensing concept with well-established, previously published boronic acid probes; the results showed that these probes, which have higher pKa values, are almost insensitive to glucose within the contact lens, which we attribute to the low pH and polarity inside the lens. Subsequently, we developed a range of probes based on the quinolinium backbone with considerably lower pKa values, making them suitable for sensing physiological glucose in the acidic pH of the contact lens. Herein we describe our findings towards the development of a glucose-sensing contact lens, and therefore an approach to non-invasive continuous monitoring of tear glucose.

  14. A Hybrid Approach Using Case-Based Reasoning and Rule-Based Reasoning to Support Cancer Diagnosis: A Pilot Study.

    PubMed

    Saraiva, Renata M; Bezerra, João; Perkusich, Mirko; Almeida, Hyggo; Siebra, Clauirton

    2015-01-01

    Recently there has been an increasing interest in applying information technology to support the diagnosis of diseases such as cancer. In this paper, we present a hybrid approach using case-based reasoning (CBR) and rule-based reasoning (RBR) to support cancer diagnosis. We used symptoms, signs, and personal information from patients as inputs to our model. To form specialized diagnoses, we used rules to define the input factors' importance according to the patient's characteristics. The model's output presents the probability of the patient having a type of cancer. To carry out this research, we had the approval of the ethics committee at Napoleão Laureano Hospital, in João Pessoa, Brazil. To define our model's cases, we collected real patient data at Napoleão Laureano Hospital. To define our model's rules and weights, we researched the specialized literature and interviewed health professionals. To validate our model, we used k-fold cross-validation with the data collected at Napoleão Laureano Hospital. The results showed that our approach is an effective CBR system for diagnosing cancer. PMID:26262174

  15. A Multilayer Network Approach for Guiding Drug Repositioning in Neglected Diseases

    PubMed Central

    Chernomoretz, Ariel; Agüero, Fernán

    2016-01-01

    Drug development for neglected diseases has been historically hampered due to lack of market incentives. The advent of public domain resources containing chemical information from high throughput screenings is changing the landscape of drug discovery for these diseases. In this work we took advantage of data from extensively studied organisms like human, mouse, E. coli and yeast, among others, to develop a novel integrative network model to prioritize and identify candidate drug targets in neglected pathogen proteomes, and bioactive drug-like molecules. We modeled genomic (proteins) and chemical (bioactive compounds) data as a multilayer weighted network graph that takes advantage of bioactivity data across 221 species, chemical similarities between 1.7 × 10⁵ compounds and several functional relations among 1.67 × 10⁵ proteins. These relations comprised orthology, sharing of protein domains, and shared participation in defined biochemical pathways. We showcase the application of this network graph to the problem of prioritization of new candidate targets, based on the information available in the graph for known compound-target associations. We validated this strategy by performing a cross validation procedure for known mouse and Trypanosoma cruzi targets and showed that our approach outperforms classic alignment-based approaches. Moreover, our model provides additional flexibility as two different network definitions could be considered, finding in both cases qualitatively different but sensible candidate targets. We also showcase the application of the network to suggest targets for orphan compounds that are active against Plasmodium falciparum in high-throughput screens. In this case our approach provided a reduced prioritization list of target proteins for the query molecules and showed the ability to propose new testable hypotheses for each compound. Moreover, we found that some predictions highlighted by our network model were supported by independent

  16. A Multilayer Network Approach for Guiding Drug Repositioning in Neglected Diseases.

    PubMed

    Berenstein, Ariel José; Magariños, María Paula; Chernomoretz, Ariel; Agüero, Fernán

    2016-01-01

    Drug development for neglected diseases has been historically hampered due to lack of market incentives. The advent of public domain resources containing chemical information from high throughput screenings is changing the landscape of drug discovery for these diseases. In this work we took advantage of data from extensively studied organisms like human, mouse, E. coli and yeast, among others, to develop a novel integrative network model to prioritize and identify candidate drug targets in neglected pathogen proteomes, and bioactive drug-like molecules. We modeled genomic (proteins) and chemical (bioactive compounds) data as a multilayer weighted network graph that takes advantage of bioactivity data across 221 species, chemical similarities between 1.7 × 10⁵ compounds and several functional relations among 1.67 × 10⁵ proteins. These relations comprised orthology, sharing of protein domains, and shared participation in defined biochemical pathways. We showcase the application of this network graph to the problem of prioritization of new candidate targets, based on the information available in the graph for known compound-target associations. We validated this strategy by performing a cross validation procedure for known mouse and Trypanosoma cruzi targets and showed that our approach outperforms classic alignment-based approaches. Moreover, our model provides additional flexibility as two different network definitions could be considered, finding in both cases qualitatively different but sensible candidate targets. We also showcase the application of the network to suggest targets for orphan compounds that are active against Plasmodium falciparum in high-throughput screens. In this case our approach provided a reduced prioritization list of target proteins for the query molecules and showed the ability to propose new testable hypotheses for each compound. Moreover, we found that some predictions highlighted by our network model were supported by independent

  17. Novel Approaches for the Accumulation of Oxygenated Intermediates to Multi-Millimolar Concentrations

    PubMed Central

    Krebs, Carsten; Dassama, Laura M. K.; Matthews, Megan L.; Jiang, Wei; Price, John C.; Korboukh, Victoria; Li, Ning; Bollinger, J. Martin

    2012-01-01

    Metalloenzymes that utilize molecular oxygen as a co-substrate catalyze a wide variety of chemically difficult oxidation reactions. Significant insight into the reaction mechanisms of these enzymes can be obtained by the application of a combination of rapid kinetic and spectroscopic methods to the direct structural characterization of intermediate states. A key limitation of this approach is the low aqueous solubility (< 2 mM) of the co-substrate, O2, which undergoes further dilution (typically by one-third or one-half) upon initiation of reactions by rapid mixing. This situation imposes a practical upper limit on [O2] (and therefore on the concentration of reactive intermediate(s) that can be rapidly accumulated) of ∼1-1.3 mM in such experiments as they are routinely carried out. However, many spectroscopic methods benefit from or require significantly greater concentrations of the species to be studied. To overcome this problem, we have recently developed two new approaches for the preparation of samples of oxygenated intermediates: (1) direct oxygenation of reduced metalloenzymes using gaseous O2 and (2) the in situ generation of O2 from chlorite catalyzed by the enzyme chlorite dismutase (Cld). Whereas the former method is applicable only to intermediates with half-lives of several minutes, owing to the sluggishness of transport of O2 across the gas-liquid interface, the latter approach has been successfully applied to trap several intermediates at high concentration and purity by the freeze-quench method. The in situ approach permits generation of a pulse of at least 5 mM O2 within ∼1 ms and accumulation of O2 to effective concentrations of up to ∼11 mM (i.e. ∼10-fold greater than by the conventional approach). The use of these new techniques for studies of oxygenases and oxidases is discussed. PMID:24368870

  18. Estimation of extreme daily precipitation: comparison between regional and geostatistical approaches.

    NASA Astrophysics Data System (ADS)

    Hellies, Matteo; Deidda, Roberto; Langousis, Andreas

    2016-04-01

    In addition, KUD avoids separation of the study region into contiguous areas, allowing for a continuous representation of the spatial variation of distribution parameters. Comparisons based on different error metrics, conducted with the method of cross-validation, show better performance of the geostatistical approach relative to the regional one. In addition, the geostatistical approach better represents local features of the spatial variability of rainfall, while overcoming the issue of abrupt shifts of distribution parameters at the boundaries between contiguous homogeneous regions.

  19. The Treatment of Differentiated Thyroid Cancer in Children: Emphasis on Surgical Approach and Radioactive Iodine Therapy

    PubMed Central

    Mazzaferri, Ernest L.; Verburg, Frederik A.; Reiners, Christoph; Luster, Markus; Breuer, Christopher K.; Dinauer, Catherine A.; Udelsman, Robert

    2011-01-01

    Pediatric thyroid cancer is a rare disease with an excellent prognosis. Compared with adults, epithelial-derived differentiated thyroid cancer (DTC), which includes papillary and follicular thyroid cancer, presents at more advanced stages in children and is associated with higher rates of recurrence. Because of its uncommon occurrence, randomized trials have not been applied to test best-care options in children. Even in adults, who have a 10-fold or higher incidence of thyroid cancer than children, few prospective trials have been executed to compare treatment approaches. We recognize that treatment recommendations have changed over the past few decades and will continue to do so. Given the aggressiveness of pediatric thyroid cancer, its high recurrence rates, and the problems associated with decades of long-term follow-up, a premium should be placed on treatments that minimize the risk of recurrence and the adverse effects of treatment, and that facilitate follow-up. We recommend total thyroidectomy and central compartment lymph node dissection as the surgical procedure of choice for children with DTC if it can be performed by a high-volume thyroid surgeon. We recommend radioactive iodine therapy for remnant ablation or residual disease for most children with DTC. We recommend long-term follow-up because disease can recur decades after initial diagnosis and therapy. Considering the complexity of DTC management and the potential complications associated with therapy, it is essential that pediatric DTC be managed by physicians with expertise in this area. PMID:21880704

  20. Spatial Analysis of Geothermal Resource Potential in New York and Pennsylvania: A Stratified Kriging Approach

    NASA Astrophysics Data System (ADS)

    Smith, J. D.; Whealton, C. A.; Stedinger, J. R.

    2014-12-01

    Resource assessments for low-grade geothermal applications employ available well temperature measurements to determine if the resource potential is sufficient for supporting district heating opportunities. This study used a compilation of bottomhole temperature (BHT) data from recent unconventional shale oil and gas wells, along with legacy oil, gas, and storage wells, in Pennsylvania (PA) and New York (NY). Our study's goal was to predict the geothermal resource potential and associated uncertainty for the NY-PA region using kriging interpolation. The dataset was scanned for outliers, and some observations were removed. Because these wells were drilled for reasons other than geothermal resource assessment, their spatial density varied widely. An exploratory spatial statistical analysis revealed differences in the spatial structure of the geothermal gradient data (the kriging semi-variogram and its nugget variance, shape, sill, and degree of anisotropy). As a result, a stratified kriging procedure was adopted to better capture the statistical structure of the data, to generate an interpolated surface, and to quantify the uncertainty of the computed surface. The area was stratified to reflect the different physiographic provinces in NY and PA, which have geologic properties likely related to variations in the value of the geothermal gradient. The kriging prediction and the variance-of-prediction were determined for each province by generating a semi-variogram using only the wells located within that province. Leave-one-out cross-validation (LOOCV) was conducted as a diagnostic tool. The results of stratified kriging were compared to kriging over the whole region to determine the impact of stratification. The two approaches provided similar predictions of the geothermal gradient. However, the variance-of-prediction was different. The stratified approach is recommended because it gave a more appropriate site-specific characterization of uncertainty.
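
    Stratified kriging can be sketched as fitting a separate spatial interpolator per province. In the sketch below, scikit-learn's Gaussian process (a close relative of ordinary kriging) stands in for the study's semi-variogram workflow, and the well data are invented:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(16)

# Invented wells: (x, y) location, province label, geothermal gradient (degC/km).
n = 200
xy = rng.uniform(0, 100, size=(n, 2))
province = (xy[:, 0] > 50).astype(int)          # two crude "physiographic provinces"
grad = 25 + 5 * province + np.sin(xy[:, 1] / 10) + rng.normal(0, 0.5, n)

# Stratified approach: one spatial model per province, each with its own
# correlation structure (kernel) and its own nugget-like noise level.
models = {}
for p in (0, 1):
    mask = province == p
    kern = RBF(length_scale=20.0) + WhiteKernel(noise_level=0.25)
    models[p] = GaussianProcessRegressor(kernel=kern).fit(xy[mask], grad[mask])

# Predict at a new site, with the variance-of-prediction from its province's model.
site = np.array([[75.0, 40.0]])
mean, std = models[1].predict(site, return_std=True)
print(f"predicted gradient: {mean[0]:.2f} degC/km (+/- {std[0]:.2f})")
```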

  1. Prediction of selective estrogen receptor beta agonist using open data and machine learning approach

    PubMed Central

    Niu, Ai-qin; Xie, Liang-jun; Wang, Hui; Zhu, Bing; Wang, Sheng-qi

    2016-01-01

    Background: Estrogen receptors (ERs) are nuclear transcription factors that are involved in the regulation of many complex physiological processes in humans. ERs have been validated as important drug targets for the treatment of various diseases, including breast cancer, ovarian cancer, osteoporosis, and cardiovascular disease. ERs have two subtypes, ER-α and ER-β. Emerging data suggest that the development of subtype-selective ligands that specifically target ER-β could be a better approach to elicit beneficial estrogen-like activities and reduce side effects. Methods: Herein, we focused on ER-β and developed its in silico quantitative structure-activity relationship models using machine learning (ML) methods. Results: The chemical structures and ER-β bioactivity data were extracted from public chemogenomics databases. Four types of popular fingerprint generation methods, including the MACCS fingerprint, PubChem fingerprint, 2D atom pairs, and the Chemistry Development Kit extended fingerprint, were used as descriptors. Four ML methods, including the Naïve Bayesian classifier, k-nearest neighbor, random forest, and support vector machine, were used to train the models. The range of classification accuracies was 77.10% to 88.34%, and the range of area under the ROC (receiver operating characteristic) curve values was 0.8151 to 0.9475, evaluated by 5-fold cross-validation. Comparison analysis suggests that both the random forest and the support vector machine are superior for the classification of selective ER-β agonists. The Chemistry Development Kit extended fingerprint and the MACCS fingerprint better represented the structural differences between active and inactive agonists. Conclusion: These results demonstrate that combining the fingerprint and ML approaches leads to robust ER-β agonist prediction models, which are potentially applicable to the identification of selective ER-β agonists. PMID:27486309
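
    A minimal sketch of the model-comparison step follows, assuming scikit-learn. A random 0/1 matrix stands in for real MACCS or PubChem fingerprints (which RDKit's MACCSkeys.GenMACCSKeys could supply from actual structures), so the AUC values are meaningless; only the 5-fold evaluation pattern is illustrated.

```python
# 5-fold CV comparison of random forest vs. SVM on binary fingerprints.
# The random 0/1 matrix below is a stand-in for real MACCS keys;
# labels are random placeholders as well.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(7)
X = rng.integers(0, 2, size=(200, 166))  # 166-bit MACCS-like fingerprints
y = rng.integers(0, 2, size=200)         # 1 = selective ER-beta agonist

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
for name, clf in [("random forest", RandomForestClassifier(n_estimators=300)),
                  ("SVM (RBF)", SVC(kernel="rbf", gamma="scale"))]:
    auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```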

  2. A novel meta-analytic approach: mining frequent co-activation patterns in neuroimaging databases.

    PubMed

    Caspers, Julian; Zilles, Karl; Beierle, Christoph; Rottschy, Claudia; Eickhoff, Simon B

    2014-04-15

    In recent years, coordinate-based meta-analyses have become a powerful and widely used tool to study co-activity across neuroimaging experiments, a development that was supported by the emergence of large-scale neuroimaging databases like BrainMap. However, the evaluation of co-activation patterns is constrained by the fact that previous coordinate-based meta-analysis techniques like Activation Likelihood Estimation (ALE) and Multilevel Kernel Density Analysis (MKDA) reveal all brain regions that show convergent activity within a dataset without taking into account actual within-experiment co-occurrence patterns. To overcome this issue we here propose a novel meta-analytic approach named PaMiNI that utilizes a combination of two well-established data-mining techniques, Gaussian mixture modeling and the Apriori algorithm. By this, PaMiNI enables a data-driven detection of frequent co-activation patterns within neuroimaging datasets. The feasibility of the method is demonstrated by means of several analyses on simulated data as well as a real application. The analyses of the simulated data show that PaMiNI identifies the brain regions underlying the simulated activation foci and perfectly separates the co-activation patterns of the experiments in the simulations. Furthermore, PaMiNI still yields good results when activation foci of distinct brain regions become closer together or if they are non-Gaussian distributed. For the further evaluation, a real dataset on working memory experiments is used, which was previously examined in an ALE meta-analysis and hence allows a cross-validation of both methods. In this latter analysis, PaMiNI revealed a fronto-parietal "core" network of working memory and furthermore indicates a left-lateralization in this network. Finally, to encourage a widespread usage of this new method, the PaMiNI approach was implemented into a publicly available software system. PMID:24365675
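
    The core PaMiNI idea (mixture modeling of activation foci followed by frequent-itemset mining over within-experiment memberships) can be imitated on toy data. The sketch below uses scikit-learn's GaussianMixture and the Apriori implementation from mlxtend; both library choices are assumptions for illustration, not the published implementation.

```python
# Toy PaMiNI-style pipeline: Gaussian mixture over simulated activation
# foci, then Apriori over each experiment's component memberships.
# Assumes scikit-learn and mlxtend are available.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture
from mlxtend.frequent_patterns import apriori

rng = np.random.default_rng(3)
centers = np.array([[-40.0, 20, 30], [42.0, 22, 28], [0.0, -60, 45]])
foci, expt = [], []
for e in range(30):                      # 30 simulated experiments
    for r in (0, 1) if e % 2 == 0 else (2,):
        foci.append(centers[r] + rng.normal(0, 3, 3))
        expt.append(e)

gmm = GaussianMixture(n_components=3, random_state=0).fit(np.array(foci))
labels = gmm.predict(np.array(foci))

# One row per experiment, one boolean column per mixture component.
basket = pd.crosstab(pd.Series(expt), pd.Series(labels)).astype(bool)
print(apriori(basket, min_support=0.4, use_colnames=True))
# Frequent itemsets such as {0, 1} are the co-activation patterns.
```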

  3. Modeling particulate matter concentrations measured through mobile monitoring in a deletion/substitution/addition approach

    NASA Astrophysics Data System (ADS)

    Su, Jason G.; Hopke, Philip K.; Tian, Yilin; Baldwin, Nichole; Thurston, Sally W.; Evans, Kristin; Rich, David Q.

    2015-12-01

    Land use regression modeling (LUR) through local-scale circular modeling domains has been used to predict traffic-related air pollution such as nitrogen oxides (NOX). LUR modeling for fine particulate matter (PM), which generally has smaller spatial gradients than NOX, has typically been applied in studies involving multiple study regions. To increase the spatial coverage for fine PM and key constituent concentrations, we designed a mobile monitoring network in Monroe County, New York to measure pollutant concentrations of black carbon (BC, wavelength at 880 nm), ultraviolet black carbon (UVBC, wavelength at 370 nm) and Delta-C (the difference between the UVBC and BC concentrations) using the Clarkson University Mobile Air Pollution Monitoring Laboratory (MAPL). A Deletion/Substitution/Addition (D/S/A) algorithm was applied, which used circular buffers as a basis for statistics. The algorithm maximizes the prediction accuracy for locations without measurements using the V-fold cross-validation technique, and it reduces overfitting compared to other approaches. We found that the D/S/A LUR modeling approach could achieve good results, with prediction powers of 60%, 63%, and 61%, respectively, for BC, UVBC, and Delta-C. The advantage of mobile monitoring is that it can monitor pollutant concentrations at hundreds of spatial points in a region, rather than the fewer than 100 points typical of a fixed-site saturation monitoring network. This research indicates that a mobile saturation sampling network, when combined with proper modeling techniques, can uncover small-area variations (e.g., 10 m) in particulate matter concentrations.
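
    The published D/S/A algorithm searches model space by deleting, substituting, and adding terms; the sketch below captures only its scoring backbone, V-fold cross-validation over candidate sets of buffer covariates, on synthetic data with hypothetical variable names.

```python
# Scoring backbone of a D/S/A-style model search: V-fold CV R^2 over
# candidate subsets of (hypothetical) circular-buffer covariates; the
# real algorithm proposes deletions/substitutions/additions instead of
# the exhaustive enumeration used here.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(5)
X_all = rng.normal(size=(300, 4))        # e.g. road length in 4 buffer radii
bc = 1.2 * X_all[:, 0] + 0.6 * X_all[:, 2] + rng.normal(0, 0.5, 300)

cv = KFold(n_splits=10, shuffle=True, random_state=0)
best = max((cols for k in (1, 2, 3) for cols in combinations(range(4), k)),
           key=lambda cols: cross_val_score(LinearRegression(),
                                            X_all[:, cols], bc,
                                            cv=cv, scoring="r2").mean())
print("selected covariates:", best)
```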

  4. A novel approach for honey pollen profile assessment using an electronic tongue and chemometric tools.

    PubMed

    Dias, Luís G; Veloso, Ana C A; Sousa, Mara E B C; Estevinho, Letícia; Machado, Adélio A S C; Peres, António M

    2015-11-01

    Nowadays the main honey-producing countries require accurate labeling of honey before commercialization, including floral classification. Traditionally, this classification is made by melissopalynology analysis, an accurate but time-consuming task requiring laborious sample pre-treatment and highly skilled technicians. In this work the potential use of a potentiometric electronic tongue for pollinic assessment is evaluated, using monofloral and polyfloral honeys. The results showed that after splitting honeys according to color (white, amber and dark), the novel methodology enabled quantifying the relative percentage of the main pollens (Castanea sp., Echium sp., Erica sp., Eucalyptus sp., Lavandula sp., Prunus sp., Rubus sp. and Trifolium sp.). Multiple linear regression models were established for each type of pollen, based on the best sensor subsets selected using the simulated annealing algorithm. To minimize the overfitting risk, a repeated K-fold cross-validation procedure was implemented, ensuring that at least 10-20% of the honeys were used for internal validation. With this approach, a minimum average determination coefficient of 0.91 ± 0.15 was obtained. Also, the proposed technique enabled the correct classification of 92% and 100% of monofloral and polyfloral honeys, respectively. The quite satisfactory performance of the novel procedure for quantifying the relative pollen frequency suggests its applicability for honey labeling and geographical origin identification. Nevertheless, this approach is not a full alternative to the traditional melissopalynologic analysis; it may be seen as a practical complementary tool for preliminary honey floral classification, leaving only problematic cases for pollinic evaluation. PMID:26572837
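
    A rough sketch of the sensor-subset search follows: simulated annealing proposes subsets, and each candidate is scored by the repeated K-fold cross-validated R² of a multiple linear regression, as in the abstract. Sensor signals and pollen percentages are synthetic placeholders.

```python
# Simulated-annealing search over sensor subsets, each scored by the
# repeated K-fold cross-validated R^2 of a multiple linear regression.
# Signals and pollen percentages are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(11)
signals = rng.normal(size=(80, 20))                   # 80 honeys x 20 sensors
pollen = 30 + signals[:, [1, 4, 9]] @ [5.0, -3.0, 2.0] + rng.normal(0, 2, 80)

cv = RepeatedKFold(n_splits=5, n_repeats=4, random_state=0)
def score(subset):
    cols = sorted(subset)
    return cross_val_score(LinearRegression(), signals[:, cols], pollen,
                           cv=cv, scoring="r2").mean()

current = set(rng.choice(20, size=5, replace=False))
cur_s = score(current)
best, best_s = set(current), cur_s
for temp in np.geomspace(1.0, 0.01, 150):             # cooling schedule
    cand = set(current) ^ {int(rng.integers(20))}     # flip one sensor in/out
    if not cand:
        continue
    s = score(cand)
    if s > cur_s or rng.random() < np.exp((s - cur_s) / temp):
        current, cur_s = cand, s
        if s > best_s:
            best, best_s = set(cand), s
print("best sensor subset:", sorted(best), "CV R^2:", round(best_s, 3))
```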

  5. A multivariate approach for assessing leaf photo-assimilation performance using the IPL index.

    PubMed

    Losciale, Pasquale; Manfrini, Luigi; Morandi, Brunella; Pierpaoli, Emanuele; Zibordi, Marco; Stellacci, Anna Maria; Salvati, Luca; Corelli Grappadelli, Luca

    2015-08-01

    The detection of leaf functionality is of pivotal importance for plant scientists from both theoretical and practical points of view. Leaves are the sources of dry matter and food, and they sequester CO2 as well. From the perspective of climate change and primary resource scarcity (i.e. water, fertilizers and soil), assessing leaf photo-assimilation in a rapid but comprehensive way can be helpful for understanding plant behavior under different environmental conditions and for managing agricultural practices properly. Several approaches have been proposed for this goal; however, some of them are very efficient but not very reliable. On the other hand, the high reliability and exhaustive information of some models used for estimating net photosynthesis come at the expense of time and ease of measurement. The present study employs a multivariate statistical approach to assess a model for estimating leaf photo-assimilation performance using few, easy-to-measure variables. The model, parameterized for apple and pear and subjected to internal and external cross-validation, involves chlorophyll fluorescence, the carboxylative activity of ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCo), and air and leaf temperature. Results show that this is a fairly predictive model allowing reliable variable assessment. The dependent variable, called the IPL index, was found to be strongly and linearly correlated with net photosynthesis. IPL and the model behind it seem to be (1) reliable, (2) easy and fast to measure and (3) usable in vivo and in the field for cases where large amounts of data are required (e.g. precision agriculture and phenotyping studies).

  6. Computational chemistry approach for the early detection of drug-induced idiosyncratic liver toxicity.

    PubMed

    Cruz-Monteagudo, Maykel; Cordeiro, M Natália D S; Borges, Fernanda

    2008-03-01

    Idiosyncratic drug toxicity (IDT), considered as a toxic host-dependent event, with an apparent lack of dose response relationship, is usually not predictable from early phases of clinical trials, representing a particularly confounding complication in drug development. Albeit a rare event (usually <1/5000), IDT is often life threatening and is one of the major reasons new drugs never reach the market or are withdrawn post marketing. Computational methodologies, like the computer-based approach proposed in the present study, can play an important role in addressing IDT in early drug discovery. We report for the first time a systematic evaluation of classification models to predict idiosyncratic hepatotoxicity based on linear discriminant analysis (LDA), artificial neural networks (ANN), and machine learning algorithms (OneR) in conjunction with a 3D molecular structure representation and feature selection methods. These modeling techniques (LDA, feature selection to prevent over-fitting and multicollinearity, ANN to capture nonlinear relationships in the data, as well as the simple OneR classifier) were found to produce QSTR models with satisfactory internal cross-validation statistics and predictivity on an external subset of chemicals. More specifically, the models reached values of accuracy/sensitivity/specificity over 84%/78%/90%, respectively in the training series along with predictivity values ranging from ca. 78 to 86% of correctly classified drugs. An LDA-based desirability analysis was carried out in order to select the levels of the predictor variables needed to trigger the more desirable drug, i.e. the drug with lower potential for idiosyncratic hepatotoxicity. Finally, two external test sets were used to evaluate the ability of the models in discriminating toxic from nontoxic structurally and pharmacologically related drugs and the ability of the best model (LDA) in detecting potential idiosyncratic hepatotoxic drugs, respectively. The computational
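
    As a hedged illustration of this kind of QSTR protocol, the sketch below wires per-fold feature selection (to limit over-fitting and multicollinearity) into an LDA classifier and reports cross-validated accuracy, sensitivity, and specificity; descriptors and labels are random placeholders, not the paper's 3D descriptors.

```python
# Cross-validated LDA with per-fold feature selection (random placeholder
# descriptors). Keeping selection inside the pipeline means it is refit
# within each fold, which is what guards against over-fitting.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 50))       # hypothetical 3D-structure descriptors
y = rng.integers(0, 2, size=120)     # 1 = idiosyncratic hepatotoxin

model = make_pipeline(SelectKBest(f_classif, k=8), LinearDiscriminantAnalysis())
pred = cross_val_predict(model, X, y,
                         cv=StratifiedKFold(5, shuffle=True, random_state=0))
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"accuracy={(tp + tn) / len(y):.2f} "
      f"sensitivity={tp / (tp + fn):.2f} specificity={tn / (tn + fp):.2f}")
```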

  8. A high-resolution approach to estimating ecosystem respiration at continental scales using operational satellite data.

    PubMed

    Jägermeyr, Jonas; Gerten, Dieter; Lucht, Wolfgang; Hostert, Patrick; Migliavacca, Mirco; Nemani, Ramakrishna

    2014-04-01

    A better understanding of the local variability in land-atmosphere carbon fluxes is crucial to improving the accuracy of global carbon budgets. Operational satellite data backed by ground measurements at Fluxnet sites proved valuable in monitoring the local variability of gross primary production at high spatio-temporal resolution. Yet, we lack similar operational estimates of ecosystem respiration (Re) to calculate net carbon fluxes. If successful, carbon fluxes from such a remote sensing approach would form an independent and sought-after measure to complement widely used dynamic global vegetation models (DGVMs). Here, we establish an operational semi-empirical Re model, based only on data from the Moderate Resolution Imaging Spectroradiometer (MODIS) with a resolution of 1 km and 8 days. Fluxnet measurements between 2000 and 2009 from 100 sites across North America and Europe are used for parameterization and validation. Our analysis shows that Re is closely tied to temperature and plant productivity. By separating temporal and intersite variation, we find that MODIS land surface temperature (LST) and enhanced vegetation index (EVI) are sufficient to explain observed Re across most major biomes with a negligible bias [R² = 0.62, RMSE = 1.32 g C m⁻² d⁻¹, MBE = 0.05 g C m⁻² d⁻¹]. A comparison of such satellite-derived Re with those simulated by the DGVM LPJmL reveals similar spatial patterns. However, LPJmL shows higher temperature sensitivities and consistently simulates higher Re values in high-latitude and subtropical regions. These differences remain difficult to explain, and they are likely associated either with LPJmL parameterization or with systematic errors in the Fluxnet sampling technique. While uncertainties remain with Re estimates, the model formulated in this study provides an operational, cross-validated and unbiased approach to scale Fluxnet Re to the continental scale and advances knowledge of spatio-temporal Re variability.
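
    The flavor of such a semi-empirical Re model can be shown with a curve fit. The functional form Re = (a + b·EVI)·exp(c·LST) below is an assumption for illustration only, not the published formulation; all data are simulated.

```python
# Illustrative fit of a semi-empirical Re model driven by LST and EVI.
# The form Re = (a + b*EVI) * exp(c*LST) is assumed for illustration.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
lst = rng.uniform(0, 30, 500)        # land surface temperature (degC)
evi = rng.uniform(0.1, 0.8, 500)     # enhanced vegetation index
re_obs = (0.5 + 3.0 * evi) * np.exp(0.07 * lst) + rng.normal(0, 0.4, 500)

def re_model(X, a, b, c):
    lst, evi = X
    return (a + b * evi) * np.exp(c * lst)

(a, b, c), _ = curve_fit(re_model, (lst, evi), re_obs, p0=(1.0, 1.0, 0.05))
rmse = np.sqrt(np.mean((re_model((lst, evi), a, b, c) - re_obs) ** 2))
print(f"a={a:.2f}, b={b:.2f}, c={c:.3f}, RMSE={rmse:.2f} g C m-2 d-1")
```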

  9. A Novel Approach of Understanding and Incorporating Error of Chemical Transport Models into a Geostatistical Framework

    NASA Astrophysics Data System (ADS)

    Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.

    2015-12-01

    The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, there are several areas of the country with sparse monitoring, both spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 modeled estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Due to the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort has been devoted to quantifying the efficacy of these models through different metrics of model performance. Currently, evaluation is specific only to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains. Error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear, which leads to error quantification for each CMAQ grid cell so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross-validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data by themselves.
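
    A conceptual sketch of a regionalized, non-linear error correction follows: within each region, the CMAQ error is fitted as a smooth function of the modeled concentration itself and subtracted, before the corrected data would enter a BME framework. Regions, concentrations, and the error curve are synthetic stand-ins.

```python
# Regionalized, concentration-dependent error correction, sketched:
# per region, fit model error vs. modeled concentration (cubic here)
# and subtract it. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(9)
region = rng.integers(0, 3, size=2000)               # region label per pairing
cmaq = rng.uniform(2, 40, size=2000)                 # modeled PM2.5 (ug/m3)
bias = (region + 1) * 0.002 * (cmaq - 15) ** 2       # error grows with conc.
obs = cmaq - bias + rng.normal(0, 1, size=2000)      # collocated observations

corrected = np.empty_like(cmaq)
for r in range(3):
    m = region == r
    coef = np.polyfit(cmaq[m], cmaq[m] - obs[m], deg=3)   # error vs. conc.
    corrected[m] = cmaq[m] - np.polyval(coef, cmaq[m])

for name, est in [("raw CMAQ", cmaq), ("corrected", corrected)]:
    print(f"{name}: RMSE vs. obs = {np.sqrt(np.mean((est - obs) ** 2)):.2f}")
```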

  10. The cleverSuite approach for protein characterization: predictions of structural properties, solubility, chaperone requirements and RNA-binding abilities

    PubMed Central

    Klus, Petr; Bolognesi, Benedetta; Agostini, Federico; Marchese, Domenica; Zanzoni, Andreas; Tartaglia, Gian Gaetano

    2014-01-01

    Motivation: The recent shift towards high-throughput screening is posing new challenges for the interpretation of experimental results. Here we propose the cleverSuite approach for large-scale characterization of protein groups. Description: The central part of the cleverSuite is the cleverMachine (CM), an algorithm that performs statistics on protein sequences by comparing their physico-chemical propensities. The second element is called cleverClassifier and builds on top of the models generated by the CM to allow classification of new datasets. Results: We applied the cleverSuite to predict secondary structure properties, solubility, chaperone requirements and RNA-binding abilities. Using cross-validation and independent datasets, the cleverSuite reproduces experimental findings with great accuracy and provides models that can be used for future investigations. Availability: The intuitive interface for dataset exploration, analysis and prediction is available at http://s.tartaglialab.com/clever_suite. Contact: gian.tartaglia@crg.es Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24493033

  11. A topological restricted maximum likelihood (TopREML) approach to regionalize trended runoff signatures in stream networks

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2015-01-01

    We introduce TopREML as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. TopREML's ability to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable thanks to remote sensing technology.

  12. TopREML: a topological restricted maximum likelihood approach to regionalize trended runoff signatures in stream networks

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2015-06-01

    We introduce topological restricted maximum likelihood (TopREML) as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross-validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. The ability of TopREML to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable via remote-sensing technology.

  13. Automatic approach to solve the morphological galaxy classification problem using the sparse representation technique and dictionary learning

    NASA Astrophysics Data System (ADS)

    Diaz-Hernandez, R.; Ortiz-Esquivel, A.; Peregrina-Barreto, H.; Altamirano-Robles, L.; Gonzalez-Bernal, J.

    2016-06-01

    The observation of celestial objects in the sky is a practice that helps astronomers understand the way in which the Universe is structured. However, due to the large number of objects observed with modern telescopes, analyzing them by hand is a difficult task. An important part of galaxy research is morphological structure classification based on the Hubble sequence. In this research, we present an approach to solve the morphological galaxy classification problem in an automatic way by using the sparse representation technique and dictionary learning with K-SVD. For the tests in this work, we use a database of galaxies extracted from the Principal Galaxy Catalog (PGC) and the APM Equatorial Catalogue of Galaxies, obtaining a total of 2403 useful galaxies. In order to represent each galaxy frame, we propose to calculate a set of 20 features such as Hu's invariant moments, galaxy nucleus eccentricity, Gabor galaxy ratio and some other features commonly used in galaxy classification. A stage of feature relevance analysis was performed using Relief-f in order to determine the best parameters for the classification tests using 2, 3, 4, 5, 6 and 7 galaxy classes, building signal vectors of different lengths from the most important features. For the classification task, we use a 20-random cross-validation technique to evaluate classification accuracy with all signal sets, achieving a score of 82.27% for 2 galaxy classes and down to 44.27% for 7 galaxy classes.

  14. Suicidal Ideation, Parent-Child Relationships, and Adverse Childhood Experiences: A Cross-Validation Study Using a Graphical Markov Model

    ERIC Educational Resources Information Center

    Hardt, Jochen; Herke, Max; Schier, Katarzyna

    2011-01-01

    Suicide is one of the leading causes of death in many Western countries. An exploration of factors associated with suicidality may help to understand the mechanisms that lead to suicide. Two samples in Germany (n = 500 and n = 477) were examined via Internet regarding suicidality, depression, alcohol abuse, adverse childhood experiences, and…

  15. A fast cross-validation method for alignment of electron tomography images based on Beer-Lambert law.

    PubMed

    Yan, Rui; Edwards, Thomas J; Pankratz, Logan M; Kuhn, Richard J; Lanman, Jason K; Liu, Jun; Jiang, Wen

    2015-11-01

    In electron tomography, accurate alignment of tilt series is an essential step in attaining high-resolution 3D reconstructions. Nevertheless, quantitative assessment of alignment quality has remained a challenging issue, even though many alignment methods have been reported. Here, we report a fast and accurate method, tomoAlignEval, based on the Beer-Lambert law, for the evaluation of alignment quality. Our method is able to globally estimate the alignment accuracy by measuring the goodness of log-linear relationship of the beam intensity attenuations at different tilt angles. Extensive tests with experimental data demonstrated its robust performance with stained and cryo samples. Our method is not only significantly faster but also more sensitive than measurements of tomogram resolution using Fourier shell correlation method (FSCe/o). From these tests, we also conclude that while current alignment methods are sufficiently accurate for stained samples, inaccurate alignments remain a major limitation for high resolution cryo-electron tomography. PMID:26455556
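
    The diagnostic rests on a simple consequence of the Beer-Lambert law: for a slab-like specimen the beam path grows as 1/cos(tilt), so the log of the transmitted intensity should be linear in 1/cos(θ), and the goodness of that fit scores the tilt series. A toy version with simulated intensities (not the tomoAlignEval implementation) follows.

```python
# Beer-Lambert alignment score, toy version: mean intensity of a slab
# falls off as exp(-mu*d/cos(tilt)), so ln(I) should be linear in
# 1/cos(theta); the R^2 of that line is the quality metric.
import numpy as np

rng = np.random.default_rng(1)
tilts = np.deg2rad(np.arange(-60, 61, 2))
intensity = 2000.0 * np.exp(-1.1 / np.cos(tilts))        # mu*d = 1.1
intensity *= np.exp(rng.normal(0, 0.01, tilts.size))     # measurement noise

x = 1.0 / np.cos(tilts)
log_i = np.log(intensity)
slope, intercept = np.polyfit(x, log_i, 1)
r2 = 1 - np.var(log_i - (slope * x + intercept)) / np.var(log_i)
print(f"ln I = {intercept:.2f} {slope:+.2f}/cos(theta),  R^2 = {r2:.4f}")
```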

  16. Cross validation of geotechnical and geophysical site characterization methods: near surface data from selected accelerometric stations in Crete (Greece)

    NASA Astrophysics Data System (ADS)

    Loupasakis, C.; Tsangaratos, P.; Rozos, D.; Rondoyianni, Th.; Vafidis, A.; Kritikakis, G.; Steiakakis, M.; Agioutantis, Z.; Savvaidis, A.; Soupios, P.; Papadopoulos, I.; Papadopoulos, N.; Sarris, A.; Mangriotis, M.-D.; Dikmen, U.

    2015-06-01

    The specification of near-surface ground conditions is highly important for the design of civil constructions. These conditions determine primarily the ability of the foundation formations to bear loads, the stress-strain relations and the corresponding settlements, as well as the soil amplification and corresponding peak ground motion in case of dynamic loading. The static and dynamic geotechnical parameters as well as the ground-type/soil-category can be determined by combining geotechnical and geophysical methods, such as engineering geological surface mapping, geotechnical drilling, in situ and laboratory testing and geophysical investigations. The above-mentioned methods were combined, through the Thalis "Geo-Characterization" project, for site characterization at selected sites of the Hellenic Accelerometric Network (HAN) in the area of Crete Island. The combination of the geotechnical and geophysical methods at thirteen (13) sites provided sufficient information about their limitations, setting up the minimum test requirements in relation to the type of the geological formations. The reduced accuracy of surface mapping in urban sites, the uncertainties introduced by geophysical surveys at sites with complex geology, and the 1D data provided by geotechnical boreholes are some of the factors affecting the appropriate order and number of the necessary investigation methods. This study presents the gradual improvement in the accuracy of site characterization data by providing characteristic examples from the thirteen sites. The selected examples sufficiently demonstrate the capabilities, the limitations and the appropriate order of the investigation methods.

  17. Two-Receiver Measurements of Phase Velocity: Cross-Validation of Ambient-Noise and Earthquake-Based Observations

    NASA Astrophysics Data System (ADS)

    Kästle, Emanuel D.; Soomro, Riaz; Weemstra, Cornelis; Boschi, Lapo; Meier, Thomas

    2016-09-01

    Phase velocities derived from ambient-noise cross-correlation are compared with phase velocities calculated from cross-correlations of waveform recordings of teleseismic earthquakes whose epicenters are approximately on the station-station great circle. The comparison is conducted both for Rayleigh and Love waves using over 1000 station pairs in central Europe. We describe in detail our signal-processing method, which allows for automated processing of large amounts of data. Ambient-noise data are collected in the 5 to 80 s period range, whereas teleseismic data are available between about 8 and 250 s, resulting in a broad common period range between 8 and 80 s. At intermediate periods around 30 s and for shorter inter-station distances, phase velocities measured from ambient noise are on average between 0.5% and 1.5% lower than those observed via the earthquake-based method. This discrepancy is small compared to typical phase-velocity heterogeneities (~10% peak-to-peak or more; see, e.g., Ekström (2014)) observed in this period range. We nevertheless conduct a suite of synthetic tests to evaluate whether known biases in ambient-noise cross-correlation measurements could account for this discrepancy; we specifically evaluate the effects of heterogeneities in source distribution, of azimuthal anisotropy in surface-wave velocity, and of the presence of near-field, rather than far-field only, sources of seismic noise. We find that these effects can be quite important when comparing individual station pairs. The systematic discrepancy is presumably due to a combination of factors related to differences in the sensitivity of earthquake vs. noise data to lateral heterogeneity. The datasets from both methods are used to create some preliminary tomographic maps that are characterized by velocity heterogeneities of similar amplitude and pattern, confirming the overall agreement between the two measurement methods.

  18. The Relationships Between Low-Inference Measures of Classroom Behavior and Pupil Growth: A Cross-Validation.

    ERIC Educational Resources Information Center

    Lorentz, Jeffrey L.; Coker, Homer

    As part of the Competency Based Teacher Certification Project in Carroll County, Georgia, large samples of elementary and secondary school teachers and students were observed during a two-year period. Four low-inference observation measures were used to record teacher behaviors and student-teacher interactions: (1) Teacher Practices Observation…

  19. Cross-Validation of the YMCA Submaximal Cycle Ergometer Test to Predict VO₂max

    ERIC Educational Resources Information Center

    Beekley, Matthew D.; Brechue, William F.; deHoyos, Diego V.; Garzarella, Linda; Werber-Zion, Galila; Pollock, Michael L.

    2004-01-01

    Maximal oxygen uptake (VO₂max) is an important indicator of health-risk status, specifically for coronary heart disease (Blair et al., 1989). Direct measurement of VO₂max is considered to be the most accurate means of determining cardiovascular fitness level. Typically, this measurement is taken using a progressive exercise test on a…

  1. Homology modeling and virtual screening of inhibitors against TEM- and SHV-type-resistant mutants: A multilayer filtering approach.

    PubMed

    Baig, Mohammad H; Balaramnavar, Vishal M; Wadhwa, Gulshan; Khan, Asad U

    2015-01-01

    TEM and SHV are class A β-lactamases commonly found in Escherichia coli and Klebsiella pneumoniae. Previous studies reported the S130G and K234R mutants of SHV to be 41- and 10-fold more resistant toward clavulanic acid than SHV-1, respectively, whereas the TEM mutants S130G and R244S showed the same level of resistance. These selected mutants confer a higher level of resistance against clavulanic acid. They also show little susceptibility to other commercially available β-lactamase inhibitors. In this study, we have used a docking-based virtual screening approach to screen potential inhibitors against some of the major resistant mutants of SHV- and TEM-type β-lactamases. Two different inhibitor-resistant mutants from SHV and TEM were selected. Moreover, we have retained the active-site water molecules within each enzyme. Active-site water molecules were placed within the modeled structures of mutants whose structures were unavailable in the Protein Data Bank. The novelty of this work lies in the use of a multilayer virtual screening approach to obtain the most accurate predictions. We report five inhibitors on the basis of their efficacy against all the selected resistant mutants. These inhibitors were selected on the basis of their binding efficacies and pharmacophore features.

  2. iLOGP: a simple, robust, and efficient description of n-octanol/water partition coefficient for drug design using the GB/SA approach.

    PubMed

    Daina, Antoine; Michielin, Olivier; Zoete, Vincent

    2014-12-22

    The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based solely on two GB/SA parameters, was tested on two different external sets of molecules. On the Martel druglike test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach, together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and tridimensional molecular graphics capability, lays the foundations for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
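
    The modeling step reduces to a small multiple linear regression with cross-validated error metrics. The sketch below uses two synthetic descriptors as stand-ins for the GB/SA terms and reproduces only the evaluation pattern (five-fold CV, r, MAE, RMSE), not the published coefficients.

```python
# Two-descriptor linear model with five-fold CV, reporting r/MAE/RMSE
# as in the abstract; the two columns are stand-ins for GB/SA terms.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(6)
gbsa = rng.normal(size=(1000, 2))    # e.g. implicit-solvent dG terms
logp = 0.9 * gbsa[:, 0] - 0.4 * gbsa[:, 1] + rng.normal(0, 0.5, 1000)

pred = cross_val_predict(LinearRegression(), gbsa, logp,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0))
r = np.corrcoef(pred, logp)[0, 1]
mae = np.mean(np.abs(pred - logp))
rmse = np.sqrt(np.mean((pred - logp) ** 2))
print(f"r = {r:.2f}, MAE = {mae:.2f}, RMSE = {rmse:.2f}")
```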

  3. Development of a human dihydroorotate dehydrogenase (hDHODH) pharma-similarity index approach with scaffold-hopping strategy for the design of novel potential inhibitors.

    PubMed

    Shih, Kuei-Chung; Lee, Chi-Ching; Tsai, Chi-Neu; Lin, Yu-Shan; Tang, Chuan-Yi

    2014-01-01

    Human dihydroorotate dehydrogenase (hDHODH) is a class-2 dihydroorotate dehydrogenase. Because it is extensively used by proliferating cells, its inhibition in autoimmune and inflammatory diseases, cancers, and multiple sclerosis is of substantial clinical importance. In this study, we had two aims. The first was to develop an hDHODH pharma-similarity index approach (PhSIA) using integrated molecular dynamics calculations, pharmacophore hypothesis, and comparative molecular similarity index analysis (CoMSIA) contour information techniques. The approach, for the discovery and design of novel inhibitors, was based on 25 diverse known hDHODH inhibitors. Three statistical methods were used to verify the performance of hDHODH PhSIA. Fischer's cross-validation test provided a 98% confidence level and the goodness of hit (GH) test score was 0.61. The q², r², and predictive r² values were 0.55, 0.97, and 0.92, respectively, for a partial least squares validation method. In our approach, each diverse inhibitor structure could easily be aligned with contour information, and common substructures were unnecessary. For our second aim, we used the proposed approach to design 13 novel hDHODH inhibitors using a scaffold-hopping strategy. Chemical features of the approach were divided into two groups, and the Vitas-M Laboratory fragment was used to create de novo inhibitors. This approach provides a useful tool for the discovery and design of potential inhibitors of hDHODH, and does not require docking analysis; thus, our method can assist medicinal chemists in their efforts to identify novel inhibitors.
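
    The q² statistic quoted above is the leave-one-out analogue of r²; a minimal illustration with a PLS model on placeholder descriptors (not the CoMSIA fields) follows.

```python
# Leave-one-out q^2 for a PLS model on placeholder descriptors,
# alongside the training r^2.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(14)
X = rng.normal(size=(25, 40))                      # 25 inhibitors
y = X[:, :3] @ [0.8, -0.5, 0.3] + rng.normal(0, 0.3, 25)

pls = PLSRegression(n_components=3)
loo_pred = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
q2 = 1 - np.sum((y - loo_pred) ** 2) / np.sum((y - y.mean()) ** 2)
r2 = pls.fit(X, y).score(X, y)
print(f"r^2 = {r2:.2f}, q^2 = {q2:.2f}")
```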

  4. Advances in the regionalization approach: geostatistical techniques for estimating flood quantiles

    NASA Astrophysics Data System (ADS)

    Chiarello, Valentina; Caporali, Enrica; Matthies, Hermann G.

    2015-04-01

    The knowledge of peak flow discharges and associated floods is of primary importance in engineering practice for the planning of water resources and risk assessment. Streamflow characteristics are usually estimated starting from measurements of river discharges at stream gauging stations. However, the lack of observations at sites of interest, as well as measurement inaccuracies, leads inevitably to the necessity of developing predictive models. Regional analysis is a classical approach to estimating river flow characteristics at sites where little or no data exist. Specific techniques are needed to regionalize the hydrological variables over the considered area. Top-kriging, or topological kriging, is a kriging interpolation procedure that takes into account the geometric organization and structure of the hydrographic network, the catchment area and the nested nature of catchments. The continuous processes in space defined for the point variables are represented by a variogram. In Top-kriging, the measurements are not point values but are defined over a non-zero catchment area. Top-kriging is applied here over the geographical space of the Tuscany Region, in Central Italy. The analysis is carried out on the discharge data of 57 consistent runoff gauges, recorded from 1923 to 2014. Top-kriging also gives an estimate of the prediction uncertainty in addition to the prediction itself. The results are validated using a cross-validation procedure implemented in the package rtop of the open-source statistical environment R, and are compared through different error measurement methods. Top-kriging seems to perform better in nested catchments and larger-scale catchments, but not for headwater catchments or where there is high variability between neighbouring catchments.

  5. A volatolomic approach for studying plant variability: the case of selected Helichrysum species (Asteraceae).

    PubMed

    Giuliani, Claudia; Lazzaro, Lorenzo; Calamassi, Roberto; Calamai, Luca; Romoli, Riccardo; Fico, Gelsomina; Foggi, Bruno; Mariotti Lippi, Marta

    2016-10-01

    The species of Helichrysum sect. Stoechadina (Asteraceae) are well-known for their secondary metabolite content and their characteristic aromatic bouquets. In the wild, populations exhibit a wide phenotypic plasticity, which makes the circumscription of species and infraspecific ranks critical. Previous investigations on the Helichrysum italicum complex focused on a possible phytochemical typification based on hydrodistilled essential oils. The aims of this paper are three-fold: (i) to characterize the volatile profiles of different populations, and to test (ii) how these profiles vary across populations and (iii) how the phytochemical diversity may contribute to solving taxonomic problems. Nine selected Helichrysum populations, included within the H. italicum complex, Helichrysum litoreum and Helichrysum stoechas, were investigated. H. stoechas was chosen as an outgroup for validating the method. After collection in the wild, plants were cultivated in standard growing conditions for over one year. Annual leafy shoots were screened in the post-blooming period for the emission of volatile organic compounds (VOCs) by means of headspace solid-phase microextraction coupled with gas chromatography and mass spectrometry (HS-SPME-GC/MS). The VOC composition analysis revealed the production of 386 different compounds overall, with terpenes being the most represented compound class. Statistical data processing allowed the identification of the indicator compounds that differentiate the single populations, revealing the influence of the geographical provenance area in determining the volatile profiles. These results suggested the potential use of VOCs as valuable diacritical characters in discriminating the Helichrysum populations. In addition, the cross-validation analysis hinted at the potential of this volatolomic study for discriminating the Helichrysum species and subspecies, highlighting a general congruence with the current taxonomic treatment of the genus. The consistency

  6. Smart-card-based automatic meal record system intervention tool for analysis using data mining approach.

    PubMed

    Zenitani, Satoko; Nishiuchi, Hiromu; Kiuchi, Takahiro

    2010-04-01

    The Smart-card-based Automatic Meal Record system for company cafeterias (AutoMealRecord system) was recently developed and used to monitor employee eating habits. The system could be a unique nutrition assessment tool for automatically monitoring the meal purchases of all employees, although it only focuses on company cafeterias and has never been validated. Before starting an interventional study, we tested the reliability of the data collected by the system using a data mining approach. The AutoMealRecord data were examined to determine if they could predict current obesity. All data used in this study (n = 899) were collected by a major electric company based in Tokyo, which has been operating the AutoMealRecord system for several years. We analyzed dietary patterns by principal component analysis using data from the system and extracted 5 major dietary patterns: healthy, traditional Japanese, Chinese, Japanese noodles, and pasta. The ability to predict current body mass index (BMI) from dietary preference was assessed with multiple linear regression analyses; BMI was positively correlated with male gender, preference for "Japanese noodles," mean energy intake, protein content, and frequency of body measurement at a body measurement booth in the cafeteria. There was a negative correlation with age, dietary fiber, and lunchtime cafeteria use (R² = 0.22). This regression model predicted "would-be obese" participants (BMI ≥ 23) with 68.8% accuracy by leave-one-out cross-validation. This shows that there was sufficient predictability of BMI based on data from the AutoMealRecord system. We conclude that the AutoMealRecord system is valuable for further consideration as a health care intervention tool. PMID:20534329
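
    The analysis pipeline (PCA-derived dietary patterns, a multiple linear regression for BMI, and leave-one-out classification at the BMI ≥ 23 cutoff) can be sketched as below; purchase counts, ages, and BMIs are simulated placeholders.

```python
# PCA dietary patterns -> linear BMI model -> LOOCV classification at
# BMI >= 23. Purchases, ages, and BMIs are simulated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(8)
purchases = rng.poisson(2.0, size=(300, 40)).astype(float)  # item counts
scores = PCA(n_components=5).fit_transform(purchases)       # dietary patterns
age = rng.uniform(20, 60, 300)
bmi = 22 + 0.4 * scores[:, 0] - 0.03 * (age - 40) + rng.normal(0, 1.5, 300)

X = np.column_stack([scores, age])
correct = 0
for train, test in LeaveOneOut().split(X):
    pred = LinearRegression().fit(X[train], bmi[train]).predict(X[test])[0]
    correct += (pred >= 23) == (bmi[test][0] >= 23)
print("LOOCV accuracy for BMI >= 23:", correct / len(bmi))
```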

  8. A pragmatic approach to estimate the number of days in exceedance of PM10 limit value

    NASA Astrophysics Data System (ADS)

    Beauchamp, Maxime; Malherbe, Laure; de Fouquet, Chantal

    2015-06-01

    European legislation on ambient air quality requires that Member States report the annual number of exceedances of short-term concentration regulatory thresholds for PM10 and delimit the concerned areas. Measurements at the monitoring stations do not allow those areas to be fully described. We present a methodology to estimate the number of exceedances of the daily limit value over a year, which can be extended to any similar issue. This methodology is applied to PM10 concentrations in France, for which the daily limit value is 50 μg m⁻³, not to be exceeded on more than 35 days. A probabilistic model is built using preliminary mapping of daily mean concentrations. First, daily atmospheric concentration fields are estimated at 1 km resolution by external drift kriging, combining surface monitoring observations and outputs from the CHIMERE chemistry transport model. Setting a conventional Gaussian hypothesis for the estimation error, the kriging variance is used to compute the probability of exceeding the daily limit value and to identify three areas: those where we can regard it as certain that the concentrations exceed the daily limit value, those where it is certain they do not, and those where the situation is indeterminate because of the estimation uncertainty. Then, from the set of 365 daily mappings of the probability of exceeding the daily limit value, the parameters of a translated Poisson distribution are fitted to the annual number of exceedances of the daily limit value at each grid cell, which enables computation of the probability of this number exceeding 35. The methodology is tested for three years (2007, 2009 and 2011), which present numerous exceedances of the daily limit concentration at some monitoring stations. A cross-validation analysis is carried out to check the efficiency of the methodology. The way to interpret probability maps is discussed. A comparison is made with simpler kriging approaches using indicator kriging of exceedances. Lastly, estimation of the population exposed to PM10
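
    The probabilistic bookkeeping for a single grid cell can be sketched numerically: daily kriged means and variances give daily exceedance probabilities under the Gaussian hypothesis, and a Poisson model for the annual count yields P(N > 35). The sketch uses a plain Poisson where the paper fits a translated Poisson, and all inputs are simulated.

```python
# One grid cell, one year: Gaussian exceedance probabilities per day,
# then a Poisson model for the annual count of exceedance days.
# (The paper fits a translated Poisson; a plain Poisson is used here.)
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(10)
z_hat = rng.normal(32, 10, size=365)    # daily kriged PM10 (ug/m3)
sigma = rng.uniform(3, 8, size=365)     # daily kriging standard errors

p_day = 1 - norm.cdf(50.0, loc=z_hat, scale=sigma)   # P(conc > 50) each day
lam = p_day.sum()                       # expected number of exceedance days
print(f"expected exceedances = {lam:.1f}, P(N > 35) = {poisson.sf(35, lam):.3f}")
```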

  10. A Machine Learning Approach to Estimate Riverbank Geotechnical Parameters from Sediment Particle Size Data

    NASA Astrophysics Data System (ADS)

    Iwashita, Fabio; Brooks, Andrew; Spencer, John; Borombovits, Daniel; Curwen, Graeme; Olley, Jon

    2015-04-01

    Assessing bank stability using geotechnical models traditionally involves the laborious collection of data on the bank and floodplain stratigraphy, as well as in-situ geotechnical data for each sedimentary unit within a river bank. The application of geotechnical bank stability models is limited to those sites where extensive field data have been collected, and their ability to provide predictions of bank erosion at the reach scale is limited without a very extensive and expensive field data collection program. Some challenges in the construction and application of riverbank erosion and hydraulic numerical models are their one-dimensionality, steady-state requirements, lack of calibration data, and nonuniqueness. Also, numerical models can often be too rigid with respect to detecting unexpected features like the onset of trends, non-linear relations, or patterns restricted to sub-samples of a data set. These shortcomings create the need for an alternative modelling approach capable of using the available data. The Self-Organizing Map (SOM) approach is well suited to the analysis of noisy, sparse, nonlinear, multidimensional, and scale-dependent data. It is a type of unsupervised artificial neural network with hybrid competitive-cooperative learning. In this work we present a method that uses a database of geotechnical data collected at over 100 sites throughout Queensland State, Australia, to develop a modelling approach that enables geotechnical parameters (soil effective cohesion, friction angle, soil erodibility and critical stress) to be derived from sediment particle size data (PSD). The model framework and predicted values were evaluated using two methods: splitting the dataset into training and validation sets, and a Bootstrap approach. The basis of Bootstrap cross-validation is a leave-one-out strategy. This requires leaving one data value out of the training set while creating a new SOM to estimate that missing value based on the
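
    A conceptual sketch of SOM-based estimation follows: train a map on joint [PSD, parameter] vectors, then, for a sample with PSD only, locate the best-matching unit using the PSD components alone and read the parameter off its codebook vector. It assumes the minisom package and a toy one-parameter dataset, not the study's database.

```python
# SOM trained on joint [PSD, cohesion] vectors; a test sample's BMU is
# found from its PSD components only, and cohesion is read off the
# codebook. Assumes the `minisom` package; data and the PSD-cohesion
# relation are toys.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(13)
psd = rng.random((100, 6))                               # size fractions
cohesion = 5 + 20 * psd[:, 0] + rng.normal(0, 1, 100)    # kPa, toy relation
data = np.column_stack([psd, cohesion])
mu, sd = data.mean(0), data.std(0)
data_n = (data - mu) / sd                                # z-score features

som = MiniSom(8, 8, data_n.shape[1], sigma=1.5, learning_rate=0.5,
              random_seed=0)
som.train_random(data_n, 5000)

w = som.get_weights()                                    # (8, 8, 7) codebook
test = data_n[0]
d = np.linalg.norm(w[:, :, :6] - test[:6], axis=2)       # PSD dims only
i, j = np.unravel_index(d.argmin(), d.shape)
est = w[i, j, 6] * sd[6] + mu[6]                         # de-normalized kPa
print(f"estimated cohesion {est:.1f} kPa (true {cohesion[0]:.1f} kPa)")
```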

  12. Markov blanket-based approach for learning multi-dimensional Bayesian network classifiers: an application to predict the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39).

    PubMed

    Borchani, Hanen; Bielza, Concha; Martínez-Martín, Pablo; Larrañaga, Pedro

    2012-12-01

    Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models recently proposed to deal with multi-dimensional classification problems, where each instance in the data set has to be assigned to more than one class variable. In this paper, we propose a Markov blanket-based approach for learning MBCs from data. Basically, it consists of determining the Markov blanket around each class variable using the HITON algorithm, then specifying the directionality over the MBC subgraphs. Our approach is applied to the prediction problem of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39) in order to estimate the health-related quality of life of Parkinson's patients. Fivefold cross-validation experiments were carried out on randomly generated synthetic data sets, Yeast data set, as well as on a real-world Parkinson's disease data set containing 488 patients. The experimental study, including comparison with additional Bayesian network-based approaches, back propagation for multi-label learning, multi-label k-nearest neighbor, multinomial logistic regression, ordinary least squares, and censored least absolute deviations, shows encouraging results in terms of predictive accuracy as well as the identification of dependence relationships among class and feature variables. PMID:22897950

  13. Bifactor Approach to Modeling Multidimensionality of Physical Self-Perception Profile

    ERIC Educational Resources Information Center

    Chung, ChihMing; Liao, Xiaolan; Song, Hairong; Lee, Taehun

    2016-01-01

    The multi-dimensionality of the Physical Self-Perception Profile (PSPP) has been acknowledged through the use of the correlated-factor model and the second-order model. In this study, the authors critically endorse the bifactor model as an alternative for addressing the multi-dimensionality of the PSPP. To cross-validate the models, analyses are conducted first in…

  14. Unbiased estimation of chloroplast number in mesophyll cells: advantage of a genuine three-dimensional approach

    PubMed Central

    Kubínová, Zuzana

    2014-01-01

    Chloroplast number per cell is a frequently examined quantitative anatomical parameter, often estimated by counting chloroplast profiles in two-dimensional (2D) sections of mesophyll cells. However, a mesophyll cell is a three-dimensional (3D) structure and this has to be taken into account when quantifying its internal structure. We compared 2D and 3D approaches to chloroplast counting from different points of view: (i) in practical measurements of mesophyll cells of Norway spruce needles, (ii) in a 3D model of a mesophyll cell with chloroplasts, and (iii) using a theoretical analysis. We applied, for the first time, the stereological method of an optical disector based on counting chloroplasts in stacks of spruce needle optical cross-sections acquired by confocal laser-scanning microscopy. This estimate was compared with counting chloroplast profiles in 2D sections from the same stacks of sections. Practical measurements of mesophyll cells, calculations performed in a 3D model of a cell with chloroplasts, and a theoretical analysis all showed that the 2D approach yielded biased results, with underestimation of up to 10-fold. We proved that the frequently used method of counting chloroplasts in a mesophyll cell by counting their profiles in 2D sections did not give correct results. We concluded that the present disector method can be efficiently used for unbiased estimation of chloroplast number per mesophyll cell. This should be the method of choice, especially in coniferous needles and leaves with mesophyll cells with lignified cell walls, where maceration methods are difficult or impossible to use. PMID:24336344

  15. Morpheus Surface Approach

    NASA Video Gallery

    This animation shows the Project Morpheus lander flying a kilometer-long simulated surface approach while avoiding hazards in a landing field. The approach takes place at the Shuttle Landing Facili...

  16. A similarity learning approach to content-based image retrieval: application to digital mammography.

    PubMed

    El-Naqa, Issam; Yang, Yongyi; Galatsanos, Nikolas P; Nishikawa, Robert M; Wernick, Miles N

    2004-10-01

    In this paper, we describe an approach to content-based retrieval of medical images from a database, and provide a preliminary demonstration of our approach as applied to retrieval of digital mammograms. Content-based image retrieval (CBIR) refers to the retrieval of images from a database using information derived from the images themselves, rather than solely from accompanying text indices. In the medical-imaging context, the ultimate aim of CBIR is to provide radiologists with a diagnostic aid in the form of a display of relevant past cases, along with proven pathology and other suitable information. CBIR may also be useful as a training tool for medical students and residents. The goal of information retrieval is to recall from a database information that is relevant to the user's query. The most challenging aspect of CBIR is the definition of relevance (similarity), which is used to guide the retrieval machine. In this paper, we pursue a new approach, in which similarity is learned from training examples provided by human observers. Specifically, we explore the use of neural networks and support vector machines to predict the user's notion of similarity. Within this framework we propose using a hierarchical learning approach, which consists of a cascade of a binary classifier and a regression module to optimize retrieval effectiveness and efficiency. We also explore how to incorporate online human interaction to achieve relevance feedback in this learning framework. Our experiments are based on a database consisting of 76 mammograms, all of which contain clustered microcalcifications (MCs). Our goal is to retrieve mammogram images containing similar MC clusters to that in a query. The performance of the retrieval system is evaluated using precision-recall curves computed using a cross-validation procedure. Our experimental results demonstrate that: 1) the learning framework can accurately predict the perceptual similarity reported by human observers, thereby
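
    A toy sketch of the similarity-learning idea: a support vector regressor is cross-validated to predict observer-reported similarity scores, and retrieval quality is summarized with a precision-recall curve. The feature vectors and similarity ratings are synthetic, not the 76-mammogram database:

      # Cross-validated similarity regression plus a precision-recall summary.
      # SVR stands in for the regression module; all data are synthetic.
      import numpy as np
      from sklearn.metrics import precision_recall_curve
      from sklearn.model_selection import cross_val_predict
      from sklearn.svm import SVR

      rng = np.random.default_rng(9)
      X = rng.normal(size=(76, 10))                    # hypothetical features
      sim = (X[:, 0] > 0) + 0.1 * rng.normal(size=76)  # observer similarity

      pred = cross_val_predict(SVR(), X, sim, cv=5)    # out-of-fold predictions
      prec, rec, _ = precision_recall_curve((sim > 0.5).astype(int), pred)
      print("Precision at full recall:", prec[rec == 1.0].max())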

  17. Anti-spoofing for display and print attacks on palmprint verification systems

    NASA Astrophysics Data System (ADS)

    Kanhangad, Vivek; Bhilare, Shruti; Garg, Pragalbh; Singh, Pranjalya; Chaudhari, Narendra

    2015-05-01

    A number of approaches for personal authentication using palmprint features have been proposed in the literature, the majority of which focus on improving the matching performance. However, of late, preventing potential attacks on biometric systems has become a major concern as more and more biometric systems get deployed for a wide range of applications. Among various types of attacks, the sensor level attack, commonly known as a spoof attack, has emerged as the most common attack due to the simplicity of its execution. In this paper, we present an approach for detection of display and print based spoof attacks on palmprint verification systems. The approach is based on the analysis of acquired hand images for estimating surface reflectance. First and higher order statistical features computed from the distributions of pixel intensities and sub-band wavelet coefficients form the feature set. A trained binary classifier utilizes the discriminating information to determine if the acquired image is of a real hand or a fake one. Experiments are performed on a publicly available hand image dataset, containing 1300 images corresponding to 230 subjects. Experimental results show that real hand biometric samples can be substituted by fake digital or print copies with an alarming spoof acceptance rate as high as 79.8%. Experimental results also show that the proposed spoof detection approach is very effective for discriminating between real and fake palmprint images. The proposed approach consistently achieves over 99% average 10-fold cross validation classification accuracy in our experiments.
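
    A minimal sketch of the 10-fold cross-validated accuracy reported above, with an RBF-kernel SVM on synthetic stand-ins for the intensity/wavelet statistical features (the labels and feature values are hypothetical):

      # 10-fold cross-validated accuracy for a binary real-vs-spoof classifier.
      # Features and labels are synthetic stand-ins for the hand-image dataset.
      import numpy as np
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X = rng.normal(size=(1300, 8))     # e.g. moments of pixel/wavelet stats
      y = rng.integers(0, 2, size=1300)  # 0 = fake, 1 = real (hypothetical)

      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
      scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=cv)
      print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")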

  18. Holistic Approaches to Health.

    ERIC Educational Resources Information Center

    Dinkmeyer, Don; Dinkmeyer, Don, Jr.

    1979-01-01

    The holistic approach to health includes a spectrum of concepts that have an important influence on our health. Elementary school counselors must recognize this previously neglected need for a holistic approach. Stress, relaxation response, biofeedback, and the orthomolecular approach are discussed. (Author/BEF)

  19. Charge-controlled nanoprecipitation as a modular approach to ultrasmall polymer nanocarriers: making bright and stable nanoparticles.

    PubMed

    Reisch, Andreas; Runser, Anne; Arntz, Youri; Mély, Yves; Klymchenko, Andrey S

    2015-05-26

    Ultrasmall polymer nanoparticles are rapidly gaining importance as nanocarriers for drugs and contrast agents. Here, a straightforward modular approach to efficiently loaded and stable sub-20-nm polymer particles is developed. In order to obtain ultrasmall polymer nanoparticles, we investigated the influence of one to two charged groups per polymer chain on the size of particles obtained by nanoprecipitation. Negatively charged carboxylate and sulfonate or positively charged trimethylammonium groups were introduced into the polymers poly(d,l-lactide-co-glycolide) (PLGA), polycaprolactone (PCL), and poly(methyl methacrylate) (PMMA). According to dynamic light scattering, atomic force and electron microscopy, the presence of one to two charged groups per polymer chain can strongly reduce the size of polymer nanoparticles made by nanoprecipitation. The particle size can be further decreased to less than 15 nm by decreasing the concentration of polymer in the solvent used for nanoprecipitation. We then show that even very small nanocarriers of 15 nm size preserve the capacity to encapsulate large amounts of ionic dyes with bulky counterions at efficiencies >90%, which generates polymer nanoparticles 10-fold brighter than quantum dots of the same size. Postmodification of their surface with the PEG containing amphiphiles Tween 80 and pluronic F-127 led to particles that were stable under physiological conditions and in the presence of 10% fetal bovine serum. This modular route could become a general method for the preparation of ultrasmall polymer nanoparticles as nanocarriers of contrast agents and drugs.

  20. Computer-aided detection of microcalcifications in digital breast tomosynthesis (DBT): a multichannel signal detection approach on projection views

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Zhou, Chuan; Lu, Yao

    2012-03-01

    DBT is one of the promising imaging modalities that may improve the sensitivity and specificity for breast cancer detection. We are developing a computer-aided detection (CADe) system for clustered microcalcifications (MC) in DBT. A data set of two-view DBTs from 42 breasts was collected with a GE prototype system. We investigated a 2D approach to MC detection using projection view (PV) images rather than reconstructed 3D DBT volume. Our 2D approach consisted of two major stages: 1) detecting individual MC candidates on each PV, and 2) correlating the MC candidates from the different PVs and detecting clusters in the breast volume. With the MC candidates detected by prescreening on PVs, a trained multi-channel (MCH) filter bank was used to extract signal response from each MC candidate. A ray-tracing process was performed to fuse the MCH responses and localize the MC candidates in 3D using the geometrical information of the DBT system. Potential MC clusters were then identified by dynamic clustering of the MCs in 3D. A two-fold cross-validation method was used to train and test the CADe system. The detection performance of clustered MCs was assessed by free receiver operating characteristic (FROC) analysis. It was found that the CADe system achieved a case-based sensitivity of 90% at an average false positive rate of 2.1 clusters per DBT volume. Our study demonstrated that the CADe system using 2D MCH filter bank is promising for detection of clustered MCs in DBT.

  1. Data-driven approach to Type Ia supernovae: variable selection on the peak luminosity and clustering in visual analytics

    NASA Astrophysics Data System (ADS)

    Uemura, Makoto; Kawabata, Koji S.; Ikeda, Shiro; Maeda, Keiichi; Wu, Hsiang-Yun; Watanabe, Kazuho; Takahashi, Shigeo; Fujishiro, Issei

    2016-03-01

    Type Ia supernovae (SNIa) have an almost uniform peak luminosity, so they are used as a “standard candle” to estimate distances to galaxies in cosmology. In this article, we introduce our two recent works on SNIa based on a data-driven approach. The diversity in the peak luminosity of SNIa can be reduced by corrections in several variables. The color and decay rate have been used as the explanatory variables of the peak luminosity in past studies. However, it has been proposed that spectral data could give a better model of the peak luminosity. We use cross-validation in order to control the generalization error and a LASSO-type estimator in order to choose the set of variables. Using 78 samples and 276 candidate variables, we confirm that the peak luminosity depends on the color and decay rate. Our analysis does not support adding any other variables in order to obtain a better generalization error. On the other hand, this analysis assumes that SNIa originate in a single population, which is not trivial. Indeed, several sub-types possibly having different natures have been proposed. We used a visual analytics tool for the asymmetric biclustering method to find both a good set of variables and a good set of samples at the same time. Using 14 variables and 132 samples, we found that SNIa can be divided into two categories by the expansion velocity of the ejecta. These examples demonstrate that the data-driven approach is useful for the high-dimensional, large-volume data that are becoming common in modern astronomy.
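
    A hedged sketch of LASSO variable selection under cross-validation, in the spirit of this record; the design matrix, true coefficients, and noise level are synthetic (the 78 samples and 276 candidates only mirror the stated dimensions):

      # Cross-validated LASSO picks a sparse subset of 276 candidate variables.
      # Synthetic data: only variables 0 and 1 (say, color and decay rate)
      # actually drive the response.
      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(2)
      X = rng.normal(size=(78, 276))
      beta = np.zeros(276)
      beta[[0, 1]] = [1.0, -0.5]
      y = X @ beta + 0.1 * rng.normal(size=78)

      lasso = LassoCV(cv=5).fit(X, y)
      print("Selected variables:", np.flatnonzero(lasso.coef_))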

  2. Prediction and validation of protein–protein interactors from genome-wide DNA-binding data using a knowledge-based machine-learning approach

    PubMed Central

    Homan, Bernou; Mohamed, Stephanie; Harvey, Richard P.; Bouveret, Romaric

    2016-01-01

    The ability to accurately predict the DNA targets and interacting cofactors of transcriptional regulators from genome-wide data can significantly advance our understanding of gene regulatory networks. NKX2-5 is a homeodomain transcription factor that sits high in the cardiac gene regulatory network and is essential for normal heart development. We previously identified genomic targets for NKX2-5 in mouse HL-1 atrial cardiomyocytes using DNA-adenine methyltransferase identification (DamID). Here, we apply machine learning algorithms and propose a knowledge-based feature selection method for predicting NKX2-5 protein–protein interactions based on motif grammar in genome-wide DNA-binding data. We assessed model performance using leave-one-out cross-validation and a completely independent DamID experiment performed with replicates. In addition to identifying previously described NKX2-5-interacting proteins, including GATA, HAND and TBX family members, a number of novel interactors were identified, with direct protein–protein interactions between NKX2-5 and retinoid X receptor (RXR), paired-related homeobox (PRRX) and Ikaros zinc fingers (IKZF) validated using the yeast two-hybrid assay. We also found that the interaction of RXRα with NKX2-5 mutations found in congenital heart disease (Q187H, R189G and R190H) was altered. These findings highlight an intuitive approach to accessing protein–protein interaction information of transcription factors in DNA-binding experiments. PMID:27683156

  3. Discrimination and characterization of strawberry juice based on electronic nose and tongue: comparison of different juice processing approaches by LDA, PLSR, RF, and SVM.

    PubMed

    Qiu, Shanshan; Wang, Jun; Gao, Liping

    2014-07-01

    An electronic nose (E-nose) and an electronic tongue (E-tongue) have been used to characterize five types of strawberry juices based on processing approaches (i.e., microwave pasteurization, steam blanching, high temperature short time pasteurization, frozen-thawed, and freshly squeezed). Juice quality parameters (vitamin C, pH, total soluble solids, total acid, and sugar/acid ratio) were measured by traditional methods. Multivariate statistical methods (linear discriminant analysis (LDA) and partial least squares regression (PLSR)) and machine learning methods (Random Forest (RF) and Support Vector Machines (SVM)) were employed for qualitative classification and quantitative regression. The E-tongue system reached higher accuracy rates than the E-nose, and their simultaneous utilization had an advantage in LDA classification and PLSR regression. According to cross-validation, RF showed outstanding and indisputable performance in both the qualitative and quantitative analyses. This work indicates that the simultaneous utilization of the E-nose and E-tongue can successfully discriminate processed fruit juices and predict quality parameters for the beverage industry.
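
    An illustrative cross-validated comparison of the classifier families named in this record (LDA, RF, SVM), on synthetic stand-in data rather than the actual sensor measurements:

      # Compare LDA, random forest, and SVM by 5-fold cross-validated accuracy
      # on a synthetic five-class problem standing in for the five juice types.
      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=150, n_features=18,
                                 n_informative=10, n_classes=5, random_state=3)

      for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                        ("RF", RandomForestClassifier(random_state=3)),
                        ("SVM", SVC())]:
          scores = cross_val_score(clf, X, y, cv=5)
          print(f"{name}: {scores.mean():.3f}")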

  4. A statistical approach towards the derivation of predictive gene sets for potency ranking of chemicals in the mouse embryonic stem cell test.

    PubMed

    Schulpen, Sjors H W; Pennings, Jeroen L A; Tonk, Elisa C M; Piersma, Aldert H

    2014-03-21

    The embryonic stem cell test (EST) is applied as a model system for the detection of embryotoxicants. The application of transcriptomics allows a more detailed effect assessment compared to the morphological endpoint. Genes involved in cell differentiation, modulated by chemical exposures, may be useful as biomarkers of developmental toxicity. We describe a statistical approach to obtain a predictive gene set for toxicity potency ranking of compounds within one class. This resulted in a gene set based on differential gene expression across concentration-response series of phthalate monoesters. We determined the concentration at which gene expression changed at least 1.5-fold. Genes responding with the same potency ranking as in vivo embryotoxicity were selected. A leave-one-out cross-validation showed that the relative potency of each phthalate was always predicted correctly. The classical morphological 50% effect level (ID50) in the EST was similar to the concentration predicted using gene set expression responses. A general down-regulation of development-related genes and up-regulation of cell-cycle-related genes was observed, reminiscent of the differentiation inhibition in the EST. This study illustrates the feasibility of applying dedicated gene set selections as biomarkers for developmental toxicity potency ranking on the basis of in vitro testing in the EST.

  5. On the reliability of a geometric morphometric approach to sex determination: a blind test of six criteria of the juvenile ilium.

    PubMed

    Wilson, Laura A B; Cardoso, Hugo F V; Humphrey, Louise T

    2011-03-20

    Despite the attention of many studies, researchers still struggle to identify criteria with which to sex juvenile remains at levels of accuracy and reproducibility comparable with those documented for adults. This study uses a sample of 82 juvenile ilia from an identified Portuguese population (Lisbon collection) to test the cross-applicability of a new approach by Wilson et al. [23] that uses geometric morphometric methods to sex the subadult ilium. Further, we evaluate the wider applicability of these methods for forensic casework, extending the age range of the original study by examining an additional 19 juvenile ilia from the St. Brides and Spitalfields collections, housed in London. Levels of accuracy for the Portuguese sample (62.2-89.0%) indicate that the methods can be used to document dimorphism in another sample. Discriminant functions are sample-specific, as indicated by no better than average classification under cross-validation. We propose a methodological update, whereby we recommend disuse of the auricular surface morphology criterion, based upon reduced success rates and inadequate accuracy of female identification. We show that, in addition to population differences, differences in the ontogeny of dimorphism may lead to differing degrees of success for female identification using some criteria. The success rates are highest between the ages of 11.00 and 14.99 years (93.3% males, 80.0% females).

  6. Modeling temporal sequences of cognitive state changes based on a combination of EEG-engagement, EEG-workload, and heart rate metrics.

    PubMed

    Stikic, Maja; Berka, Chris; Levendowski, Daniel J; Rubio, Roberto F; Tan, Veasna; Korszen, Stephanie; Barba, Douglas; Wurzer, David

    2014-01-01

    The objective of this study was to investigate the feasibility of physiological metrics such as ECG-derived heart rate and EEG-derived cognitive workload and engagement as potential predictors of performance on different training tasks. An unsupervised approach based on a self-organizing neural network (NN) was utilized to model cognitive state changes over time. The feature vector comprised EEG-engagement, EEG-workload, and heart rate metrics, all self-normalized to account for individual differences. During the competitive training process, a linear topology was developed where feature vectors similar to each other activated the same NN nodes. The NN model was trained and auto-validated on combat marksmanship training data from 51 participants who were required to make "deadly force decisions" in challenging combat scenarios. The trained NN model was cross validated using 10-fold cross-validation. It was also validated on a golf study in which an additional 22 participants were asked to complete 10 sessions of 10 putts each. Temporal sequences of the activated nodes for both studies followed the same pattern of changes, demonstrating the generalization capabilities of the approach. Most node transition changes were local, but important events typically caused significant changes in the physiological metrics, as evidenced by larger state changes. This was investigated by calculating a transition score as the sum of subsequent state transitions between the activated NN nodes. Correlation analysis demonstrated statistically significant correlations between the transition scores and subjects' performances in both studies. This paper explored the hypothesis that temporal sequences of physiological changes comprise the discriminative patterns for performance prediction. These physiological markers could be utilized in future training improvement systems (e.g., through neurofeedback), and applied across a variety of training environments. PMID:25414629
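
    A rough sketch of the unsupervised SOM modelling and transition score described above, assuming the third-party minisom package; the feature stream is synthetic, and the 10x10 map (rather than the study's linear topology) and Manhattan-distance score are illustrative choices:

      # Map [EEG-engagement, EEG-workload, heart rate] vectors onto a SOM and
      # score a session as the summed jump size between consecutively
      # activated nodes. Requires the third-party `minisom` package.
      import numpy as np
      from minisom import MiniSom

      rng = np.random.default_rng(4)
      feats = rng.random((500, 3))   # self-normalized metrics, one row per epoch

      som = MiniSom(10, 10, 3, sigma=1.5, learning_rate=0.5, random_seed=4)
      som.train_random(feats, 5000)

      nodes = [som.winner(v) for v in feats]      # (row, col) of activated node
      transition_score = sum(abs(b[0] - a[0]) + abs(b[1] - a[1])
                             for a, b in zip(nodes, nodes[1:]))
      print("Transition score:", transition_score)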

  8. Selection of a T7 promoter mutant with enhanced in vitro activity by a novel multi-copy bead display approach for in vitro evolution.

    PubMed

    Paul, Siddhartha; Stang, Alexander; Lennartz, Klaus; Tenbusch, Matthias; Überla, Klaus

    2013-01-01

    In vitro evolution of nucleic acids and proteins is a powerful strategy to optimize their biological and physical properties. To select proteins with the desired phenotype from large gene libraries, the proteins need to be linked to the gene they are encoded by. To facilitate selection of the desired phenotype and isolation of the encoding DNA, a novel bead display approach was developed, in which each member of a library of beads is first linked to multiple copies of a clonal gene variant by emulsion polymerase chain reaction. Beads are transferred to a second emulsion for an in vitro transcription-translation reaction, in which the protein encoded by each bead's amplicon covalently binds to the bead present in the same picoliter reactor. The beads then contain multiple copies of a clonal gene variant and multiple molecules of the protein encoded by the bead's gene variant and serve as the unit of selection. As a proof of concept, we screened a randomized library of the T7 promoter for high expression levels by flow cytometry and identified a T7 promoter variant with an ~10-fold higher in vitro transcriptional activity, confirming that the multi-copy bead display approach can be efficiently applied to in vitro evolution.

  9. A Practical Approach for Designing Breeding Groups to Maximize Genetic Diversity in a Large Colony of Captive Rhesus Macaques (Macaca mulatta).

    PubMed

    Vinson, Amanda; Raboin, Michael J

    2015-11-01

    Limited guidance is available on practical approaches for maintaining genetic diversity in large NHP colonies that support biomedical research, despite the fact that reduced diversity in these colonies is likely to compromise the application of findings in NHP to human disease. In particular, constraints related to simultaneously housing, breeding, and providing ongoing veterinary care for thousands of animals with a highly complex social structure create unique challenges for genetic management in these colonies. Because the composition of new breeding groups is a critical component of genetic management, here we outline a 3-stage protocol for forming new breeding groups of NHP that is aimed at maximizing genetic diversity in the face of frequent restrictions on age, sex, and numbers of animals per breeding group. As an example application of this protocol, we describe optimal combinations of rhesus macaques from an analysis of candidate animals available for breeding in July 2013, selected from among the approximately 4000 macaques maintained at the Oregon National Primate Research Center. In addition, a simulation study exploring the genetic diversity in breeding groups formed by using this protocol indicated approximately 10-fold higher genome uniqueness, 50% lower mean kinship, and an 84-fold lower mean inbreeding coefficient among potential offspring within groups, when compared with a suboptimal group design. We conclude that this protocol provides a practical and effective approach to breeding group design for colony managers who want to prevent the loss of genetic diversity in large, semi-isolated NHP colonies.

  10. Endoscopic thyroidectomy: retroauricular approach

    PubMed Central

    Lee, Doh Young; Baek, Seung-Kuk

    2016-01-01

    The incidence of thyroid cancer has abruptly increased recently, with a female predominance. Conventional thyroidectomy using transcervical incision inevitably leaves an unfavorable neck scar; therefore, various extracervical approaches have been introduced to improve cosmetic satisfaction after thyroidectomy. Several reports demonstrated that these extracervical approaches have advantages not only in terms of cosmesis but also in terms of surgical outcomes and postoperative functional preservation. The retroauricular approach has advantages as the dissection area is smaller than that in the transaxillary approach (TA) and surgical anatomy is familiar to the head and neck surgeons. In addition, there is no concern about paresthesia around the nipple or anterior chest, and surgical direction makes central neck dissection easier than with the other extracervical approaches. Herein, we aim to introduce the surgical procedure of retroauricular approach thyroidectomy and present our experiences of postoperative outcomes. PMID:27294041

  11. [Surgical approaches in rhinoplasty].

    PubMed

    Nguyen, P S; Duron, J-B; Bardot, J; Levet, Y; Aiach, G

    2014-12-01

    In the first step of rhinoplasty, the surgical approach exposes the osteocartilaginous framework of the nasal pyramid through different types of incisions and dissection planes, prior to performing actions to reduce or augment it. This exposure can be achieved by a closed approach or by an external approach; the choice depends on the type of nose and the habits of the surgeon. Far from being opposites, the closed and external approaches are complementary and should be known and mastered by surgeons performing rhinoplasty.

  12. Alternative Approaches to Negotiating.

    ERIC Educational Resources Information Center

    Ramming, Thomas M.

    1997-01-01

    The wait-and-react and laundry-list approaches to combating teachers' collective-bargaining demands are ineffective. An alternative goals-setting approach requires management and the district negotiations team to identify important needs and objectives. West Seneca Central School District ended contentious negotiations by presenting unions with…

  13. The NLERAP Approach

    ERIC Educational Resources Information Center

    Nieto, Sonia; Rivera, Melissa; Irizarry, Jason

    2012-01-01

    From the start, NLERAP has been based on two major premises: one is that a sociocultural and sociopolitical approach to learning is more effective than a traditional approach, particularly in the case of populations that have historically been marginalized through their education; and the second is that research is more meaningful and inclusive…

  14. Stuttering-Psycholinguistic Approach

    ERIC Educational Resources Information Center

    Hategan, Carolina Bodea; Anca, Maria; Prihoi, Lacramioara

    2012-01-01

    This research promotes the psycholinguistic paradigm, focusing on delimiting several specific particularities in stuttering pathology. A structural approach to the components of language demonstrates both the recurrent aspects found within the specialized national and international literature and the dependence of psycholinguistic approaches on the features of the…

  15. Approaches to Truancy Prevention.

    ERIC Educational Resources Information Center

    Mogulescu, Sara; Segal, Heidi J.

    This report examines how New York counties can systematically and programmatically improve approaches to managing persons in need of supervision (PINS), describing approaches to truancy prevention and diversion that have been instituted nationwide and may be applicable to the PINS operating system. Researchers surveyed truancy-specific programs…

  16. The TLC Approach.

    ERIC Educational Resources Information Center

    Welker, William A.

    2002-01-01

    Notes how the author has developed the Teaching and Learning Cues (TLC) approach, an offspring of textbook organizational patterns instruction that stresses the significance of certain words and phrases in reading. Concludes that with the TLC approach, students learn to appreciate the important role cue words and phrases play in understanding…

  17. Risk Prediction of One-Year Mortality in Patients with Cardiac Arrhythmias Using Random Survival Forest

    PubMed Central

    Miao, Fen; Cai, Yun-Peng; Zhang, Yu-Xiao; Li, Ye; Zhang, Yuan-Ting

    2015-01-01

    Existing models for predicting mortality based on the traditional Cox proportional hazard approach (CPH) often have low prediction accuracy. This paper aims to develop a clinical risk model with good accuracy for predicting 1-year mortality in cardiac arrhythmias patients using random survival forest (RSF), a robust approach for survival analysis. 10,488 cardiac arrhythmias patients available in the public MIMIC II clinical database were investigated, with 3,452 deaths occurring within the 1-year follow-up. Forty risk factors including demographics, clinical and laboratory information, and antiarrhythmic agents were analyzed as potential predictors of all-cause mortality. RSF was adopted to build a comprehensive survival model and a simplified risk model composed of the 14 top risk factors. The comprehensive model achieved a prediction accuracy of 0.81 measured by c-statistic with 10-fold cross validation. The simplified risk model also achieved a good accuracy of 0.799. Both results outperformed traditional CPH (which achieved a c-statistic of 0.733 for the comprehensive model and 0.718 for the simplified model). Moreover, various factors are observed to have a nonlinear impact on cardiac arrhythmias prognosis. As a result, the RSF-based model, which took nonlinearity into account, significantly outperformed the traditional Cox proportional hazard model and has great potential to be a more effective approach for survival analysis. PMID:26379761
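
    A minimal sketch of the c-statistic (concordance index) used to score the survival models above, assuming the third-party lifelines package; the follow-up times, event indicators, and risk scores are synthetic:

      # Concordance index: the fraction of comparable patient pairs in which
      # the higher predicted risk corresponds to the earlier event.
      import numpy as np
      from lifelines.utils import concordance_index

      rng = np.random.default_rng(5)
      times = rng.exponential(365.0, size=200)    # follow-up times in days
      events = rng.integers(0, 2, size=200)       # 1 = death observed
      risk = -times + rng.normal(0.0, 100.0, size=200)  # higher risk, earlier

      # concordance_index expects scores that rise with predicted survival
      # time, so the risk score is negated.
      print("c-statistic:", concordance_index(times, -risk, events))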

  19. Discriminative Analysis of Migraine without Aura: Using Functional and Structural MRI with a Multi-Feature Classification Approach

    PubMed Central

    Zhang, Junran; He, Ling; Huang, Jiangtao; Zhang, Jiang; Huang, Hua; Gong, Qiyong

    2016-01-01

    Magnetic resonance imaging (MRI) is by nature a multi-modality technique that provides complementary information about different aspects of diseases. So far no attempts have been reported to assess the potential of multi-modal MRI in discriminating individuals with and without migraine, so in this study, we proposed a classification approach to examine whether or not the integration of multiple MRI features could improve the classification performance between migraine patients without aura (MWoA) and healthy controls. Twenty-one MWoA patients and 28 healthy controls participated in this study. Resting-state functional MRI data was acquired to derive three functional measures: the amplitude of low-frequency fluctuations, regional homogeneity and regional functional correlation strength; and structural MRI data was obtained to measure the regional gray matter volume. For each measure, the values of 116 pre-defined regions of interest were extracted as classification features. Features were first selected and combined by a multi-kernel strategy; then a support vector machine classifier was trained to distinguish the subjects at individual level. The performance of the classifier was evaluated using a leave-one-out cross-validation method, and the final classification accuracy obtained was 83.67% (with a sensitivity of 92.86% and a specificity of 71.43%). The anterior cingulate cortex, prefrontal cortex, orbitofrontal cortex and the insula contributed the most discriminative features. In general, our proposed framework shows a promising classification capability for MWoA by integrating information from multiple MRI features. PMID:27690138
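
    A hedged sketch of the multi-kernel idea in this record: per-modality RBF kernels are combined as a weighted sum and an SVM is trained on the precomputed kernel, with leave-one-out evaluation. The feature matrices, kernel weights, and sample counts are synthetic stand-ins for the MRI measures:

      # Weighted sum of per-modality kernels + SVM with a precomputed kernel,
      # evaluated by leave-one-out. All inputs are synthetic stand-ins.
      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.model_selection import LeaveOneOut
      from sklearn.svm import SVC

      rng = np.random.default_rng(11)
      func = rng.normal(size=(49, 116))    # functional ROI features (synthetic)
      struct = rng.normal(size=(49, 116))  # structural ROI features (synthetic)
      y = rng.integers(0, 2, size=49)      # 1 = patient, 0 = control (toy)

      K = 0.6 * rbf_kernel(func) + 0.4 * rbf_kernel(struct)  # fixed toy weights

      correct = 0
      for tr, te in LeaveOneOut().split(K):
          clf = SVC(kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
          correct += int(clf.predict(K[np.ix_(te, tr)])[0] == y[te][0])

      print("Leave-one-out accuracy:", correct / len(y))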

  20. Data Mining Approach for Evaluating Vegetation Dynamics in Earth System Models (ESMs) Using Satellite Remote Sensing Products

    NASA Astrophysics Data System (ADS)

    Shu, S.; Hoffman, F. M.; Kumar, J.; Hargrove, W. W.; Jain, A. K.

    2014-12-01

    biome types. However, Mapcurves results showed a relatively low goodness of fit score for modeled phenology projected onto observations. This study demonstrates the utility of a data mining approach for cross-validation of observations and evaluation of model performance.

  1. Topological and canonical kriging for design flood prediction in ungauged catchments: an improvement over a traditional regional regression approach?

    USGS Publications Warehouse

    Archfield, Stacey A.; Pugliese, Alessio; Castellarin, Attilio; Skøien, Jon O.; Kiang, Julie E.

    2013-01-01

    In the United States, estimation of flood frequency quantiles at ungauged locations has been largely based on regional regression techniques that relate measurable catchment descriptors to flood quantiles. More recently, spatial interpolation techniques of point data have been shown to be effective for predicting streamflow statistics (i.e., flood flows and low-flow indices) in ungauged catchments. The literature reports successful applications of two techniques, canonical kriging, CK (or physiographical-space-based interpolation, PSBI), and topological kriging, TK (or top-kriging). CK performs the spatial interpolation of the streamflow statistic of interest in the two-dimensional space of catchment descriptors. TK predicts the streamflow statistic along river networks, taking both the catchment area and the nested nature of catchments into account. It is of interest to understand how these spatial interpolation methods compare with generalized least squares (GLS) regression, one of the most common approaches to estimating flood quantiles at ungauged locations. By means of a leave-one-out cross-validation procedure, the performance of CK and TK was compared to GLS regression equations developed for the prediction of 10, 50, 100 and 500 yr floods for 61 streamgauges in the southeast United States. TK substantially outperforms GLS and CK for the study area, particularly for large catchments. The performance of TK over GLS highlights an important distinction between the treatments of spatial correlation when using regression-based or spatial interpolation methods to estimate flood quantiles at ungauged locations. The analysis also shows that coupling TK with CK slightly improves the performance of TK; however, the improvement is marginal when compared to the improvement in performance over GLS.
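
    A toy sketch of the leave-one-out comparison in this record, with linear regression on catchment descriptors versus a k-nearest-neighbour interpolator on gauge coordinates standing in for GLS and kriging respectively; all values are synthetic:

      # Leave-one-out comparison of a regression model (catchment descriptors)
      # and a spatial-interpolation stand-in (k-NN on gauge coordinates) for a
      # flood quantile. k-NN is only an analogue for kriging, not kriging.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(10)
      descriptors = rng.random((61, 4))  # catchment descriptors, one per gauge
      coords = rng.random((61, 2))       # gauge locations
      q100 = (descriptors @ np.array([2.0, 1.0, 0.5, 0.2])
              + 0.1 * rng.normal(size=61))   # synthetic 100 yr flood quantile

      for name, model, X in [("regression", LinearRegression(), descriptors),
                             ("interpolation", KNeighborsRegressor(5), coords)]:
          pred = cross_val_predict(model, X, q100, cv=LeaveOneOut())
          print(name, "RMSE:", np.sqrt(np.mean((pred - q100) ** 2)))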

  2. A wrapper-based approach for feature selection and classification of major depressive disorder-bipolar disorders.

    PubMed

    Tekin Erguzel, Turker; Tas, Cumhur; Cebi, Merve

    2015-09-01

    Feature selection (FS) and classification are consecutive artificial intelligence (AI) methods used in data analysis, pattern classification, data mining and medical informatics. Besides promising studies on the application of AI methods to health informatics, working with more informative features is crucial in order to contribute to early diagnosis. Being one of the prevalent psychiatric disorders, depressive episodes of bipolar disorder (BD) are often misdiagnosed as major depressive disorder (MDD), leading to suboptimal therapy and poor outcomes. Therefore, discriminating MDD and BD at earlier stages of illness could help to facilitate efficient and specific treatment. In this study, a nature-inspired and novel FS algorithm based on standard Ant Colony Optimization (ACO), called improved ACO (IACO), was used to reduce the number of features by removing irrelevant and redundant data. The selected features were then fed into a support vector machine (SVM), a powerful mathematical tool for data classification, regression, function estimation and modeling processes, in order to classify MDD and BD subjects. The proposed method used coherence, a promising quantitative electroencephalography (EEG) biomarker, calculated from the alpha, theta and delta frequency bands. The performance of the novel IACO-SVM approach showed that it is possible to discriminate 46 BD and 55 MDD subjects using 22 of 48 features with 80.19% overall classification accuracy. The performance of the IACO algorithm was also compared to that of standard ACO, genetic algorithm (GA) and particle swarm optimization (PSO) algorithms in terms of classification accuracy and number of selected features. In order to provide an almost unbiased estimate of the classification error, the validation process was performed using a nested cross-validation (CV) procedure. PMID:26164033
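
    A minimal sketch of nested cross-validation as used above for an almost unbiased error estimate: an inner loop tunes SVM hyperparameters, an outer loop scores the tuned model. A plain grid search stands in for the IACO feature-selection stage, and the data are synthetic (101 samples and 48 features only mirror the stated dimensions):

      # Nested CV: GridSearchCV (inner loop) is itself cross-validated (outer
      # loop), so test folds never influence hyperparameter choice.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import GridSearchCV, cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=101, n_features=48, random_state=6)

      inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10],
                                   "gamma": ["scale", 0.01]}, cv=5)
      outer_scores = cross_val_score(inner, X, y, cv=5)
      print("Nested CV accuracy:", outer_scores.mean())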

  4. A hierarchical approach for online temporal lobe seizure detection in long-term intracranial EEG recordings

    NASA Astrophysics Data System (ADS)

    Liang, Sheng-Fu; Chen, Yi-Chun; Wang, Yu-Lin; Chen, Pin-Tzu; Yang, Chia-Hsiang; Chiueh, Herming

    2013-08-01

    Objective. Around 1% of the world's population is affected by epilepsy, and nearly 25% of patients cannot be treated effectively by available therapies. The presence of closed-loop seizure-triggered stimulation provides a promising solution for these patients. Realization of fast, accurate, and energy-efficient seizure detection is the key to such implants. In this study, we propose a two-stage on-line seizure detection algorithm with low-energy consumption for temporal lobe epilepsy (TLE). Approach. Multi-channel signals are processed through independent component analysis and the most representative independent component (IC) is automatically selected to eliminate artifacts. Seizure-like intracranial electroencephalogram (iEEG) segments are fast detected in the first stage of the proposed method and these seizures are confirmed in the second stage. The conditional activation of the second-stage signal processing reduces the computational effort, and hence energy, since most of the non-seizure events are filtered out in the first stage. Main results. Long-term iEEG recordings of 11 patients who suffered from TLE were analyzed via leave-one-out cross validation. The proposed method has a detection accuracy of 95.24%, a false alarm rate of 0.09/h, and an average detection delay time of 9.2 s. For the six patients with mesial TLE, a detection accuracy of 100.0%, a false alarm rate of 0.06/h, and an average detection delay time of 4.8 s can be achieved. The hierarchical approach provides a 90% energy reduction, yielding effective and energy-efficient implementation for real-time epileptic seizure detection. Significance. An on-line seizure detection method that can be applied to monitor continuous iEEG signals of patients who suffered from TLE was developed. An IC selection strategy to automatically determine the most seizure-related IC for seizure detection was also proposed. The system has advantages of (1) high detection accuracy, (2) low false alarm, (3) short
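
    A toy sketch of the two-stage principle in this record: a cheap first-stage screen forwards only seizure-like windows to a costlier second-stage classifier, so most non-seizure data never reaches the expensive model. The thresholds, models, and data are all hypothetical:

      # Two-stage cascade: stage 1 (cheap) flags candidate windows at a
      # permissive threshold; stage 2 (costly) confirms only flagged ones.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(8)
      X = rng.normal(size=(5000, 16))            # iEEG window features (toy)
      y = (rng.random(5000) < 0.02).astype(int)  # ~2% seizure windows
      X[y == 1] += 1.5                           # shift seizure windows

      stage1 = LogisticRegression(max_iter=1000).fit(X, y)
      stage2 = RandomForestClassifier(random_state=8).fit(X, y)

      flagged = stage1.predict_proba(X)[:, 1] > 0.1  # permissive first stage
      final = np.zeros(len(X), dtype=int)
      final[flagged] = stage2.predict(X[flagged])    # confirm flagged only
      print("Fraction reaching stage 2:", flagged.mean())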

  5. A hybrid approach for rapid, accurate, and direct kilovoltage radiation dose calculations in CT voxel space

    SciTech Connect

    Kouznetsov, Alexei; Tambasco, Mauro

    2011-03-15

    Purpose: To develop and validate a fast and accurate method that uses computed tomography (CT) voxel data to estimate absorbed radiation dose at a point of interest (POI) or series of POIs from a kilovoltage (kV) imaging procedure. Methods: The authors developed an approach that computes absorbed radiation dose at a POI by numerically evaluating the linear Boltzmann transport equation (LBTE) using a combination of deterministic and Monte Carlo (MC) techniques. This hybrid approach accounts for material heterogeneity with a level of accuracy comparable to the general MC algorithms. Also, the dose at a POI is computed within seconds using the Intel Core i7 CPU 920 2.67 GHz quad core architecture, and the calculations are performed using CT voxel data, making it flexible and feasible for clinical applications. To validate the method, the authors constructed and acquired a CT scan of a heterogeneous block phantom consisting of a succession of slab densities: tissue (1.29 cm), bone (2.42 cm), lung (4.84 cm), bone (1.37 cm), and tissue (4.84 cm). Using the hybrid transport method, the authors computed the absorbed doses at a set of points along the central axis and x direction of the phantom for an isotropic 125 kVp photon spectral point source located along the central axis 92.7 cm above the phantom surface. The accuracy of the results was compared to those computed with MCNP, which was cross-validated with EGSnrc, and served as the benchmark for validation. Results: The error in the depth dose ranged from -1.45% to +1.39% with a mean and standard deviation of -0.12% and 0.66%, respectively. The error in the x profile ranged from -1.3% to +0.9%, with a mean and standard deviation of -0.3% and 0.5%, respectively. The number of photons required to achieve these results was 1×10^6. Conclusions: The voxel-based hybrid method evaluates the LBTE rapidly and accurately to estimate the absorbed x-ray dose at any POI or series of POIs from a kV imaging procedure.

  6. Sire evaluation for total number born in pigs using a genomic reaction norms approach.

    PubMed

    Silva, F F; Mulder, H A; Knol, E F; Lopes, M S; Guimarães, S E F; Lopes, P S; Mathur, P K; Viana, J M S; Bastiaansen, J W M

    2014-09-01

    In the era of genome-wide selection (GWS), genotype-by-environment (G×E) interactions can be studied using genomic information, thus enabling the estimation of SNP marker effects and the prediction of genomic estimated breeding values (GEBV) for young candidates for selection in different environments. Although G×E studies in pigs are scarce, the use of artificial insemination has enabled the distribution of genetic material from sires across multiple environments. Given the relevance of reproductive traits, such as the total number born (TNB) and the variation in environmental conditions encountered by commercial dams, understanding G×E interactions can be essential for choosing the best sires for different environments. The present work proposes a two-step reaction norm approach for G×E analysis using genomic information. The first step provided estimates of environmental effects (herd-year-season, HYS), and the second step provided estimates of the intercept and slope for the TNB across different HYS levels, obtained from the first step, using a random regression model. In both steps, pedigree (A) and genomic (G) relationship matrices were considered. The genetic parameters (variance components, h² and genetic correlations) were very similar when estimated using the A and G relationship matrices. The reaction norm graphs showed considerable differences in environmental sensitivity between sires, indicating a reranking of sires in terms of genetic merit across the HYS levels. Based on the G matrix analysis, SNP by environment interactions were observed. For some SNP, the effects increased at increasing HYS levels, while for others, the effects decreased at increasing HYS levels or showed no changes between HYS levels. Cross-validation analysis demonstrated better performance of the genomic approach with respect to traditional pedigrees for both the G×E and standard models. The genomic reaction norm model resulted in an accuracy of GEBV for

  7. Implications of dose-dependent target tissue absorption for linear and non-linear/threshold approaches in development of a cancer-based oral toxicity factor for hexavalent chromium.

    PubMed

    Haney, J

    2015-07-01

    Dose-dependent changes in target tissue absorption have important implications for determining the most defensible approach for developing a cancer-based oral toxicity factor for hexavalent chromium (CrVI). For example, mouse target tissue absorption per unit dose is an estimated 10-fold lower at the CrVI dose corresponding to the federal maximum contaminant level (MCL) than at the USEPA draft oral slope factor (SFo) point of departure dose. This decreasing target tissue absorption as doses decrease to lower, more environmentally-relevant doses is inconsistent with linear low-dose extrapolation. The shape of the dose-response curve accounting for this toxicokinetic phenomenon would clearly be non-linear. Furthermore, these dose-dependent differences in absorption indicate that the magnitude of risk overestimation by a linear low-dose extrapolation approach (e.g., SFo) increases and is likely to span one or perhaps more orders of magnitude as it is used to predict risk at progressively lower, more environmentally-relevant doses. An additional apparent implication is that no single SFo can reliably predict risk across potential environmental doses (e.g., doses corresponding to water concentrations ≤ the federal MCL). A non-linear approach, consistent with available mode of action data, is most scientifically defensible for derivation of an oral toxicity factor for CrVI-induced carcinogenesis.

  8. Validation of the Chinese Version of the Life Orientation Test with a Robust Weighted Least Squares Approach

    ERIC Educational Resources Information Center

    Li, Cheng-Hsien

    2012-01-01

    Of the several measures of optimism presently available in the literature, the Life Orientation Test (LOT; Scheier & Carver, 1985) has been the most widely used in empirical research. This article explores, confirms, and cross-validates the factor structure of the Chinese version of the LOT with ordinal data by using robust weighted least squares…

  9. Approaches to Human Communication.

    ERIC Educational Resources Information Center

    Budd, Richard W., Ed.; Ruben, Brent D., Ed.

    This anthology of essays approaches human communication from the points of view of: anthropology, art, biology, economics, encounter groups, semantics, general system theory, history, information theory, international behavior, journalism, linguistics, mass media, neurophysiology, nonverbal behavior, organizational behavior, philosophy, political…

  10. SOHO Sees Venus' Approach

    NASA Video Gallery

    This video taken by the Solar and Heliospheric Observatory (SOHO) shows the Sun's corona and Venus' approach for the transit. This was taken with the Extreme ultraviolet Imaging Telescope (EIT) in ...

  11. Tiny Asteroid Approaches Earth

    NASA Video Gallery

    On Oct. 15, 2010, NASA astronomer Rob Suggs captured this view of the tiny asteroid 2010 TG19 as it made its way among the stars of the constellation Pegasus. It will continue to approach during th...

  12. The case study approach

    PubMed Central

    2011-01-01

    The case study approach allows in-depth, multi-faceted explorations of complex issues in their real-life settings. The value of the case study approach is well recognised in the fields of business, law and policy, but somewhat less so in health services research. Based on our experiences of conducting several health-related case studies, we reflect on the different types of case study design, the specific research questions this approach can help answer, the data sources that tend to be used, and the particular advantages and disadvantages of employing this methodological approach. The paper concludes with key pointers to aid those designing and appraising proposals for conducting case study research, and a checklist to help readers assess the quality of case study reports. PMID:21707982

  13. Cultural Approaches to Parenting

    PubMed Central

    Bornstein, Marc H.

    2012-01-01

    SYNOPSIS This article first introduces some main ideas behind culture and parenting and next addresses philosophical rationales and methodological considerations central to cultural approaches to parenting, including a brief account of a cross-cultural study of parenting. It then focuses on universals, specifics, and distinctions between form (behavior) and function (meaning) in parenting as embedded in culture. The article concludes by pointing to social policy implications as well as future directions prompted by a cultural approach to parenting. PMID:22962544

  14. Evaluation of the predictive capacity of DNA variants associated with straight hair in Europeans.

    PubMed

    Pośpiech, Ewelina; Karłowska-Pik, Joanna; Marcińska, Magdalena; Abidi, Sarah; Andersen, Jeppe Dyrberg; van den Berge, Margreet; Carracedo, Ángel; Eduardoff, Mayra; Freire-Aradas, Ana; Morling, Niels; Sijen, Titia; Skowron, Małgorzata; Söchtig, Jens; Syndercombe-Court, Denise; Weiler, Natalie; Schneider, Peter M; Ballard, David; Børsting, Claus; Parson, Walther; Phillips, Chris; Branicki, Wojciech

    2015-11-01

    DNA-based prediction of hair morphology, defined as straight, curly or wavy hair, could contribute to an improved description of an unknown offender and allow more accurate forensic reconstructions of physical appearance in the field of forensic DNA phenotyping. Differences in scalp hair morphology are significant at the worldwide scale and within Europe. The only genome-wide association study made to date revealed the Trichohyalin gene (TCHH) to be significantly associated with hair morphology in Europeans and reported weaker associations for WNT10A and FRAS1 genes. We conducted a study that centered on six SNPs located in these three genes with a sample of 528 individuals from Poland. The predictive capacity of the candidate DNA variants was evaluated using logistic regression; classification and regression trees; and neural networks, by applying a 10-fold cross validation procedure. Additionally, an independent test set of 142 males from six European populations was used to verify performance of the developed prediction models. Our study confirmed association of rs11803731 (TCHH), rs7349332 (WNT10A) and rs1268789 (FRAS1) SNPs with hair morphology. The combined genotype risk score for straight hair had an odds ratio of 2.7 and these predictors explained ∼ 8.2% of the total variance. The selected three SNPs were found to predict straight hair with a high sensitivity but low specificity when a 10-fold cross validation procedure was applied and the best results were obtained using the neural networks approach (AUC=0.688, sensitivity=91.2%, specificity=23.0%). Application of the neural networks model with 65% probability threshold on an additional test set gave high sensitivity (81.4%) and improved specificity (50.0%) with a total of 78.7% correct calls, but a high non-classification rate (66.9%). The combined TTGGGG SNP genotype for rs11803731, rs7349332, rs1268789 (European frequency=4.5%) of all six straight hair-associated alleles was identified as the best
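
    A rough sketch of the cross-validated evaluation described in this record: out-of-fold probabilities from a small neural network give an AUC, and a 65% probability threshold leaves low-confidence samples unclassified. The SNP genotypes and phenotype are simulated, not the published cohort:

      # 10-fold out-of-fold probabilities for a small MLP on three SNP
      # genotypes (0/1/2 coding), then AUC plus a 65% decision threshold.
      import numpy as np
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import StratifiedKFold, cross_val_predict
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(7)
      X = rng.integers(0, 3, size=(528, 3)).astype(float)     # 3 SNPs (toy)
      y = (X.sum(axis=1) + rng.normal(0, 1.5, size=528) > 3).astype(int)

      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
      proba = cross_val_predict(MLPClassifier(max_iter=2000, random_state=7),
                                X, y, cv=cv, method="predict_proba")[:, 1]
      print("AUC:", roc_auc_score(y, proba))

      decided = (proba >= 0.65) | (proba <= 0.35)  # call confident samples only
      print("Non-classification rate:", 1 - decided.mean())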

  15. Machine learning framework for early MRI-based Alzheimer's conversion prediction in MCI subjects.

    PubMed

    Moradi, Elaheh; Pepe, Antonietta; Gaser, Christian; Huttunen, Heikki; Tohka, Jussi

    2015-01-01

    Mild cognitive impairment (MCI) is a transitional stage between age-related cognitive decline and Alzheimer's disease (AD). For the effective treatment of AD, it would be important to identify MCI patients at high risk of conversion to AD. In this study, we present a novel magnetic resonance imaging (MRI)-based method for predicting MCI-to-AD conversion from one to three years before the clinical diagnosis. First, we developed a novel MRI biomarker of MCI-to-AD conversion using semi-supervised learning and then integrated it with age and cognitive measures of the subjects using a supervised learning algorithm, resulting in what we call the aggregate biomarker. The novel characteristics of the methods for learning the biomarkers are as follows: 1) we used a semi-supervised learning method (low density separation) for the construction of the MRI biomarker, as opposed to more typical supervised methods; 2) we performed feature selection on MRI data from AD subjects and normal controls, without using data from MCI subjects, via regularized logistic regression; 3) we removed aging effects from the MRI data before classifier training to prevent possible confounding between AD- and age-related atrophies; and 4) we constructed the aggregate biomarker by first learning a separate MRI biomarker and then combining it with age and cognitive measures of the MCI subjects at baseline by applying a random forest classifier. We experimentally demonstrated the added value of these novel characteristics in predicting MCI-to-AD conversion on data obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. With the ADNI data, the MRI biomarker achieved a 10-fold cross-validated area under the receiver operating characteristic curve (AUC) of 0.7661 in discriminating progressive MCI patients (pMCI) from stable MCI patients (sMCI). Our aggregate biomarker based on MRI data together with baseline cognitive measurements and age achieved a 10-fold cross-validated
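
    A hedged sketch of the "aggregate biomarker" idea described above: an MRI-derived score is combined with age and baseline cognitive measures, and a random forest is scored by 10-fold cross-validated AUC. All data are synthetic; the feature names are illustrative stand-ins, not the ADNI variables.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(1)
    n = 200
    mri_score = rng.normal(size=n)        # stand-in for the semi-supervised MRI biomarker
    age = rng.normal(75, 6, size=n)
    cognitive = rng.normal(size=(n, 3))   # e.g. baseline cognitive test scores
    X = np.column_stack([mri_score, age, cognitive])
    y = rng.integers(0, 2, size=n)        # 1 = progressive MCI (pMCI)

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
    auc = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=1),
                          X, y, cv=cv, scoring="roc_auc")
    print(f"10-fold cross-validated AUC: {auc.mean():.3f}")
    ```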

  18. Development of 3D-QSAR Model for Acetylcholinesterase Inhibitors Using a Combination of Fingerprint, Molecular Docking, and Structure-Based Pharmacophore Approaches.

    PubMed

    Lee, Sehan; Barron, Mace G

    2015-11-01

    Acetylcholinesterase (AChE), a serine hydrolase vital for regulating the neurotransmitter acetylcholine in animals, has been used as a target for drugs and pesticides. With the increasing availability of AChE crystal structures, with or without ligands bound, structure-based approaches have been successfully applied to AChE inhibitors (AChEIs). The major limitation of these approaches has been the small applicability domain due to the lack of structural diversity in the training set. In this study, we developed a 3-dimensional quantitative structure-activity relationship (3D-QSAR) model for the inhibitory activity of 89 reversible and irreversible AChEIs, including drugs and insecticides. A 3D-fingerprint descriptor encoding protein-ligand interactions was developed using molecular docking and a structure-based pharmacophore to rationalize the structural requirements responsible for the activity of these compounds. The obtained 3D-QSAR model exhibited a high correlation value (R² = 0.93) and low mean absolute error (MAE = 0.32 log units) for the training set (n = 63). The model was predictive across a range of structures, as shown by the leave-one-out cross-validated correlation coefficient (Q² = 0.89) and external validation results (n = 26, R² = 0.89, and MAE = 0.38 log units). The model revealed that compounds with high inhibition potency had proper conformation in the active site gorge and interacted with key amino acid residues, in particular Trp84 and Phe330 at the catalytic anionic site, Trp279 at the peripheral anionic site, and Gly118, Gly119, and Ala201 at the oxyanion hole. The resulting universal 3D-QSAR model provides insight into the multiple molecular interactions determining AChEI potency that may guide future chemical design and regulation of toxic AChEIs. PMID:26202430
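
    A minimal sketch of the leave-one-out statistic quoted above: Q² is computed from LOO-predicted activities as 1 minus PRESS over the total sum of squares. The regression model (ridge) and the descriptor matrix are placeholders, not the authors' 3D-fingerprint model.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(2)
    X = rng.normal(size=(63, 10))   # 63 training compounds x 10 descriptors
    y = X @ rng.normal(size=10) + rng.normal(scale=0.3, size=63)  # pIC50-like values

    # Leave-one-out predictions: each compound is predicted by a model
    # trained on the other 62.
    y_loo = cross_val_predict(Ridge(alpha=1.0), X, y, cv=LeaveOneOut())
    press = np.sum((y - y_loo) ** 2)
    q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
    mae = np.mean(np.abs(y - y_loo))
    print(f"Q^2 = {q2:.2f}, LOO MAE = {mae:.2f} log units")
    ```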

  19. A fast SCOP fold classification system using content-based E-Predict algorithm

    PubMed Central

    Chi, Pin-Hao; Shyu, Chi-Ren; Xu, Dong

    2006-01-01

    Background Domain experts manually construct the Structural Classification of Protein (SCOP) database to categorize and compare protein structures. Even though using the SCOP database is believed to be more reliable than classification results from other methods, it is labor intensive. To mimic human classification processes, we develop an automatic SCOP fold classification system to assign possible known SCOP folds and recognize novel folds for newly-discovered proteins. Results With a sufficient amount of ground truth data, our system is able to assign the known folds for newly-discovered proteins in the latest SCOP v1.69 release with 92.17% accuracy. Our system also recognizes the novel folds with 89.27% accuracy using 10 fold cross validation. The average response time for proteins with 500 and 1409 amino acids to complete the classification process is 4.1 and 17.4 seconds, respectively. By comparison with several structural alignment algorithms, our approach outperforms previous methods on both the classification accuracy and efficiency. Conclusion In this paper, we build an advanced, non-parametric classifier to accelerate the manual classification processes of SCOP. With satisfactory ground truth data from the SCOP database, our approach identifies relevant domain knowledge and yields reasonably accurate classifications. Our system is publicly accessible at . PMID:16872501
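
    A sketch of the evaluation protocol described above (10-fold cross-validated fold-assignment accuracy), using a plain nearest-neighbour classifier as a stand-in for the content-based E-Predict algorithm. Features and fold labels are synthetic.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(1000, 32))      # content-based structure features
    y = rng.integers(0, 25, size=1000)   # SCOP fold labels

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=3)
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=cv)
    print(f"10-fold cross-validated accuracy: {100 * acc.mean():.2f}%")
    ```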

  20. Ontology driven decision support for the diagnosis of mild cognitive impairment.

    PubMed

    Zhang, Xiaowei; Hu, Bin; Ma, Xu; Moore, Philip; Chen, Jing

    2014-03-01

    In recent years, mild cognitive impairment (MCI) has attracted significant attention as an indicator of high risk for Alzheimer's disease (AD), and the diagnosis of MCI can alert patients to adopt appropriate strategies to prevent AD. To avoid subjectivity in diagnosis, we propose an ontology-driven decision support method, an automated procedure for diagnosing MCI through magnetic resonance imaging (MRI). In this approach, we encode specialized MRI knowledge into an ontology and construct a rule set using machine learning algorithms. We then apply these two parts in conjunction with a reasoning engine to automatically distinguish MCI patients from normal controls (NC). The rule set is trained on MRI data from 187 MCI patients and 177 normal controls selected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) using the C4.5 algorithm. Using 10-fold cross-validation, we show that C4.5, with 80.2% sensitivity, performs better than other algorithms, such as support vector machines (SVM), Bayesian networks (BN) and back-propagation (BP) neural networks, and that C4.5 is suitable for the construction of reasoning rules. Meanwhile, the evaluation results suggest that our approach would be useful in assisting physicians to efficiently diagnose MCI in real clinical practice.
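
    A hedged sketch of the classifier comparison described above. scikit-learn has no C4.5 implementation, so a CART decision tree stands in for it and is compared with an SVM and a back-propagation network on 10-fold cross-validated sensitivity (recall). The MRI features are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(4)
    X = rng.normal(size=(364, 20))    # 187 MCI + 177 NC subjects, 20 MRI features
    y = np.r_[np.ones(187, int), np.zeros(177, int)]   # 1 = MCI

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=4)
    for name, clf in [("decision tree", DecisionTreeClassifier(random_state=4)),
                      ("SVM", SVC()),
                      ("BP network", MLPClassifier(max_iter=2000, random_state=4))]:
        sens = cross_val_score(clf, X, y, cv=cv, scoring="recall")
        print(f"{name}: sensitivity = {100 * sens.mean():.1f}%")
    ```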

  1. [Prediction of lipase types by different scale pseudo-amino acid composition].

    PubMed

    Zhang, Guangya; Li, Hongchun; Gao, Jiaqiang; Fang, Baishan

    2008-11-01

    Lipases are widely used enzymes in biotechnology. Although they catalyze the same reaction, their sequences vary. It is therefore highly desirable to develop a fast and reliable method to identify the types of lipases according to their sequences, or even just to confirm whether they are lipases at all. By proposing two scale-based pseudo-amino acid composition approaches to extract the features of the sequences, a powerful predictor based on k-nearest neighbors was introduced to address the problem. The overall success rates obtained by the 10-fold cross-validation test were as follows: for predicting lipases and non-lipases, the success rates were 92.8%, 91.4% and 91.3%, respectively; for lipase types, the success rates were 92.3%, 90.3% and 89.7%, respectively. Among them, the Z-scales-based pseudo-amino acid composition performed best, with the T-scales version second. Both significantly outperformed six other frequently used sequence feature extraction methods. The high success rates achieved on such a stringent dataset indicate that predicting the types of lipases is feasible and that different-scale pseudo-amino acid compositions may be a useful tool for extracting the features of protein sequences, or at least can play a complementary role to many of the other existing approaches. PMID:19256347
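
    An illustrative sketch of the scheme described above: a simplified scale-based pseudo-amino acid composition (residue frequencies plus sequence-order correlation terms computed from one physicochemical scale), classified with k-nearest neighbors. The scale values here are random placeholders, not the published Z- or T-scales.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    AMINO = "ACDEFGHIKLMNPQRSTVWY"
    rng = np.random.default_rng(5)
    SCALE = dict(zip(AMINO, rng.normal(size=20)))   # placeholder scale values

    def pseudo_aac(seq, lam=5, w=0.05):
        """20 composition features + lam sequence-order correlation features."""
        freqs = np.array([seq.count(a) for a in AMINO], float) / len(seq)
        vals = np.array([SCALE[a] for a in seq])
        corr = np.array([np.mean((vals[:-k] - vals[k:]) ** 2)
                         for k in range(1, lam + 1)])
        feats = np.r_[freqs, w * corr]
        return feats / feats.sum()

    seqs = ["".join(rng.choice(list(AMINO), size=120)) for _ in range(100)]
    labels = rng.integers(0, 2, size=100)           # e.g. lipase vs non-lipase
    X = np.array([pseudo_aac(s) for s in seqs])
    clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
    print("predicted:", clf.predict(X[:5]))
    ```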

  4. Using nanoinformatics methods for automatically identifying relevant nanotoxicology entities from the literature.

    PubMed

    García-Remesal, Miguel; García-Ruiz, Alejandro; Pérez-Rey, David; de la Iglesia, Diana; Maojo, Víctor

    2013-01-01

    Nanoinformatics is an emerging research field that uses informatics techniques to collect, process, store, and retrieve data, information, and knowledge on nanoparticles, nanomaterials, and nanodevices and their potential applications in health care. In this paper, we have focused on the solutions that nanoinformatics can provide to facilitate nanotoxicology research. For this, we have taken a computational approach to automatically recognize and extract nanotoxicology-related entities from the scientific literature. The desired entities belong to four different categories: nanoparticles, routes of exposure, toxic effects, and targets. The entity recognizer was trained using a corpus that we specifically created for this purpose and was validated by two nanomedicine/nanotoxicology experts. We evaluated the performance of our entity recognizer using 10-fold cross-validation. The precisions range from 87.6% (targets) to 93.0% (routes of exposure), while recall values range from 82.6% (routes of exposure) to 87.4% (toxic effects). These results prove the feasibility of using computational approaches to reliably perform different named entity recognition (NER)-dependent tasks, such as augmented reading or semantic searches. This research is a "proof of concept" that can be expanded to stimulate further developments that could assist researchers in managing data, information, and knowledge at the nanolevel, thus accelerating research in nanomedicine. PMID:23509721
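
    A sketch of the per-category precision/recall evaluation described above, using 10-fold cross-validated predictions from a simple token classifier. Real NER systems use richer contextual features; the token feature vectors and gold labels here are synthetic.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_predict
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(6)
    CATEGORIES = ["O", "nanoparticle", "route", "effect", "target"]
    X = rng.normal(size=(2000, 30))                  # token feature vectors
    y = rng.integers(0, len(CATEGORIES), size=2000)  # gold entity labels

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=6)
    pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=cv)
    # Precision and recall reported separately for each entity category.
    print(classification_report(y, pred, target_names=CATEGORIES))
    ```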

  5. Integrated cortical structural marker for Alzheimer's disease.

    PubMed

    Ming, Jing; Harms, Michael P; Morris, John C; Beg, M Faisal; Wang, Lei

    2015-01-01

    In this article, we propose an approach to integrating cortical morphology measures to improve the discrimination of individuals with and without very mild Alzheimer's disease (AD). FreeSurfer was applied to scans collected from 83 participants with very mild AD and 124 cognitively normal individuals. We generated cortical thickness, white matter convexity (aka "sulcal depth"), and white matter surface metric distortion measures on a normalized surface atlas in this first study to integrate high-resolution gray matter thickness and white matter surface geometric measures in identifying very mild AD. Principal component analysis was applied to each individual structural measure to generate eigenvectors. Discrimination power based on individual and combined measures was compared using stepwise logistic regression and 10-fold cross-validation. A global AD likelihood index and surface-based likelihood maps were also generated. Our results show complementary patterns on the cortical surface between thickness, which reflects gray matter atrophy; convexity, which reflects white matter sulcal depth changes; and metric distortion, which reflects white matter surface area changes. The classifier integrating all 3 types of surface measures significantly improved classification performance compared with classification based on single measures. The principal component analysis-based approach provides a framework for achieving high discrimination power by integrating high-dimensional data, and this method could be very powerful in future studies for early diagnosis of diseases that are known to be associated with abnormal gyral and sulcal patterns. PMID:25444604
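
    A hedged sketch of the integration strategy described above: PCA reduces each high-dimensional surface measure to a few eigenvector scores, the scores are concatenated, and a logistic classifier is assessed with 10-fold cross-validation. The stepwise selection step is omitted for brevity, and all data are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(7)
    n = 207                                    # 83 very mild AD + 124 controls
    measures = {m: rng.normal(size=(n, 500))   # vertex-wise surface data
                for m in ["thickness", "convexity", "metric_distortion"]}
    y = np.r_[np.ones(83, int), np.zeros(124, int)]

    # PCA is fitted on all data here for brevity; a stricter protocol would
    # refit it inside each cross-validation fold.
    scores = [PCA(n_components=10, random_state=7).fit_transform(M)
              for M in measures.values()]
    X = np.hstack(scores)                      # combined eigenvector scores

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
    auc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv,
                          scoring="roc_auc")
    print(f"combined-measure AUC: {auc.mean():.3f}")
    ```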

  7. Combining Land-Use Regression and Chemical Transport Modeling in a Spatiotemporal Geostatistical Model for Ozone and PM2.5.

    PubMed

    Wang, Meng; Sampson, Paul D; Hu, Jianlin; Kleeman, Michael; Keller, Joshua P; Olives, Casey; Szpiro, Adam A; Vedal, Sverre; Kaufman, Joel D

    2016-05-17

    Assessments of long-term air pollution exposure in population studies have commonly employed land-use regression (LUR) or chemical transport modeling (CTM) techniques. Attempts to incorporate both approaches in one modeling framework are challenging. We present a novel geostatistical modeling framework, incorporating CTM predictions into a spatiotemporal LUR model with spatial smoothing, to estimate the spatiotemporal variability of ozone (O3) and particulate matter with diameter less than 2.5 μm (PM2.5) from 2000 to 2008 in the Los Angeles Basin. The observations include over 9 years' data from more than 20 routine monitoring sites and specific monitoring data at over 100 locations to provide more comprehensive spatial coverage of air pollutants. Our composite modeling approach outperforms separate CTM and LUR models in terms of root-mean-square error (RMSE) assessed by 10-fold cross-validation in both temporal and spatial dimensions, with larger improvement in the accuracy of predictions for O3 (RMSE [ppb] for CTM, 6.6; LUR, 4.6; composite, 3.6) than for PM2.5 (RMSE [μg/m³] CTM: 13.7, LUR: 3.2, composite: 3.1). Our study highlights the opportunity for future exposure assessment to make use of readily available spatiotemporal modeling methods and auxiliary gridded data that take chemical reaction processes into account to improve the accuracy of predictions in a single spatiotemporal modeling framework. PMID:27074524
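
    A minimal sketch of the 10-fold RMSE comparison described above: a LUR-style regression using geographic covariates alone versus the same model augmented with a CTM prediction as an extra covariate. All variables are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold, cross_val_score

    rng = np.random.default_rng(8)
    n = 300
    land_use = rng.normal(size=(n, 5))   # LUR covariates (roads, elevation, ...)
    ctm = rng.normal(size=(n, 1))        # chemical transport model prediction
    y = land_use[:, 0] + 0.8 * ctm[:, 0] + rng.normal(scale=0.5, size=n)

    cv = KFold(n_splits=10, shuffle=True, random_state=8)
    for name, X in [("LUR only", land_use),
                    ("LUR + CTM", np.hstack([land_use, ctm]))]:
        rmse = -cross_val_score(LinearRegression(), X, y, cv=cv,
                                scoring="neg_root_mean_squared_error")
        print(f"{name}: RMSE = {rmse.mean():.2f}")
    ```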

  8. Segmentation of skin strata in reflectance confocal microscopy depth stacks

    NASA Astrophysics Data System (ADS)

    Hames, Samuel C.; Ardigò, Marco; Soyer, H. Peter; Bradley, Andrew P.; Prow, Tarl W.

    2015-03-01

    Reflectance confocal microscopy is an emerging tool for imaging human skin, but it currently requires expert human assessment. Overcoming the need for human experts requires automated tools for assessing reflectance confocal microscopy imagery. This work presents a novel approach to this task, using a bag-of-visual-words approach to represent and classify en-face optical sections from four distinct strata of the skin. A dictionary of representative features is learned from whitened and normalized patches using hierarchical spherical k-means. Each image is then represented by extracting a dense array of patches and encoding each with the most similar element in the dictionary. Linear discriminant analysis is used as a simple linear classifier. The proposed framework was tested on 308 depth stacks from 54 volunteers. Parameters were tuned using 10-fold cross-validation on a training subset of the data, and final evaluation was performed on a held-out test set. The proposed method generated physically plausible profiles of the distinct strata of human skin and correctly classified 81.4% of sections in the test set.
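
    A sketch of the bag-of-visual-words pipeline described above: learn a patch dictionary with k-means (plain k-means here, instead of the hierarchical spherical k-means the text names), encode each image as a histogram of nearest dictionary elements, then classify with linear discriminant analysis. Patches and labels are synthetic.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(9)
    n_images, patches_per_image, dict_size = 60, 100, 32
    patches = rng.normal(size=(n_images, patches_per_image, 25))  # whitened 5x5 patches
    labels = rng.integers(0, 4, size=n_images)                    # four skin strata

    kmeans = KMeans(n_clusters=dict_size, n_init=10, random_state=9)
    kmeans.fit(patches.reshape(-1, 25))                           # learn dictionary

    # Encode: histogram of the nearest codeword per patch, one per image.
    codes = kmeans.predict(patches.reshape(-1, 25)).reshape(n_images, -1)
    X = np.array([np.bincount(c, minlength=dict_size) for c in codes], float)
    X /= X.sum(axis=1, keepdims=True)

    clf = LinearDiscriminantAnalysis().fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```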

  9. A confirmatory factor analytic approach on perceptions of knowledge and skills in teaching (PKST).

    PubMed

    Choy, Doris; Lim, Kam Ming; Chong, Sylvia; Wong, Angela F L

    2012-04-01

    This paper reports the cross-validation of the factor pattern of the Perceptions of Knowledge and Skills in Teaching (PKST) survey, which was used to assess the self-perceived pedagogical knowledge and skills of pre-service and beginning teachers. The sample comprised 323 pre-service teachers enrolled in a 1-yr. post-graduate teacher education program in Singapore. The survey had 37 items distributed across six scales: student learning, lesson planning, instructional support, accommodating diversity, classroom management, and care and concern. A confirmatory factor analysis (CFA) was used to cross-validate the survey's factor pattern. The results showed that the model was an acceptable fit to the data. The PKST survey can thus be adapted by different teacher education programs to assess pre-service and beginning teachers' progress in developing their pedagogical knowledge and skills. PMID:22662412

  10. Unconventional approaches to fusion

    SciTech Connect

    Brunelli, B.; Leotta, G.G.

    1982-01-01

    This volume is dedicated to unconventional approaches to fusion: those thermonuclear reactors that, in comparison with the Tokamak and other main lines, have received little attention in the worldwide scientific community. Many of the approaches considered are still in the embryonic stages. The authors, an international group of active nuclear scientists and engineers, focus on the parameters achieved in the use of these reactors and on the meaning of the most recent physical studies and their implications for the future. They also compare these approaches with conventional ones, the Tokamak in particular, stressing the non-plasma-physics requirements of fusion reactors. Unconventional compact toroids, linear systems, and multipoles are considered, as are the "almost conventional" fusion machines: stellarators, mirrors, reversed-field pinches, and EBT.

  11. Personal Approaches to Career Planning.

    ERIC Educational Resources Information Center

    DeMont, Billie; DeMont, Roger

    1983-01-01

    Identifies four approaches to career planning based on situational leadership theory: the network approach, self-help approach, engineering approach, and mentor approach. Guidelines for the selection of a planning method based on the nature of the work environment and personal preference are discussed. (JAC)

  12. Dynamic Approaches to Language Processing

    ERIC Educational Resources Information Center

    Srinivasan, Narayanan

    2007-01-01

    Symbolic rule-based approaches have been a preferred way to study language and cognition. Dissatisfaction with rule-based approaches in the 1980s led to alternative approaches to study language, the most notable being the dynamic approaches to language processing. Dynamic approaches provide a significant alternative by not being rule-based and…

  13. Technical approach document

    SciTech Connect

    Not Available

    1989-12-01

    The Uranium Mill Tailings Radiation Control Act (UMTRCA) of 1978, Public Law 95-604 (PL95-604), grants the Secretary of Energy the authority and responsibility to perform such actions as are necessary to minimize radiation health hazards and other environmental hazards caused by inactive uranium mill sites. This Technical Approach Document (TAD) describes the general technical approaches and design criteria adopted by the US Department of Energy (DOE) in order to implement remedial action plans (RAPs) and final designs that comply with EPA standards. It does not address the technical approaches necessary for aquifer restoration at processing sites; a guidance document, currently in preparation, will describe aquifer restoration concerns and technical protocols. This document is a second revision of the original document issued in May 1986; the revision has been made in response to changes to the groundwater standards of 40 CFR 192, Subparts A-C, proposed by EPA as draft standards. New sections were added to define the design approaches and designs necessary to comply with the groundwater standards. These new sections are in addition to changes made throughout the document to reflect current procedures, especially in cover design, water resources protection, and alternate site selection; only minor revisions were made to some of the sections. Section 3.0 is a new section defining the approach taken in the design of disposal cells; Section 4.0 has been revised to include the design of vegetated covers; Section 8.0 discusses design approaches necessary for compliance with the groundwater standards; and Section 9.0 is a new section dealing with nonradiological hazardous constituents. 203 refs., 18 figs., 26 tabs.

  14. Financial Management: An Organic Approach

    ERIC Educational Resources Information Center

    Laux, Judy

    2013-01-01

    Although textbooks present corporate finance using a topical approach, good financial management requires an organic approach that integrates the various assignments financial managers confront every day. Breaking the tasks into meaningful subcategories, the current article offers one approach.

  15. Genotoxic mode of action predictions from a multiplexed flow cytometric assay and a machine learning approach.

    PubMed

    Bryce, Steven M; Bernacki, Derek T; Bemis, Jeffrey C; Dertinger, Stephen D

    2016-04-01

    Several endpoints associated with cellular responses to DNA damage as well as overt cytotoxicity were multiplexed into a miniaturized, "add and read" type flow cytometric assay. Reagents included a detergent to liberate nuclei, RNase and propidium iodide to serve as a pan-DNA dye, fluorescent antibodies against γH2AX, phospho-histone H3, and p53, and fluorescent microspheres for absolute nuclei counts. The assay was applied to TK6 cells and 67 diverse reference chemicals that served as a training set. Exposure was for 24 hrs in 96-well plates, and unless precipitation or foreknowledge about cytotoxicity suggested otherwise, the highest concentration was 1 mM. At 4 and 24 hrs, aliquots were removed and added to microtiter plates containing the reagent mix. Following a brief incubation period, robotic sampling facilitated walk-away data acquisition. Univariate analyses identified biomarkers and time points that were valuable for classifying agents into one of three groups: clastogenic, aneugenic, or non-genotoxic. These mode-of-action predictions were optimized with a forward-stepping process that considered Wald test p-values, receiver operating characteristic curves, and pseudo-R² values, among others. A particularly high-performing multinomial logistic regression model comprised four factors: 4 hr γH2AX and phospho-histone H3 values, and 24 hr p53 and polyploidy values. For the training set chemicals, the four-factor model resulted in 94% concordance with our a priori classifications. Cross-validation occurred via a leave-one-out approach, and in this case 91% concordance was observed. A test set of 17 chemicals that were not used to construct the model was evaluated, some using a short-term treatment in the presence of a metabolic activation system, and in 16 cases the mode of action was correctly predicted. These initial results are encouraging as they suggest a machine learning strategy can be used to rapidly and reliably predict new chemicals
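
    A hedged sketch of the four-factor multinomial model and leave-one-out check described above. The factor names follow the text; all values are synthetic.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(10)
    n = 67                                       # training-set chemicals
    X = np.column_stack([rng.normal(size=n),     # 4 hr gamma-H2AX
                         rng.normal(size=n),     # 4 hr phospho-histone H3
                         rng.normal(size=n),     # 24 hr p53
                         rng.normal(size=n)])    # 24 hr polyploidy
    y = rng.integers(0, 3, size=n)   # 0 clastogen, 1 aneugen, 2 non-genotoxic

    clf = LogisticRegression(max_iter=1000)  # multinomial for 3 classes
    pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())
    print(f"leave-one-out concordance: {100 * np.mean(pred == y):.0f}%")
    ```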

  16. Mapping Natural Terroir Units using a multivariate approach and legacy data

    NASA Astrophysics Data System (ADS)

    Priori, Simone; Barbetti, Roberto; L'Abate, Giovanni; Bucelli, Piero; Storchi, Paolo; Costantini, Edoardo A. C.

    2014-05-01

    A Natural Terroir Unit (NTU) is a volume of the earth's biosphere that is characterized by a stable set of variables related to topography, climate, geology and soil. Methods to study the soil-climate-vine association are numerous, but the main question is always: which variables are actually important for the quality and typicality of grapevines, and hence wine, at a particular scale? This work aimed to set up a multivariate methodology to define viticultural terroirs at the province scale (1:125,000), using viticultural and oenological legacy data. The study area was the Siena province in the Tuscany region (Central Italy). The reference grapevine cultivar was "Sangiovese", the most important cultivar of the region. The methodology was based on the creation of a GIS storing viticultural and oenological legacy data from 55 experimental vineyards (vintages between 1989 and 2009), long-term climate data, a digital elevation model, the soil-landscapes (land systems) and soil profiles with soil analyses. The selected viticultural and oenological parameters were: must sugar content, sugar accumulation rate from veraison to harvest, must titratable acidity, grape yield per vine, number of bunches per vine, mean bunch weight, and mean berry weight. The environmental parameters related to viticulture, selected by an exploratory PCA, were: elevation, mean annual temperature, mean soil temperature, annual precipitation, clay, sand and gravel content of soils, soil water availability, redoximorphic features and rooting depth. The geostatistical models for interpolating the variables were chosen according to the best mean standardized error, obtained by cross-validation, among "simple cokriging with varying local mean", "multicollocated simple cokriging with varying local mean" and "regression kriging". These variables were used for a k-means clustering aimed at mapping the Natural Terroir Units (NTUs). The viticultural areas of Siena province
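
    A hedged sketch of the final clustering step described above: standardized environmental variables (after interpolation) are grouped with k-means to delineate Natural Terroir Units. The variable values are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(14)
    # One row per map cell: elevation, mean annual temperature, precipitation,
    # clay %, sand %, gravel %, soil water availability, rooting depth, ...
    cells = rng.normal(size=(5000, 9))

    X = StandardScaler().fit_transform(cells)
    ntu = KMeans(n_clusters=8, n_init=10, random_state=14).fit_predict(X)
    print("cells per NTU:", np.bincount(ntu))
    ```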

  17. Realistic Approach to Innovation.

    ERIC Educational Resources Information Center

    Dawson, Garth C.

    Part of the Omaha police in-service training program was devoted to innovative approaches to solving police department problems and improving community relations. The sessions were an attempt to use the brainstorming technique to elicit new solutions to everyday problems faced by the rank-and-file members of the police department. The report…

  18. External approach to rhinoplasty.

    PubMed

    Goodman, Wilfred S; Charbonneau, Paul A

    2015-07-01

    The technique of external rhinoplasty is outlined. Based on a review of 74 cases, its advantages and disadvantages are discussed. Reluctance to use this external approach seems to be based on emotional rather than rational grounds, for it seems to be the procedure of choice for many problems.

  19. Orion Emergency Mask Approach

    NASA Technical Reports Server (NTRS)

    Tuan, George C.; Graf, John C.

    2009-01-01

    Emergency mask approach on Orion poses a challenge to the traditional Shuttle or Station approaches. Currently, in the case of a fire or toxic spill event, the crew utilizes open-loop oxygen masks that provide the crew with oxygen to breathe, but also dump the exhaled oxygen into the cabin. For Orion, with a small cabin volume, the extra oxygen will exceed the flammability limit within a short period of time unless a nitrogen purge is also provided. Another approach to a fire or toxic spill event is the use of filtering emergency masks. These masks utilize some form of chemical bed to scrub the air clean of toxins, providing the crew safe breathing air for a period without elevating the oxygen level in the cabin. Using the masks and a form of smoke-eater filter, it may be possible to clean the cabin completely or to a level allowing safe transition to a space suit to perform a cabin purge. Issues with filters in the past have been the reaction time, breakthroughs, and high breathing resistance. Development of a new form of chemical filter has shown promise to make the filtering approach feasible.

  20. Orion Emergency Mask Approach

    NASA Technical Reports Server (NTRS)

    Tuan, George C.; Graf, John C.

    2008-01-01

    Emergency mask approach on Orion poses a challenge to the traditional Shuttle or Station approaches. Currently, in the case of a fire or toxic spill event, the crew utilizes open-loop oxygen masks that provide the crew with oxygen to breathe, but also dump the exhaled oxygen into the cabin. For Orion, with a small cabin volume, the extra oxygen will exceed the flammability limit within a short period of time unless a nitrogen purge is also provided. Another approach to a fire or toxic spill event is the use of filtering emergency masks. These masks utilize some form of chemical bed to scrub the air clean of toxins, providing the crew safe breathing air for a period without elevating the oxygen level in the cabin. Using the masks and a form of smoke-eater filter, it may be possible to clean the cabin completely or to a level allowing safe transition to a space suit to perform a cabin purge. Issues with filters in the past have been the reaction temperature and high breathing resistance. Development of a new form of chemical filter has shown promise to make the filtering approach feasible.

  1. New Ideas and Approaches

    ERIC Educational Resources Information Center

    Lukov, V. A.

    2014-01-01

    The article examines theories of youth that have been proposed in the past few years by Russian scientists, and presents the author's original version of a theory of youth that is based on the thesaurus methodological approach. It addresses the ways in which biosocial characteristics may be reflected in new theories of youth.

  2. Adopting a Pluricentric Approach

    ERIC Educational Resources Information Center

    van Kerckvoorde, Colette

    2012-01-01

    This article argues for a "D-A-CH" approach, which stands for Germany (D), Austria (A), and Switzerland (CH), in language classes from the introductory level on. I begin by tracing the emergence and development of distinct Standard Swiss and Austrian German varieties. I then discuss marketing efforts for Swiss and Austrian German, and…

  3. A Fresh Approach

    ERIC Educational Resources Information Center

    Violino, Bob

    2011-01-01

    Facilities and services are a huge drain on community college budgets. They are also vital to the student experience. As funding dries up across the country, many institutions are taking a team approach, working with partner colleges and private service providers to offset costs and generate revenue without sacrificing the services and amenities…

  4. Approaches to acceptable risk

    SciTech Connect

    Whipple, C.

    1997-04-30

    Several alternative approaches to address the question "How safe is safe enough?" are reviewed and an attempt is made to apply the reasoning behind these approaches to the issue of acceptability of radiation exposures received in space. The approaches to the issue of the acceptability of technological risk described here are primarily analytical, and are drawn from examples in the management of environmental health risks. These include risk-based approaches, in which specific quantitative risk targets determine the acceptability of an activity, and cost-benefit and decision analysis, which generally focus on the estimation and evaluation of risks, benefits and costs, in a framework that balances these factors against each other. These analytical methods tend by their quantitative nature to emphasize the magnitude of risks, costs and alternatives, and to downplay other factors, especially those that are not easily expressed in quantitative terms, that affect acceptance or rejection of risk. Such other factors include the issues of risk perceptions and how and by whom risk decisions are made.

  5. SYSTEMS APPROACH TO LEARNING.

    ERIC Educational Resources Information Center

    WIENS, JACOB H.

    TO PERMIT COMPARATIVE ANALYSIS FOR PURPOSES OF EDUCATIONAL PLANNING AT SAN MATEO, FIVE INSTITUTIONS WITH SYSTEMS PROGRAMS ARE EVALUATED ON THE BASIS OF TRIP NOTES. OAKLAND COMMUNITY COLLEGE HAS BEEN COMPLETELY ORGANIZED AROUND THE VOLUNTARY WORK-STUDY LABORATORY APPROACH TO LEARNING. ORAL ROBERTS UNIVERSITY, OKLAHOMA CHRISTIAN COLLEGE, HENRY FORD…

  6. USEPA WATERSHED APPROACH

    EPA Science Inventory

    The U.S. Environmental Protection Agency's Office of Research and Development has developed a well defined research plan to evaluate pollutants within watersheds. This plan is defined by long term goals and annual performance measures. The first goal is to provide the approache...

  7. Implementation of Communicative Approach

    ERIC Educational Resources Information Center

    Jabeen, Shazi Shah

    2014-01-01

    In the contemporary age of high professional requirements such as excellent communicative skills, the need for successful learning of communicative skills of English language suggests communicative ability to be the goal of language teaching. In other words, to teach English language using communicative approach becomes essential. Studies to…

  8. Seasonal precipitation forecasts for selected regions in West Africa using circulation type classifications in combination with further statistical approaches - Conceptual framework and first results

    NASA Astrophysics Data System (ADS)

    Bliefernicht, Jan; Laux, Patrik; Waongo, Moussa; Kunstmann, Harald

    2015-04-01

    Providing valuable forecasts of the seasonal precipitation amount for the upcoming rainy season is one of the big challenges for the national weather services in West Africa. Every year a harmonized forecast of the seasonal precipitation amount for the West African region is issued by the national weather services within the PRESAO framework. The PRESAO forecast is based on various statistical approaches, ranging from a simple subjective analog method based on the experience of a meteorological expert to objective regression-based approaches using various sources of input information, such as predicted monsoon winds or observed sea surface temperature anomalies close to the West African coastline. The objective of this study is to evaluate these techniques for selected West African regions, to introduce classification techniques into the current operational practices, and to combine these approaches with further techniques for an additional refinement of the forecasting procedure. We use a fuzzy-rule-based technique for the classification of (sub-)monthly large-scale atmospheric and oceanic patterns, which is combined with further statistical approaches, such as an analog method and a data-depth approach, for the prediction of (sub-)seasonal precipitation amounts and additional precipitation indices. The study regions extend from the edges of the Sahel region in the north of Burkina Faso to the coastline of Ghana. A novel precipitation archive based on daily observations provided by the meteorological services of Burkina Faso and Ghana forms the basis for the predictands and is used as reference for model evaluation. The performance of the approach is evaluated over a long period (e.g. 50 years) using cross-validation techniques and sophisticated verification measures for the evaluation of a probability forecast. The precipitation forecasts of the classification techniques are also compared to the techniques of the PRESAO community, the

  9. Domain Approach: An Alternative Approach in Moral Education

    ERIC Educational Resources Information Center

    Vengadasalam, Chander; Mamat, Wan Hasmah Wan; Mail, Fauziah; Sudramanian, Munimah

    2014-01-01

    This paper discusses the use of the domain approach in Moral Education in an upper secondary school in Malaysia. Moral Education needs a creative and innovative approach; therefore, several approaches are used in the teaching and learning of Moral Education. This research describes the use of the domain approach, which comprises the moral domain…

  10. Approach to hemorrhoids.

    PubMed

    Lohsiriwat, Varut

    2013-07-01

    Hemorrhoids are a very common anorectal disorder defined as the symptomatic enlargement and abnormally downward displacement of anal cushions. The current pathophysiologies of hemorrhoids include the degenerative change of supportive tissue within the anal cushions, vascular hyperplasia, and hyperperfusion of the hemorrhoidal plexus. Low-grade hemorrhoids are easily and effectively treated with dietary and lifestyle modification, medical intervention, and some office-based procedures. An operation is usually indicated in symptomatic high-grade and/or complicated hemorrhoids. Whilst hemorrhoidectomy has been the mainstay of surgical treatment, more recently other approaches have been employed, including Ligasure hemorrhoidectomy, stapled hemorrhoidopexy, and Doppler-guided hemorrhoidal artery ligation. Post-procedural pain and disease recurrence remain the most challenging problems in the treatment of hemorrhoids. This article deals with modern approaches to hemorrhoids based on the latest evidence and reviews of the literature. The management of hemorrhoids in complicated situations is also discussed.

  11. Theoretical Approaches to Nanoparticles

    NASA Astrophysics Data System (ADS)

    Kempa, Krzysztof

    Nanoparticles can be viewed as wave resonators. The waves involved are, for example, carrier waves, plasmon waves, and polariton waves. A few examples of successful theoretical treatments that follow this approach are given. In one, an effective medium theory of a nanoparticle composite is presented. In another, plasmon-polaritonic solutions allow concepts of radio technology, such as the antenna and the coaxial transmission line, to be extended to the visible frequency range.

  12. An Approach to Cosmeceuticals.

    PubMed

    Milam, Emily C; Rieder, Evan A

    2016-04-01

    The cosmeceutical industry is a multi-billion dollar, consumer-driven market. Products promise highly desirable anti-aging benefits, but are not subject to regulation. We present an introduction to cosmeceuticals for the general and cosmetic dermatologist, including definitions and explanations of key terms, an approach to the evidence base, a dissection of chamomile and green tea, two paradigmatic cosmeceutical products, and a window into the underlying psychology of this vast marketplace. PMID:27050700

  13. Synthetic approaches to monofluoroalkenes.

    PubMed

    Landelle, Grégory; Bergeron, Maxime; Turcotte-Savard, Marc-Olivier; Paquin, Jean-François

    2011-05-01

    Monofluoroalkenes are an important class of fluorinated compounds with applications in medicinal chemistry, materials science and organic chemistry. An overview of methods allowing synthetic access to these fluorinated building blocks is provided. In particular, this critical review, which covers publications up to October 2010, will be divided according to the substitution pattern of the monofluoroalkenes, i.e. di-, tri- or tetra-substituted. Within each group, the various synthetic approaches will be divided according to the reaction type (282 references).

  14. Computational vaccinology: quantitative approaches.

    PubMed

    Flower, Darren R; McSparron, Helen; Blythe, Martin J; Zygouri, Christianna; Taylor, Debra; Guan, Pingping; Wan, Shouzhan; Coveney, Peter V; Walshe, Valerie; Borrow, Persephone; Doytchinova, Irini A

    2003-01-01

    The immune system is hierarchical and has many levels, exhibiting much emergent behaviour. However, at its heart are molecular recognition events that are indistinguishable from other types of biomacromolecular interaction. These can be addressed well by quantitative experimental and theoretical biophysical techniques, and particularly by methods from drug design. We review here our approach to computational immunovaccinology. In particular, we describe the JenPep database and two new techniques for T cell epitope prediction. One is based on quantitative structure-activity relationships (a 3D-QSAR method based on CoMSIA and another 2D method based on the Free-Wilson approach) and the other on atomistic molecular dynamics simulations using high performance computing. JenPep (http://www.jenner.ac.uk/JenPep) is a relational database system supporting quantitative data on peptide binding to major histocompatibility complexes, TAP transporters, TCR-pMHC complexes, and an annotated list of B cell and T cell epitopes. Our 2D-QSAR method factors the contribution to peptide binding from individual amino acids as well as 1-2 and 1-3 residue interactions. In the 3D-QSAR approach, the influence of five physicochemical properties (volume, electrostatic potential, hydrophobicity, hydrogen-bond donor and acceptor abilities) on peptide affinity was considered. Both methods are exemplified through their application to the well-studied problem of peptide binding to the human class I MHC molecule HLA-A*0201. PMID:14712934
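
    A schematic rendering (not the authors' exact formulation) of the additive 2D-QSAR decomposition described above, assuming a 9-mer peptide (an assumption; the abstract does not state peptide length): predicted binding is a constant plus per-position amino acid contributions and the 1-2 and 1-3 residue interaction terms.

    ```latex
    \begin{equation*}
    \log\frac{1}{IC_{50}} \;=\; c
      \;+\; \sum_{i=1}^{9} a_i(x_i)
      \;+\; \sum_{i=1}^{8} b_i(x_i, x_{i+1})
      \;+\; \sum_{i=1}^{7} g_i(x_i, x_{i+2})
    \end{equation*}
    ```

    Here \(x_i\) denotes the amino acid at position \(i\), \(a_i\) its individual contribution, and \(b_i\), \(g_i\) the contributions of adjacent (1-2) and next-nearest (1-3) residue pairs, all fitted from measured binding data.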

  15. Parsec's astrometry direct approaches.

    NASA Astrophysics Data System (ADS)

    Andrei, A. H.

    Parallaxes, and hence the fundamental establishment of stellar distances, rank among the oldest, most important, and hardest of astronomical determinations. Arguably they are amongst the most essential too. The direct approach to obtaining trigonometric parallaxes, using a constrained set of equations to derive positions, proper motions, and parallaxes, has been labeled as risky. Properly so, because the axis of the parallactic apparent ellipse is smaller than one arcsec even for the nearest stars, and just a fraction of its perimeter can be followed. Thus the classical approach is to linearize the description by locking the solution to a set of precise positions of the Earth at the instants of observation, rather than to the dynamics of its orbit, and to closely examine the never-many points available. The PARSEC program targeted the parallaxes of 143 brown dwarfs. The fields were observed for five years with the WFI camera at the ESO 2.2m telescope in Chile. The goal is to provide a statistically significant number of trigonometric parallaxes for BD sub-classes from L0 to T7. Taking advantage of the large, regularly spaced quantity of observations, here we take the risky approach of fitting an ellipse in ecliptic observed coordinates and deriving the parallaxes. We also combine the solutions from different centroiding methods, widely proven in prior astrometric investigations. As each of those methods assesses diverse properties of the PSFs, they are taken as independent measurements and combined into a weighted least-squares general solution.

  16. A comparative approach to 7, 12-dimethylbenz[a]anthracene effects: Metabolism and mutagenesis in mice and fish

    SciTech Connect

    Gallagher, K.; Cline, J.; Burkhart, J.G.; Gundersen, J.L.

    1997-10-01

    A comparative approach was used to examine the effects of exposure to the potent carcinogen 7,12-dimethylbenz[a]anthracene (DMBA) in two divergent sentinel species, mouse and fish, containing a common transgenic reporter, the bacteriophage ΦX174am3. Effects of DMBA were examined using both in vitro and in vivo studies through the analysis of metabolites. Cytochrome P450 was examined 72 hours after DMBA dosing using an assay for 7-ethoxyresorufin-O-deethylase (EROD) activity. Fish showed an increasing trend of EROD induction with increasing dose, with the EROD level at the highest dose being significantly greater than corn oil controls and the lowest DMBA dose. DMBA had less of an effect on mouse P450 levels. Metabolites of DMBA in the bile at 12 hours were quantified in both species using HPLC/PDA detection. Bile extracts were enzyme digested to differentiate glucuronide, sulfate and glutathione conjugates. Primary metabolites in mice were 2-hydroxy-7,12-dimethylbenz[a]anthracene and 7-hydroxymethyl-12-methylbenz[a]anthracene. The same metabolites were detected in the fish, with the addition of 7,12-bis-hydroxymethylbenz[a]anthracene. In vitro assays using uninduced and 3-methylcholanthrene-induced mouse microsomes showed no increase over background mutant frequencies of 1-3 × 10⁻⁶ when ΦX DNA was incubated with DMBA. In vivo induced mutation was also examined in mouse and fish liver. The 1.9 and 19 mg/kg doses of DMBA resulted in a 10-fold increase in mutation frequency over controls in fish. There was a similar increase in mutation frequency at the 19 mg/kg dose in mice. Analysis of the 1.9 mg/kg dosed mice and the replicate variance among treated and control animals is underway.

  17. Three Approaches to Descriptive Research.

    ERIC Educational Resources Information Center

    Svensson, Lennart

    This report compares three approaches to descriptive research, focusing on the kinds of descriptions developed and on the methods used to develop the descriptions. The main emphasis in all three approaches is on verbal data. In these approaches the importance of interpretation and its intuitive nature are emphasized. The three approaches, however,…

  18. Current Approaches to Teaching Reading.

    ERIC Educational Resources Information Center

    Mackintosh, Helen K., Ed.

    Eight approaches to the teaching of elementary reading are described briefly. The Executive Committee of the Department of Elementary-Kindergarten-Nursery Education of the National Education Association selected the approaches to be discussed. They include (1) Language Experience Approach by R. V. Allen, (2) Phonic Approach by Charles E. Wingo,…

  19. Computer-Aided Diagnosis for Breast Ultrasound Using Computerized BI-RADS Features and Machine Learning Methods.

    PubMed

    Shan, Juan; Alam, S Kaisar; Garra, Brian; Zhang, Yingtao; Ahmed, Tahira

    2016-04-01

    This work identifies effective computable features from the Breast Imaging Reporting and Data System (BI-RADS) to develop a computer-aided diagnosis (CAD) system for breast ultrasound. Computerized features corresponding to ultrasound BI-RADS categories were designed and tested using a database of 283 pathology-proven benign and malignant lesions. Features were selected based on classification performance using a "bottom-up" approach for different machine learning methods, including decision tree, artificial neural network, random forest and support vector machine. Using 10-fold cross-validation on the database of 283 cases, the highest area under the receiver operating characteristic (ROC) curve (AUC) was 0.84, from a support vector machine with 77.7% overall accuracy; the highest overall accuracy, 78.5%, was from a random forest with an AUC of 0.83. Lesion margin and orientation were optimum features common to all of the different machine learning methods. These features can be used in CAD systems to help distinguish benign from worrisome lesions.
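
    A sketch of the "bottom-up" feature selection described above: features are added greedily while 10-fold cross-validated performance improves, here via scikit-learn's SequentialFeatureSelector wrapped around an SVM. The data are synthetic stand-ins for the computerized BI-RADS features.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(11)
    X = rng.normal(size=(283, 12))    # 283 lesions x 12 candidate features
    y = rng.integers(0, 2, size=283)  # 1 = malignant

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=11)
    svm = SVC()
    sfs = SequentialFeatureSelector(svm, n_features_to_select=4,
                                    direction="forward", cv=cv, scoring="roc_auc")
    sfs.fit(X, y)
    auc = cross_val_score(svm, sfs.transform(X), y, cv=cv, scoring="roc_auc")
    print("selected feature indices:", np.flatnonzero(sfs.get_support()))
    print(f"10-fold AUC with selected features: {auc.mean():.2f}")
    ```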

  20. Detecting Lung Diseases from Exhaled Aerosols: Non-Invasive Lung Diagnosis Using Fractal Analysis and SVM Classification

    PubMed Central

    Xi, Jinxiang; Zhao, Weizhong; Yuan, Jiayao Eddie; Kim, JongWon; Si, Xiuhua; Xu, Xiaowei

    2015-01-01

    Background Each lung structure exhales a unique pattern of aerosols, which can be used to detect and monitor lung diseases non-invasively. The challenges are accurately interpreting the exhaled aerosol fingerprints and quantitatively correlating them to the lung diseases. Objective and Methods In this study, we presented a paradigm of an exhaled aerosol test that addresses the above two challenges and is promising to detect the site and severity of lung diseases. This paradigm consists of two steps: image feature extraction using sub-regional fractal analysis and data classification using a support vector machine (SVM). Numerical experiments were conducted to evaluate the feasibility of the breath test in four asthmatic lung models. A high-fidelity image-CFD approach was employed to compute the exhaled aerosol patterns under different disease conditions. Findings By employing the 10-fold cross-validation method, we achieved 100% classification accuracy among four asthmatic models using an ideal 108-sample dataset and 99.1% accuracy using a more realistic 324-sample dataset. The fractal-SVM classifier has been shown to be robust, highly sensitive to structural variations, and inherently suitable for investigating aerosol-disease correlations. Conclusion For the first time, this study quantitatively linked the exhaled aerosol patterns with their underlying diseases and set the stage for the development of a computer-aided diagnostic system for non-invasive detection of obstructive respiratory diseases. PMID:26422016
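
    An illustrative sketch pairing a simple box-counting fractal dimension (a crude stand-in for the sub-regional fractal analysis described above) with an SVM classifier assessed by 10-fold cross-validation. The images are synthetic binary patterns, not exhaled aerosol images.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(12)

    def box_counting_dimension(img, sizes=(2, 4, 8, 16)):
        """Estimate the fractal dimension of a binary image by box counting."""
        counts = []
        for s in sizes:
            h, w = img.shape[0] // s * s, img.shape[1] // s * s
            boxes = img[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope

    images = rng.random((120, 64, 64)) > 0.7   # synthetic deposition maps
    labels = rng.integers(0, 4, size=120)      # four disease models
    X = np.array([[box_counting_dimension(im)] for im in images])

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=12)
    acc = cross_val_score(SVC(), X, labels, cv=cv)
    print(f"10-fold accuracy: {100 * acc.mean():.1f}%")
    ```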

  1. Prediction of fat-free body mass from bioelectrical impedance and anthropometry among 3-year-old children using DXA.

    PubMed

    Ejlerskov, Katrine T; Jensen, Signe M; Christensen, Line B; Ritz, Christian; Michaelsen, Kim F; Mølgaard, Christian

    2014-01-27

    For 3-year-old children suitable methods to estimate body composition are sparse. We aimed to develop predictive equations for estimating fat-free mass (FFM) from bioelectrical impedance (BIA) and anthropometry, using dual-energy X-ray absorptiometry (DXA) as the reference method, with data from 99 healthy 3-year-old Danish children. Predictive equations were derived from two multiple linear regression models, a comprehensive model (height²/resistance (RI), six anthropometric measurements) and a simple model (RI, height, weight). Their uncertainty was quantified by means of a 10-fold cross-validation approach. The prediction error of FFM was 3.0% for both equations (root mean square error: 360 and 356 g, respectively). The derived equations produced BIA-based predictions of FFM and FM near DXA scan results. We suggest that the predictive equations can be applied in similar population samples aged 2-4 years. The derived equations may prove useful for studies linking body composition to early risk factors and early onset of obesity.
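
    A sketch of this validation scheme, assuming scikit-learn and synthetic stand-ins for the measured variables (coefficients and noise level are invented; the paper's actual equations are not reproduced here):

      # 10-fold cross-validated RMSE and relative prediction error for the "simple model"
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import KFold, cross_val_predict

      rng = np.random.default_rng(1)
      n = 99
      ri = rng.normal(6.0, 0.8, n)         # height^2 / resistance (placeholder units)
      height = rng.normal(98, 4, n)        # cm
      weight = rng.normal(15, 2, n)        # kg
      X = np.column_stack([ri, height, weight])
      ffm = 0.6 * ri + 0.05 * height + 0.3 * weight + rng.normal(0, 0.36, n)  # kg, toy

      pred = cross_val_predict(LinearRegression(), X, ffm,
                               cv=KFold(n_splits=10, shuffle=True, random_state=0))
      rmse = np.sqrt(np.mean((pred - ffm) ** 2))
      print(f"RMSE: {rmse * 1000:.0f} g ({100 * rmse / ffm.mean():.1f}% of mean FFM)")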

  2. A computational method to predict carbonylation sites in yeast proteins.

    PubMed

    Lv, H Q; Liu, J; Han, J Q; Zheng, J G; Liu, R L

    2016-01-01

    Several post-translational modifications (PTM) have been discussed in the literature. Among the variety of oxidative stress-induced PTM, protein carbonylation is considered a biomarker of oxidative stress. Only certain proteins can be carbonylated because only four amino acid residues, namely lysine (K), arginine (R), threonine (T) and proline (P), are susceptible to carbonylation. The yeast proteome is an excellent model to explore oxidative stress, especially protein carbonylation. Current experimental approaches to identifying carbonylation sites are expensive, time-consuming and limited in their ability to process proteins. Furthermore, there is no bioinformatic method to predict carbonylation sites in yeast proteins. Therefore, we propose a computational method to predict yeast carbonylation sites. This method has total accuracies of 86.32, 85.89, 84.80, and 86.80% in predicting the carbonylation sites of K, R, T, and P, respectively. These results were confirmed by 10-fold cross-validation. The ability to identify carbonylation sites with different kinds of features was analyzed and the position-specific composition of the modification site-flanking residues was discussed. Additionally, a software tool has been developed to help with the calculations in this method. Datasets and the software are available at https://sourceforge.net/projects/hqlstudio/files/CarSpred.Y/. PMID:27420944
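
    As a concrete illustration of the prediction setup, the sketch below turns candidate K/R/T/P sites into fixed-width sequence windows of the kind typically fed to such classifiers; the window size and one-hot encoding are our assumptions, not the paper's published features:

      # Extract sequence windows around candidate carbonylation residues
      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

      def site_windows(seq, residues="KRTP", flank=7):
          """Yield (position, window) pairs centered on each candidate residue."""
          padded = "X" * flank + seq + "X" * flank
          for i, aa in enumerate(seq):
              if aa in residues:
                  yield i, padded[i:i + 2 * flank + 1]

      def one_hot(window):
          """Flat one-hot encoding; the 'X' padding encodes as all zeros."""
          return [1.0 if aa == a else 0.0 for aa in window for a in AMINO_ACIDS]

      for pos, win in site_windows("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"):
          print(pos, win)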

  3. Generating one biometric feature from another: faces from fingerprints.

    PubMed

    Ozkaya, Necla; Sagiroglu, Seref

    2010-01-01

    This study presents a new approach based on artificial neural networks for generating one biometric feature (faces) from another (only fingerprints). An automatic and intelligent system was designed and developed to analyze the relationships between fingerprints and faces and to model those relationships. The proposed system is the first that generates all parts of the face, including eyebrows, eyes, nose, mouth, ears and face border, from fingerprints alone. It also differs from similar studies recently presented in the literature in several respects. The parameter settings of the system were obtained with the help of the Taguchi experimental design technique. The performance and accuracy of the system were evaluated with the 10-fold cross-validation technique, using qualitative evaluation metrics in addition to expanded quantitative evaluation metrics. The results are presented as a combination of these objective and subjective metrics, illustrating the qualitative properties of the proposed methods as well as providing a quantitative evaluation of their performance. Experimental results show that one biometric feature can be determined from another. These results further indicate that there is a strong relationship between fingerprints and faces.
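
    Purely as an illustration of the regression framing (one biometric feature vector mapped to another by a neural network), here is a sketch with invented feature sizes and synthetic data; the authors' actual architecture, Taguchi-tuned parameters and evaluation metrics are not reproduced:

      # 10-fold cross-validated multi-output regression from one feature vector to another
      import numpy as np
      from sklearn.model_selection import KFold, cross_val_score
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(2)
      fingerprints = rng.normal(size=(120, 64))    # e.g., minutiae-derived features
      faces = fingerprints @ rng.normal(size=(64, 30)) \
              + 0.1 * rng.normal(size=(120, 30))   # 30 outputs standing in for landmarks

      model = MLPRegressor(hidden_layer_sizes=(128,), max_iter=2000, random_state=0)
      r2 = cross_val_score(model, fingerprints, faces, scoring="r2",
                           cv=KFold(n_splits=10, shuffle=True, random_state=0))
      print(f"10-fold mean R^2: {r2.mean():.2f}")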

  4. Developing a QSAR model for hepatotoxicity screening of the active compounds in traditional Chinese medicines.

    PubMed

    Huang, Shan-Han; Tung, Chun-Wei; Fülöp, Ferenc; Li, Jih-Heng

    2015-04-01

    The perception that natural substances are safe has made traditional Chinese medicine (TCM) popular in the treatment and prevention of disease globally. However, such an assumption is often misleading owing to a lack of scientific validation. To assess the safety of TCM, in silico screening provides major advantages over classical laboratory approaches in terms of resource- and time-saving and full reproducibility. To screen the hepatotoxicity of the active compounds of TCMs, a quantitative structure-activity relationship (QSAR) model was first established by utilizing drugs from the Liver Toxicity Knowledge Base. These drugs were annotated with drug-induced liver injury information obtained from clinical trials and post-marketing surveillance. The performance of the model after nested 10-fold cross-validation was 79.1%, 91.2%, and 53.8% for accuracy, sensitivity, and specificity, respectively. External validation on 91 well-known ingredients of common herbal medicines yielded a high accuracy (87%). After screening the TCM Database@Taiwan, the world's largest TCM database, a total of 6853 (74.8%) ingredients were predicted to have hepatotoxic potential. The one hundred chemical ingredients predicted by our model to have the highest hepatotoxic potential were further verified against the published literature. Our study indicates that this model can serve as a complementary tool to evaluate the safety of TCM.
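
    Nested cross-validation of the kind reported here separates hyperparameter tuning from performance assessment. A generic sketch, with placeholder descriptors and an SVM standing in for whatever learner is actually used:

      # Nested 10-fold CV: inner loop tunes hyperparameters, outer loop estimates performance
      import numpy as np
      from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(3)
      X = rng.normal(size=(200, 50))       # molecular descriptors (placeholder)
      y = rng.integers(0, 2, size=200)     # 1 = hepatotoxic, 0 = not (placeholder)

      inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
      tuned = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=inner)
      acc = cross_val_score(tuned, X, y, cv=outer)   # each outer fold re-tunes on its training part
      print(f"nested-CV accuracy: {acc.mean():.1%} +/- {acc.std():.1%}")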

  6. A novel algorithm for detecting multiple covariance and clustering of biological sequences

    PubMed Central

    Shen, Wei; Li, Yan

    2016-01-01

    Single genetic mutations are always followed by a set of compensatory mutations. Thus, multiple changes commonly occur in biological sequences and play crucial roles in maintaining conformational and functional stability. Although many methods are available to detect single mutations or covariant pairs, detecting non-synchronous multiple changes at different sites in sequences remains challenging. Here, we develop a novel algorithm, named Fastcov, to identify multiple correlated changes in biological sequences using an independent pair model followed by a tandem model of site-residue elements based on inter-restriction thinking. Fastcov performed exceptionally well at harvesting co-pairs and detecting multiple covariant patterns. By 10-fold cross-validation using datasets of different scales, the characteristic patterns successfully classified the sequences into target groups with an accuracy of greater than 98%. Moreover, we demonstrated that the multiple covariant patterns represent co-evolutionary modes corresponding to the phylogenetic tree, and provide a new understanding of protein structural stability. In contrast to other methods, Fastcov provides not only a reliable and effective approach to identify covariant pairs but also more powerful functions, including multiple covariance detection and sequence classification, that are most useful for studying the point and compensatory mutations caused by natural selection, drug induction, environmental pressure, etc. PMID:27451921
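
    A common baseline for covariance detection, and a useful contrast to Fastcov's tandem site-residue models, is to score column pairs of an alignment by mutual information; this sketch is that baseline, not the Fastcov algorithm itself:

      # Mutual information between alignment columns as a covariant-pair score
      from collections import Counter
      from itertools import combinations
      from math import log2

      def mutual_information(col_a, col_b):
          n = len(col_a)
          pa, pb, pab = Counter(col_a), Counter(col_b), Counter(zip(col_a, col_b))
          return sum(c / n * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
                     for (a, b), c in pab.items())

      alignment = ["ACDKG", "ACEKG", "GCDRT", "GCERT"]       # toy sequences
      columns = list(zip(*alignment))
      scores = {(i, j): mutual_information(columns[i], columns[j])
                for i, j in combinations(range(len(columns)), 2)}
      for pair, mi in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
          print(pair, round(mi, 3))          # columns 0, 3 and 4 co-vary perfectly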

  7. A Survey of Genomic Properties for the Detection of Regulatory Polymorphisms

    PubMed Central

    Montgomery, Stephen B; Griffith, Obi L; Schuetz, Johanna M; Brooks-Wilson, Angela; Jones, Steven J. M

    2007-01-01

    Advances in the computational identification of functional noncoding polymorphisms will aid in cataloging novel determinants of health and identifying genetic variants that explain human evolution. To date, however, the development and evaluation of such techniques has been limited by the availability of known regulatory polymorphisms. We have attempted to address this by assembling, from the literature, a computationally tractable set of regulatory polymorphisms within the ORegAnno database (http://www.oreganno.org). We have further used 104 regulatory single-nucleotide polymorphisms from this set and 951 polymorphisms of unknown function, from 2-kb and 152-bp noncoding upstream regions of genes, to investigate the discriminatory potential of 23 properties related to gene regulation and population genetics. Among the most important properties detected in this region are distance to transcription start site, local repetitive content, sequence conservation, minor and derived allele frequencies, and presence of a CpG island. We further used the entire set of properties to evaluate their collective performance in detecting regulatory polymorphisms. Using a 10-fold cross-validation approach, we were able to achieve a sensitivity and specificity of 0.82 and 0.71, respectively, and we show that this performance is strongly influenced by the distance to the transcription start site. PMID:17559298
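
    The reported sensitivity/specificity pair can be reproduced mechanically from out-of-fold predictions; a sketch assuming scikit-learn, with a random stand-in for the 23-property matrix and the 104/951 class split:

      # Sensitivity and specificity from 10-fold out-of-fold predictions
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import confusion_matrix
      from sklearn.model_selection import StratifiedKFold, cross_val_predict

      rng = np.random.default_rng(4)
      X = rng.normal(size=(1055, 23))      # 104 regulatory + 951 unknown-function SNPs
      y = np.concatenate([np.ones(104), np.zeros(951)]).astype(int)

      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      pred = cross_val_predict(GradientBoostingClassifier(random_state=0), X, y, cv=cv)
      tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
      print(f"sensitivity={tp / (tp + fn):.2f}, specificity={tn / (tn + fp):.2f}")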

  8. Systematic Characterization and Prediction of Post-Translational Modification Cross-Talk*

    PubMed Central

    Huang, Yuanhua; Xu, Bosen; Zhou, Xueya; Li, Ying; Lu, Ming; Jiang, Rui; Li, Tingting

    2015-01-01

    Post-translational modification (PTM) plays an important role in regulating the functions of proteins. PTMs of multiple residues on one protein may work together to determine a functional outcome, which is known as PTM cross-talk. Identification of PTM cross-talks is an emerging theme in proteomics and has elicited great interest, but their properties remain to be systematically characterized. To this end, we collected 193 PTM cross-talk pairs in 77 human proteins from the literature and then tested location preference and co-evolution at the residue and modification levels. We found that cross-talk events preferentially occurred among nearby PTM sites, especially in disordered protein regions, and cross-talk pairs tended to co-evolve. Given the properties of PTM cross-talk pairs, a naïve Bayes classifier integrating different features was built to predict cross-talks for pairwise combination of PTM sites. By using a 10-fold cross-validation, the integrated prediction model showed an area under the receiver operating characteristic (ROC) curve of 0.833, superior to using any individual feature alone. The prediction performance was also demonstrated to be robust to the biases in the collected PTM cross-talk pairs. The integrated approach has the potential for large-scale prioritization of PTM cross-talk candidates for functional validation and was implemented as a web server available at http://bioinfo.bjmu.edu.cn/ptm-x/. PMID:25605461
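
    The classifier itself is simple to sketch: a Gaussian naïve Bayes model over a handful of pair-level features, scored by 10-fold cross-validated AUC. The three features below are illustrative stand-ins for the paper's location and co-evolution features, and the data are random:

      # Naive Bayes cross-talk classifier evaluated by 10-fold ROC AUC
      import numpy as np
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(5)
      n = 400
      X = np.column_stack([
          rng.exponential(50, n),          # sequence distance between the two PTM sites
          rng.uniform(0, 1, n),            # disorder score of the region
          rng.uniform(0, 1, n),            # co-evolution score of the pair
      ])
      y = rng.integers(0, 2, n)            # 1 = known cross-talk pair (placeholder)

      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      auc = cross_val_score(GaussianNB(), X, y, cv=cv, scoring="roc_auc")
      print(f"mean AUC: {auc.mean():.3f}")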

  9. Prediction of fat-free body mass from bioelectrical impedance and anthropometry among 3-year-old children using DXA

    PubMed Central

    Ejlerskov, Katrine T.; Jensen, Signe M.; Christensen, Line B.; Ritz, Christian; Michaelsen, Kim F.; Mølgaard, Christian

    2014-01-01

    For 3-year-old children suitable methods to estimate body composition are sparse. We aimed to develop predictive equations for estimating fat-free mass (FFM) from bioelectrical impedance (BIA) and anthropometry, using dual-energy X-ray absorptiometry (DXA) as the reference method, with data from 99 healthy 3-year-old Danish children. Predictive equations were derived from two multiple linear regression models, a comprehensive model (height²/resistance (RI), six anthropometric measurements) and a simple model (RI, height, weight). Their uncertainty was quantified by means of a 10-fold cross-validation approach. The prediction error of FFM was 3.0% for both equations (root mean square error: 360 and 356 g, respectively). The derived equations produced BIA-based predictions of FFM and FM near DXA scan results. We suggest that the predictive equations can be applied in similar population samples aged 2–4 years. The derived equations may prove useful for studies linking body composition to early risk factors and early onset of obesity. PMID:24463487

  10. Systematic characterization and prediction of post-translational modification cross-talk.

    PubMed

    Huang, Yuanhua; Xu, Bosen; Zhou, Xueya; Li, Ying; Lu, Ming; Jiang, Rui; Li, Tingting

    2015-03-01

    Post-translational modification (PTM) plays an important role in regulating the functions of proteins. PTMs of multiple residues on one protein may work together to determine a functional outcome, which is known as PTM cross-talk. Identification of PTM cross-talks is an emerging theme in proteomics and has elicited great interest, but their properties remain to be systematically characterized. To this end, we collected 193 PTM cross-talk pairs in 77 human proteins from the literature and then tested location preference and co-evolution at the residue and modification levels. We found that cross-talk events preferentially occurred among nearby PTM sites, especially in disordered protein regions, and cross-talk pairs tended to co-evolve. Given the properties of PTM cross-talk pairs, a naïve Bayes classifier integrating different features was built to predict cross-talks for pairwise combination of PTM sites. By using a 10-fold cross-validation, the integrated prediction model showed an area under the receiver operating characteristic (ROC) curve of 0.833, superior to using any individual feature alone. The prediction performance was also demonstrated to be robust to the biases in the collected PTM cross-talk pairs. The integrated approach has the potential for large-scale prioritization of PTM cross-talk candidates for functional validation and was implemented as a web server available at http://bioinfo.bjmu.edu.cn/ptm-x/.

  11. [Approaches to radial shaft].

    PubMed

    Bartoníček, J; Naňka, O; Tuček, M

    2015-10-01

    In clinical practice, the radial shaft may be exposed via two approaches, namely the posterolateral Thompson and the volar (anterior) Henry approach. A feared complication of both is injury to the deep branch of the radial nerve. No consensus has yet been reached as to which of the two approaches is more beneficial for the proximal half of the radius. According to our anatomical studies and clinical experience, the Thompson approach is safe only in fractures of the middle and distal thirds of the radial shaft, but highly risky in fractures of its proximal third. The Henry approach may be used in any fracture of the radial shaft and provides safe exposure of the entire lateral and anterior surfaces of the radius. The Henry approach has three phases. In the first phase, an incision is made along the line connecting the biceps brachii tendon and the styloid process of the radius. Care must be taken not to damage the lateral cutaneous nerve of the forearm. In the second phase, the fascia is incised and the brachioradialis is identified by the typical transition from muscle belly to tendon and by the shape of the tendon. On the lateral side, the brachioradialis lines the space with the radial artery and veins and the superficial branch of the radial nerve running at its bottom. On the medial side, the space is defined by the pronator teres in the proximal part and the flexor carpi radialis in the distal part. The superficial branch of the radial nerve is retracted together with the brachioradialis laterally, and the radial artery medially. In the third phase, the attachment of the pronator teres is identified by its typical tendon in the middle of the convexity of the lateral surface of the radial shaft. The proximal half of the radius must be exposed very carefully in order not to damage the deep branch of the radial nerve. Dissection starts at the insertion of the pronator teres and proceeds proximally along its lateral border in the interval between this muscle and the insertion of the supinator

  12. Repository program licensing approach

    SciTech Connect

    Williamson, T.M.; Gil, A.V.

    1994-12-31

    Yucca Mountain, Nevada is currently being studied by the US Department of Energy (DOE) as a potential site for a mined geologic repository for high-level nuclear waste. DOE has the responsibility to determine the suitability of the site and to develop a license application (LA) for authorization to construct the potential repository. If the site is suitable, the license application would be submitted to the US Nuclear Regulatory Commission (NRC). The repository program licensing approach is focused on the timely acquisition of information needed in licensing and the resolution of potential licensing issues with the NRC staff. Licensing involves an iterative process requiring refinements as data are acquired, analyzed, and evaluated. The repository licensing approach presented in this paper ensures that the information is available when needed to facilitate the licensing process. Identifying the information needed to evaluate compliance with the performance objectives in 10 CFR 60, monitoring the acquisition of such information, and developing a successful license application are integral elements of DOE's repository program licensing approach. Activities to characterize the site are being systematically conducted as planned in the Site Characterization Plan (SCP). In addition, DOE is implementing the issue resolution initiative, the license application annotated outline (LAAO) process, and interim licensability evaluations to update the early planning in the SCP and to focus site characterization, design, and performance assessment activities on the acquisition of information needed for a site suitability determination and licensing. Collectively, the issue resolution initiative, LAAO process, and interim licensability evaluations are key elements of a transition to the iterative process to answer the question: "When do we have enough data to support licensing?"

  13. Approaches to Numerical Relativity

    NASA Astrophysics Data System (ADS)

    d'Inverno, Ray

    2005-07-01

    Introduction Ray d'Inverno; Preface C. J. S. Clarke; Part I. Theoretical Approaches: 1. Numerical relativity on a transputer array Ray d'Inverno; 2. Some aspects of the characteristic initial value problem in numerical relativity Nigel Bishop; 3. The characteristic initial value problem in general relativity J. M. Stewart; 4. Algebraic approaches to the characteristic initial value problem in general relativity Jörg Frauendiener; 5. On hyperboloidal hypersurfaces Helmut Friedrich; 6. The initial value problem on null cones J. A. Vickers; 7. Introduction to dual-null dynamics S. A. Hayward; 8. On colliding plane wave space-times J. B. Griffiths; 9. Boundary conditions for the momentum constraint Niall Ó Murchadha; 10. On the choice of matter model in general relativity A. D. Rendall; 11. A mathematical approach to numerical relativity J. W. Barrett; 12. Making sense of the effects of rotation in general relativity J. C. Miller; 13. Stability of charged boson stars and catastrophe theory Franz E. Schunck, Fjodor V. Kusmartsev and Eckehard W. Mielke; Part II. Practical Approaches: 14. Numerical asymptotics R. Gómez and J. Winicour; 15. Instabilities in rapidly rotating polytropes Scott C. Smith and Joan M. Centrella; 16. Gravitational radiation from coalescing binary neutron stars Ken-Ichi Oohara and Takashi Nakamura; 17. 'Critical' behaviour in massless scalar field collapse M. W. Choptuik; 18. Godunov-type methods applied to general relativistic gravitational collapse José Ma. Ibáñez, José Ma. Martí, Juan A. Miralles and J. V. Romero; 19. Astrophysical sources of gravitational waves and neutrinos Silvano Bonazzola, Eric Gourgoulhon, Pawel Haensel and Jean-Alain Marck; 20. Gravitational radiation from triaxial core collapse Jean-Alain Marck and Silvano Bonazzola; 21. A vacuum fully relativistic 3D numerical code C. Bona and J. Massó; 22. Solution of elliptic equations in numerical relativity using multiquadrics M. R. Dubal, S. R. Oliveira and R. A. Matzner; 23

  14. Perioperative approach to children.

    PubMed

    Zuckerberg, A L

    1994-02-01

    There has been a tremendous amount of progress in the perioperative approach to the child since Levy wrote "Psychic trauma of operations in children and a note on combat neurosis" nearly 50 years ago. Recognition of prolonged behavioral derangements following the anesthetic-surgical-hospital experience, and of the prominent role that the parent and physician play in modifying these, has dramatically changed contemporary pediatric perioperative care. Of paramount importance is the psychological preparation of family and child. With increasing outpatient or same-day admission surgery and free-standing surgical centers, preoperative preparation will, of necessity, increasingly become the responsibility of the pediatrician.

  15. The collaboratory approach

    SciTech Connect

    Peskin, A.M.

    1997-04-01

    A "collaboratory" has been defined as a center without walls, in which researchers can perform their work without regard to geographical location. To an increasing degree, engineering design and development is also taking the form of far-flung collaborations among divisions of a plant, subcontractors, university consultants and customers. It has long been recognized that quality engineering education presents the student with an environment that duplicates as much as possible the one the graduate will encounter in industry. To that end, it is important that engineering schools begin to introduce the collaboratory approach in their preparation, and even use it in the delivery of subject matter to students.

  16. Combined approach brings success.

    PubMed

    Law, Oliver

    2014-06-01

    Sixteen months ago, according to Trumpf Medical Systems, which managed the project, 'something out of the ordinary' happened at Leighton Hospital in Crewe. When making plans to upgrade ageing operating theatres and critical care units, the estates department took the decision to involve other disciplines from the very start of the process. Clinicians, nursing staff, architects, patient representatives, and suppliers, all played their part, with the estates team always at the hub. As Oliver Law, managing director of the UK medical technology specialist, explains, this multidisciplinary approach had a profound effect on the outcome. PMID:25004555

  17. New approaches for immunosuppression

    SciTech Connect

    Eiseman, B.; Hansbrough, J.; Weil, R.

    1980-01-01

    New approaches for experimental immunosuppression have been reviewed. These include the following: (1) cyclosporin A, a metabolite from fungus that suppresses multiplying but not resting T and B lymphocytes and can be used in pulsed manner with interspersed drug-free periods; (2) total lymphoid irradiation (transplantation tolerance in rats has been achieved by pretransplant radiation); (3) thoracic duct drainage, which is being revived following its demonstrated effectiveness in the treatment of some autoimmune diseases; (4) hyperbaric oxygen (HBOX). We have found that HBOX 2 1/2 ATA for five hours daily depresses cell-mediated immunity in mice and that this can be reversed by intravenous administration of autologous macrophages.

  18. Avenue of approach generation

    SciTech Connect

    Powell, D.R.; Storm, G.

    1988-01-01

    Los Alamos National Laboratory is conducting research on developing a dynamic planning capability within an Army corps level combat simulation. Central to this research is the development of a computer-based ability to "understand" terrain and how it is used in military planning. Such a capability demands data structures that adequately represent terrain features used in the planning process. These features primarily relate to attributes of mobility and visibility. Mobility concepts are abstracted to networks of mobility corridors. Notions of visibility are, for the purposes of planning, incorporated into the definition of key terrain. Prior work at Los Alamos has produced algorithms to generate mobility corridors from digitized terrain data. Mobility corridors, by definition, are the building blocks for avenues of approach, and the latter are the context in which key terrain is defined. The purpose of this paper is to describe recent work in constructing avenues of approach, the characterization of avenues using summary characteristics, and their role in military planning. 7 refs., 4 figs., 1 tab.

  19. Systemic approaches to biodegradation.

    PubMed

    Trigo, Almudena; Valencia, Alfonso; Cases, Ildefonso

    2009-01-01

    Biodegradation, the ability of microorganisms to remove complex chemicals from the environment, is a multifaceted process in which many biotic and abiotic factors are implicated. The recent accumulation of knowledge about the biochemistry and genetics of the biodegradation process, and its categorization and formalization in structured databases, has recently opened the door to systems biology approaches, where the interactions of the involved parts are the main subject of study and the system is analysed as a whole. Global analysis of the biodegradation metabolic network is beginning to produce knowledge about its structure, behaviour and evolution, such as its scale-free structure or its intrinsic robustness. Moreover, these approaches are also developing into useful tools such as predictors of compounds' degradability or the assisted design of artificial pathways. However, it is the environmental application of high-throughput technologies from genomics, metagenomics, proteomics and metabolomics that harbours the most promising opportunities to understand the biodegradation process, while at the same time posing tremendous challenges from the data management and data mining point of view.

  20. Interstage Flammability Analysis Approach

    NASA Technical Reports Server (NTRS)

    Little, Jeffrey K.; Eppard, William M.

    2011-01-01

    The Interstage of the Ares I launch platform houses several key components which are on standby during First Stage operation: the Reaction Control System (ReCS), the Upper Stage (US) Thrust Vector Control (TVC) and the J-2X with the Main Propulsion System (MPS) propellant feed system. Therefore potentially dangerous leaks of propellants could develop. The Interstage leaks analysis addresses the concerns of localized mixing of hydrogen and oxygen gases to produce deflagration zones in the Interstage of the Ares I launch vehicle during First Stage operation. This report details the approach taken to accomplish the analysis. Specified leakage profiles and actual flammability results are not presented due to proprietary and security restrictions. The interior volume formed by the Interstage walls, bounding interfaces with the Upper and First Stages, and surrounding the J2-X engine was modeled using Loci-CHEM to assess the potential for flammable gas mixtures to develop during First Stage operations. The transient analysis included a derived flammability indicator based on mixture ratios to maintain achievable simulation times. Validation of results was based on a comparison to Interstage pressure profiles outlined in prior NASA studies. The approach proved useful in the bounding of flammability risk in supporting program hazard reviews.
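
    The "derived flammability indicator based on mixture ratios" invites a simple illustration. The sketch below flags cells whose hydrogen mole fraction falls inside an assumed flammable band; the 4-75% limits are generic hydrogen-in-air values and an assumption on our part, not the (unpublished) Ares I criteria:

      # Toy mixture-ratio flammability indicator for a CFD cell
      def flammability_indicator(x_h2, lfl=0.04, ufl=0.75):
          """Return 0 outside the assumed flammable band, otherwise 1 plus
          the normalized distance into the band."""
          if x_h2 < lfl or x_h2 > ufl:
              return 0.0
          return 1.0 + (x_h2 - lfl) / (ufl - lfl)

      for x in (0.01, 0.05, 0.30, 0.80):
          print(f"x_H2={x:.2f} -> indicator={flammability_indicator(x):.2f}")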

  1. Breakfast: a multidisciplinary approach

    PubMed Central

    2013-01-01

    Background: The role of breakfast as an essential part of a healthy diet has been promoted only recently, even though breakfast practices have been known since the Middle Ages. The growing scientific evidence on this topic is highly sector-based; nevertheless, breakfast can be regarded from different points of view and through different areas of expertise. This approach, which takes into account history, sociology, anthropology, medicine, psychology and pedagogy, is useful to better understand the value of this meal in our culture. The aim of this paper was to analyse breakfast-related issues through a multidisciplinary approach, with input from specialists in different fields of learning. Discussion: Breakfast is now recommended as part of a diet because it is associated with healthier macro- and micronutrient intakes, body mass index and lifestyle. Moreover, recent studies showed that breakfast improves cognitive function, intuitive perception and academic performance. Research demonstrates the importance of providing breakfast not only to children but to adults and the elderly too. Despite the important role breakfast plays in maintaining health, epidemiological data from industrialised countries reveal that many individuals either eat a nutritionally unhealthy breakfast or skip it completely. Summary: The historical, bio-psychological and educational value of breakfast in our culture is extremely important and should be recognized and stressed by the scientific community. Efforts should be made to promote this practice for individual health and well-being. PMID:23842429

  2. Modular Approach to Spintronics.

    PubMed

    Camsari, Kerem Yunus; Ganguly, Samiran; Datta, Supriyo

    2015-06-11

    There has been enormous progress in the last two decades, effectively combining spintronics and magnetics into a powerful force that is shaping the field of memory devices. New materials and phenomena continue to be discovered at an impressive rate, providing an ever-increasing set of building blocks that could be exploited in designing transistor-like functional devices of the future. The objective of this paper is to provide a quantitative foundation for this building block approach, so that new discoveries can be integrated into functional device concepts, quickly analyzed and critically evaluated. Through careful benchmarking against available theory and experiment we establish a set of elemental modules representing diverse materials and phenomena. These elemental modules can be integrated seamlessly to model composite devices involving both spintronic and nanomagnetic phenomena. We envision the library of modules to evolve both by incorporating new modules and by improving existing modules as the field progresses. The primary contribution of this paper is to establish the ground rules or protocols for a modular approach that can build a lasting bridge between materials scientists and circuit designers in the field of spintronics and nanomagnetics.

  3. Modular Approach to Spintronics

    PubMed Central

    Camsari, Kerem Yunus; Ganguly, Samiran; Datta, Supriyo

    2015-01-01

    There has been enormous progress in the last two decades, effectively combining spintronics and magnetics into a powerful force that is shaping the field of memory devices. New materials and phenomena continue to be discovered at an impressive rate, providing an ever-increasing set of building blocks that could be exploited in designing transistor-like functional devices of the future. The objective of this paper is to provide a quantitative foundation for this building block approach, so that new discoveries can be integrated into functional device concepts, quickly analyzed and critically evaluated. Through careful benchmarking against available theory and experiment we establish a set of elemental modules representing diverse materials and phenomena. These elemental modules can be integrated seamlessly to model composite devices involving both spintronic and nanomagnetic phenomena. We envision the library of modules to evolve both by incorporating new modules and by improving existing modules as the field progresses. The primary contribution of this paper is to establish the ground rules or protocols for a modular approach that can build a lasting bridge between materials scientists and circuit designers in the field of spintronics and nanomagnetics. PMID:26066079

  4. Approaching the new reality

    NASA Astrophysics Data System (ADS)

    Diaz, Al V.

    I'm very pleased to be here and to have this opportunity to discuss with you what I view as the current challenges in space science. Today, NASA finds itself at a major crossroads. We are in the process of moving from one era in our existence into another. As we continue to launch important science missions, we are simultaneously changing the way we do business, in a very fundamental way. We are again focusing on more frequent access to space through smaller, less costly missions. We are again focusing on NASA's role as a source of technological advancement within the U.S. economy. And we are returning to the leaner, more flexible approach to managing our projects. In short, NASA has embarked on a new journey, and a challenging journey it will be.

  5. An environmental approach

    SciTech Connect

    Geerling, C.

    1996-11-01

    The Shell Petroleum Development Company is operating in southern Nigeria in the delta of the Niger River. This delta covers an area of 70,000 square km of coastal ridge barriers, mangroves, freshwater swamp forest and lowland rain forests. Over the past decades considerable change has occurred through coastal zone modifications, upstream urban and hydrological infrastructure, deforestation, agriculture, fisheries, industrial development, oil operations, as well as demographic changes. The problems associated with these changes are: (1) over-exploitation of renewable natural resources and breakdown of traditional management structures; (2) impacts from industry, such as pollution and physical changes; and (3) a perception of a lack of social and economic equity. This paper describes approaches to help counteract these problems.

  6. [Hypercholesterolemia: a therapeutic approach].

    PubMed

    Moráis López, A; Lama More, R A; Dalmau Serra, J

    2009-05-01

    High blood cholesterol levels represent an important cardiovascular risk factor. Hypercholesterolemia is defined as levels of total cholesterol and low-density lipoprotein cholesterol above the 95th percentile for age and gender. For the paediatric population, selective screening is recommended in children older than 2 years who are overweight, have a family history of early cardiovascular disease, or whose parents have high cholesterol levels. The initial therapeutic approach includes diet therapy, appropriate physical activity and healthy lifestyle changes. Drug treatment should be considered in children from the age of 10 who, after having followed appropriate diet recommendations, still have very high LDL-cholesterol levels or moderately high levels with concomitant risk factors. In the case of extremely high LDL-cholesterol levels, drug treatment should be taken into consideration at earlier ages (8 years old). A modest response is usually observed with bile acid-binding resins. Statins can be considered first-choice drugs, once evidence of their efficacy and safety has been shown.

  7. Approaching the new reality

    NASA Technical Reports Server (NTRS)

    Diaz, Al V.

    1993-01-01

    I'm very pleased to be here and to have this opportunity to discuss with you what I view as the current challenges in space science. Today, NASA finds itself at a major crossroads. We are in the process of moving from one era in our existence into another. As we continue to launch important science missions, we are simultaneously changing the way we do business, in a very fundamental way. We are again focusing on more frequent access to space through smaller, less costly missions. We are again focusing on NASA's role as a source of technological advancement within the U.S. economy. And we are returning to the leaner, more flexible approach to managing our projects. In short, NASA has embarked on a new journey, and a challenging journey it will be.

  8. Coordinated Parallel Runway Approaches

    NASA Technical Reports Server (NTRS)

    Koczo, Steve

    1996-01-01

    The current air traffic environment in airport terminal areas experiences substantial delays when weather conditions deteriorate to Instrument Meteorological Conditions (IMC). Expected future increases in air traffic will put additional pressures on the National Airspace System (NAS) and will further compound the high costs associated with airport delays. To address this problem, NASA has embarked on a program to address Terminal Area Productivity (TAP). The goals of the TAP program are to provide increased efficiencies in air traffic during the approach, landing, and surface operations in low-visibility conditions. The ultimate goal is to achieve efficiencies of terminal area flight operations commensurate with Visual Meteorological Conditions (VMC) at current or improved levels of safety.

  9. Endoscopic approach to achalasia

    PubMed Central

    Müller, Michaela; Eckardt, Alexander J; Wehrmann, Till

    2013-01-01

    Achalasia is a primary esophageal motor disorder. The etiology is still unknown and therefore all treatment options are strictly palliative, with the intention to weaken the lower esophageal sphincter (LES). Currently established endoscopic therapeutic options include pneumatic dilation (PD) and botulinum toxin injection. Both treatment approaches have an excellent symptomatic short term effect, and lead to a reduction of LES pressure. However, the long term success of botulinum toxin (BT) injection is poor, with symptom recurrence in more than 50% of the patients after 12 mo and in nearly 100% of the patients after 24 mo, which commonly requires repeat injections. In contrast, after a single PD 40%-60% of the patients remain asymptomatic for ≥ 10 years. Repeated on demand PD might become necessary and long term remission can be achieved with this approach in up to 90% of these patients. The main positive predictors for a symptomatic response to PD are an age > 40 years, a LES-pressure reduction to < 15 mmHg and/or an improved radiological esophageal clearance post-PD. However PD has a significant risk for esophageal perforation, which occurs in about 2%-3% of cases. In randomized, controlled studies BT injection was inferior to PD and surgical cardiomyotomy, whereas the efficacy of PD, in patients > 40 years, was nearly equivalent to surgery. A promising new technique might be peroral endoscopic myotomy, although long term results are needed and practicability as well as safety issues must be considered. Treatment with a temporary self expanding stent has been reported with favorable outcomes, but the data are all from one study group and must be confirmed by others before definite recommendations can be made. In addition to its use as a therapeutic tool, endoscopy also plays an important role in the diagnosis and surveillance of patients with achalasia. PMID:23951393

  10. Approaching attometer laser vibrometry

    NASA Astrophysics Data System (ADS)

    Rembe, C.; Kadner, L.; Giesen, M.

    2016-10-01

    The heterodyne two-beam interferometer has been proven to be the optimal solution for laser-Doppler vibrometry (LDV) regarding accuracy and signal robustness. The theoretical resolution limit for a two-beam interferometer of laser class 3R (up to 5 mW visible measurement-light) is in the regime of a few femtometer per square-root Hertz and well suited to study vibrations in microstructures. However, some new applications of radio-frequency microelectromechanical (RF-MEM) resonators, nanostructures, and surface-nano-defect detection require resolutions beyond that limit. The resolution depends only on the photodetector noise and the sensor sensitivity to specimen displacements. The noise is already defined in present systems by the quantum nature of light for a properly designed optical sensor, and more light would lead to an unacceptable influence like heating of the tiny specimen. Noise can only be improved by squeezed-light techniques, which require a negligible loss of measurement light, which is impossible to realize for almost all technical measurement tasks. Thus, improving the sensitivity is the only path which could make attometer laser vibrometry possible. Decreasing the measurement wavelength would increase the sensitivity but would also increase the photon shot noise. In this paper, we discuss an approach to increase the sensitivity by assembling an additional mirror between interferometer and specimen to form an optical cavity. A detailed theoretical analysis of this setup is presented and we derive the resolution limit, discuss the main contributions to the uncertainty budget, and show a first experiment proving the sensitivity and resolution improvement of our approach.
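
    A back-of-envelope sketch of the scaling argument above (our reading, with factors of order unity and detection-scheme details omitted; N is our notation, not the authors'): in a two-beam interferometer a specimen displacement x produces an interferometric phase shift

      \Delta\varphi = \frac{4\pi x}{\lambda},

    and if an added mirror forms a cavity in which the measurement beam reflects off the specimen N times before detection, the accumulated phase grows to

      \Delta\varphi \approx \frac{4\pi N x}{\lambda},

    while the photon shot noise of the detected light is unchanged, so the shot-noise-limited displacement resolution improves roughly as

      x_{\min} \sim \frac{\lambda}{4\pi N}\sqrt{\frac{2\hbar\omega B}{\eta P}},

    with detected power P, detection efficiency \eta, optical angular frequency \omega and measurement bandwidth B.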

  11. Investigational Approaches for Mesothelioma

    PubMed Central

    Surmont, Veerle F.; van Thiel, Eric R. E.; Vermaelen, Karim; van Meerbeeck, Jan P.

    2011-01-01

    Malignant pleural mesothelioma (MPM) is a rare, aggressive tumor with a poor prognosis. In view of the poor survival benefit from first-line chemotherapy and the lack of subsequent effective treatment options, there is a strong need for the development of more effective treatment approaches for patients with MPM. This review will provide a comprehensive state of the art of new investigational approaches for mesothelioma. In an introductory section, the etiology, epidemiology, natural history, and standard of care treatment for MPM will be discussed. This review provides an update on the major clinical trials that impact mesothelioma treatment, discusses the impact of novel therapeutics, and provides perspective on where clinical research in mesothelioma is moving. The evidence was collected by a systematic analysis of the literature (2000–2011) using the databases Medline (National Library of Medicine, USA), Embase (Elsevier, Netherlands), Cochrane Library (Great Britain), National Guideline Clearinghouse (USA), HTA Database (International Network of Agencies for Health Technology Assessment – INAHTA), NIH database (USA), International Pleural Mesothelioma Program – WHOLIS (WHO Database), with the following keywords and filters: mesothelioma, guidelines, treatment, surgery, chemotherapy, radiotherapy, review, investigational, drugs. Currently, different targeted therapies and biologicals are under investigation for MPM. It is important that molecular biology research first focus on mesothelioma-specific pathways and biomarkers in order to provide more effective treatment options for this disease. The use of array technology will certainly be a gain in the identification of new potential prognostic markers or biomarkers and important pathways in MPM pathogenesis. A central virtual mesothelioma tissue bank may well contribute to the ultimate goal of identifying druggable targets and developing personalized treatment for MPM patients.

  12. Approaching attometer laser vibrometry

    SciTech Connect

    Rembe, Christian; Kadner, Lisa; Giesen, Moritz

    2014-05-27

    The heterodyne two-beam interferometer has been proven to be the optimal solution for laser-Doppler vibrometry regarding accuracy and signal robustness. The theoretical resolution limit for a two-beam interferometer of laser class 3R (up to 5 mW visible measurement-light) is in the regime of a few femtometer per square-root Hertz and well suited to study vibrations in microstructures. However, some new applications of RF-MEM resonators, nanostructures, and surface-nano-defect detection require resolutions beyond that limit. The resolution depends only on the noise and the sensor sensitivity to specimen displacements. The noise is already defined in today's systems by the quantum nature of light for a properly designed optical sensor, and more light would lead to an unacceptable influence like heating of a very tiny structure. Thus, noise can only be improved by squeezed-light techniques, which require a negligible loss of measurement light, which is impossible for almost all technical measurement tasks. Thus, improving the sensitivity is the only possible path which could make attometer laser vibrometry possible. Decreasing the measurement wavelength would increase the sensitivity but would also increase the photon shot noise. In this paper, we discuss an approach to increase the sensitivity by assembling an additional mirror between interferometer and specimen to form an optical cavity. A detailed theoretical analysis of this setup is presented and we derive the resolution limit, discuss the main contributions to the uncertainty budget, and show a first experiment proving the sensitivity amplification of our approach.

  13. Television Criticism: A Multifarious Approach.

    ERIC Educational Resources Information Center

    Oseguera, A. Anthony

    Recognizing the need for a multifarious approach to television, this paper provides the reader with the following multidimensional approaches to television criticism: rhetorical, dramatic, literary, cinematic, content analysis, myth, linguistics, semiotics, phenomenalism, phenomenology, interpersonal communication, public relations, image,…

  14. [Three approaches to culpability. 1].

    PubMed

    Guyot-Gans, F

    1995-11-01

    In the course of our psychiatric practice, we have consistently noted the important role played by culpability as a symptom, but also its presence in our psychic functioning apart from any pathological decompensation. This double existence guided our research: we considered culpability through three different approaches, namely, in this first part, a philosophical approach and a sociological approach (the psychoanalytical approach will be examined in the second part). We will particularly insist here on the cultural dimension of culpability and on its dynamic function in social relations.

  15. Approaches to Multicultural Curriculum Reform.

    ERIC Educational Resources Information Center

    Banks, James A.

    1990-01-01

    Discusses the pros and cons of the contributions of ethnic additive, transformation, decision-making, and social action approaches to multicultural curriculum development. Suggests that movement from a mainstream-centric approach to social action approach is gradual and cumulative. (GG)

  16. Adaptive neuro-fuzzy inference systems with k-fold cross-validation for energy expenditure predictions based on heart rate.

    PubMed

    Kolus, Ahmet; Imbeau, Daniel; Dubé, Philippe-Antoine; Dubeau, Denise

    2015-09-01

    This paper presents a new model based on adaptive neuro-fuzzy inference systems (ANFIS) to predict oxygen consumption (V˙O2) from easily measured variables. The ANFIS prediction model consists of three ANFIS modules for estimating the Flex-HR parameters. Each module was developed based on clustering a training set of data samples relevant to that module and then the ANFIS prediction model was tested against a validation data set. Fifty-eight participants performed the Meyer and Flenghi step-test, during which heart rate (HR) and V˙O2 were measured. Results indicated no significant difference between observed and estimated Flex-HR parameters and between measured and estimated V˙O2 in the overall HR range, and separately in different HR ranges. The ANFIS prediction model (MAE = 3 ml kg(-1) min(-1)) demonstrated better performance than Rennie et al.'s (MAE = 7 ml kg(-1) min(-1)) and Keytel et al.'s (MAE = 6 ml kg(-1) min(-1)) models, and comparable performance with the standard Flex-HR method (MAE = 2.3 ml kg(-1) min(-1)) throughout the HR range. The ANFIS model thus provides practitioners with a practical, cost- and time-efficient method for V˙O2 estimation without the need for individual calibration.
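
    The figure of merit used above, mean absolute error of predicted V˙O2 over cross-validation folds, can be computed generically as follows (synthetic HR-V˙O2 data and an off-the-shelf regressor stand in for the ANFIS modules):

      # k-fold mean absolute error of an HR -> VO2 regressor
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import KFold, cross_val_predict

      rng = np.random.default_rng(6)
      hr = rng.uniform(70, 170, size=(500, 1))              # beats per minute
      vo2 = 0.25 * (hr[:, 0] - 60) + rng.normal(0, 3, 500)  # ml/kg/min, toy curve

      pred = cross_val_predict(RandomForestRegressor(random_state=0), hr, vo2,
                               cv=KFold(n_splits=10, shuffle=True, random_state=0))
      mae = np.abs(pred - vo2).mean()
      print(f"10-fold MAE: {mae:.1f} ml kg^-1 min^-1")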

  17. Cross-validation Methodology between Ground and GPM Satellite-based Radar Rainfall Product over Dallas-Fort Worth (DFW) Metroplex

    NASA Astrophysics Data System (ADS)

    Chen, H.; Chandrasekar, V.; Biswas, S.

    2015-12-01

    Over the past two decades, a large number of rainfall products have been developed based on satellite, radar, and/or rain gauge observations. However, producing optimal rainfall estimation for a given region is still challenging due to the space-time variability of rainfall at many scales and the spatial and temporal sampling differences of different rainfall instruments. In order to produce high-resolution rainfall products for urban flash flood applications and improve the weather sensing capability in urban environments, the Center for Collaborative Adaptive Sensing of the Atmosphere (CASA), in collaboration with the National Weather Service (NWS) and the North Central Texas Council of Governments (NCTCOG), has developed an urban radar remote sensing network in the DFW Metroplex. DFW is the largest inland metropolitan area in the U.S. and experiences a wide range of natural weather hazards such as flash floods and hailstorms. The DFW urban remote sensing network, centered on the deployment of eight dual-polarization X-band radars and a NWS WSR-88DP radar, is expected to provide impacts-based warnings and forecasts for the benefit of public safety and the economy. High-resolution quantitative precipitation estimation (QPE) is one of the major goals of the development of this urban test bed. In addition to ground radar-based rainfall estimation, satellite-based rainfall products for this area are also of interest for this study. A typical example is the rainfall rate product produced by the Dual-frequency Precipitation Radar (DPR) onboard the Global Precipitation Measurement (GPM) Core Observatory satellite. Therefore, cross-comparison between ground- and space-based rainfall estimation is critical to building an optimal regional rainfall system, which can take advantage of the sampling differences of different sensors. This paper presents the real-time high-resolution QPE system developed for the DFW urban radar network, which is based upon the combination of S-band WSR-88DP and X-band CASA radars. In addition, we focus on the cross-comparison between rainfall estimation from this ground-based QPE system and GPM rainfall products. The observations collected during the GPM satellite overpasses over the DFW area will be used extensively in this study. Data alignment for better comparison will also be presented.
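
    Once ground and satellite estimates are collocated in space and time, the cross-comparison reduces to a few summary statistics; a minimal sketch with synthetic rain-rate pairs standing in for matched WSR-88DP/CASA and GPM DPR fields:

      # Bias, RMSE and correlation between collocated rain-rate estimates
      import numpy as np

      rng = np.random.default_rng(7)
      ground = rng.gamma(2.0, 3.0, 1000)                    # mm/h, radar-network QPE (toy)
      satellite = 1.1 * ground + rng.normal(0, 1.5, 1000)   # mm/h, DPR overpass (toy)

      bias = np.mean(satellite - ground)
      rmse = np.sqrt(np.mean((satellite - ground) ** 2))
      corr = np.corrcoef(ground, satellite)[0, 1]
      print(f"bias={bias:.2f} mm/h, RMSE={rmse:.2f} mm/h, r={corr:.2f}")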

  18. Cross-Validation of the Spanish HP-Version of the Jefferson Scale of Empathy Confirmed with Some Cross-Cultural Differences

    PubMed Central

    Alcorta-Garza, Adelina; San-Martín, Montserrat; Delgado-Bolton, Roberto; Soler-González, Jorge; Roig, Helena; Vivanco, Luis

    2016-01-01

    Context: Medical educators agree that empathy is essential for physicians' professionalism. The Health Professional Version of the Jefferson Scale of Empathy (JSE-HP) was developed in response to a need for a psychometrically sound instrument to measure empathy in the context of patient care. Although extensive support for its validity and reliability is available, the authors recognize the necessity to examine psychometrics of the JSE-HP in different socio-cultural contexts to assure the psychometric soundness of this instrument. The first aim of this study was to confirm its psychometric properties in the cross-cultural context of Spain and Latin American countries. The second aim was to measure the influence of social and cultural factors on the development of medical empathy in health practitioners. Methods: The original English version of the JSE-HP was translated into International Spanish using back-translation procedures. The Spanish version of the JSE-HP was administered to 896 physicians from Spain and 13 Latin American countries. Data were subjected to exploratory factor analysis using principal component analysis (PCA) with oblique rotation (promax) to allow for correlation among the resulting factors, followed by a second analysis, using confirmatory factor analysis (CFA). Two theoretical models, one based on the English JSE-HP and another on the first Spanish student version of the JSE (JSE-S), were tested. Demographic variables were compared using group comparisons. Results: A total of 715 (80%) surveys were returned fully completed. Cronbach's alpha coefficient of the JSE for the entire sample was 0.84. The psychometric properties of the Spanish JSE-HP matched those of the original English JSE-HP. However, the Spanish JSE-S model proved more appropriate than the original English model for the sample in this study. Group comparisons among physicians classified by gender, medical specialties, cultural and cross-cultural backgrounds yielded statistically significant differences (p < 0.001). Conclusions: The findings support the underlying factor structure of the Jefferson Scale of Empathy (JSE). The results reveal the importance of culture in the development of medical empathy. The cross-cultural differences described could open gates for further lines of medical education research. PMID:27462282
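
    The internal-consistency statistic quoted above (Cronbach's alpha = 0.84) follows directly from the item variances and the total-score variance; a sketch with random placeholder data rather than the actual JSE-HP responses:

      # Cronbach's alpha from an (n_respondents x k_items) score matrix
      import numpy as np

      def cronbach_alpha(scores):
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1).sum()
          total_var = scores.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1 - item_vars / total_var)

      rng = np.random.default_rng(8)
      latent = rng.normal(size=(715, 1))                       # shared "empathy" factor
      items = latent + rng.normal(scale=0.8, size=(715, 20))   # 20 Likert-style items
      print(f"alpha = {cronbach_alpha(items):.2f}")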

  19. Cross-validation of IASI/MetOp derived tropospheric δD with TES and ground-based FTIR observations

    NASA Astrophysics Data System (ADS)

    Lacour, J.-L.; Clarisse, L.; Worden, J.; Schneider, M.; Barthlott, S.; Hase, F.; Risi, C.; Clerbaux, C.; Hurtmans, D.; Coheur, P.-F.

    2014-11-01

    The Infrared Atmospheric Sounding Interferometer (IASI) flying on-board MetOpA and MetOpB is able to capture fine isotopic variations of the HDO to H2O ratio (δD) in the troposphere. Such observations at the high spatio-temporal resolution of the sounder are of great interest to improve our understanding of the mechanisms controlling humidity in the troposphere. In this study we aim to empirically assess the validity of our error estimation previously evaluated theoretically. To achieve this, we compare IASI δD retrieved profiles with other available profiles of δD, from the TES infrared sounder onboard AURA and from three ground-based FTIR stations produced within the MUSICA project: the NDACC (Network for the Detection of Atmospheric Composition Change) sites Kiruna and Izaña, and the TCCON site Karlsruhe, which in addition to near-infrared TCCON spectra also records mid-infrared spectra. We describe the achievable level of agreement between the different retrievals and show that these theoretical errors are in good agreement with empirical differences. The comparisons are made at different locations from tropical to Arctic latitudes, above sea and above land. Generally IASI and TES are similarly sensitive to δD in the free troposphere, which allows us to compare their measurements directly. At tropical latitudes where IASI's sensitivity is lower than that of TES, we show that the agreement improves when taking into account the sensitivity of IASI in the TES retrieval. For the IASI-FTIR comparison only direct comparisons are performed because of similar sensitivities. We identify a quasi-negligible bias (-3‰) in the free troposphere between IASI-retrieved δD and the bias-corrected TES retrievals, but an important bias with the ground-based FTIR, reaching -47‰. We also suggest that model-satellite observation comparisons could be optimized with IASI thanks to its high spatial and temporal sampling.
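
    The step of "taking into account the sensitivity of IASI in the TES retrieval" is, in the usual optimal-estimation convention, a smoothing of one profile with the other sounder's averaging kernel. The Rodgers-style formula is standard; the profiles and kernel below are synthetic placeholders:

      # Smooth a "true" profile with a retrieval's prior and averaging kernel:
      # x_s = x_a + A (x - x_a)
      import numpy as np

      nlev = 15
      x_true = -150.0 + 5.0 * np.arange(nlev)   # toy deltaD profile (permil)
      x_a = np.full(nlev, -120.0)               # retrieval prior
      A = 0.6 * np.eye(nlev)                    # simplified averaging kernel

      x_smoothed = x_a + A @ (x_true - x_a)     # what the sounder would report
      print(np.round(x_smoothed[:5], 1))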

  20. Cross-validation of IASI/MetOp derived tropospheric δD with TES and ground-based FTIR observations

    NASA Astrophysics Data System (ADS)

    Lacour, J.-L.; Clarisse, L.; Worden, J.; Schneider, M.; Barthlott, S.; Hase, F.; Risi, C.; Clerbaux, C.; Hurtmans, D.; Coheur, P.-F.

    2015-03-01

    The Infrared Atmospheric Sounding Interferometer (IASI) flying onboard MetOp-A and MetOp-B is able to capture fine isotopic variations of the HDO to H2O ratio (δD) in the troposphere. Such observations at the high spatio-temporal resolution of the sounder are of great interest for improving our understanding of the mechanisms controlling humidity in the troposphere. In this study we aim to empirically assess the validity of our error estimation, previously evaluated theoretically. To achieve this, we compare IASI δD retrieved profiles with other available δD profiles: from the TES infrared sounder onboard AURA, and from three ground-based FTIR stations produced within the MUSICA project: the NDACC (Network for the Detection of Atmospheric Composition Change) sites Kiruna and Izaña, and the TCCON site Karlsruhe, which in addition to near-infrared TCCON spectra also records mid-infrared spectra. We describe the achievable level of agreement between the different retrievals and show that the theoretical errors are in good agreement with the empirical differences. The comparisons are made at different locations from tropical to Arctic latitudes, above sea and above land. Generally, IASI and TES are similarly sensitive to δD in the free troposphere, which allows one to compare their measurements directly. At tropical latitudes, where IASI's sensitivity is lower than that of TES, we show that the agreement improves when the sensitivity of IASI is taken into account in the TES retrieval. For the IASI-FTIR comparison, only direct comparisons are performed because the sensitivity profiles of the two observing systems do not allow their differences in sensitivity to be taken into account. We identify a quasi-negligible bias (-3‰) in the free troposphere between IASI-retrieved δD and the bias-corrected TES retrievals, but an important bias with the ground-based FTIR, reaching -47‰. We also suggest that model-satellite observation comparisons could be optimized with IASI thanks to its high spatial and temporal sampling.
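
    Accounting for one instrument's sensitivity when comparing retrievals is conventionally done with averaging-kernel smoothing (Rodgers-style); the sketch below illustrates the idea under simplified assumptions: the profiles, prior and kernel are hypothetical placeholders, and the actual IASI/TES processing may differ.

    ```python
    import numpy as np

    def smooth_profile(x: np.ndarray, x_prior: np.ndarray, A: np.ndarray) -> np.ndarray:
        """What a retrieval with averaging kernel A and prior x_prior would
        report for the profile x (Rodgers smoothing)."""
        return x_prior + A @ (x - x_prior)

    # Hypothetical 5-level delta-D values (per mil) and an IASI-like kernel
    x_tes = np.array([-120.0, -150.0, -190.0, -240.0, -300.0])    # TES-style profile
    x_prior = np.array([-130.0, -160.0, -200.0, -250.0, -310.0])  # IASI prior
    A_iasi = 0.6 * np.eye(5)                                      # placeholder kernel

    # Degrade the TES profile to IASI's vertical sensitivity before differencing
    print(smooth_profile(x_tes, x_prior, A_iasi))
    ```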

  1. Near surface geotechnical and geophysical data cross validated for site characterization applications. The cases of selected accelerometric stations in Crete island (Greece)

    NASA Astrophysics Data System (ADS)

    Loupasakis, Constantinos; Tsangaratos, Paraskevas; Rozos, Dimitrios; Rondoyianni, Theodora; Vafidis, Antonis; Steiakakis, Emanouil; Agioutantis, Zacharias; Savvaidis, Alexandros; Soupios, Pantelis; Papadopoulos, Ioannis; Papadopoulos, Nikos; Sarris, Apostolos; Mangriotis, Maria-Dafni; Dikmen, Unal

    2015-04-01

    The near surface ground conditions are highly important for the design of civil constructions. These conditions primarily determine the ability of the foundation formations to bear loads, the stress-strain relations and the corresponding deformations, as well as the soil amplification and corresponding peak ground motion in case of dynamic loading. The static and dynamic geotechnical parameters, as well as the ground type/soil category, can be determined by combining geotechnical and geophysical methods, such as engineering geological surface mapping, geotechnical drilling, in situ and laboratory testing and geophysical investigations. The above mentioned methods were combined for the site characterization of selected sites of the Hellenic Accelerometric Network (HAN) in the area of Crete Island. The combination of the geotechnical and geophysical methods at thirteen (13) sites provided sufficient information about their limitations, setting up the minimum test requirements in relation to the type of the geological formations. The reduced accuracy of surface mapping in urban sites, the uncertainties introduced by geophysical surveys in sites with complex geology and the 1-D data provided by the geotechnical drills are some of the factors affecting the right order and the quantity of the necessary investigation methods. This study presents the gradual improvement in the accuracy of the site characterization data with respect to the applied investigation techniques, through characteristic examples from the total of thirteen sites. As an example of this gradual improvement of the knowledge of ground conditions, the case of the AGN1 strong motion station, located in Agios Nikolaos city (Eastern Crete), is briefly presented. According to the medium-scale geological map of IGME, the station was supposed to be founded on limestone. Detailed geological mapping revealed that a few meters of loose alluvial deposits occupy the area, expected to lie over the Neogene marly formations and the Mesozoic limestone identified in the surrounding area. This changes the ground type to E instead of A, based on the EC8 classification. According to the geophysical survey, the Neogene formations extend down several meters and the mean Vs30 is 476 m/s, raising the rank of the ground type to B. Finally, the geotechnical drill revealed that the loose alluvial deposits extend down to 13 m and contain two clearly identified layers of liquefiable loose sand. Below the alluvial deposits, a thin layer (1.5 m thick) of Neogene marly formations and the karstified limestone were located, as expected. It was thus proved that the ground type category at the site is S2, establishing the geotechnical drill as the determinant investigation technique for this site. Besides the case described above, all selected examples sufficiently present the ability, the limitations and the right order of the investigation methods aimed at site characterization. This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
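
    For orientation, the EC8 ground types cited above are keyed largely to Vs30; the sketch below encodes only the Vs30 thresholds of EN 1998-1 (Table 3.1) and deliberately ignores the stratigraphic, NSPT and cu criteria, as well as the special classes S1/S2 that ultimately applied at this site.

    ```python
    def ec8_ground_type_from_vs30(vs30_m_s: float) -> str:
        """Rough EC8 (EN 1998-1, Table 3.1) ground type from Vs30 alone.
        The full classification also uses stratigraphy, NSPT and cu, plus
        the special classes S1/S2 (e.g. liquefiable soils, as at AGN1)."""
        if vs30_m_s > 800:
            return "A"  # rock or rock-like formation
        if vs30_m_s > 360:
            return "B"  # very dense sand/gravel or very stiff clay
        if vs30_m_s > 180:
            return "C"  # dense or medium-dense sand/gravel, stiff clay
        return "D"      # loose-to-medium cohesionless or soft cohesive soil

    print(ec8_ground_type_from_vs30(476))  # -> "B", the rank suggested by Vs30 at AGN1
    ```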

  2. Concurrent cross-validation of the Self-Appraisal Questionnaire: a tool for assessing violent and nonviolent recidivism and institutional adjustment on a sample of North Carolina offenders.

    PubMed

    Loza, Wagdy; Conley, Michael; Warren, Birchie

    2004-02-01

    The aim of this study was to determine whether the Self-Appraisal Questionnaire (SAQ), a tool that was found to be reliable and valid for assessing violent and nonviolent recidivism and institutional adjustment for Canadian offenders, would also be valid for the same purposes with a demographically different population of North Carolina offenders. The internal consistency alphas and the correlations among SAQ total and subscale scores were high. Offenders with high SAQ total scores had significantly more violent offenses, a greater total number of past offenses, higher numbers of past arrests, and more institutional infractions than those with low SAQ scores. There were no significant differences between the responses of the African American and Caucasian offenders on the SAQ scales. These results support previous findings regarding the reliability and validity of the SAQ for assessing recidivism and institutional adjustment and suggest that the SAQ could be used with diverse populations. PMID:14969119

  3. Direct spectral analysis of tea samples using 266 nm UV pulsed laser-induced breakdown spectroscopy and cross validation of LIBS results with ICP-MS.

    PubMed

    Gondal, M A; Habibullah, Y B; Baig, Umair; Oloore, L E

    2016-05-15

    Tea is one of the most common and popular beverages, spanning a vast array of cultures all over the world. The main nutritional benefits of drinking tea are its antioxidant properties, presumed protection against certain cancers, inhibition of inflammation and possible protective effects against diabetes. A laser-induced breakdown spectroscopy (LIBS) system was assembled as a powerful tool for qualitative and quantitative analysis of various brands of tea samples using a 266 nm pulsed UV laser. LIBS spectra for six brands of tea samples were recorded in the wavelength range of 200-900 nm, and all elements present in the tea samples were identified. The major toxic elements detected in several brands of tea samples were bromine and chromium, along with minerals such as iron, calcium, potassium and silicon. The spectral assignment was conducted prior to the determination of the concentration of each element. For quantitative analysis, calibration curves were drawn for each element using standard samples prepared at known concentrations in the tea matrix. The plasma parameters (electron temperature and electron density) were also determined prior to the spectroscopic analysis of the tea samples. The concentrations of iron, chromium, potassium, bromine, copper, silicon and calcium detected in the tea samples were 378-656, 96-124, 1421-6785, 99-1476, 17-36, 2-11 and 92-130 mg L^-1, respectively. The limits of detection estimated for Fe, Cr, K, Br, Cu, Si and Ca in tea samples were 22, 12, 14, 11, 6, 1 and 12 mg L^-1, respectively. To further confirm the accuracy of our LIBS results, we determined the concentration of each element present in the tea samples using a standard analytical technique, ICP-MS. The concentrations detected with our LIBS system are in excellent agreement with the ICP-MS results. The system assembled for spectral analysis in this work could be highly applicable for testing the quality and purity of food and pharmaceutical products. PMID:26992530
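
    The quantitative step described above (calibration curves from standards in the tea matrix, then detection limits) follows a standard pattern; the sketch below uses hypothetical intensities and the common 3-sigma-of-blank/slope estimate of the limit of detection, which may differ from the authors' exact procedure.

    ```python
    import numpy as np

    # Hypothetical standards for one element: concentration (mg/L) vs line intensity
    conc = np.array([0.0, 100.0, 200.0, 400.0, 600.0])
    intensity = np.array([40.0, 520.0, 1010.0, 1990.0, 3020.0])

    slope, intercept = np.polyfit(conc, intensity, 1)  # linear calibration curve

    # Unknown sample: invert the calibration
    i_sample = 1500.0
    c_sample = (i_sample - intercept) / slope
    print(f"estimated concentration: {c_sample:.0f} mg/L")

    # Common LOD estimate: 3 x std. dev. of blank replicates / slope
    blank_replicates = np.array([38.0, 42.0, 41.0, 39.0, 40.0])
    lod = 3 * blank_replicates.std(ddof=1) / slope
    print(f"estimated LOD: {lod:.1f} mg/L")
    ```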

  4. Adaptive neuro-fuzzy inference systems with k-fold cross-validation for energy expenditure predictions based on heart rate.

    PubMed

    Kolus, Ahmet; Imbeau, Daniel; Dubé, Philippe-Antoine; Dubeau, Denise

    2015-09-01

    This paper presents a new model based on adaptive neuro-fuzzy inference systems (ANFIS) to predict oxygen consumption (V˙O2) from easily measured variables. The ANFIS prediction model consists of three ANFIS modules for estimating the Flex-HR parameters. Each module was developed based on clustering a training set of data samples relevant to that module, and the ANFIS prediction model was then tested against a validation data set. Fifty-eight participants performed the Meyer and Flenghi step-test, during which heart rate (HR) and V˙O2 were measured. Results indicated no significant difference between observed and estimated Flex-HR parameters and between measured and estimated V˙O2 in the overall HR range, and separately in different HR ranges. The ANFIS prediction model (MAE = 3 ml kg^-1 min^-1) demonstrated better performance than Rennie et al.'s (MAE = 7 ml kg^-1 min^-1) and Keytel et al.'s (MAE = 6 ml kg^-1 min^-1) models, and comparable performance with the standard Flex-HR method (MAE = 2.3 ml kg^-1 min^-1) throughout the HR range. The ANFIS model thus provides practitioners with a practical, cost- and time-efficient method for V˙O2 estimation without the need for individual calibration. PMID:25959320
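
    A k-fold protocol of the kind used to develop and test such a predictor can be sketched as follows; scikit-learn and synthetic data stand in for the HR/V˙O2 measurements, and a generic regressor replaces the ANFIS model purely for illustration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    hr = rng.uniform(60, 180, size=(200, 1))            # heart rate (bpm), synthetic
    vo2 = 0.25 * hr[:, 0] - 10 + rng.normal(0, 2, 200)  # VO2 (ml/kg/min), synthetic

    maes = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(hr):
        model = RandomForestRegressor(random_state=0).fit(hr[train_idx], vo2[train_idx])
        maes.append(mean_absolute_error(vo2[test_idx], model.predict(hr[test_idx])))

    print(f"5-fold CV MAE: {np.mean(maes):.2f} ml/kg/min")
    ```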

  5. RegulonDB v8.0: omics data sets, evolutionary conservation, regulatory phrases, cross-validated gold standards and more.

    PubMed

    Salgado, Heladia; Peralta-Gil, Martin; Gama-Castro, Socorro; Santos-Zavaleta, Alberto; Muñiz-Rascado, Luis; García-Sotelo, Jair S; Weiss, Verena; Solano-Lira, Hilda; Martínez-Flores, Irma; Medina-Rivera, Alejandra; Salgado-Osorio, Gerardo; Alquicira-Hernández, Shirley; Alquicira-Hernández, Kevin; López-Fuentes, Alejandra; Porrón-Sotelo, Liliana; Huerta, Araceli M; Bonavides-Martínez, César; Balderas-Martínez, Yalbi I; Pannier, Lucia; Olvera, Maricela; Labastida, Aurora; Jiménez-Jacinto, Verónica; Vega-Alvarado, Leticia; Del Moral-Chávez, Victor; Hernández-Alvarez, Alfredo; Morett, Enrique; Collado-Vides, Julio

    2013-01-01

    This article summarizes our progress with RegulonDB (http://regulondb.ccg.unam.mx/) during the past 2 years. We have kept up-to-date the knowledge from the published literature regarding transcriptional regulation in Escherichia coli K-12. We have maintained and expanded our curation efforts to improve the breadth and quality of the encoded experimental knowledge, and we have implemented criteria for the quality of our computational predictions. Regulatory phrases now provide high-level descriptions of regulatory regions. We expanded the assignment of quality to various sources of evidence, particularly for knowledge generated through high-throughput (HT) technology. Based on our analysis of the most relevant methods, we defined rules for determining the quality of evidence when multiple independent sources support an entry. With this latest release of RegulonDB, we present a new, highly reliable, larger collection of transcription start sites, a result of our experimental HT genome-wide efforts. These improvements, together with several novel enhancements (the tracks display, uploading format and curation guidelines), address the challenges of incorporating HT-generated knowledge into RegulonDB. Information on the evolutionary conservation of regulatory elements is also now available. Altogether, RegulonDB version 8.0 is a much better home for integrating knowledge on gene regulation from the sources of information currently available.

  6. Cross-validation of δ15N and FishBase estimates of fish trophic position in a Mediterranean lagoon: The importance of the isotopic baseline

    NASA Astrophysics Data System (ADS)

    Mancinelli, Giorgio; Vizzini, Salvatrice; Mazzola, Antonio; Maci, Stefano; Basset, Alberto

    2013-12-01

    FishBase, a relational database freely available on the Internet, is to date widely used as a source of quantitative information on the trophic position of marine fish species. Here, we compared FishBase estimates for an assemblage of 30 fish species sampled in a Mediterranean lagoon (Acquatina lagoon, SE Italy) with their trophic positions calculated using nitrogen stable isotopes.
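
    The isotope-based trophic positions were presumably obtained with the standard baseline formula (e.g. Post 2002); a minimal sketch with hypothetical values:

    ```python
    def trophic_position(d15n_consumer: float, d15n_baseline: float,
                         lam: float = 2.0, enrichment: float = 3.4) -> float:
        """Baseline formula TP = lambda + (d15N_consumer - d15N_baseline) / Dn,
        with lambda the trophic level of the baseline organism (2 for a primary
        consumer) and Dn ~ 3.4 per mil enrichment per trophic level."""
        return lam + (d15n_consumer - d15n_baseline) / enrichment

    # Hypothetical fish and baseline signatures (per mil)
    print(trophic_position(12.5, 6.0))  # ~3.9
    ```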

  7. Improved cross validation of a static ubiquitin structure derived from high precision residual dipolar couplings measured in a drug-based liquid crystalline phase.

    PubMed

    Maltsev, Alexander S; Grishaev, Alexander; Roche, Julien; Zasloff, Michael; Bax, Ad

    2014-03-12

    The antibiotic squalamine forms a lyotropic liquid crystal at very low concentrations in water (0.3-3.5% w/v), which remains stable over a wide range of temperature (1-40 °C) and pH (4-8). Squalamine is positively charged, and comparison of the alignment of ubiquitin relative to 36 previously reported alignment conditions shows that it differs substantially from most of these, but is closest to liquid crystalline cetyl pyridinium bromide. High precision residual dipolar couplings (RDCs) measured for the backbone 1H-15N, 15N-13C', 1Hα-13Cα, and 13C'-13Cα one-bond interactions in the squalamine medium fit well to the static structural model previously derived from NMR data. Inclusion into the structure refinement procedure of these RDCs, together with 1H-15N and 1Hα-13Cα RDCs newly measured in Pf1, results in improved agreement between alignment-induced changes in 13C' chemical shift, 3JHNHα values, and 13Cα-13Cβ RDCs and corresponding values predicted by the structure, thereby validating the high quality of the single-conformer structural model. This result indicates that fitting of a single model to experimental data provides a better description of the average conformation than does averaging over previously reported NMR-derived ensemble representations. The latter can capture dynamic aspects of a protein, thus making the two representations valuable complements to one another. PMID:24568736

  8. Improved Cross Validation of a Static Ubiquitin Structure Derived from High Precision Residual Dipolar Couplings Measured in a Drug-Based Liquid Crystalline Phase

    PubMed Central

    2014-01-01

    The antibiotic squalamine forms a lyotropic liquid crystal at very low concentrations in water (0.3-3.5% w/v), which remains stable over a wide range of temperature (1-40 °C) and pH (4-8). Squalamine is positively charged, and comparison of the alignment of ubiquitin relative to 36 previously reported alignment conditions shows that it differs substantially from most of these, but is closest to liquid crystalline cetyl pyridinium bromide. High precision residual dipolar couplings (RDCs) measured for the backbone 1H-15N, 15N-13C′, 1Hα-13Cα, and 13C′-13Cα one-bond interactions in the squalamine medium fit well to the static structural model previously derived from NMR data. Inclusion into the structure refinement procedure of these RDCs, together with 1H-15N and 1Hα-13Cα RDCs newly measured in Pf1, results in improved agreement between alignment-induced changes in 13C′ chemical shift, 3JHNHα values, and 13Cα-13Cβ RDCs and corresponding values predicted by the structure, thereby validating the high quality of the single-conformer structural model. This result indicates that fitting of a single model to experimental data provides a better description of the average conformation than does averaging over previously reported NMR-derived ensemble representations. The latter can capture dynamic aspects of a protein, thus making the two representations valuable complements to one another. PMID:24568736
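
    Agreement between measured RDCs and values back-calculated from a structural model is conventionally summarized by a Q factor; a minimal sketch (the couplings below are hypothetical, and this is the generic Cornilescu-style definition rather than necessarily the exact statistic used in the paper):

    ```python
    import numpy as np

    def rdc_q_factor(d_obs: np.ndarray, d_calc: np.ndarray) -> float:
        """Q = rms(D_calc - D_obs) / rms(D_obs): lower means a better fit."""
        return float(np.sqrt(np.mean((d_calc - d_obs) ** 2) /
                             np.mean(d_obs ** 2)))

    # Hypothetical backbone RDCs (Hz): observed vs back-calculated from a model
    d_obs = np.array([10.2, -4.1, 7.8, -12.5, 3.3, 15.0])
    d_calc = np.array([9.8, -4.4, 8.1, -12.0, 3.6, 14.2])
    print(f"Q = {rdc_q_factor(d_obs, d_calc):.3f}")
    ```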

  9. Cross-validation and reliability of the line-drill test of anaerobic performance in basketball players 14-16 years.

    PubMed

    Carvalho, Humberto M; Coelho e Silva, Manuel J; Figueiredo, António J; Gonçalves, Carlos E; Castagna, Carlo; Philippaerts, Renaat M; Malina, Robert M

    2011-04-01

    This study evaluates the validity and reliability of the line-drill (LD) test of anaerobic performance in 76 male basketball players 14.0-16.0 years of age. The Wingate Anaerobic Test (WAnT) was used as the reference for anaerobic performance. Wingate Anaerobic Test and LD test were moderately correlated (0.39 and 0.43, p < 0.01). Estimated age at peak height velocity (APHV) was moderately, negatively, and significantly (p < 0.01) correlated with WAnT peak (r = -0.69) and mean power (r = -0.71); earlier-maturing players had greater anaerobic power. Training experience was not associated with anaerobic performance, but chronological age (CA) and estimated APHV were significant covariates of the LD test (p < 0.05). National players were better than local players on the LD test (p < 0.01) after controlling for CA and body size. Short-term reliability of the LD test (n = 12, 1-week interval) was good: technical error of measurement = 0.44 seconds (95% confidence interval [CI] 0.31-0.75 seconds), intraclass correlation coefficient = 0.91 (95% CI 0.68-0.97), and coefficient of variation = 1.4% (95% CI 1.0-2.3%). Although the relationship between the LD test and WAnT was moderate, the LD test effectively distinguished local- and national-level adolescent basketball players. In contrast to WAnT, the LD test was not influenced by estimated biological maturity status. Thus, the LD test may be suitable for field assessment of anaerobic performance of youth basketball players.
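
    The reliability statistics quoted above can be reproduced from test-retest data in a few lines; the sketch below uses synthetic times and the usual formulas (TEM as the standard deviation of the trial differences divided by the square root of 2, CV as TEM over the grand mean, and ICC(3,1) from two-way ANOVA mean squares; the authors' exact ICC model may differ).

    ```python
    import numpy as np

    # Synthetic line-drill times (s): 12 players x 2 trials, one week apart
    rng = np.random.default_rng(1)
    true_ability = rng.normal(30.0, 1.5, 12)
    x = np.column_stack([true_ability + rng.normal(0, 0.4, 12),
                         true_ability + rng.normal(0, 0.4, 12)])
    n, k = x.shape

    # Technical error of measurement and coefficient of variation
    diff = x[:, 1] - x[:, 0]
    tem = diff.std(ddof=1) / np.sqrt(2)
    cv = 100 * tem / x.mean()

    # ICC(3,1) from two-way ANOVA mean squares
    grand = x.mean()
    ssb = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ssc = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between trials
    sst = ((x - grand) ** 2).sum()
    msb = ssb / (n - 1)
    mse = (sst - ssb - ssc) / ((n - 1) * (k - 1))
    icc = (msb - mse) / (msb + (k - 1) * mse)

    print(f"TEM = {tem:.2f} s, CV = {cv:.1f}%, ICC(3,1) = {icc:.2f}")
    ```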

  10. Rehabilitation approaches to stroke.

    PubMed

    Aichner, F; Adelwöhrer, C; Haring, H P

    2002-01-01

    This article describes the state of the science in stroke rehabilitation, dealing with three main topics: (1) the general approach to stroke rehabilitation (stroke services and stroke units), (2) neurophysiological and pharmacological interventions (facilitation of brain repair mechanisms) and (3) experimental approaches (neuronal transplantation). Stroke rehabilitation is an active process beginning during acute hospitalisation, progressing to a systematic program of rehabilitation services and continuing after the individual returns to the community. There is world-wide consensus that stroke patients should be treated in specialised stroke units with specially trained medical and nursing staff, co-ordinated multidisciplinary rehabilitation and education programs for patients and their families. Stroke unit care has been shown to be associated with a long-term reduction of death and of the combined poor outcomes of death and dependency, independent of patients' age, sex, or variations in stroke unit organisation. No study has clearly shown to what extent the beneficial effect is due to specific rehabilitation strategies. New imaging studies in stroke patients indicate altered post-stroke activation patterns, which suggest some functional reorganisation. Reorganisation may be the principal process responsible for recovery after stroke. It is assumed that different post-ischaemic interventions like physiotherapy, occupational therapy, speech therapy, electrical stimulation, etc. facilitate such changes. Scientific evidence demonstrating the value of specific rehabilitation interventions after stroke is limited. Comparisons between different methods in current use have so far mostly failed to show that any particular physiotherapy, occupational therapy, speech therapy or stroke rehabilitation strategy is superior to another. Clinical data are strongly in favour of early mobilisation and training. Pharmacological interventions in animals revealed that norepinephrine

  11. Voyager Approaches Final Frontier

    NASA Technical Reports Server (NTRS)

    2003-01-01

    An artist's concept illustrates the positions of the Voyager spacecraft in relation to structures formed around our Sun by the solar wind. Also illustrated is the termination shock, a violent region the spacecraft must pass through before reaching the outer limits of the solar system. At the termination shock, the supersonic solar wind abruptly slows from an average speed of 400 kilometers per second to less than 100 kilometer per second (900,000 to less than 225,000 miles per hour). Beyond the termination shock is the solar system's final frontier, the heliosheath, a vast region where the turbulent and hot solar wind is compressed as it presses outward against the interstellar wind that is beyond the heliopause. A bow shock likely forms as the interstellar wind approaches and is deflected around the heliosphere, forcing it into a teardrop-shaped structure with a long, comet-like tail.

    The exact location of the termination shock is unknown, and it was originally thought to be closer to the Sun than Voyager 1 currently is. As Voyager 1 cruised ever farther from the Sun, it confirmed that all the planets are inside an immense bubble blown by the solar wind and that the termination shock was much more distant than expected.

  12. Halitosis: the multidisciplinary approach

    PubMed Central

    Bollen, Curd ML; Beikler, Thomas

    2012-01-01

    Halitosis, bad breath and oral malodour are all synonyms for the same pathology. Halitosis has a large social and economic impact. For the majority of patients suffering from bad breath, it causes embarrassment and affects their social communication and life. Moreover, halitosis can be indicative of underlying diseases. Only a limited number of scientific publications were presented in this field until 1995. Since then, a large amount of research has been published, often lacking evidence. In general, intraoral conditions, like insufficient dental hygiene, periodontitis or tongue coating, are considered to be the most important cause (85%) of halitosis. Therefore, dentists and periodontologists are the first-line professionals to be confronted with this problem. They should be well aware of the origin, the detection and especially the treatment of this pathology. In addition, ear–nose–throat-associated (10%) or gastrointestinal/endocrinological (5%) disorders may contribute to the problem. In the case of halitophobia, psychiatric or psychological problems may be present. Bad breath needs a multidisciplinary team approach: dentists, periodontologists, specialists in family medicine, ear–nose–throat surgeons, internal medicine and psychiatry need to be updated in this field, which is still surrounded by a large taboo. Multidisciplinary bad breath clinics offer the best environment to examine and treat this pathology, which affects around 25% of the whole population. This article describes the origin, detection and treatment of halitosis, regarded from the different etiological origins. PMID:22722640

  13. Panniculitides, an algorithmic approach.

    PubMed

    Zelger, B

    2013-08-01

    The issue of inflammatory diseases of the subcutis and its mimicries is generally considered a difficult field of dermatopathology. Yet, in my experience, with appropriate biopsies and good clinicopathological correlation, a specific diagnosis of panniculitides can usually be made. Knowledge of some basic anatomical and pathological issues is essential. Anatomically, the panniculus consists of fatty lobules separated by fibrous septa. Pathologically, inflammation of the panniculus is defined and recognized by an inflammatory process which leads to tissue damage and necrosis. Several types of fat necrosis are observed: xanthomatized macrophages in lipophagic necrosis; granular fat necrosis and fat micropseudocysts in liquefactive fat necrosis; mummified adipocytes in "hyalinizing" fat necrosis with/without saponification and/or calcification; and lipomembranous membranes in membranous fat necrosis. In an algorithmic approach, an inflammatory process recognized by the features elaborated above is best assessed in three steps: first the pattern, second the subpattern, and finally the presence and composition of inflammatory cells. Pattern differentiates a mostly septal from a mostly lobular distribution at scanning magnification. In the subpattern category, one looks for the presence or absence of vasculitis and, if present, the size and nature of the involved blood vessel: arterioles and small arteries or veins; capillaries or postcapillary venules. The third step is to identify the nature of the cells present in the inflammatory infiltrate and, finally, to look for additional histopathologic features that allow for a specific final diagnosis in the language of clinical dermatology of disease involving the subcutaneous fat.
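
    Purely as an illustration of the three-step logic just described (pattern, then subpattern, then infiltrate), the read-out can be written as a small routine; the labels are schematic and this is not a diagnostic tool.

    ```python
    def panniculitis_readout(pattern: str, vasculitis: bool,
                             vessel: str, infiltrate: str) -> str:
        """Schematic three-step biopsy read-out: (1) mostly septal vs mostly
        lobular pattern at scanning magnification, (2) subpattern: vasculitis
        present or absent and, if present, the vessel type involved,
        (3) composition of the inflammatory infiltrate."""
        assert pattern in {"septal", "lobular"}
        step2 = f"vasculitis of {vessel}" if vasculitis else "no vasculitis"
        return f"mostly {pattern} panniculitis, {step2}, {infiltrate} infiltrate"

    # Illustrative read-out only, not a diagnostic rule:
    print(panniculitis_readout("septal", False, "", "granulomatous"))
    ```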

  14. Diagnostic approach to hemoglobinopathies.

    PubMed

    Kutlar, Ferdane

    2007-01-01

    Abnormalities of hemoglobin (Hb) synthesis are among the most common inherited disorders of man and can be quantitative (thalassemia syndromes) or qualitative (variant Hbs). Definite identification of hemoglobinopathies can be achieved by a stepwise algorithmic approach, starting with a detailed clinical history, through hematologic evaluation [complete blood count (CBC), reticulocyte count, red blood cell (RBC) morphology], protein-based analytic methods [Hb electrophoresis or isoelectric focusing (IEF), cation exchange high performance liquid chromatography (HPLC), reversed phase HPLC], to nucleic acid based methods [such as polymerase chain reaction (PCR), reverse transcribed (RT)-PCR, sequencing of genomic DNA and sequencing of RT-PCR amplified globin cDNA of the gene of interest]. When an abnormality of Hb function (increased or decreased oxygen affinity) or stability (unstable Hb variants) is suspected from the phenotype, special confirmatory tests (determination of p50, Heinz body prep and isopropanol or heat stability tests) can be useful. Family studies are also helpful in certain cases. A review of the application of these methods to the diagnosis of hemoglobinopathies at the Sickle Cell Center Laboratory in Augusta, GA, USA, is presented below. PMID:17486507
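
    The stepwise escalation described above can be summarized as a small routine; this is a schematic illustration of the tiers named in the abstract, not clinical guidance.

    ```python
    def hemoglobinopathy_workup(protein_tests_conclusive: bool,
                                function_or_stability_suspected: bool) -> list:
        """Schematic tiers of the stepwise approach; illustrative only."""
        steps = [
            "detailed clinical history",
            "hematologic evaluation: CBC, reticulocyte count, RBC morphology",
            "protein-based methods: Hb electrophoresis/IEF, cation-exchange "
            "HPLC, reversed-phase HPLC",
        ]
        if not protein_tests_conclusive:
            steps.append("nucleic acid methods: PCR, RT-PCR, sequencing of "
                         "genomic DNA or globin cDNA")
        if function_or_stability_suspected:
            steps.append("confirmatory tests: p50, Heinz body prep, "
                         "isopropanol/heat stability")
        return steps

    for step in hemoglobinopathy_workup(False, True):
        print("-", step)
    ```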

  15. Using a solutions approach.

    PubMed

    Kimberley, Mike

    2004-06-01

    Companies today are placing an even greater emphasis on keeping all recordable employee injuries to a minimum. A reduction in hand and finger injuries, along with their associated medical and indemnity costs, can have a positive impact on the company's bottom line. Safety actually can provide revenue when the safety program extends beyond the confines of specific product applications. Conducting a careful and complete analysis of all of the critical issues in a company's production process and the procedures in its safety program will allow the organization to identify opportunities for cutting costs while enhancing worker comfort and safety. Identifying business solutions--and not just product applications--will provide organizations with additional cost saving opportunities. Tighter controls, standardization, SKU reduction, productivity improvements, and recycling are just a few of the potential solutions that can be applied. Partnering with a reputable glove manufacturer that offers a critical safety program analysis has the potential to provide numerous, long-term advantages. A business solutions approach can provide potential productivity improvements, injury reductions, standardization of best practices, and SKU reductions, all of which result in a safer work environment. PMID:15232914

  16. COMPRENDO: Focus and Approach

    PubMed Central

    Schulte-Oehlmann, Ulrike; Albanis, Triantafyllos; Allera, Axel; Bachmann, Jean; Berntsson, Pia; Beresford, Nicola; Carnevali, Daniela Candia; Ciceri, Francesca; Dagnac, Thierry; Falandysz, Jerzy; Galassi, Silvana; Hala, David; Janer, Gemma; Jeannot, Roger; Jobling, Susan; King, Isabella; Klingmüller, Dietrich; Kloas, Werner; Kusk, Kresten Ole; Levada, Ramon; Lo, Susan; Lutz, Ilka; Oehlmann, Jörg; Oredsson, Stina; Porte, Cinta; Rand-Weaver, Marian; Sakkas, Vasilis; Sugni, Michela; Tyler, Charles; van Aerle, Ronny; van Ballegoy, Christoph; Wollenberger, Leah

    2006-01-01

    Tens of thousands of man-made chemicals are in regular use and discharged into the environment. Many of them are known to interfere with the hormonal systems in humans and wildlife. Given the complexity of endocrine systems, there are many ways in which endocrine-disrupting chemicals (EDCs) can affect the body’s signaling system, and this makes unraveling the mechanisms of action of these chemicals difficult. A major concern is that some of these EDCs appear to be biologically active at extremely low concentrations. There is growing evidence to indicate that the guiding principle of traditional toxicology that “the dose makes the poison” may not always be the case because some EDCs do not induce the classical dose–response relationships. The European Union project COMPRENDO (Comparative Research on Endocrine Disrupters—Phylogenetic Approach and Common Principles focussing on Androgenic/Antiandrogenic Compounds) therefore aims to develop an understanding of potential health problems posed by androgenic and antiandrogenic compounds (AACs) to wildlife and humans by focusing on the commonalities and differences in responses to AACs across the animal kingdom (from invertebrates to vertebrates). PMID:16818253

  17. Endoscopic Endonasal Transsphenoidal Approach

    PubMed Central

    Cappabianca, Paolo; Alfieri, Alessandra; Colao, Annamaria; Ferone, Diego; Lombardi, Gaetano; de Divitiis, Enrico

    1999-01-01

    The outcome of endoscopic endonasal transsphenoidal surgery in 10 patients with pituitary adenomas was compared with that of traditional transnasal transsphenoidal approach (TTA) in 20 subjects. Among the 10 individuals subjected to “pure endoscopy,” 2 had a microadenoma, 1 an intrasellar macroadenoma, 4 had a macroadenoma with suprasellar expansion, 2 had a macroadenoma with supra-parasellar expansion, and 1 a residual tumor; 5 had acromegaly and 5 had a nonfunctioning adenoma (NFA). Among the patients subjected to TTA, 4 had a microadenoma, 2 had an intrasellar macroadenoma, 6 had a macroadenoma with suprasellar expansion, 4 had a macroadenoma with supra-parasellar expansion, and 4 had a residual tumor; 9 patients had acromegaly, 1 hyperprolactinemia, 1 Cushing's disease, and 9 a NFA. At the macroscopic evaluation, tumor removal was total (100%) after endoscopy in 9 patients and after TTA in 14 patients. Six months after surgery, magnetic resonance imaging (MRI) confirmed the total tumor removal in 21 of 23 patients (91.3%). Circulating growth hormone (GH) and insulin-like growth factor-I (IGF-I) significantly decreased 6 months after surgery in all 14 acromegalic patients: normalization of plasma IGF-I levels was obtained in 4 of 5 patients after the endoscopic procedure and in 4 of 9 patients after TTA. Before surgery, pituitary hormone deficiency was present in 14 out of 30 patients: pituitary function improved in 4 patients, remaining unchanged in the other 10 patients. Visual field defects were present before surgery in 4 patients, and improved in all. Early surgical results in the group of 10 patients who underwent endoscopic pituitary tumor removal were at least equivalent to those of standard TTA, with excellent postoperative course. Postsurgical hospital stay was significantly shorter (3.1 ± 0.4 vs. 6.2 ± 0.3 days, p < 0.001) after endoscopy as compared to TTA. PMID:17171126

  18. Integrated approach in Sichuan.

    PubMed

    Gao, M

    1996-02-01

    Sichuan province, which recently initiated an integrated approach to family planning, is described as the most populous province in China. Current population is about 110 million, and the average net increase is about 1.1 million annually. The rate of natural increase is about 10/1000. The province has a low level of socioeconomic development. Per capita income in 1993 was under 800 yuan. There is a large surplus of rural agricultural workers who migrated to urban areas seeking work. The Deputy Governor issued a directive encouraging all relevant departments to pool resources and help poor family planning acceptors. The proposal was for helping 10 poor family planning acceptors per township, 100 per county, 1000 per prefecture, and 10,000 per province. Officially, the integrated program was instituted in January 1995. In Xiaoquan town, Deyang City, the government was prepared to help 70 one-child households get rid of poverty and become well-off quickly. The local town cooperative was convinced to provide a loan of 2.5 million yuan for building a stock farm. Other units pitched in to build the farm and provide animals. The 70 farmers were employed in working the farm. Each earned about 1450 yuan per year. In Bajiao Township of Shifang County of Deyang City, poor family planning acceptors were employed in developing the production of the tuber of elevated gastrodia and the bark of eucommia. Funding and skill training were provided by the Bureau of Forestry of Deyang City. In Mingshan Town of Minghsan County, the government chief and family planning director helped family planning acceptors improve their skills in marketing watermelons. Family planning acceptors in Zhugengzhen Town of Leshan City were encouraged by town government to relax and wait out the temporary decline in pork prices. Acceptors later made a large profit from their sales. An outcome of these integrated programs has been "warm acceptance" from farmers and acceptors. PMID:12291340

  19. LacSubPred: predicting subtypes of Laccases, an important lignin metabolism-related enzyme class, using in silico approaches

    PubMed Central

    2014-01-01

    , a supervised learning system was developed from the clusters. The models showed high performance with an overall accuracy of 99.03%, error of 0.49%, MCC of 0.9367, precision of 94.20%, sensitivity of 94.20%, and specificity of 99.47% in a 5-fold cross-validation test. In an independent test, our models still provide a high accuracy of 97.98%, error rate of 1.02%, MCC of 0.8678, precision of 87.88%, sensitivity of 87.88% and specificity of 98.90%. Conclusion This study provides a useful classification system for better understanding of Laccases from the perspective of their physicochemical properties. We also developed a publicly available web tool for the characterization of Laccase protein sequences (http://lacsubpred.bioinfo.ucr.edu/). Finally, the programs used in the study are made available for researchers interested in applying the system to other enzyme classes (https://github.com/tweirick/SubClPred). PMID:25350584
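
    All of the performance figures reported above derive from the four confusion-matrix counts; a minimal sketch (the counts below are hypothetical, not the paper's):

    ```python
    import math

    def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
        """Accuracy, MCC, precision, sensitivity and specificity from counts."""
        acc = (tp + tn) / (tp + tn + fp + fn)
        prec = tp / (tp + fp)
        sens = tp / (tp + fn)   # recall
        spec = tn / (tn + fp)
        mcc = ((tp * tn - fp * fn) /
               math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
        return {"accuracy": acc, "MCC": mcc, "precision": prec,
                "sensitivity": sens, "specificity": spec}

    print(classification_metrics(tp=130, tn=900, fp=8, fn=8))
    ```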

  20. SPINE: an integrated tracking database and data mining approach for identifying feasible targets in high-throughput structural proteomics.

    PubMed

    Bertone, P; Kluger, Y; Lan, N; Zheng, D; Christendat, D; Yee, A; Edwards, A M; Arrowsmith, C H; Montelione, G T; Gerstein, M

    2001-07-01

    proteins tend to have significantly more acidic residues and fewer hydrophobic stretches than insoluble ones. One of the characteristics of proteomics data sets, currently and in the foreseeable future, is their intermediate size (approximately 500-5000 data points). This creates a number of issues in relation to error estimation. Initially, we estimate the overall error in our trees based on standard cross-validation. However, this leaves out a significant fraction of the data in model construction and does not give error estimates on individual rules. Therefore, we present alternative methods to estimate the error in particular rules.
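
    One standard way to attach an error estimate to an individual rule, as opposed to the whole tree, is to bootstrap over the cases the rule covers; the sketch below is a generic illustration and not necessarily the alternative method used in SPINE.

    ```python
    import numpy as np

    def rule_error_ci(correct: np.ndarray, n_boot: int = 2000, seed: int = 0):
        """Bootstrap a 95% confidence interval for one rule's error rate.
        `correct` is a boolean vector over the cases the rule fires on."""
        rng = np.random.default_rng(seed)
        errs = [1 - rng.choice(correct, size=correct.size, replace=True).mean()
                for _ in range(n_boot)]
        return np.percentile(errs, [2.5, 97.5])

    # Hypothetical: a solubility rule that fired on 40 proteins, 31 correctly
    outcomes = np.array([True] * 31 + [False] * 9)
    print(rule_error_ci(outcomes))  # roughly [0.10, 0.38]
    ```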

  1. The narrative approach to personalisation

    NASA Astrophysics Data System (ADS)

    Conlan, Owen; Staikopoulos, Athanasios; Hampson, Cormac; Lawless, Séamus; O'keeffe, Ian

    2013-06-01

    This article describes the narrative approach to personalisation. This novel approach to the generation of personalised adaptive hypermedia experiences employs runtime reconciliation between a personalisation strategy and a number of contextual models (e.g. user and domain). The approach also advocates the late binding of suitable content and services to the generated personalised pathway, resulting in an interactive composition that comprises services as well as content. This article provides a detailed definition of the narrative approach to personalisation and showcases the approach through the examination of two use-cases: the personalised digital educational games developed by the ELEKTRA and 80Days projects, and the personalised learning activities realised as part of the AMAS project. These use-cases highlight the general applicability of the narrative approach and how it has been applied to create a diverse range of real-world systems.

  2. Defining biocultural approaches to conservation.

    PubMed

    Gavin, Michael C; McCarter, Joe; Mead, Aroha; Berkes, Fikret; Stepp, John Richard; Peterson, Debora; Tang, Ruifei

    2015-03-01

    We contend that biocultural approaches to conservation can achieve effective and just conservation outcomes while addressing erosion of both cultural and biological diversity. Here, we propose a set of guidelines for the adoption of biocultural approaches to conservation. First, we draw lessons from work on biocultural diversity and heritage, social-ecological systems theory, integrated conservation and development, co-management, and community-based conservation to define biocultural approaches to conservation. Second, we describe eight principles that characterize such approaches. Third, we discuss reasons for adopting biocultural approaches and challenges. If used well, biocultural approaches to conservation can be a powerful tool for reducing the global loss of both biological and cultural diversity.

  3. Prepotential approach to quasinormal modes

    SciTech Connect

    Ho, Choon-Lin

    2011-06-15

    Research Highlights: (1) A unified approach to both exactly and quasi-exactly solvable quasinormal modes. (2) A simple constructive approach without knowledge of the underlying symmetry of the system. (3) Three new models admitting quasinormal modes. Abstract: In this paper we demonstrate how the recently reported exactly and quasi-exactly solvable models admitting quasinormal modes can be constructed and classified very simply and directly by the newly proposed prepotential approach. These new models were previously obtained within the Lie-algebraic approach. Unlike the Lie-algebraic approach, the prepotential approach does not require any knowledge of the underlying symmetry of the system. It treats both quasi-exact and exact solvabilities on the same footing, and gives the potential as well as the eigenfunctions and eigenvalues simultaneously. We also present three new models with quasinormal modes: a new exactly solvable Morse-like model, and two new quasi-exactly solvable models of the Scarf II and generalized Pöschl-Teller types.

  4. Ablative Approaches for Pulmonary Metastases.

    PubMed

    Boyer, Matthew J; Ricardi, Umberto; Ball, David; Salama, Joseph K

    2016-02-01

    Pulmonary metastases are common in patients with cancer, for whom surgery is considered a standard approach in appropriately selected cases. A number of patients are not candidates for surgery due to medical comorbidities or the extent of surgery required. For these patients, noninvasive or minimally invasive approaches to ablate pulmonary metastases are potential treatment strategies. This article summarizes the rationale and outcomes for non-surgical treatment approaches, including radiotherapy, radiofrequency and microwave ablation, for pulmonary metastases.

  5. New Approaches to Final Cooling

    SciTech Connect

    Neuffer, David

    2014-11-10

    A high-energy muon collider scenario requires a “final cooling” system that reduces transverse emittances by a factor of ~10 while allowing the longitudinal emittance to increase. The baseline approach has low-energy transverse cooling within high-field solenoids, with strong longitudinal heating. This approach and its recent simulation are discussed. Alternative approaches which more explicitly include emittance exchange are also presented. Round-to-flat beam transform, transverse slicing, and longitudinal bunch coalescence are possible components of the alternative approach. A more explicit understanding of solenoidal cooling beam dynamics is introduced.

  6. Microbial Burden Approach : New Monitoring Approach for Measuring Microbial Burden

    NASA Technical Reports Server (NTRS)

    Venkateswaran, Kasthuri; Vaishampayan, Parag; Barmatz, Martin

    2013-01-01

    This presentation covers: the advantages of the new approach for differentiating live cells/spores from dead cells/spores; four examples of Salmonella outbreaks leading to costly destruction of dairy products; a list of possible collaboration activities between JPL and other industries (for future discussion); the limitations of traditional microbial monitoring approaches; an introduction to the new approach for rapid measurement of viable (live) bacterial cells/spores and its areas of application; and a detailed example of determining live spores using the new approach (a similar procedure applies for live cells). JPL has developed a patented approach for measuring the amounts of live and dead cells/spores. This novel "molecular" method takes 5 to 7 hours, compared to the seven days required by conventional techniques. Conventional "molecular" techniques cannot discriminate live cells/spores from dead cells/spores. The JPL-developed method eliminates the false positive results obtained from conventional "molecular" techniques, which lead to unnecessary processing delays and the unnecessary destruction of food products.

  7. A zone-based approach to identifying urban land uses using nationally-available data

    NASA Astrophysics Data System (ADS)

    Falcone, James A.

    city of Boston. A generalized version of the method (six land use classes) was also developed and cross-validated among additional geographic settings: Atlanta, Los Angeles, and Providence. The results suggest that even with the thematically-detailed ten-class structure, it is feasible to map most urban land uses with reasonable accuracy at the block group scale, and results improve with class aggregation. When classified by predicted majority land use, 79% of block groups correctly matched the actual majority land use with the ten-class models. Six-class models typically performed well for the geographic area they were developed from; however, models had mixed performance when transported to other geographic settings. Contextual variables, which characterized a block group's spatial relationship to city centers, transportation routes, and other amenities, were consistently strong predictors of most land uses, a result which corresponds to classic urban land use theory. The method and metrics derived here provide a prototype for mapping urban land uses from readily-available data over broader geographic areas than is generally practiced today using current image-based solutions.
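
    Cross-validating a model among geographic settings, as described above, amounts to grouped cross-validation; a minimal sketch with scikit-learn's LeaveOneGroupOut, with synthetic features standing in for the block-group metrics:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import LeaveOneGroupOut

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 6))             # block-group metrics (synthetic)
    y = rng.integers(0, 6, size=400)          # six land-use classes
    city = np.repeat(["Atlanta", "Boston", "LA", "Providence"], 100)

    # Train on three cities, test on the held-out one, for each city in turn.
    # With random synthetic data, accuracy sits near chance (~1/6); the point
    # here is the grouped train/test split, not the numbers.
    for train, test in LeaveOneGroupOut().split(X, y, groups=city):
        clf = RandomForestClassifier(random_state=0).fit(X[train], y[train])
        print(f"held-out {city[test][0]}: accuracy = {clf.score(X[test], y[test]):.2f}")
    ```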

  8. Approach to population control.

    PubMed

    Bose, A

    1983-01-26

    Due to the fact that India was the 1st nation to adopt population control as a state policy and that targets for various family planning methods are set by the Department of Family Planning, it is assumed that family planning is basically the concern of the Department of Family Planning. Consequently, family planning is viewed as a public sector activity that requires little participation on the part of the people. This presents a great danger to the success of the family planning program. Family planning needs to be the primary concern of the people and not the government. The new 20 point program emphasizes that family planning should be promoted on a voluntary basis as a people's movement. This has been interpreted by many to mean that increasingly more funds should be given to voluntary agencies working in the field of family planning. Yet, there is deep distrust on the part of the government departments regarding doling out funds to voluntary agencies. It is suggested that the government should adopt a new approach to the problem of population control through the promotion and mobilization of voluntary effort in rural areas. In view of the clear demonstration of the impact of female education on fertility levels, the Family Planning Department should announce a new scheme to help all voluntary agencies which focus attention on the education of girls in neglected rural areas and should launch a special schooling program for girls in the area of nutrition. In addition, the informal education of women through the mobilization of local talent and resources to generate social awareness should be undertaken. In India, Kerala has the distinction of having the highest literacy rate, the highest life expectancy, the lowest mortality and infant mortality rates, and the lowest birthrate. The birthrate in rural Kerala is 27/1000 in contrast to 40/1000 in rural Uttar Pradesh. A high average marriage age has contributed substantially to lowering the birthrate. The key

  9. Reevaluation of the rate constants for the reaction of hypochlorous acid (HOCl) with cysteine, methionine, and peptide derivatives using a new competition kinetic approach.

    PubMed

    Storkey, Corin; Davies, Michael J; Pattison, David I

    2014-08-01

    Activated white cells use oxidants generated by the heme enzyme myeloperoxidase to kill invading pathogens. This enzyme utilizes H2O2 and Cl^-, Br^-, or SCN^- to generate the oxidants HOCl, HOBr, and HOSCN, respectively. Whereas controlled production of these species is vital in maintaining good health, their uncontrolled or inappropriate formation (as occurs at sites of inflammation) can cause host tissue damage that has been associated with multiple inflammatory pathologies, including cardiovascular diseases and cancer. Previous studies have reported that sulfur-containing species are major targets for HOCl, but as the reactions are fast, the only physiologically relevant kinetic data available have been extrapolated from data measured at high pH (>10). In this study these values have been determined at pH 7.4 using a newly developed competition kinetic approach that employs a fluorescently tagged methionine derivative as the competitive substrate (k(HOCl + Fmoc-Met) = 1.5 × 10^8 M^-1 s^-1). This assay was validated using the known k(HOCl + NADH) value and has allowed revised k values for the reactions of HOCl with Cys, N-acetylcysteine, and glutathione to be determined as 3.6 × 10^8, 2.9 × 10^7, and 1.24 × 10^8 M^-1 s^-1, respectively. Similar experiments with methionine derivatives yielded k values of 3.4 × 10^7 M^-1 s^-1 for Met and 1.7 × 10^8 M^-1 s^-1 for N-acetylmethionine. The k values determined here for the reaction of HOCl with thiols are up to 10-fold higher than those previously determined and further emphasize the critical importance of reactions of HOCl with thiol targets in biological systems.
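
    Competition kinetics of this kind is typically analyzed through a plot that is linear in the competitor concentration: if probe P and target T compete for a sub-stoichiometric dose of HOCl, the simplest scheme gives S0/S = 1 + kT[T]/(kP[P]), so kT falls out of the slope. The sketch below works under those assumptions with hypothetical data; it is not necessarily the authors' exact scheme.

    ```python
    import numpy as np

    # Hypothetical data: probe at fixed 50 uM, varying thiol competitor
    probe_conc = 50e-6                                        # [P], M
    comp_conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0]) * 1e-6  # [T], M
    signal = np.array([1.00, 0.62, 0.45, 0.29, 0.17])         # probe product, normalised

    # Simplest competition scheme: S0/S - 1 = (kT/kP) * [T]/[P], linear in [T]
    y = signal[0] / signal - 1
    slope = np.polyfit(comp_conc, y, 1)[0]   # = kT / (kP * [P]), in M^-1

    k_probe = 1.5e8                          # M^-1 s^-1, reference rate constant
    k_target = slope * k_probe * probe_conc
    print(f"k(HOCl + target) ~ {k_target:.2e} M^-1 s^-1")
    ```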

  10. New approach for development of sensitive and environmentally friendly immunoassay for mycotoxin fumonisin B(1) based on using peptide-MBP fusion protein as substitute for coating antigen.

    PubMed

    Xu, Yang; Chen, Bo; He, Qing-hua; Qiu, Yu-Lou; Liu, Xing; He, Zhen-yun; Xiong, Zheng-ping

    2014-08-19

    Here, on the basis of mimotopes of small analytes, we demonstrate a new approach for the development of a sensitive and environmentally friendly immunoassay for toxic small analytes based on a peptide-MBP fusion protein. In this work, using the mycotoxin fumonisin B1 (FB1) as a model hapten, a phage-displayed peptide (mimotope) that binds to the anti-FB1 antibody was selected by biopanning from a 12-mer peptide library. The DNA coding for the sequence of the peptide was cloned into Escherichia coli ER2738 as a fusion with maltose binding protein (MBP). The prepared peptide-MBP fusion proteins are "clonable", homogeneous, FB1-free products and can be used as a coating antigen in the immunoassay. The half-inhibition concentrations of the quantitative immunoassays set up with the fusion proteins (F1-MBP and F15-MBP) were 2.15 ± 0.13 ng/mL and 1.26 ± 0.08 ng/mL, respectively. The fusion protein (F1-MBP) was also used to develop a qualitative Elispot assay with a cutoff level of 2.5 ng/mL, which was 10-fold more sensitive than the Elispot immunoassay based on chemically synthesized FB1-BSA conjugates. The peptide-MBP fusion protein not only can be prepared reproducibly as a homogeneous, FB1-free product at large scale but also can contribute to the development of a highly sensitive immunoassay for analyzing FB1. Furthermore, this novel concept might provide a general method for the immunoassay of various toxic small molecules.
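
    Half-inhibition concentrations such as those quoted above are conventionally read off a four-parameter logistic fit of the competitive-immunoassay curve; the sketch below uses scipy with synthetic absorbances, and all parameter values are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(x, top, bottom, ic50, slope):
        """Four-parameter logistic for a competitive immunoassay curve."""
        return bottom + (top - bottom) / (1 + (x / ic50) ** slope)

    conc = np.array([0.01, 0.1, 0.5, 1, 2, 5, 10, 50])   # FB1, ng/mL (synthetic)
    a450 = np.array([1.95, 1.90, 1.60, 1.20, 0.95, 0.55, 0.35, 0.12])

    (top, bottom, ic50, slope), _ = curve_fit(
        four_pl, conc, a450, p0=[2.0, 0.1, 2.0, 1.0])
    print(f"IC50 ~ {ic50:.2f} ng/mL")
    ```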

  11. Mitochondrial biogenesis: pharmacological approaches.

    PubMed

    Valero, Teresa

    2014-01-01

    Organelle biogenesis is concomitant to organelle inheritance during cell division. It is necessary that organelles double their size and divide to give rise to two identical daughter cells. Mitochondrial biogenesis occurs by growth and division of pre-existing organelles and is temporally coordinated with cell cycle events [1]. However, mitochondrial biogenesis is not only produced in association with cell division. It can be produced in response to an oxidative stimulus, to an increase in the energy requirements of the cells, to exercise training, to electrical stimulation, to hormones, during development, in certain mitochondrial diseases, etc. [2]. Mitochondrial biogenesis is therefore defined as the process via which cells increase their individual mitochondrial mass [3]. Recent discoveries have drawn attention to mitochondrial biogenesis as a potential target to treat diseases which to date do not have an efficient cure. Mitochondria, as the major ROS producer and the major antioxidant producer, exert a crucial role within the cell, mediating processes such as apoptosis, detoxification, Ca2+ buffering, etc. This pivotal role makes mitochondria a potential target for treating a great variety of diseases. Mitochondrial biogenesis can be pharmacologically manipulated. This issue tries to cover a number of approaches to treat several diseases through triggering mitochondrial biogenesis. It contains recent discoveries in this novel field, focusing on advanced mitochondrial therapies for chronic and degenerative diseases, mitochondrial diseases, lifespan extension, mitohormesis, intracellular signaling, new pharmacological targets and natural therapies. It contributes to the field by covering and gathering the scarcely reported pharmacological approaches in the novel and promising field of mitochondrial biogenesis. There are several diseases that have a mitochondrial origin such as chronic progressive external ophthalmoplegia (CPEO) and the Kearns-Sayre syndrome (KSS

  13. Systems Approach to Environmental Pollution.

    ERIC Educational Resources Information Center

    Chacko, George K., Ed.

    The objective of a two-day Symposium on Systems Approach to Environmental Pollution of the Operations Research Society of America at the 137th Annual Meeting of the American Association for the Advancement of Science, December 27-28, 1970 in Chicago, Illinois, was not to raise the litany of a systems approach as the answer to all environmental…

  14. Approaches to Teaching Foreign Languages.

    ERIC Educational Resources Information Center

    Hesse, M. G., Ed.

    Works by European and American educators from the Renaissance to the twentieth century are presented. A historical re-evaluation of foreign-language teaching combined with the scientific approach of modern linguistics can provide valuable insights for current teaching and learning approaches. Selections are presented from the writings of the…

  15. [Endoscopic approaches to the orbit].

    PubMed

    Cebula, H; Lahlou, A; De Battista, J C; Debry, C; Froelich, S

    2010-01-01

    During the last decade, the use of endoscopic endonasal approaches to the pituitary has increased considerably. The endoscopic endonasal and transantral approaches offer a minimally invasive alternative to the classic transcranial or transconjunctival approaches to the medial aspect of the orbit. The medial wall of the orbit, the orbital apex, and the optic canal can be exposed through a middle meatal antrostomy, an anterior and posterior ethmoidectomy, and a sphenoidotomy. The inferomedial wall of the orbit can be also perfectly visualized through a sublabial antrostomy or an inferior meatal antrostomy. Several reports have described the use of an endoscopic approach for the resection or the biopsy of lesions located on the medial extraconal aspect of the orbit and orbital apex. However, the resection of intraconal lesions is still limited by inadequate instrumentation. Other indications for the endoscopic approach to the orbit are the decompression of the orbit for Graves' ophthalmopathy and traumatic optic neuropathy. However, the optimal management of traumatic optic neuropathy remains very controversial. Endoscopic endonasal decompression of the optic nerve in case of tumor compression could be a more valid indication in combination with radiation therapy. Finally, the endoscopic transantral treatment of blowout fracture of the floor of the orbit is an interesting option that avoids the eyelid or conjunctive incision of traditional approaches. The collaboration between the neurosurgeon and the ENT surgeon is mandatory and reduces the morbidity of the approach. Progress in instrumentation and optical devices will certainly make this approach promising for intraconal tumor of the orbit.

  16. Spouse Assault: A Community Approach.

    ERIC Educational Resources Information Center

    Yoder, David R.

    1980-01-01

    A comprehensive community approach to the problem of spouse assault involves education of a grassroots community organization, legal incorporation, establishing organizational goals and plans, publicity, state coalition of all such community groups, and new legislation. Such an approach led to the establishment of Michigan's Domestic Violence…

  17. Science Focus: The Salters' Approach.

    ERIC Educational Resources Information Center

    Berg, Kevin de

    1995-01-01

    Outlines the Salter's approach to teaching and learning science at the Junior Secondary level by showing how the phenomenon of fire is treated in curriculum materials. Discusses contents of the teachers' guide, student texts, and assessment pack. Gives an evaluation of the usefulness of the approach in the Australian context. (Author/MKR)

  18. Engineering approaches to ecosystem restoration

    SciTech Connect

    Hayes, D.F.

    1998-07-01

    This proceedings CD ROM contains 127 papers on developing and evaluating engineering approaches to wetlands and river restoration. The latest engineering developments are discussed, providing valuable insights to successful approaches for river restoration, wetlands restoration, watershed management, and constructed wetlands for stormwater and wastewater treatment. Potential solutions to a wide variety of ecosystem concerns in urban, suburban, and coastal environments are presented.

  19. A Systems Approach to Teaching.

    ERIC Educational Resources Information Center

    Kelly, Robert E.

    The systematic approach to teaching provides a method for the functional organization and development of instruction. This method applies to preparation of materials for classroom use, as well as for print and non-print media. Inputs to the systems approach include well defined objectives, analysis of the intended audience, special criteria…

  20. Alternatives in Education -- 54 Approaches.

    ERIC Educational Resources Information Center

    Jekel, Jerome R.; Johnson, Robert E.

    Fifty-four approaches identify ways by which students can learn, methods for teachers to employ, and approaches to a sequence of studies. A statement of philosophy notes the book's goal of providing a transition from individualized instruction to personalized instruction. The purpose, needs, philosophy and objectives of the open studies program…

  1. Project Approach: Teaching. Second Edition.

    ERIC Educational Resources Information Center

    Ho, Rose

    The primary objective of the action research chronicled (in English and Chinese) in this book was to shift the teaching method used by preschool teachers in Hong Kong from a teacher-directed mode by training them to use the Project Approach. The secondary objective was to measure children's achievement while using the Project Approach, focusing on…

  2. Surgical Approaches to Breast Augmentation: The Transaxillary Approach.

    PubMed

    Strock, Louis L

    2015-10-01

    The transaxillary approach to breast augmentation has the advantage of allowing breast implants to be placed with no incisions on the breasts. There has been a general perception of a lack of technical control compared with the inframammary approach. This article presents the transaxillary approach from the perspective of the technical control gained with the aid of an endoscope, which allows precise creation of the tissue pocket with optimal visualization. The aspects of technique that allow optimal technical control are discussed, in addition to postoperative processes that aid in stabilizing the device position and allow consistent and predictable outcomes.

  3. Statistical machine learning to identify traumatic brain injury (TBI) from structural disconnections of white matter networks.

    PubMed

    Mitra, Jhimli; Shen, Kai-kai; Ghose, Soumya; Bourgeat, Pierrick; Fripp, Jurgen; Salvado, Olivier; Pannek, Kerstin; Taylor, D Jamie; Mathias, Jane L; Rose, Stephen

    2016-04-01

    Identifying diffuse axonal injury (DAI) in patients with traumatic brain injury (TBI) who present with normal-appearing radiological MRI is a significant challenge. Neuroimaging methods such as diffusion MRI and probabilistic tractography, which probe the connectivity of neural networks, show significant promise. We present a machine learning approach to classify TBI participants, primarily with mild traumatic brain injury (mTBI), based on altered structural connectivity patterns derived through network-based statistical analysis of structural connectomes generated from TBI and age-matched control groups. In this approach, higher-order diffusion models were used to map white matter connections between 116 cortical and subcortical regions. Tracts between these regions were generated using probabilistic tracking, and mean fractional anisotropy (FA) measures along these connections were encoded in the connectivity matrices. Network-based statistical analysis of the connectivity matrices was performed to identify the network differences between a representative subset of the two groups. The affected network connections provided the feature vectors for principal component analysis and subsequent classification by random forest. The validity of the approach was tested using data acquired from a total of 179 TBI patients and 146 control participants. The analysis revealed altered connectivity within a number of intra- and inter-hemispheric white matter pathways associated with DAI, in consensus with the existing literature. A mean classification accuracy of 68.16%±1.81% and mean sensitivity of 80.0%±2.36% were achieved in correctly classifying the TBI patients evaluated on the subset of the participants that was not used for the statistical analysis, in a 10-fold cross-validation framework. These results highlight the potential of statistical machine learning approaches applied to structural connectomes to identify patients with diffuse axonal injury.
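
    The pipeline this record describes (connectivity features, principal component analysis, then a random forest scored by 10-fold cross-validation) can be sketched as follows. This is a minimal illustration on synthetic stand-in data, not the authors' code; the array shapes and model settings are assumptions.

```python
# Synthetic stand-in for the described pipeline: edge-level FA features
# -> PCA -> random forest, scored with 10-fold cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_tbi, n_ctrl, n_edges = 179, 146, 300            # assumed shapes, not the study's data
X = rng.normal(size=(n_tbi + n_ctrl, n_edges))    # mean-FA feature vector per subject
y = np.r_[np.ones(n_tbi), np.zeros(n_ctrl)]       # 1 = TBI, 0 = control

model = make_pipeline(PCA(n_components=20), RandomForestClassifier(random_state=0))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
res = cross_validate(model, X, y, cv=cv, scoring=("accuracy", "recall"))
print("accuracy   : %.3f +/- %.3f" % (res["test_accuracy"].mean(), res["test_accuracy"].std()))
print("sensitivity: %.3f +/- %.3f" % (res["test_recall"].mean(), res["test_recall"].std()))
```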

  4. Identification of human protein complexes from local sub-graphs of protein-protein interaction network based on random forest with topological structure features.

    PubMed

    Li, Zhan-Chao; Lai, Yan-Hua; Chen, Li-Li; Zhou, Xuan; Dai, Zong; Zou, Xiao-Yong

    2012-03-01

    In the post-genomic era, one of the most important and challenging tasks is to identify protein complexes and further elucidate their molecular mechanisms in specific biological processes. Previous computational approaches usually identify protein complexes from protein interaction networks based on dense sub-graphs and incomplete prior information. Additionally, these approaches pay little attention to the biological properties of proteins, and there is no common evaluation metric for assessing performance. It is therefore necessary to construct a novel method for identifying protein complexes and elucidating their function. In this study, a novel approach is proposed to identify protein complexes using random forest and topological structure. Each protein complex is represented by a graph of interactions, where a descriptor of the protein primary structure is used to characterize the biological properties of each protein and each vertex is weighted by this descriptor. Topological structure features are developed and used to characterize protein complexes. The random forest algorithm is used to build the prediction model and identify protein complexes from local sub-graphs instead of dense sub-graphs. As a demonstration, the proposed approach is applied to human protein interaction data, and satisfactory results are obtained, with accuracy of 80.24%, sensitivity of 81.94%, specificity of 80.07%, and Matthews correlation coefficient of 0.4087 in a 10-fold cross-validation test. Some new protein complexes are identified, and analysis based on Gene Ontology shows that the complexes are likely to be true complexes and to play important roles in the pathogenesis of some diseases. PCI-RFTS, a corresponding executable program for protein complex identification, can be acquired freely on request from the authors.
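
    A hedged sketch of how the four reported figures (accuracy, sensitivity, specificity, Matthews correlation coefficient) can be computed from pooled 10-fold cross-validation predictions of a random forest; the features and labels below are synthetic placeholders, not the PCI-RFTS implementation.

```python
# Random forest over (synthetic) graph-derived features; the four metrics
# reported in the record are computed from pooled 10-fold CV predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, matthews_corrcoef
from sklearn.model_selection import StratifiedKFold, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 40))        # topological-structure features per sub-graph
y = rng.integers(0, 2, size=500)      # 1 = true complex, 0 = not a complex

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
y_hat = cross_val_predict(RandomForestClassifier(random_state=1), X, y, cv=cv)

tn, fp, fn, tp = confusion_matrix(y, y_hat).ravel()
print("accuracy   :", (tp + tn) / len(y))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("MCC        :", matthews_corrcoef(y, y_hat))
```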

  5. Heterogeneous Ensemble Combination Search Using Genetic Algorithm for Class Imbalanced Data Classification.

    PubMed

    Haque, Mohammad Nazmul; Noman, Nasimul; Berretta, Regina; Moscato, Pablo

    2016-01-01

    Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach for enhancing classification performance is the construction of an ensemble of classifiers. However, the performance of an ensemble depends on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, uses 10-fold cross-validation on training data to evaluate the quality of each candidate ensemble. To combine the base classifiers' decisions into the ensemble's output, we used the simple and widely used majority voting approach. The proposed algorithm, along with a random sub-sampling approach to balance the class distribution, has been used for classifying class-imbalanced datasets. Additionally, if a feature set was not available, we used the (α, β)-k Feature Set method to select a better subset of features for classification. We have tested GA-EoC with three benchmarking datasets from the UCI Machine Learning repository, one Alzheimer's disease dataset and a subset of the PubFig database of Columbia University. In general, the performance of the proposed method on the chosen datasets is robust and better than that of the constituent base classifiers and many other well-known ensembles. Based on our empirical study, we claim that a genetic algorithm is a superior and reliable approach to heterogeneous ensemble construction, and we expect the proposed GA-EoC to perform consistently in other cases. PMID:26764911
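
    The core loop of a GA-EoC-style search can be illustrated in a few lines: binary chromosomes select a subset of base classifiers, and fitness is the 10-fold cross-validation accuracy of their majority-vote ensemble. The classifier pool, population size, and mutation-only variation below are simplifying assumptions, not the published algorithm.

```python
# Toy GA over binary chromosomes; each chromosome picks a subset of base
# classifiers and its fitness is the 10-fold CV accuracy of the
# majority-vote ensemble (truncation selection + bit-flip mutation only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

pool = [("lr", LogisticRegression(max_iter=500)), ("nb", GaussianNB()),
        ("knn", KNeighborsClassifier()), ("dt", DecisionTreeClassifier(random_state=0))]
X, y = make_classification(n_samples=300, weights=[0.8], random_state=0)  # imbalanced
rng = np.random.default_rng(0)

def fitness(mask):
    chosen = [pair for bit, pair in zip(mask, pool) if bit]
    if not chosen:
        return 0.0                                   # empty ensembles are invalid
    ens = VotingClassifier(chosen, voting="hard")    # majority voting
    return cross_val_score(ens, X, y, cv=10).mean()

popn = rng.integers(0, 2, size=(8, len(pool)))       # random initial population
for _ in range(5):
    scores = np.array([fitness(m) for m in popn])
    parents = popn[np.argsort(scores)[-4:]]          # truncation selection
    children = parents[rng.integers(0, 4, size=4)].copy()
    children[rng.random(children.shape) < 0.2] ^= 1  # bit-flip mutation
    popn = np.vstack([parents, children])
best = max(popn, key=fitness)
print("selected base classifiers:", [name for bit, (name, _) in zip(best, pool) if bit])
```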

  6. The biogenic approach to cognition.

    PubMed

    Lyon, Pamela

    2006-03-01

    After half a century of cognitive revolution we remain far from agreement about what cognition is and what cognition does. It was once thought that these questions could wait until the data were in. Today there is a mountain of data, but no way of making sense of it. The time for tackling the fundamental issues has arrived. The biogenic approach to cognition is introduced not as a solution but as a means of approaching the issues. The traditional, and still predominant, methodological stance in cognitive inquiry is what I call the anthropogenic approach: assume human cognition as the paradigm and work 'down' to a more general explanatory concept. The biogenic approach, on the other hand, starts with the facts of biology as the basis for theorizing and works 'up' to the human case by asking psychological questions as if they were biological questions. Biogenic explanations of cognition are currently clustered around two main frameworks for understanding biology: self-organizing complex systems and autopoiesis. The paper describes the frameworks and infers from them ten empirical principles--the biogenic 'family traits'--that constitute constraints on biogenic theorizing. Because the anthropogenic approach to cognition is not constrained empirically to the same degree, I argue that the biogenic approach is superior for approaching a general theory of cognition as a natural phenomenon.

  7. A general approach for developing system-specific functions to score protein-ligand docked complexes using support vector inductive logic programming.

    PubMed

    Amini, Ata; Shrimpton, Paul J; Muggleton, Stephen H; Sternberg, Michael J E

    2007-12-01

    Despite the increased recent use of protein-ligand and protein-protein docking in the drug discovery process due to increases in computational power, the difficulty of accurately ranking the binding affinities of a series of ligands or a series of proteins docked to a protein receptor remains largely unsolved. This problem is of major concern in lead optimization procedures and has led to the development of scoring functions tailored to rank the binding affinities of a series of ligands to a specific system. However, such methods can take a long time to develop and their transferability to other systems remains open to question. Here we demonstrate that, given a suitable amount of background information, a new approach using support vector inductive logic programming (SVILP) can be used to produce system-specific scoring functions. Inductive logic programming (ILP) learns logic-based rules for a given dataset that can be used to describe properties of each member of the set in a qualitative manner. By combining ILP with support vector machine regression, a quantitative set of rules can be obtained. SVILP has previously been used in a biological context to examine datasets containing a series of singular molecular structures and properties. Here we describe the use of SVILP to produce binding affinity predictions of a series of ligands to a particular protein. We also for the first time examine the applicability of SVILP techniques to datasets consisting of protein-ligand complexes. Our results show that SVILP performs comparably with other state-of-the-art methods on five protein-ligand systems as judged by similar cross-validated squares of their correlation coefficients. A McNemar test comparing SVILP to CoMFA and CoMSIA across the five systems indicates our method to be significantly better on one occasion. The ability to graphically display and understand the SVILP-produced rules is demonstrated, and this feature of ILP can be used to derive hypotheses for

  8. A novel approach to monitor the hydrolysis of barley (Hordeum vulgare L) malt: a chemometrics approach.

    PubMed

    Cozzolino, D; Degner, S; Eglinton, J

    2014-12-01

    Malting barley is a process that has been profusely studied and is known to be influenced by several physical and biochemical properties of the grain. In particular, the amount of material that can be extracted from the malt (malt extract) is an important measure of brewing performance and end quality. The objectives of this study were (a) to compare the time course of hydrolysis of different malting barley (Hordeum vulgare L) varieties and (b) to evaluate the usefulness of mid-infrared (MIR) spectroscopy as a high-throughput method to monitor malt hydrolysis. Differences in the pattern of hydrolysis were observed between the malt samples analyzed; samples from the same variety with similar hot water extract (HWE) values tended to show the same pattern of hydrolysis. Principal component score plots based on the MIR spectra showed similar results. Partial least-squares discriminant analysis (PLS-DA) was used to classify malt samples according to their corresponding variety and time course of hydrolysis. The coefficient of determination (R(2)) and the standard error of cross validation (SECV) obtained for the prediction of variety and time course of hydrolysis were 0.67 (1.01) and 0.38 (19.90), respectively. These differences might be the result of the different composition in sugars between the barley varieties analyzed after malting, measured as wort density and not observed when only the HWE value at the end point is reported. This method offers the possibility of measuring several parameters in malt simultaneously, reducing the time of analysis as well as requiring minimal sample preparation.
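
    The cross-validated statistics quoted above can be reproduced in form (not in value) with a short chemometrics-style sketch: partial least squares on stand-in spectra, with R(2) and SECV (taken here as the root mean square of the cross-validated residuals) computed from 10-fold cross-validation. The data and component count are assumptions.

```python
# PLS on stand-in spectra; R^2 and SECV (root-mean-square of the
# cross-validated residuals) from 10-fold cross-validation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 200))                              # stand-in MIR spectra
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=120)   # stand-in reference value

cv = KFold(n_splits=10, shuffle=True, random_state=2)
y_cv = cross_val_predict(PLSRegression(n_components=10), X, y, cv=cv).ravel()

print("R2  :", 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2))
print("SECV:", np.sqrt(np.mean((y - y_cv) ** 2)))
```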

  9. A novel approach to monitor the hydrolysis of barley (Hordeum vulgare L) malt: a chemometrics approach.

    PubMed

    Cozzolino, D; Degner, S; Eglinton, J

    2014-12-01

    Malting barley is a process that has been profusely studied and is known to be influenced by several physical and biochemical properties of the grain. In particular, the amount of material that can be extracted from the malt (malt extract) is an important measure of brewing performance and end quality. The objectives of this study were (a) to compare the time course of hydrolysis of different malting barley (Hordeum vulgare L) varieties and (b) to evaluate the usefulness of mid-infrared (MIR) spectroscopy as a high-throughput method to monitor malt hydrolysis. Differences in the pattern of hydrolysis were observed between the malt samples analyzed; samples from the same variety with similar hot water extract (HWE) values tended to show the same pattern of hydrolysis. Principal component score plots based on the MIR spectra showed similar results. Partial least-squares discriminant analysis (PLS-DA) was used to classify malt samples according to their corresponding variety and time course of hydrolysis. The coefficient of determination (R(2)) and the standard error of cross validation (SECV) obtained for the prediction of variety and time course of hydrolysis were 0.67 (1.01) and 0.38 (19.90), respectively. These differences might be the result of the different composition in sugars between the barley varieties analyzed after malting, measured as wort density and not observed when only the HWE value at the end point is reported. This method offers the possibility of measuring several parameters in malt simultaneously, reducing the time of analysis as well as requiring minimal sample preparation. PMID:25393707

  10. Potential alternative approaches to xenotransplantation.

    PubMed

    Mou, Lisha; Chen, Fengjiao; Dai, Yifan; Cai, Zhiming; Cooper, David K C

    2015-11-01

    There is an increasing worldwide shortage of organs and cells for transplantation in patients with end-stage organ failure or cellular dysfunction. This shortage could be resolved by the transplantation of organs or cells from pigs into humans. What competing approaches might provide support for the patient with end-stage organ or cell failure? Four main approaches are receiving increasing attention - (i) implantable mechanical devices, although these are currently limited almost entirely to devices aimed at supporting or replacing the heart, (ii) stem cell technology, at present directed mainly to replace absent or failing cells, but which is also fundamental to progress in (iii) tissue engineering and regenerative medicine, in which the ultimate aim is to replace an entire organ. A final novel potential approach is (iv) blastocyst complementation. These potential alternative approaches are briefly reviewed, and comments added on their current status and whether they are now (or will soon become) realistic alternative therapies to xenotransplantation.

  11. Four Approaches to Entrepreneurship II.

    ERIC Educational Resources Information Center

    Meyer, Earl C.; Nauta, Tom

    1994-01-01

    Four approaches to teaching advanced entrepreneurship in current use are as follows: (1) advanced options such as franchises and buyouts and international entrepreneurship; (2) preentrepreneurship courses; (3) starting a business; and (4) structured experience. (JOW)

  12. A Mathematical Approach to Hybridization

    ERIC Educational Resources Information Center

    Matthews, P. S. C.; Thompson, J. J.

    1975-01-01

    Presents an approach to hybridization which exploits the similarities between the algebra of wave functions and vectors. This method will account satisfactorily for the number of orbitals formed when applied to hybrids involving the s and p orbitals. (GS)

  13. Asteroid 433 Eros Approaches Earth

    NASA Video Gallery

    Asteroid 433 Eros made a close approach to Earth the morning of January 31st coming within 0.17 AU (15 million miles) of our planet. In this set of images taken that morning, the bright moving dot ...

  14. Approaches to Teaching Organizational Communication.

    ERIC Educational Resources Information Center

    Applebaum, Ronald L.

    1998-01-01

    Discusses fundamental problems in selecting an approach to organizational communications; the purpose of an organizational communication course; the structure and content of organizational communication coursework; and teaching strategies used in the basic course in organizational communication. (RS)

  15. Innovative approaches to recurrent training

    NASA Technical Reports Server (NTRS)

    Noon, H.; Murphy, M.

    1984-01-01

    Innovative approaches to recurrent training for regional airline aircrews are explored. Guidelines for recurrent training programs, including the incorporation of cockpit resource management, are discussed. B.W.

  16. Humane Education: A Curriculum Approach.

    ERIC Educational Resources Information Center

    Pearce, Robert W.

    1980-01-01

    Describes a curriculum-based approach to humane education and addresses the role of humane education in the school curriculum as well as the relationship of education to other facets of animal welfare work. (Author/DS)

  17. Determination of fetal state from cardiotocogram using LS-SVM with particle swarm optimization and binary decision tree.

    PubMed

    Yılmaz, Ersen; Kılıkçıer, Cağlar

    2013-01-01

    We use least squares support vector machine (LS-SVM) utilizing a binary decision tree for classification of cardiotocogram to determine the fetal state. The parameters of LS-SVM are optimized by particle swarm optimization. The robustness of the method is examined by running 10-fold cross-validation. The performance of the method is evaluated in terms of overall classification accuracy. Additionally, receiver operation characteristic analysis and cobweb representation are presented in order to analyze and visualize the performance of the method. Experimental results demonstrate that the proposed method achieves a remarkable classification accuracy rate of 91.62%.
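
    A bare-bones version of the tuning scheme described here might look like the following: a small particle swarm searches the (log C, log gamma) plane and each particle is scored by 10-fold cross-validation accuracy. An RBF-kernel SVC and a built-in dataset stand in for the LS-SVM and the cardiotocogram data; both substitutions, and all swarm constants, are assumptions.

```python
# Bare-bones PSO over (log10 C, log10 gamma) for an RBF SVC (standing in
# for LS-SVM); fitness of a particle = mean 10-fold CV accuracy.
import numpy as np
from sklearn.datasets import load_breast_cancer   # stand-in for cardiotocogram data
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(3)
lo, hi = np.array([-2.0, -6.0]), np.array([3.0, -1.0])   # search box, log10 units

def fitness(p):
    return cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=10).mean()

pos = rng.uniform(lo, hi, size=(8, 2))                   # 8 particles
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
for _ in range(5):
    g = pbest[pbest_f.argmax()]                          # swarm's global best
    r1, r2 = rng.random((2, *pos.shape))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
print("best (log10 C, log10 gamma):", pbest[pbest_f.argmax()], "CV acc:", pbest_f.max())
```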

  18. Mexican sign language recognition using normalized moments and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita

    2014-09-01

    This work presents a framework designed for Mexican Sign Language (MSL) recognition. A data set was recorded with 24 static signs from the MSL using 5 different versions; this MSL dataset was captured using a digital camera in incoherent light conditions. Digital image processing was used to segment hand gestures, and a uniform background was selected to avoid using gloved hands or special markers. Feature extraction was performed by calculating normalized geometric moments of gray-scale signs; an Artificial Neural Network then performs the recognition, evaluated with 10-fold cross-validation in Weka. The best result achieved a 95.83% recognition rate.
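
    A loose sketch of this pipeline, under the assumption that the "normalized geometric moments" correspond to scikit-image's normalized central moments: moment features of synthetic binary hand masks feed a small neural network evaluated with 10-fold cross-validation. None of the data or settings are from the paper.

```python
# Normalized central moments of (synthetic) hand masks as features for a
# small neural network, evaluated with 10-fold cross-validation.
import numpy as np
from skimage.measure import moments_central, moments_normalized
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)

def moment_features(img, order=3):
    nu = moments_normalized(moments_central(img, order=order), order=order)
    return [nu[p, q] for p in range(order + 1) for q in range(order + 1)
            if p + q >= 2]                        # nu is undefined (NaN) for p+q < 2

X, y = [], []
for label in range(4):                            # 4 mock "signs"
    for _ in range(30):
        img = np.zeros((32, 32))
        r, c = 8 + 4 * label, 10 + 2 * label
        img[r - 5:r + 5, c - 4:c + 4] = 1.0       # class-dependent blob
        img += rng.normal(scale=0.05, size=img.shape)
        X.append(moment_features(img)); y.append(label)

clf = MLPClassifier(max_iter=2000, random_state=7)
print("10-fold CV accuracy:", cross_val_score(clf, X, y, cv=10).mean())
```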

  19. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and a traditional feature extraction method. PMID:25868233
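
    One way to picture DE-driven feature selection (a stand-in for the paper's improved DE, not its implementation) is to let scipy's differential_evolution search real vectors in [0, 1]^d, threshold them into feature masks, and score each mask by the negative 10-fold cross-validation accuracy of a linear discriminant classifier. The dataset and DE settings are assumptions.

```python
# scipy's differential_evolution searches [0, 1]^d; vectors are thresholded
# at 0.5 into feature masks scored by (negative) 10-fold CV accuracy of LDA.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

def neg_cv_accuracy(v):
    mask = v > 0.5
    if not mask.any():
        return 1.0                                # penalize the empty feature set
    lda = LinearDiscriminantAnalysis()
    return -cross_val_score(lda, X[:, mask], y, cv=10).mean()

res = differential_evolution(neg_cv_accuracy, bounds=[(0, 1)] * X.shape[1],
                             maxiter=5, popsize=5, seed=4, polish=False)
mask = res.x > 0.5
print("features kept:", int(mask.sum()), "of", X.shape[1])
print("10-fold CV accuracy:", -res.fun)
```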

  20. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and a traditional feature extraction method.

  1. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed from the three cross-matched face scores produced by these algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.
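
    The score-level step described above reduces to a small supervised problem: each probe contributes a three-element vector of cross-matched face scores, and a classifier is assessed with 10-fold cross-validation. The sketch below uses synthetic score vectors and logistic regression as a BLR stand-in; the score distributions are invented for illustration.

```python
# Each probe yields a 3-element score vector (synthetic here); logistic
# regression stands in for BLR and is scored by 10-fold cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
y = rng.integers(0, 2, size=400)                  # 1 = genuine match, 0 = impostor
# three algorithms' scores, shifted upward for genuine pairs
S = rng.normal(size=(400, 3)) + y[:, None] * np.array([1.2, 0.9, 1.0])
print("10-fold CV accuracy:", cross_val_score(LogisticRegression(), S, y, cv=10).mean())
```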

  2. Explicit modeling of ancestry improves polygenic risk scores and BLUP prediction

    PubMed Central

    Chen, Chia-Yen; Han, Jiali; Hunter, David J.; Kraft, Peter; Price, Alkes L.

    2016-01-01

    Polygenic prediction using genome-wide SNPs can provide high prediction accuracy for complex traits. Here, we investigate the question of how to account for genetic ancestry when conducting polygenic prediction. We show that the accuracy of polygenic prediction in structured populations may be partly due to genetic ancestry. However, we hypothesized that explicitly modeling ancestry could improve polygenic prediction accuracy. We analyzed three GWAS of hair color, tanning ability and basal cell carcinoma (BCC) in European Americans (sample size from 7,440 to 9,822) and considered two widely used polygenic prediction approaches: polygenic risk scores (PRS) and Best Linear Unbiased Prediction (BLUP). We compared polygenic prediction without correction for ancestry to polygenic prediction with ancestry as a separate component in the model. In 10-fold cross-validation using the PRS approach, the R2 for hair color increased by 66% (0.0456 to 0.0755; p<10−16), the R2 for tanning ability increased by 123% (0.0154 to 0.0344; p<10−16) and the liability-scale R2 for BCC increased by 68% (0.0138 to 0.0232; p<10−16) when explicitly modeling ancestry, which prevents ancestry effects from entering into each SNP effect and being over-weighted. Surprisingly, explicitly modeling ancestry produces a similar improvement when using the BLUP approach, which fits all SNPs simultaneously in a single variance component and causes ancestry to be underweighted. We validate our findings via simulations, which show that the differences in prediction accuracy will increase in magnitude as sample sizes increase. In summary, our results show that explicitly modeling ancestry can be important in both PRS and BLUP prediction. PMID:25995153
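
    The modeling comparison in this record can be caricatured with synthetic genotypes: 10-fold cross-validated R2 of a ridge model (a rough BLUP analogue) on SNPs alone versus SNPs plus an explicit ancestry component. The two-population simulation, the principal-component ancestry proxy, and the upweighting of the ancestry columns are all illustrative assumptions, not the paper's pipeline.

```python
# Synthetic two-population genotypes; compare 10-fold CV R^2 of ridge
# (a rough BLUP analogue) on SNPs alone vs SNPs + explicit ancestry PCs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(5)
n, m = 600, 400
pop = rng.integers(0, 2, size=n)                        # two-population structure
freq = np.where(pop[:, None], 0.3, 0.6)                 # allele frequencies differ
G = rng.binomial(2, np.broadcast_to(freq, (n, m))).astype(float)
y = 0.1 * G[:, :20].sum(axis=1) + 0.5 * pop + rng.normal(size=n)

pcs = PCA(n_components=5).fit_transform(G)              # ancestry proxy (fit once, for brevity)
cv = KFold(n_splits=10, shuffle=True, random_state=5)
r2_snp = cross_val_score(Ridge(alpha=100), G, y, cv=cv, scoring="r2").mean()
# upweighting the PC columns crudely mimics a separate, lightly penalized component
r2_anc = cross_val_score(Ridge(alpha=100), np.c_[G, 50 * pcs], y, cv=cv, scoring="r2").mean()
print("R2, SNPs only      :", round(r2_snp, 3))
print("R2, SNPs + ancestry:", round(r2_anc, 3))
```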

  3. DephosSite: a machine learning approach for discovering phosphatase-specific dephosphorylation sites.

    PubMed

    Wang, Xiaofeng; Yan, Renxiang; Song, Jiangning

    2016-01-01

    Protein dephosphorylation, which is an inverse process of phosphorylation, plays a crucial role in a myriad of cellular processes, including mitotic cycle, proliferation, differentiation, and cell growth. Compared with tyrosine kinase substrate and phosphorylation site prediction, there is a paucity of studies focusing on computational methods of predicting protein tyrosine phosphatase substrates and dephosphorylation sites. In this work, we developed two elegant models for predicting the substrate dephosphorylation sites of three specific phosphatases, namely, PTP1B, SHP-1, and SHP-2. The first predictor is called MGPS-DEPHOS, which is modified from the GPS (Group-based Prediction System) algorithm with an interpretable capability. The second predictor is called CKSAAP-DEPHOS, which is built through the combination of support vector machine (SVM) and the composition of k-spaced amino acid pairs (CKSAAP) encoding scheme. Benchmarking experiments using jackknife cross validation and 30 repeats of 5-fold cross validation tests show that MGPS-DEPHOS and CKSAAP-DEPHOS achieved AUC values of 0.921, 0.914 and 0.912, for predicting dephosphorylation sites of the three phosphatases PTP1B, SHP-1, and SHP-2, respectively. Both methods outperformed the previously developed kNN-DEPHOS algorithm. In addition, a web server implementing our algorithms is publicly available at http://genomics.fzu.edu.cn/dephossite/ for the research community.
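
    The CKSAAP encoding named above admits a compact sketch: for each gap k, count every ordered residue pair (s[i], s[i+k+1]) in a peptide window and normalize by the number of such pairs. This is our reading of the scheme, not the authors' code; the example window sequence and kmax are arbitrary.

```python
# CKSAAP encoding: for each gap k, count ordered residue pairs
# (s[i], s[i+k+1]) and normalize by the number of pairs in the window.
from itertools import product

AA = "ACDEFGHIKLMNPQRSTVWY"
PAIRS = ["".join(p) for p in product(AA, repeat=2)]   # 400 ordered pairs

def cksaap(seq, kmax=3):
    """Return a 400*(kmax+1)-dimensional feature vector for one window."""
    features = []
    for k in range(kmax + 1):
        counts = dict.fromkeys(PAIRS, 0)
        n_pairs = len(seq) - k - 1
        for i in range(n_pairs):
            pair = seq[i] + seq[i + k + 1]
            if pair in counts:                        # skip non-standard residues
                counts[pair] += 1
        features += [counts[p] / n_pairs for p in PAIRS]
    return features

vec = cksaap("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")      # hypothetical window
print(len(vec))                                       # 1600 features for kmax = 3
```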

  4. Laparoscopic approach in gastrointestinal emergencies.

    PubMed

    Jimenez Rodriguez, Rosa M; Segura-Sampedro, Juan José; Flores-Cortés, Mercedes; López-Bernal, Francisco; Martín, Cristobalina; Diaz, Verónica Pino; Ciuro, Felipe Pareja; Ruiz, Javier Padillo

    2016-03-01

    This review focuses on the laparoscopic approach to gastrointestinal emergencies and its more recent indications. Laparoscopic surgery has a specific place in elective procedures, but that does not yet apply in emergency situations. In specific emergencies there is a huge range of indications and different techniques to apply, and not all of them are equally settled. We consider that the most controversial points in minimally invasive procedures are indications in emergency situations, owing to technical difficulties. Some pathologies, such as oesophageal emergencies, obstruction due to colon cancer, abdominal hernias or incarcerated postsurgical hernias, are nearly always resolved by conventional surgery, that is, an open approach, due to limited intraabdominal cavity space or the vulnerability of the bowel. These technical problems have been solved for many diseases, such as perforated peptic ulcer or acute appendectomy, for which the laparoscopic approach has become a well-known and globally supported procedure. On the other hand, endoscopic procedures have acquired further indications, relegating surgical solutions to second place; this happens in cholangitis or pancreatic abscess drainage. This endoluminal approach avoids the need for laparoscopic development in these diseases. Nevertheless, new instruments and new technologies could extend the laparoscopic approach to a broader array of potential procedures. There remains, however, a long way to go. PMID:26973409

  5. Laparoscopic approach in gastrointestinal emergencies

    PubMed Central

    Jimenez Rodriguez, Rosa M; Segura-Sampedro, Juan José; Flores-Cortés, Mercedes; López-Bernal, Francisco; Martín, Cristobalina; Diaz, Verónica Pino; Ciuro, Felipe Pareja; Ruiz, Javier Padillo

    2016-01-01

    This review focuses on the laparoscopic approach to gastrointestinal emergencies and its more recent indications. Laparoscopic surgery has a specific place in elective procedures, but that does not yet apply in emergency situations. In specific emergencies there is a huge range of indications and different techniques to apply, and not all of them are equally settled. We consider that the most controversial points in minimally invasive procedures are indications in emergency situations, owing to technical difficulties. Some pathologies, such as oesophageal emergencies, obstruction due to colon cancer, abdominal hernias or incarcerated postsurgical hernias, are nearly always resolved by conventional surgery, that is, an open approach, due to limited intraabdominal cavity space or the vulnerability of the bowel. These technical problems have been solved for many diseases, such as perforated peptic ulcer or acute appendectomy, for which the laparoscopic approach has become a well-known and globally supported procedure. On the other hand, endoscopic procedures have acquired further indications, relegating surgical solutions to second place; this happens in cholangitis or pancreatic abscess drainage. This endoluminal approach avoids the need for laparoscopic development in these diseases. Nevertheless, new instruments and new technologies could extend the laparoscopic approach to a broader array of potential procedures. There remains, however, a long way to go. PMID:26973409

  6. Operational validation of a multi-period and multi-criteria model conditioning approach for the prediction of rainfall-runoff processes in small forest catchments

    NASA Astrophysics Data System (ADS)

    Choi, H.; Kim, S.

    2012-12-01

    Limestone. The study proceeds as follows. First, hydrological time series of each catchment are sampled and clustered into multiple periods having distinctly different temporal characteristics; second, behavioural parameter distributions are determined for each period based on the specification of multi-criteria model performance measures. Finally, the behavioural parameter sets of each period of a single catchment are applied to the corresponding period of the other catchments, and cross-validations are conducted in this manner for all catchments. The multi-period model conditioning approach is clearly effective in reducing the width of the prediction limits, giving better model performance against the temporal variability of hydrological characteristics, and has enough potential to be an effective prediction tool for ungauged catchments. However, more advanced and continuous studies are needed to expand the application of this approach to the prediction of hydrological responses in ungauged catchments.

  7. Systems Science Approach to Data

    NASA Astrophysics Data System (ADS)

    Kadirkamanathan, Visakan

    Behaviours of many complex systems of interest cannot be adequately described since the underlying science has not advanced enough to be able to tease out the mathematical relationships. There is a need therefore to use methods and tools that capture the structure in the data that is representative of the systems behaviour. The subject of system identification allows us to deduce mathematical relations that govern the dynamics of systems based on the observed data. In addition, it can also be used to understand the system from basic principles. In this brief talk, the main approaches of systems science to data are reviewed identifying their strengths and limitations. The approaches include computational intelligence methods such as neural networks, genetic algorithms and fuzzy logic, as well as system identification methods in both time and frequency domains. Examples from physical science, neuroscience and social science serve to highlight achievements of the systems science approach to data.

  8. Anomaly Detection Using Behavioral Approaches

    NASA Astrophysics Data System (ADS)

    Benferhat, Salem; Tabia, Karim

    Behavioral approaches, which model normal/abnormal activities, have been widely used in recent years in intrusion detection and computer security. Nevertheless, most works have shown that they are ineffective for detecting novel attacks involving new behaviors. In this paper, we first study this recurring problem, due on the one hand to inadequate handling of anomalous and unusual audit events and on the other hand to insufficient decision rules that do not meet behavioral approach objectives. We then propose to enhance the standard decision rules in order to fit behavioral approach requirements and better detect novel attacks. Experimental studies carried out on real and simulated HTTP traffic show that these enhanced decision rules improve the detection of most novel attacks without triggering higher false alarm rates.

  9. Infratemporal approaches to nasopharyngeal tumors.

    PubMed

    Suárez, C; Garćia, L A; Fernández de Leon, R; Rodrigo, J P; Ruiz, B

    1997-01-01

    Twenty patients with neoplasms originating in the nasopharynx were operated on using the infratemporal fossa approach with facial translocation (15 cases), the subtemporal-preauricular infratemporal approach (2 cases), and the transmandibular approach (3 cases). A craniectomy was also required in 14 cases. Fifteen tumors were malignant, while 5 were juvenile angiofibromas with infratemporal and intracranial extensions. Most of the lesions were large and involved multiple areas of the skull base. Tumor excision was total in all but 3 patients. Local flaps, consisting of temporalis muscle flaps, were used in all patients to seal the operative cavity. The most frequent postoperative complications were wound infections and cerebrospinal fluid leaks. Two patients died as a result of postoperative complications. To date, 1 patient has died from disease and 3 are alive with local or distant disease.

  10. Employee Reactions to Merit Pay: Cognitive Approach and Social Approach

    ERIC Educational Resources Information Center

    Wang, Yingchun

    2010-01-01

    The dissertation aims to tackle one of the most pressing questions facing the merit pay system researchers and practitioners: Why do merit pay raises have such a small effect on employees' satisfaction, commitment and job performance? My approach to the study of this question is to develop explanatory frameworks from two perspectives: cognitive…

  11. Systems biology approach to bioremediation

    SciTech Connect

    Chakraborty, Romy; Wu, Cindy H.; Hazen, Terry C.

    2012-06-01

    Bioremediation has historically been approached as a ‘black box’ in terms of our fundamental understanding. Thus it succeeds and fails, seldom with a complete understanding of why. Systems biology is an integrated research approach to study complex biological systems, by investigating interactions and networks at the molecular, cellular, community, and ecosystem level. Knowledge of these interactions within individual components is fundamental to understanding the dynamics of the ecosystem under investigation. Finally, understanding and modeling functional microbial community structure and stress responses in environments at all levels have tremendous implications for our fundamental understanding of hydrobiogeochemical processes and the potential for making bioremediation breakthroughs and illuminating the ‘black box’.

  12. Writing approaches of nursing students.

    PubMed

    Lavelle, Ellen; Ball, Susan C; Maliszewski, Genevieve

    2013-01-01

    Over the past 20 years, research has focused on the writing processes of college students; however, despite recent support for writing as a tool of reflection in nursing education, little is known about how nursing students go about writing papers and assignments as part of their professional education. To determine the writing processes of nursing students, the Inventory of Processes in College Composition, a self-report questionnaire, was administered to 169 nursing students. Results support the independence of the writing approaches that nursing students use and their similarity to the writing approaches of a general college student population.

  13. An approach to duodenal biopsies

    PubMed Central

    Serra, S; Jani, P A

    2006-01-01

    The introduction of endoscopy of the upper digestive tract as a routine diagnostic procedure has increased the number of duodenal biopsy specimens. Consequently, the pathologist is often asked to evaluate them. In this review, a practical approach to the evaluation of a duodenal biopsy specimen is discussed. An overview of the handling of specimens is given and the normal histology and commonly encountered diseases are discussed. Finally, a description of commonly seen infections is provided, together with an algorithmic approach for diagnosis. PMID:16679353

  14. [Three approaches to culpability. 2].

    PubMed

    Guyot-Gans, F

    1995-11-01

    The third, psychoanalytical approach to culpability will be conducted through the study of Freud's and A. Hesnard's works. We will see how Freud unravels the feeling of culpability and solves the enigma of its origins; in his opinion, it is transmitted through a phylogenetic agency. Thanks to this conceptualisation we will be able to measure in what way Freud's writings were influenced by occidental culture. Eventually, we will tackle the question of A. Hesnard's phenomenological approach. It will lead to an original outlook in which culpability is at the core of every ethical behaviour.

  15. Holistic approach to chronic constipation.

    PubMed

    Pescatori, Mario

    2006-01-01

    By "holistic approach" (greek "olos" = "all") we mean a clinical approach which is not only confined to the diseased segment of the body, say the inert large bowel or the spastic pelvic floor in case of constipation, but takes under consideration the whole "mind and body complex", which is a unique indivisible entity. According to a prospective study carried out in our Unit and under press in Colorectal Disease, 66% of the patients with obstructed defecation suffer either from anxiety or depression, thus showing the major role played by an altered psyche in the etiology of their constipation.

  16. Support for Quitting: Choose Your Approach

    MedlinePlus


  17. Approaches to Academic Growth Assessment

    ERIC Educational Resources Information Center

    Anderman, Eric M.; Gimbert, Belinda; O'Connell, Ann A.; Riegel, Lisa

    2015-01-01

    Background: There is much interest in assessing growth in student learning. Assessments of growth have important implications and affect many policy decisions at many levels. Aims: In the present article, we review some of the different approaches to measuring growth and examine the implications of their usage. Sample: Samples used in research on…

  18. Family Research: An Ethnographic Approach

    PubMed Central

    Stein, Howard F.

    1991-01-01

    An ethnographic approach based on in-depth interviewing, naturalistic and participant observation, narrative description, and contextual interpretation is proposed as a tool for family health care research. The multiple meanings of family, both for research clinicians and for society, are considered. The problem of how a family orientation is incorporated into biomedical health care is discussed. PMID:21229058

  19. Institutional Planning: A Systems Approach.

    ERIC Educational Resources Information Center

    Adamson, Willie D.

    This four-chapter report explores the possible contributions of a systems approach to institutional planning. After introductory comments, Chapter I reviews the management theory of Henry Fayol, which emphasizes management tasks, such as planning, organizing, commanding, coordinating, and controlling, which are "universal" regardless of the level…

  20. Interdisciplinary Approach to Building Construction.

    ERIC Educational Resources Information Center

    Armstrong, Harry

    The paper discusses the interdisciplinary approach used by the Amity High School House Construction Project to develop a construction cluster in a small high school, to give students on-the-job training, and to teach them the relevancy of academic education. The project's monthly plan of action is briefly described. Suggested activities,…

  1. Building America Systems Engineering Approach

    SciTech Connect

    2011-12-15

    The Building America Research Teams use a systems engineering approach to achieve higher quality and energy savings in homes. Using these techniques, the energy consumption of new houses can be reduced by 40% or more with little or no impact on the cost of ownership.

  2. Clinical Approach to Teacher Evaluation.

    ERIC Educational Resources Information Center

    Tipton, William

    This manual, prepared for the state of Washington, provides tools and strategies aimed at assisting building administrators in clinical approaches to teacher evaluation. The first section provides preliminary thoughts on the evaluation process and discusses the two major problems: acceptance and time. The second section discusses the sources and…

  3. Cleft Palate; A Multidiscipline Approach.

    ERIC Educational Resources Information Center

    Stark, Richard B., Ed.

    Nineteen articles present a multidisciplinary approach to the management of facial clefts. The following subjects are discussed: the history of cleft lip and cleft palate surgery; congenital defects; classification; the operation of a cleft palate clinic; physical examination of newborns with cleft lip and/or palate; nursing care; anesthesia;…

  4. Active Approaches to Social Education.

    ERIC Educational Resources Information Center

    Lindoe, Sylvia; Bond, Julia

    1987-01-01

    A three-day model inservice program for special education teachers in Coventry (England) focused on facilitating personal and social development in disabled students through group experiences based on materials and principles of the Active Tutorial Work project, a structured, developmentally based approach emphasizing active learning. (JW)

  5. Science Teaching: A Dilemmatic Approach

    ERIC Educational Resources Information Center

    Traianou, Anna

    2012-01-01

    In this paper, I examine the nature of primary science expertise using an ethnographic and sociocultural approach and a theoretical analysis that conceptualises educational practice in terms of the resolution of dilemmas. Using data from an in-depth investigation of the perspective and practice of a single teacher, I discuss some of the "dilemmas"…

  6. The "Rear View Mirror" Approach.

    ERIC Educational Resources Information Center

    Nord, James R.

    1987-01-01

    The new interactive videodisk systems with augmented audio capabilities have great potential for improving the teaching of foreign languages. At present that potential is unfulfilled because the profession is following a "rear view mirror" approach to media use: first, to fixate current practice; second, to distribute it broadly; and last, to…

  7. New Approaches to Comparative Education.

    ERIC Educational Resources Information Center

    Altbach, Philip G., Ed.; Kelly, Gail P., Ed.

    Perspectives on research in comparative education are presented in 17 articles originally published in the "Comparative Education Review." The objective is to present an array of new viewpoints, orientations, and approaches. Titles and authors are: "Introduction: Perspectives on Comparative Education" (Philip G. Altbach, Gail P. Kelly); "Critical…

  8. ENGLISH WRITING, APPROACHES TO COMPOSITION.

    ERIC Educational Resources Information Center

    Euclid English Demonstration Center, OH.

    THIS COLLECTION OF PAPERS BY STAFF MEMBERS OF THE EUCLID ENGLISH DEMONSTRATION CENTER FOCUSES ON APPROACHES TO THE TEACHING OF COMPOSITION IN THE JUNIOR HIGH SCHOOL. THE PAPERS ARE (1) "LITERATURE AND COMPOSITION," BY JAMES F. MCCAMPBELL, (2) "COMPOSING--EPIPHANY AND DETAIL," BY JOSEPH DYESS, (3) "THE LANGUAGE COMPOSITION ACT," BY LESTER E.…

  9. Innovative Approaches to Career Guidance.

    ERIC Educational Resources Information Center

    Freeman, Andrew R.

    A key part of a broad-based approach to career education in Australian schools is vocational/career guidance. Various vocational guidance programs have been developed for specific groups in Australian society, including work experience, caravans, and micrographics technology for the handicapped; pre-employment courses and a family education center…

  10. Multidisciplinary Approaches in Evolutionary Linguistics

    ERIC Educational Resources Information Center

    Gong, Tao; Shuai, Lan; Wu, Yicheng

    2013-01-01

    Studying language evolution has become resurgent in modern scientific research. In this revival field, approaches from a number of disciplines other than linguistics, including (paleo)anthropology and archaeology, animal behaviors, genetics, neuroscience, computer simulation, and psychological experimentation, have been adopted, and a wide scope…

  11. Eight Approaches to Language Teaching.

    ERIC Educational Resources Information Center

    Doggett, Gina

    Important features of eight second language teaching methods--grammar-translation, direct, audiolingual, the Silent Way, Suggestopedia, community language learning, Total Physical Response, and the communicative approach--are summarized. A chart outlines characteristics of these aspects of the methods: goals, teacher and student roles, the…

  12. Partnership in Sector Wide Approaches

    ERIC Educational Resources Information Center

    Tolley, Hilary

    2011-01-01

    Within the context of bilateral support to the education sector in Tonga and the Solomon Islands, this paper will explore how the discourse of "partnership" has been interpreted and activated within the Sector wide approach (SWAp). In concentrating particularly on the relationship between the respective Ministries of Education and New Zealand's…

  13. Using Natural Approach Teaching Techniques.

    ERIC Educational Resources Information Center

    Whitman, Charles

    1986-01-01

    Describes a beginning foreign language class applying the principles of Stephen Krashen's "Natural Approach" and James Asher's "Total Physical Response" method. Initially students carry out the instructor's commands in the form of actions rather than being required to speak. In later stages role play and simple discussions are introduced. (LMO)

  14. Guitar Class: A Multifaceted Approach.

    ERIC Educational Resources Information Center

    Bartel, Lee R.

    1990-01-01

    Suggests that the bias linking guitars to popular culture has needlessly limited approaches to teaching guitar. Examines how each of five current programs develops different music skills. Advocates a comprehensive, multifaceted program capable of emphasizing student skills in melody, harmony, perception, creativity, and performance over six years…

  15. Restaurant Sanitation: A New Approach

    ERIC Educational Resources Information Center

    Hinckley, Walter J.

    1974-01-01

    The new approach taken by the New York City Health Department to the problems of surveillance and maintenance of high sanitation standards in food service establishments is explained. Results of a pilot study are presented followed by the new procedures and evaluation. Future plans are then indicated. (LS)

  16. Teacher Training: A Personal Approach.

    ERIC Educational Resources Information Center

    Henson, Kenneth T.

    Indiana State University has developed an experimental program to develop a personal approach to teacher training. The ultimate intention of the program is to produce educators who are personally committed to the development of the young people often collectively labeled "students." Devices used in the program include the use of student names,…

  17. Toxicological approaches to complex mixtures.

    PubMed Central

    Mauderly, J L

    1993-01-01

    This paper reviews the role of toxicological studies in understanding the health effects of environmental exposures to mixtures. The approach taken is to review mixtures that have received the greatest emphasis from toxicology; major mixtures research programs; the toxicologist's view of mixtures and approaches to their study; and the complementary roles of toxicological, clinical, and epidemiological studies. Studies of tobacco smoke, engine exhaust, combustion products, and air pollutants comprise most of the past research on mixtures. Because of their great experimental control over subjects, exposures, and endpoints, toxicologists tend to consider a wider range of toxic interactions among mixture components and sequential exposures than is practical for human studies. The three fundamental experimental approaches used by toxicologists are integrative (studying the mixture as a whole), dissective (dissecting a mixture to determine causative constituents), and synthetic (studying interactions between agents in simple combinations). Toxicology provides information on potential hazards, mechanisms by which mixture constituents interact to cause effects, and exposure dose-effect relationships; but extrapolation from laboratory data to quantitative human health risks is problematic. Toxicological, clinical, and epidemiological approaches are complementary but are seldom coordinated. Fostering synergistic interactions among the disciplines in studying the risks from mixtures could be advantageous. PMID:7515806

  18. Comparative Psychology: An Epigenetic Approach.

    ERIC Educational Resources Information Center

    Greenberg, Gary

    1987-01-01

    A comparative psychology course oriented around the themes of phylogeny and ontogeny is described. The course emphasizes the evolution and development of behavioral processes and includes a discussion of the concept of integrative levels and Schneirla's approach/withdrawal theory. The course evaluates genetic determinism and stresses the principle…

  19. A Freudian Approach to Education.

    ERIC Educational Resources Information Center

    Gartner, Sandra L.

    This document offers the point of view that Bruno Bettelheim's writings, based on Sigmund Freud's approach to education, suggest the most practical applications for achieving positive results within the classroom. The overall result of a student being taught all through school by the Freudian method would be an extremely positive one. Such a…

  20. Novel Approaches to Surfactant Administration

    PubMed Central

    Gupta, Samir; Donn, Steven M.

    2012-01-01

    Surfactant replacement therapy has been the mainstay of treatment for preterm infants with respiratory distress syndrome for more than twenty years. For the most part, surfactant is administered intratracheally, followed by mechanical ventilation. In recent years, the growing interest in noninvasive ventilation has led to novel approaches of administration. This paper will review these techniques and the associated clinical evidence. PMID:23243504