Sample records for dimensional variable selection

  1. A Selective Overview of Variable Selection in High Dimensional Feature Space

    PubMed Central

    Fan, Jianqing

    2010-01-01

    High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality. They have been widely applied for simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief account of the recent developments of theory, methods, and implementations for high dimensional variable selection. Questions of what limits of dimensionality such methods can handle, what role penalty functions play, and what statistical properties they possess rapidly drive the advances of the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods. PMID:21572976
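
    As a concrete illustration of the penalized-likelihood idea surveyed here, the short sketch below applies an L1 (lasso) penalty to synthetic data with many more variables than observations. It is a minimal example of the general approach, not code from the article; the data, the use of scikit-learn, and the penalty level alpha=0.2 are assumptions made purely for illustration.

    ```python
    # Minimal sketch: an L1 (lasso) penalty as a variable selector on synthetic data.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 100, 500                          # many more variables than observations
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]   # only five truly active variables
    y = X @ beta + rng.standard_normal(n)

    model = Lasso(alpha=0.2).fit(X, y)       # alpha controls the penalty strength
    selected = np.flatnonzero(model.coef_)   # variables with nonzero estimated effect
    print("selected variables:", selected)
    ```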

  2. Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model

    NASA Astrophysics Data System (ADS)

    Song, Yunquan; Lin, Lu; Jian, Ling

    2016-07-01

    The single-index varying-coefficient model is an important mathematical modeling approach for nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, due to the robustness of the check loss function to outliers in the finite samples, our proposed variable selection method is more robust than the ones based on the least squares criterion. Finally, the method is illustrated with numerical simulations.

  3. Evaluation of variable selection methods for random forests and omics data sets.

    PubMed

    Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke

    2017-10-16

    Machine learning methods and in particular random forests are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. © The Author 2017. Published by Oxford University Press.
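
    The snippet below is a rough stand-in for the kind of random-forest variable ranking compared in this study: it uses scikit-learn's permutation importance rather than the Boruta or Vita implementations themselves, and the simulated data, forest size and number of permutation repeats are arbitrary choices for illustration only.

    ```python
    # Sketch of permutation-based variable ranking with a random forest
    # (a stand-in for Boruta/Vita, not their implementations).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=200, n_features=1000, n_informative=10,
                               random_state=1)
    rf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X, y)

    # Permute each feature and record the drop in accuracy; larger drops indicate
    # more relevant variables.
    imp = permutation_importance(rf, X, y, n_repeats=10, random_state=1)
    ranking = np.argsort(imp.importances_mean)[::-1]
    print("top 10 variables by permutation importance:", ranking[:10])
    ```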

  4. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
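
    A drastically simplified sketch of the two-stage idea is given below: a sparse (lasso) first stage builds fitted values from the instruments, and a sparse second stage selects covariate effects from those fitted values. This is an illustration under assumed synthetic data and arbitrary penalty levels, not the authors' estimator or tuning procedure.

    ```python
    # Simplified two-stage sketch: stage 1 builds sparse fitted values from the
    # instruments, stage 2 selects covariate effects (illustrative only).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    n, p_z, p_x = 200, 300, 100               # sample size, instruments, covariates
    Z = rng.standard_normal((n, p_z))         # genetic variants (instruments)
    Gamma = np.zeros((p_z, p_x))
    Gamma[:5, :3] = 1.0                       # a few instruments drive a few covariates
    X = Z @ Gamma + rng.standard_normal((n, p_x))   # gene expressions
    beta = np.zeros(p_x)
    beta[:3] = [1.0, -1.0, 0.5]               # three covariates affect the trait
    y = X @ beta + rng.standard_normal(n)

    # Stage 1: sparse regression of each covariate on the instruments.
    X_hat = np.column_stack(
        [Lasso(alpha=0.1).fit(Z, X[:, j]).predict(Z) for j in range(p_x)]
    )
    # Stage 2: sparse regression of the outcome on the fitted covariates.
    stage2 = Lasso(alpha=0.1).fit(X_hat, y)
    print("selected covariates:", np.flatnonzero(stage2.coef_))
    ```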

  5. A review of covariate selection for non-experimental comparative effectiveness research.

    PubMed

    Sauer, Brian C; Brookhart, M Alan; Roy, Jason; VanderWeele, Tyler

    2013-11-01

    This paper addresses strategies for selecting variables for adjustment in non-experimental comparative effectiveness research and uses causal graphs to illustrate the causal network that relates treatment to outcome. Variables in the causal network take on multiple structural forms. Adjustment for a common cause pathway between treatment and outcome can remove confounding, whereas adjustment for other structural types may increase bias. For this reason, variable selection would ideally be based on an understanding of the causal network; however, the true causal network is rarely known. Therefore, we describe more practical variable selection approaches based on background knowledge when the causal structure is only partially known. These approaches include adjustment for all observed pretreatment variables thought to have some connection to the outcome, all known risk factors for the outcome, and all direct causes of the treatment or the outcome. Empirical approaches, such as forward and backward selection and automatic high-dimensional proxy adjustment, are also discussed. As there is a continuum between knowing and not knowing the causal, structural relations of variables, we recommend addressing variable selection in a practical way that involves a combination of background knowledge and empirical selection and that uses high-dimensional approaches. This empirical approach can be used to select from a set of a priori variables based on the researcher's knowledge to be included in the final analysis or to identify additional variables for consideration. This more limited use of empirically derived variables may reduce confounding while simultaneously reducing the risk of including variables that may increase bias. Copyright © 2013 John Wiley & Sons, Ltd.

  6. A Review of Covariate Selection for Nonexperimental Comparative Effectiveness Research

    PubMed Central

    Sauer, Brian C.; Brookhart, Alan; Roy, Jason; Vanderweele, Tyler

    2014-01-01

    This paper addresses strategies for selecting variables for adjustment in non-experimental comparative effectiveness research (CER), and uses causal graphs to illustrate the causal network that relates treatment to outcome. Variables in the causal network take on multiple structural forms. Adjustment for a common cause pathway between treatment and outcome can remove confounding, while adjustment for other structural types may increase bias. For this reason, variable selection would ideally be based on an understanding of the causal network; however, the true causal network is rarely known. Therefore, we describe more practical variable selection approaches based on background knowledge when the causal structure is only partially known. These approaches include adjustment for all observed pretreatment variables thought to have some connection to the outcome, all known risk factors for the outcome, and all direct causes of the treatment or the outcome. Empirical approaches, such as forward and backward selection and automatic high-dimensional proxy adjustment, are also discussed. As there is a continuum between knowing and not knowing the causal, structural relations of variables, we recommend addressing variable selection in a practical way that involves a combination of background knowledge and empirical selection and that uses high-dimensional approaches. This empirical approach can be used to select from a set of a priori variables based on the researcher’s knowledge to be included in the final analysis or to identify additional variables for consideration. This more limited use of empirically derived variables may reduce confounding while simultaneously reducing the risk of including variables that may increase bias. PMID:24006330

  7. Is hyporheic flow an indicator for salmonid spawning site selection?

    NASA Astrophysics Data System (ADS)

    Benjankar, R. M.; Tonina, D.; Marzadri, A.; McKean, J. A.; Isaak, D.

    2015-12-01

    Several studies have investigated the role of hydraulic variables in the selection of spawning sites by salmonids. Some recent studies suggest that the intensity of the ambient hyporheic flow, that is, the flow present without a salmon egg pocket, is a cue for spawning site selection, but others have argued against it. We tested this hypothesis by using a unique dataset of field-surveyed spawning site locations and an unprecedented meter-scale resolution bathymetry of a 13.5 km long reach of Bear Valley Creek (Idaho, USA), an important Chinook salmon spawning stream. We used a two-dimensional surface water model to quantify stream hydraulics and a three-dimensional hyporheic model to quantify the hyporheic flows. Our results show that the intensity of ambient hyporheic flows is not a statistically significant variable for spawning site selection. Conversely, the intensity of the water surface curvature and the habitat quality, quantified as a function of stream hydraulics and morphology, are the most important variables for salmonid spawning site selection. KEY WORDS: Salmonid spawning habitat, pool-riffle system, habitat quality, surface water curvature, hyporheic flow

  8. Effects of selected design variables on three ramp, external compression inlet performance. [boundary layer control, bypasses, and mass flow rate]

    NASA Technical Reports Server (NTRS)

    Kamman, J. H.; Hall, C. L.

    1975-01-01

    Two inlet performance tests and one inlet/airframe drag test were conducted in 1969 at the NASA-Ames Research Center. The basic inlet system was two-dimensional, three ramp (overhead), external compression, with variable capture area. The data from these tests were analyzed to show the effects of selected design variables on the performance of this type of inlet system. The inlet design variables investigated include inlet bleed, bypass, operating mass flow ratio, inlet geometry, and variable capture area.

  9. The Fisher-Markov selector: fast selecting maximally separable feature subset for multiclass classification with applications to high-dimensional data.

    PubMed

    Cheng, Qiang; Zhou, Hongbo; Cheng, Jie

    2011-06-01

    Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain a local optimum instead of the global optimum. Toward the selection of the globally optimal subset of features efficiently, we introduce a new selector--which we call the Fisher-Markov selector--to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with the sparsity as an optimization objective. With properly identified measures for the sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach for optimizing the measures to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results. In pattern recognition and from a model selection viewpoint, our procedure shows that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function which in fact can be obtained with an explicit expression.

  10. Independence screening for high dimensional nonlinear additive ODE models with applications to dynamic gene regulatory networks.

    PubMed

    Xue, Hongqi; Wu, Shuang; Wu, Yichao; Ramirez Idarraga, Juan C; Wu, Hulin

    2018-05-02

    Mechanism-driven low-dimensional ordinary differential equation (ODE) models are often used to model viral dynamics at cellular levels and epidemics of infectious diseases. However, low-dimensional mechanism-based ODE models are limited for modeling infectious diseases at molecular levels such as transcriptomic or proteomic levels, which is critical to understand pathogenesis of diseases. Although linear ODE models have been proposed for gene regulatory networks (GRNs), nonlinear regulations are common in GRNs. The reconstruction of large-scale nonlinear networks from time-course gene expression data remains an unresolved issue. Here, we use high-dimensional nonlinear additive ODEs to model GRNs and propose a 4-step procedure to efficiently perform variable selection for nonlinear ODEs. To tackle the challenge of high dimensionality, we couple the 2-stage smoothing-based estimation method for ODEs and a nonlinear independence screening method to perform variable selection for the nonlinear ODE models. We have shown that our method possesses the sure screening property and it can handle problems with non-polynomial dimensionality. Numerical performance of the proposed method is illustrated with simulated data and a real data example for identifying the dynamic GRN of Saccharomyces cerevisiae. Copyright © 2018 John Wiley & Sons, Ltd.
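
    The core screening idea can be illustrated in a few lines: rank candidate variables by a marginal association measure and keep only the top d before any joint modeling. The sketch below uses plain marginal correlation on synthetic data and is only a generic illustration of sure independence screening, not the nonlinear, ODE-specific procedure proposed in the paper.

    ```python
    # Generic sure-independence-screening sketch: keep the d variables with the
    # largest marginal correlation with the response (synthetic data, linear case).
    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 100, 2000
    X = rng.standard_normal((n, p))
    y = 2 * X[:, 0] - 3 * X[:, 1] + X[:, 2] + rng.standard_normal(n)

    Xc = (X - X.mean(axis=0)) / X.std(axis=0)      # column-standardised predictors
    yc = (y - y.mean()) / y.std()
    corr = np.abs(Xc.T @ yc) / n                   # marginal absolute correlations
    d = int(n / np.log(n))                         # a common screening-set size
    kept = np.argsort(corr)[::-1][:d]
    print("true variables {0, 1, 2} retained:", {0, 1, 2} <= set(kept.tolist()))
    ```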

  11. A Selective Review of Group Selection in High-Dimensional Models

    PubMed Central

    Huang, Jian; Breheny, Patrick; Ma, Shuangge

    2013-01-01

    Grouping structures arise naturally in many statistical modeling problems. Several methods have been proposed for variable selection that respect grouping structure in variables. Examples include the group LASSO and several concave group selection methods. In this article, we give a selective review of group selection concerning methodological developments, theoretical properties and computational algorithms. We pay particular attention to group selection methods involving concave penalties. We address both group selection and bi-level selection methods. We describe several applications of these methods in nonparametric additive models, semiparametric regression, seemingly unrelated regressions, genomic data analysis and genome wide association studies. We also highlight some issues that require further study. PMID:24174707

  12. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635

  13. Variable screening via quantile partial correlation

    PubMed Central

    Ma, Shujie; Tsai, Chih-Ling

    2016-01-01

    In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we propose using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683
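
    A toy version of quantile-based marginal screening is sketched below: each candidate variable is ranked by the absolute slope of a marginal median (tau = 0.5) regression, which tolerates heavy-tailed noise. The statsmodels QuantReg call, the synthetic data and the screening-set size are illustrative assumptions; the paper's quantile partial correlation and EBIC subset-selection steps are not reproduced.

    ```python
    # Toy quantile-based marginal screening: rank variables by the absolute slope
    # of a marginal median regression (tau = 0.5).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n, p = 150, 200
    X = rng.standard_normal((n, p))
    y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + rng.standard_t(df=3, size=n)   # heavy tails

    slopes = []
    for j in range(p):
        xj = sm.add_constant(X[:, j])                  # intercept + one predictor
        fit = sm.QuantReg(y, xj).fit(q=0.5)            # marginal median regression
        slopes.append(abs(fit.params[1]))
    keep = np.argsort(slopes)[::-1][: int(n / np.log(n))]
    print("screened-in variables include the true ones {0, 3}:",
          {0, 3} <= set(keep.tolist()))
    ```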

  14. Improved Sparse Multi-Class SVM and Its Application for Gene Selection in Cancer Classification

    PubMed Central

    Huang, Lingkang; Zhang, Hao Helen; Zeng, Zhao-Bang; Bushel, Pierre R.

    2013-01-01

    Background: Microarray techniques provide promising tools for cancer diagnosis using gene expression profiles. However, molecular diagnosis based on high-throughput platforms presents great challenges due to the overwhelming number of variables versus the small sample size and the complex nature of multi-type tumors. Support vector machines (SVMs) have shown superior performance in cancer classification due to their ability to handle high dimensional low sample size data. The multi-class SVM algorithm of Crammer and Singer provides a natural framework for multi-class learning. Despite its effective performance, the procedure utilizes all variables without selection. In this paper, we propose to improve the procedure by imposing shrinkage penalties in learning to enforce solution sparsity. Results: The original multi-class SVM of Crammer and Singer is effective for multi-class classification but does not conduct variable selection. We improved the method by introducing soft-thresholding type penalties to incorporate variable selection into multi-class classification for high dimensional data. The new methods were applied to simulated data and two cancer gene expression data sets. The results demonstrate that the new methods can select a small number of genes for building accurate multi-class classification rules. Furthermore, the important genes selected by the methods overlap significantly, suggesting general agreement among different variable selection schemes. Conclusions: High accuracy and sparsity make the new methods attractive for cancer diagnostics with gene expression data and defining targets of therapeutic intervention. Availability: The source MATLAB code is available from http://math.arizona.edu/~hzhang/software.html. PMID:23966761
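
    As a loose analogue (not the paper's Crammer-and-Singer-based method), the sketch below uses an L1-penalized linear SVM from scikit-learn, whose sparse coefficient matrix acts as a per-class gene selector; the simulated data and the value of C are placeholder assumptions.

    ```python
    # Loose analogue of sparse multi-class gene selection: an L1-penalized linear
    # SVM whose per-class coefficients are mostly zero (not the paper's method).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=120, n_features=2000, n_informative=15,
                               n_classes=3, n_clusters_per_class=1, random_state=0)

    # penalty="l1" drives many coefficients to exactly zero; dual=False is required
    # for the L1 formulation in scikit-learn.
    clf = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000).fit(X, y)
    selected_genes = np.unique(np.nonzero(clf.coef_)[1])   # union over the 3 classes
    print(f"{selected_genes.size} genes selected, e.g.:", selected_genes[:10])
    ```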

  15. Radiograph and passive data analysis using mixed variable optimization

    DOEpatents

    Temple, Brian A.; Armstrong, Jerawan C.; Buescher, Kevin L.; Favorite, Jeffrey A.

    2015-06-02

    Disclosed herein are representative embodiments of methods, apparatus, and systems for performing radiography analysis. For example, certain embodiments perform radiographic analysis using mixed variable computation techniques. One exemplary system comprises a radiation source, a two-dimensional detector for detecting radiation transmitted through an object between the radiation source and detector, and a computer. In this embodiment, the computer is configured to input the radiographic image data from the two-dimensional detector and to determine one or more materials that form the object by using an iterative analysis technique that selects the one or more materials from hierarchically arranged solution spaces of discrete material possibilities and selects the layer interfaces from the optimization of the continuous interface data.

  16. EXTRACTING PRINCIPLE COMPONENTS FOR DISCRIMINANT ANALYSIS OF FMRI IMAGES

    PubMed Central

    Liu, Jingyu; Xu, Lai; Caprihan, Arvind; Calhoun, Vince D.

    2009-01-01

    This paper presents an approach for selecting optimal components for discriminant analysis. Such an approach is useful when further detailed analyses for discrimination or characterization requires dimensionality reduction. Our approach can accommodate a categorical variable such as diagnosis (e.g. schizophrenic patient or healthy control), or a continuous variable like severity of the disorder. This information is utilized as a reference for measuring a component’s discriminant power after principle component decomposition. After sorting each component according to its discriminant power, we extract the best components for discriminant analysis. An application of our reference selection approach is shown using a functional magnetic resonance imaging data set in which the sample size is much less than the dimensionality. The results show that the reference selection approach provides an improved discriminant component set as compared to other approaches. Our approach is general and provides a solid foundation for further discrimination and classification studies. PMID:20582334

  17. EXTRACTING PRINCIPLE COMPONENTS FOR DISCRIMINANT ANALYSIS OF FMRI IMAGES.

    PubMed

    Liu, Jingyu; Xu, Lai; Caprihan, Arvind; Calhoun, Vince D

    2008-05-12

    This paper presents an approach for selecting optimal components for discriminant analysis. Such an approach is useful when further detailed analyses for discrimination or characterization requires dimensionality reduction. Our approach can accommodate a categorical variable such as diagnosis (e.g. schizophrenic patient or healthy control), or a continuous variable like severity of the disorder. This information is utilized as a reference for measuring a component's discriminant power after principle component decomposition. After sorting each component according to its discriminant power, we extract the best components for discriminant analysis. An application of our reference selection approach is shown using a functional magnetic resonance imaging data set in which the sample size is much less than the dimensionality. The results show that the reference selection approach provides an improved discriminant component set as compared to other approaches. Our approach is general and provides a solid foundation for further discrimination and classification studies.

  18. Anthropometry

    NASA Technical Reports Server (NTRS)

    Mcconville, J. T.; Laubach, L. L.

    1978-01-01

    Data on body-size measurement are presented to aid in spacecraft design. Tabulated dimensional anthropometric data on 59 variables for 12 selected populations are given. The variables chosen were those judged most relevant to the manned space program. A glossary of anatomical and anthropometric terms is included. Selected body dimensions of males and females from the potential astronaut population projected to the 1980-1990 time frame are given. Illustrations of drawing-board manikins based on those anticipated body sizes are included.

  19. Canonical Measure of Correlation (CMC) and Canonical Measure of Distance (CMD) between sets of data. Part 3. Variable selection in classification.

    PubMed

    Ballabio, Davide; Consonni, Viviana; Mauri, Andrea; Todeschini, Roberto

    2010-01-11

    In multivariate regression and classification problems, variable selection is an important procedure used to select an optimal subset of variables with the aim of producing more parsimonious and ultimately more predictive models. Variable selection is often necessary when dealing with methodologies that produce thousands of variables, such as Quantitative Structure-Activity Relationships (QSARs) and highly dimensional analytical procedures. In this paper a novel method for variable selection for classification purposes is introduced. This method exploits the recently proposed Canonical Measure of Correlation between two sets of variables (CMC index). The CMC index is in this case calculated for two specific sets of variables, the former being comprised of the independent variables and the latter of the unfolded class matrix. The CMC values, calculated by considering one variable at a time, can be sorted to give a ranking of the variables on the basis of their class discrimination capabilities. Alternatively, the CMC index can be calculated for all the possible combinations of variables and the variable subset with the maximal CMC can be selected, but this procedure is computationally more demanding and classification performance of the selected subset is not always the best one. The effectiveness of the CMC index in selecting variables with discriminative ability was compared with that of other well-known strategies for variable selection, such as the Wilks' Lambda, the VIP index based on the Partial Least Squares-Discriminant Analysis, and the selection provided by classification trees. A variable Forward Selection based on the CMC index was finally used in conjunction with Linear Discriminant Analysis. This approach was tested on several chemical data sets. The results obtained were encouraging.

  20. Group Variable Selection Via Convex Log-Exp-Sum Penalty with Application to a Breast Cancer Survivor Study

    PubMed Central

    Geng, Zhigeng; Wang, Sijian; Yu, Menggang; Monahan, Patrick O.; Champion, Victoria; Wahba, Grace

    2017-01-01

    Summary: In many scientific and engineering applications, covariates are naturally grouped. When the group structures are available among covariates, people are usually interested in identifying both important groups and important variables within the selected groups. Among existing successful group variable selection methods, some methods fail to conduct within-group selection. Some methods are able to conduct both group and within-group selection, but the corresponding objective functions are non-convex. Such a non-convexity may require extra numerical effort. In this article, we propose a novel Log-Exp-Sum (LES) penalty for group variable selection. The LES penalty is strictly convex. It can identify important groups as well as select important variables within the group. We develop an efficient group-level coordinate descent algorithm to fit the model. We also derive non-asymptotic error bounds and asymptotic group selection consistency for our method in the high-dimensional setting where the number of covariates can be much larger than the sample size. Numerical results demonstrate the good performance of our method in both variable selection and prediction. We applied the proposed method to an American Cancer Society breast cancer survivor dataset. The findings are clinically meaningful and may help design intervention programs to improve the quality of life for breast cancer survivors. PMID:25257196

  1. Retention modelling of polychlorinated biphenyls in comprehensive two-dimensional gas chromatography.

    PubMed

    D'Archivio, Angelo Antonio; Incani, Angela; Ruggieri, Fabrizio

    2011-01-01

    In this paper, we use a quantitative structure-retention relationship (QSRR) method to predict the retention times of polychlorinated biphenyls (PCBs) in comprehensive two-dimensional gas chromatography (GC×GC). We analyse the GC×GC retention data taken from the literature by comparing the predictive capability of different regression methods. The various models are generated using 70 out of 209 PCB congeners in the calibration stage, while their predictive performance is evaluated on the remaining 139 compounds. The two-dimensional chromatogram is initially estimated by separately modelling the retention times of PCBs in the first and in the second column (tR(1) and tR(2), respectively). In particular, multilinear regression (MLR) combined with genetic algorithm (GA) variable selection is performed to extract two small subsets of predictors for tR(1) and tR(2) from a large set of theoretical molecular descriptors provided by the popular software Dragon, which after removal of highly correlated or almost constant variables consists of 237 structure-related quantities. Based on GA-MLR analysis, a four-dimensional and a five-dimensional relationship modelling tR(1) and tR(2), respectively, are identified. Single-response partial least squares (PLS-1) regression is alternatively applied to independently model tR(1) and tR(2) without the need for preliminary GA variable selection. Further, we explore the possibility of predicting the two-dimensional chromatogram of PCBs in a single calibration procedure by using a two-response PLS (PLS-2) model or a feed-forward artificial neural network (ANN) with two output neurons. In the first case, regression is carried out on the full set of 237 descriptors, while the variables previously selected by GA-MLR are initially considered as ANN inputs and subjected to a sensitivity analysis to remove the redundant ones. Results show that PLS-1 regression exhibits a noticeably better descriptive and predictive performance than the other investigated approaches. The observed values of the determination coefficients for tR(1) and tR(2) in calibration (0.9999 and 0.9993, respectively) and prediction (0.9987 and 0.9793, respectively) provided by PLS-1 demonstrate that the GC×GC behaviour of PCBs is properly modelled. In particular, the predicted two-dimensional GC×GC chromatogram of the 139 PCBs not involved in the calibration stage closely resembles the experimental one. Based on the above lines of evidence, the proposed approach ensures accurate simulation of the whole GC×GC chromatogram of PCBs using experimentally determined retention data for only one-third of the congeners (the representative calibration set).
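
    The sketch below illustrates only the two-response PLS (PLS-2) idea on synthetic data: a single model mapping a descriptor matrix to both retention times. The descriptor values are random placeholders rather than Dragon descriptors, and the number of latent components is an arbitrary assumption.

    ```python
    # Sketch of a two-response PLS (PLS-2) model: one fit predicts both GCxGC
    # retention times. Descriptors are random placeholders, not Dragon descriptors.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(5)
    n_cal, n_desc = 70, 237                    # calibration compounds, descriptor count
    X = rng.standard_normal((n_cal, n_desc))
    W = rng.standard_normal((n_desc, 2)) * (rng.random((n_desc, 2)) < 0.05)
    Y = X @ W + 0.1 * rng.standard_normal((n_cal, 2))   # columns: tR(1), tR(2)

    pls = PLSRegression(n_components=5).fit(X, Y)       # 5 latent variables (assumed)
    Y_pred = pls.predict(X)
    r2 = 1 - ((Y - Y_pred) ** 2).sum(0) / ((Y - Y.mean(0)) ** 2).sum(0)
    print("calibration R^2 for tR(1), tR(2):", np.round(r2, 4))
    ```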

  2. Variable importance in nonlinear kernels (VINK): classification of digitized histopathology.

    PubMed

    Ginsburg, Shoshana; Ali, Sahirzeeshan; Lee, George; Basavanhally, Ajay; Madabhushi, Anant

    2013-01-01

    Quantitative histomorphometry is the process of modeling appearance of disease morphology on digitized histopathology images via image-based features (e.g., texture, graphs). Due to the curse of dimensionality, building classifiers with large numbers of features requires feature selection (which may require a large training set) or dimensionality reduction (DR). DR methods map the original high-dimensional features in terms of eigenvectors and eigenvalues, which limits the potential for feature transparency or interpretability. Although methods exist for variable selection and ranking on embeddings obtained via linear DR schemes (e.g., principal components analysis (PCA)), similar methods do not yet exist for nonlinear DR (NLDR) methods. In this work we present a simple yet elegant method for approximating the mapping between the data in the original feature space and the transformed data in the kernel PCA (KPCA) embedding space; this mapping provides the basis for quantification of variable importance in nonlinear kernels (VINK). We show how VINK can be implemented in conjunction with the popular Isomap and Laplacian eigenmap algorithms. VINK is evaluated in the contexts of three different problems in digital pathology: (1) predicting five year PSA failure following radical prostatectomy, (2) predicting Oncotype DX recurrence risk scores for ER+ breast cancers, and (3) distinguishing good and poor outcome p16+ oropharyngeal tumors. We demonstrate that subsets of features identified by VINK provide similar or better classification or regression performance compared to the original high dimensional feature sets.

  3. Linear and nonlinear pattern selection in Rayleigh-Benard stability problems

    NASA Technical Reports Server (NTRS)

    Davis, Sanford S.

    1993-01-01

    A new algorithm is introduced to compute finite-amplitude states using primitive variables for Rayleigh-Benard convection on relatively coarse meshes. The algorithm is based on a finite-difference matrix-splitting approach that separates all physical and dimensional effects into one-dimensional subsets. The nonlinear pattern selection process for steady convection in an air-filled square cavity with insulated side walls is investigated for Rayleigh numbers up to 20,000. The internalization of disturbances that evolve into coherent patterns is investigated and transient solutions from linear perturbation theory are compared with and contrasted to the full numerical simulations.

  4. Nonparametric regression applied to quantitative structure-activity relationships

    PubMed

    Constans; Hirst

    2000-03-01

    Several nonparametric regressors have been applied to modeling quantitative structure-activity relationship (QSAR) data. The simplest regressor, the Nadaraya-Watson, was assessed in a genuine multivariate setting. Other regressors, the local linear and the shifted Nadaraya-Watson, were implemented within additive models--a computationally more expedient approach, better suited for low-density designs. Performances were benchmarked against the nonlinear method of smoothing splines. A linear reference point was provided by multilinear regression (MLR). Variable selection was explored using systematic combinations of different variables and combinations of principal components. For the data set examined, 47 inhibitors of dopamine beta-hydroxylase, the additive nonparametric regressors have greater predictive accuracy (as measured by the mean absolute error of the predictions or the Pearson correlation in cross-validation trials) than MLR. The use of principal components did not improve the performance of the nonparametric regressors over use of the original descriptors, since the original descriptors are not strongly correlated. It remains to be seen if the nonparametric regressors can be successfully coupled with better variable selection and dimensionality reduction in the context of high-dimensional QSARs.
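
    For reference, the simplest regressor mentioned above can be written in a few lines; this is a univariate Gaussian-kernel Nadaraya-Watson estimator with a hand-picked bandwidth, intended only as a sketch of the estimator's form rather than the multivariate, additive-model setup assessed in the paper.

    ```python
    # Univariate Gaussian-kernel Nadaraya-Watson regressor (bandwidth picked by hand).
    import numpy as np

    def nadaraya_watson(x_train, y_train, x_query, h=0.5):
        """Kernel-weighted local average of y_train evaluated at each query point."""
        d = x_query[:, None] - x_train[None, :]       # pairwise differences
        w = np.exp(-0.5 * (d / h) ** 2)               # Gaussian kernel weights
        return (w * y_train).sum(axis=1) / w.sum(axis=1)

    rng = np.random.default_rng(6)
    x = np.linspace(0, 6, 50)
    y = np.sin(x) + 0.2 * rng.standard_normal(50)
    grid = np.linspace(0, 6, 200)
    y_hat = nadaraya_watson(x, y, grid, h=0.4)
    print("fitted values at the first three grid points:", np.round(y_hat[:3], 3))
    ```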

  5. Incorporating biological information in sparse principal component analysis with application to genomic data.

    PubMed

    Li, Ziyi; Safo, Sandra E; Long, Qi

    2017-07-11

    Sparse principal component analysis (PCA) is a popular tool for dimensionality reduction, pattern recognition, and visualization of high dimensional data. It has been recognized that complex biological mechanisms occur through concerted relationships of multiple genes working in networks that are often represented by graphs. Recent work has shown that incorporating such biological information improves feature selection and prediction performance in regression analysis, but there has been limited work on extending this approach to PCA. In this article, we propose two new sparse PCA methods called Fused and Grouped sparse PCA that enable incorporation of prior biological information in variable selection. Our simulation studies suggest that, compared to existing sparse PCA methods, the proposed methods achieve higher sensitivity and specificity when the graph structure is correctly specified, and are fairly robust to misspecified graph structures. Application to a glioblastoma gene expression dataset identified pathways that are suggested in the literature to be related with glioblastoma. The proposed sparse PCA methods Fused and Grouped sparse PCA can effectively incorporate prior biological information in variable selection, leading to improved feature selection and more interpretable principal component loadings and potentially providing insights on molecular underpinnings of complex diseases.
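
    As a point of reference, the sketch below runs ordinary sparse PCA from scikit-learn on synthetic data; the paper's Fused and Grouped variants additionally encode a gene-network graph in the penalty, which is not reproduced here, and the penalty level alpha is an arbitrary assumption.

    ```python
    # Baseline sketch: ordinary sparse PCA (no network penalty) on synthetic data.
    import numpy as np
    from sklearn.decomposition import SparsePCA

    rng = np.random.default_rng(7)
    n, p = 100, 300
    X = rng.standard_normal((n, p))
    X[:, :10] += 2.0 * rng.standard_normal((n, 1))     # one correlated block of genes

    spca = SparsePCA(n_components=3, alpha=2.0, random_state=7).fit(X)
    # Nonzero loadings indicate which genes each sparse component draws on.
    for k, comp in enumerate(spca.components_):
        print(f"component {k}: {np.flatnonzero(comp).size} genes with nonzero loading")
    ```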

  6. Bayesian block-diagonal variable selection and model averaging

    PubMed Central

    Papaspiliopoulos, O.; Rossell, D.

    2018-01-01

    Summary: We propose a scalable algorithmic framework for exact Bayesian variable selection and model averaging in linear models under the assumption that the Gram matrix is block-diagonal, and as a heuristic for exploring the model space for general designs. In block-diagonal designs our approach returns the most probable model of any given size without resorting to numerical integration. The algorithm also provides a novel and efficient solution to the frequentist best subset selection problem for block-diagonal designs. Posterior probabilities for any number of models are obtained by evaluating a single one-dimensional integral, and other quantities of interest such as variable inclusion probabilities and model-averaged regression estimates are obtained by an adaptive, deterministic one-dimensional numerical integration. The overall computational cost scales linearly with the number of blocks, which can be processed in parallel, and exponentially with the block size, rendering it most adequate in situations where predictors are organized in many moderately-sized blocks. For general designs, we approximate the Gram matrix by a block-diagonal matrix using spectral clustering and propose an iterative algorithm that capitalizes on the block-diagonal algorithms to explore efficiently the model space. All methods proposed in this paper are implemented in the R library mombf. PMID:29861501

  7. Advanced supersonic propulsion system technology study, phase 2

    NASA Technical Reports Server (NTRS)

    Allan, R. D.

    1975-01-01

    Variable cycle engines were identified, based on the mixed-flow low-bypass-ratio augmented turbofan cycle, which has shown excellent range capability in the AST airplane. The best mixed-flow augmented turbofan engine was selected based on range in the AST Baseline Airplane. Selected variable cycle engine features were added to this best conventional baseline engine, and the Dual-Cycle VCE and Double-Bypass VCE were defined. The conventional mixed-flow turbofan and the Double-Bypass VCE were the subjects of engine preliminary design studies to determine mechanical feasibility, confirm weight and dimensional estimates, and identify the necessary technology considered not yet available. Critical engine components were studied and incorporated into the variable cycle engine design.

  8. System for selecting relevant information for decision support.

    PubMed

    Kalina, Jan; Seidl, Libor; Zvára, Karel; Grünfeldová, Hana; Slovák, Dalibor; Zvárová, Jana

    2013-01-01

    We implemented a prototype of a decision support system called SIR, which takes the form of a web-based classification service for diagnostic decision support. The system has the ability to select the most relevant variables and to learn a classification rule, which is guaranteed to be also suitable for high-dimensional measurements. The classification system can be useful for clinicians in primary care to support their decision-making tasks with relevant information extracted from any available clinical study. The implemented prototype was tested on a sample of patients in a cardiological study and performs information extraction from a high-dimensional set containing both clinical and gene expression data.

  9. Clustering high-dimensional mixed data to uncover sub-phenotypes: joint analysis of phenotypic and genotypic data.

    PubMed

    McParland, D; Phillips, C M; Brennan, L; Roche, H M; Gormley, I C

    2017-12-10

    The LIPGENE-SU.VI.MAX study, like many others, recorded high-dimensional continuous phenotypic data and categorical genotypic data. LIPGENE-SU.VI.MAX focuses on the need to account for both phenotypic and genetic factors when studying the metabolic syndrome (MetS), a complex disorder that can lead to higher risk of type 2 diabetes and cardiovascular disease. Interest lies in clustering the LIPGENE-SU.VI.MAX participants into homogeneous groups or sub-phenotypes, by jointly considering their phenotypic and genotypic data, and in determining which variables are discriminatory. A novel latent variable model that elegantly accommodates high dimensional, mixed data is developed to cluster LIPGENE-SU.VI.MAX participants using a Bayesian finite mixture model. A computationally efficient variable selection algorithm is incorporated, estimation is via a Gibbs sampling algorithm and an approximate BIC-MCMC criterion is developed to select the optimal model. Two clusters or sub-phenotypes ('healthy' and 'at risk') are uncovered. A small subset of variables is deemed discriminatory, which notably includes phenotypic and genotypic variables, highlighting the need to jointly consider both factors. Further, 7 years after the LIPGENE-SU.VI.MAX data were collected, participants underwent further analysis to diagnose presence or absence of the MetS. The two uncovered sub-phenotypes strongly correspond to the 7-year follow-up disease classification, highlighting the role of phenotypic and genotypic factors in the MetS and emphasising the potential utility of the clustering approach in early screening. Additionally, the ability of the proposed approach to define the uncertainty in sub-phenotype membership at the participant level is synonymous with the concepts of precision medicine and nutrition. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Multiple-input multiple-output causal strategies for gene selection.

    PubMed

    Bontempi, Gianluca; Haibe-Kains, Benjamin; Desmedt, Christine; Sotiriou, Christos; Quackenbush, John

    2011-11-25

    Traditional strategies for selecting variables in high dimensional classification problems aim to find sets of maximally relevant variables able to explain the target variations. Although these techniques may be effective in terms of generalization accuracy, they often do not reveal direct causes. The latter is essentially related to the fact that high correlation (or relevance) does not imply causation. In this study, we show how to efficiently incorporate causal information into gene selection by moving from a single-input single-output to a multiple-input multiple-output setting. We show in a synthetic case study that a better prioritization of causal variables can be obtained by considering a relevance score which incorporates a causal term. In addition, we show, in a meta-analysis study of six publicly available breast cancer microarray datasets, that the improvement occurs also in terms of accuracy. The biological interpretation of the results confirms the potential of a causal approach to gene selection. Integrating causal information into gene selection algorithms is effective both in terms of prediction accuracy and biological interpretation.

  11. Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces

    PubMed Central

    Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.

    2015-01-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. PMID:25265627

  12. Classification of the European Union member states according to the relative level of sustainable development.

    PubMed

    Bluszcz, Anna

    Methods for measuring and assessing the level of sustainable development at the international, national and regional levels are a current research problem that requires multi-dimensional analysis. The aim of the study reported in this article is the relative assessment of the sustainability level of the European Union member states and a comparative analysis of the position of Poland relative to other countries. EU member states were treated as objects in a multi-dimensional space whose dimensions were specified by ten diagnostic variables describing the sustainability level of EU countries in three dimensions, i.e., social, economic and environmental. Because the compiled statistical data were expressed in different units of measure, taxonomic methods were used to build an aggregated measure for assessing the level of sustainable development of EU member states; through normalisation of the variables, this enabled comparative analysis between countries. The methodology consisted of eight stages, including: defining the data matrices, calculating the variability coefficient for all variables and eliminating those whose variability coefficient was under 10%, dividing the variables into stimulants and destimulants, selecting the method of variable normalisation, developing matrices of normalised data, selecting the formula for and calculating the aggregated indicator of the relative level of sustainable development of the EU countries, calculating partial development indicators for the three studied dimensions (social, economic and environmental), and classifying the EU countries according to the relative level of sustainable development. Statistical data were collected from Polish Central Statistical Office publications.
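
    The normalisation-and-aggregation step can be illustrated with a toy example: min-max normalise stimulant and destimulant indicators and average them into a single index. The country labels and indicator values below are invented placeholders, and the simple arithmetic mean stands in for whichever aggregation formula the study actually selected.

    ```python
    # Toy aggregation: min-max normalise indicators and average into one index.
    # Country labels, values and the equal-weight mean are invented for illustration.
    import numpy as np

    countries = ["Country A", "Country B", "Country C", "Country D"]
    data = np.array([[3.1, 0.40, 12.0],        # rows: countries, columns: indicators
                     [2.4, 0.55,  9.5],
                     [4.0, 0.35, 15.0],
                     [1.8, 0.60,  8.0]])
    is_stimulant = np.array([True, False, True])   # higher-is-better vs lower-is-better

    lo, hi = data.min(axis=0), data.max(axis=0)
    norm = (data - lo) / (hi - lo)                 # min-max normalisation to [0, 1]
    norm[:, ~is_stimulant] = 1.0 - norm[:, ~is_stimulant]   # flip destimulants
    index = norm.mean(axis=1)                      # aggregated development measure
    for name, value in sorted(zip(countries, index), key=lambda t: -t[1]):
        print(name, round(value, 3))
    ```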

  13. Selection of key ambient particulate variables for epidemiological studies - applying cluster and heatmap analyses as tools for data reduction.

    PubMed

    Gu, Jianwei; Pitz, Mike; Breitner, Susanne; Birmili, Wolfram; von Klot, Stephanie; Schneider, Alexandra; Soentgen, Jens; Reller, Armin; Peters, Annette; Cyrys, Josef

    2012-10-01

    The success of epidemiological studies depends on the use of appropriate exposure variables. The purpose of this study is to extract a relatively small selection of variables characterizing ambient particulate matter from a large measurement data set. The original data set comprised a total of 96 particulate matter variables that have been continuously measured since 2004 at an urban background aerosol monitoring site in the city of Augsburg, Germany. Many of the original variables were derived from measured particle size distribution (PSD) across the particle diameter range 3 nm to 10 μm, including size-segregated particle number concentration, particle length concentration, particle surface concentration and particle mass concentration. The data set was complemented by integral aerosol variables. These variables were measured by independent instruments, including black carbon, sulfate, particle active surface concentration and particle length concentration. It is obvious that such a large number of measured variables cannot be used in health effect analyses simultaneously. The aim of this study is a pre-screening and a selection of the key variables that will be used as input in forthcoming epidemiological studies. In this study, we present two methods of parameter selection and apply them to data from a two-year period from 2007 to 2008. We used the agglomerative hierarchical cluster method to find groups of similar variables. In total, we selected 15 key variables from 9 clusters which are recommended for epidemiological analyses. We also applied a two-dimensional visualization technique called "heatmap" analysis to the Spearman correlation matrix. 12 key variables were selected using this method. Moreover, the positive matrix factorization (PMF) method was applied to the PSD data to characterize the possible particle sources. Correlations between the variables and PMF factors were used to interpret the meaning of the cluster and the heatmap analyses. Copyright © 2012 Elsevier B.V. All rights reserved.
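
    A schematic of the clustering step is sketched below: variables are grouped by average-linkage hierarchical clustering on a Spearman-correlation-based distance, and one representative is kept per cluster. The synthetic data, the distance definition 1 - |rho| and the cut height are assumptions for illustration, not the study's exact settings.

    ```python
    # Group variables by Spearman correlation with average-linkage clustering and
    # keep one representative per cluster (synthetic stand-in for the aerosol data).
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.stats import spearmanr

    rng = np.random.default_rng(8)
    n_obs, n_vars = 500, 20
    latent = rng.standard_normal((n_obs, 4))          # 4 underlying aerosol signals
    X = latent[:, rng.integers(0, 4, n_vars)] + 0.3 * rng.standard_normal((n_obs, n_vars))

    rho, _ = spearmanr(X)                             # variable-by-variable correlation
    dist = 1.0 - np.abs(rho)                          # similar variables -> small distance
    Z = linkage(dist[np.triu_indices(n_vars, k=1)], method="average")
    labels = fcluster(Z, t=0.3, criterion="distance") # cut height 0.3 (assumed)
    reps = [int(np.flatnonzero(labels == c)[0]) for c in np.unique(labels)]
    print("clusters found:", labels.max(), "| representative variables:", reps)
    ```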

  14. GWASinlps: Nonlocal prior based iterative SNP selection tool for genome-wide association studies.

    PubMed

    Sanyal, Nilotpal; Lo, Min-Tzu; Kauppi, Karolina; Djurovic, Srdjan; Andreassen, Ole A; Johnson, Valen E; Chen, Chi-Hua

    2018-06-19

    Multiple marker analysis of genome-wide association study (GWAS) data has gained ample attention in recent years. However, because of the ultra high-dimensionality of GWAS data, such analysis is challenging. Frequently used penalized regression methods often lead to a large number of false positives, whereas Bayesian methods are computationally very expensive. Motivated to ameliorate these issues simultaneously, we consider the novel approach of using nonlocal priors in an iterative variable selection framework. We develop a variable selection method named iterative nonlocal prior based selection for GWAS, or GWASinlps, which combines, in an iterative variable selection framework, the computational efficiency of the screen-and-select approach based on some association learning and the parsimonious uncertainty quantification provided by the use of nonlocal priors. The hallmark of our method is the introduction of a 'structured screen-and-select' strategy that considers hierarchical screening based not only on response-predictor associations but also on response-response associations, and concatenates variable selection within that hierarchy. Extensive simulation studies with SNPs having realistic linkage disequilibrium structures demonstrate the advantages of our computationally efficient method compared to several frequentist and Bayesian variable selection methods, in terms of true positive rate, false discovery rate, mean squared error, and effect size estimation error. Further, we provide empirical power analysis useful for study design. Finally, a real GWAS data application was considered with human height as phenotype. An R-package for implementing the GWASinlps method is available at https://cran.r-project.org/web/packages/GWASinlps/index.html. Supplementary data are available at Bioinformatics online.

  15. Decomposition and model selection for large contingency tables.

    PubMed

    Dahinden, Corinne; Kalisch, Markus; Bühlmann, Peter

    2010-04-01

    Large contingency tables summarizing categorical variables arise in many areas. One example is in biology, where large numbers of biomarkers are cross-tabulated according to their discrete expression level. Interactions of the variables are of great interest and are generally studied with log-linear models. The structure of a log-linear model can be visually represented by a graph from which the conditional independence structure can then be easily read off. However, since the number of parameters in a saturated model grows exponentially in the number of variables, this generally comes with a heavy computational burden. Even if we restrict ourselves to models of lower-order interactions or other sparse structures, we are faced with the problem of a large number of cells which play the role of sample size. This is in sharp contrast to high-dimensional regression or classification procedures because, in addition to a high-dimensional parameter, we also have to deal with the analogue of a huge sample size. Furthermore, high-dimensional tables naturally feature a large number of sampling zeros which often leads to the nonexistence of the maximum likelihood estimate. We therefore present a decomposition approach, where we first divide the problem into several lower-dimensional problems and then combine these to form a global solution. Our methodology is computationally feasible for log-linear interaction models with many categorical variables each or some of them having many levels. We demonstrate the proposed method on simulated data and apply it to a bio-medical problem in cancer research.

  16. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    PubMed

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), Bayesian information criterion (BIC) or Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from the studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
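
    The tuning idea can be sketched with an L1 penalty standing in for MCP (which scikit-learn does not provide): choose the penalty strength that maximizes cross-validated AUC rather than an information criterion. The simulated data and the grid of C values are illustrative assumptions.

    ```python
    # Sketch: tune an L1-penalized logistic regression (MCP is not in scikit-learn)
    # by maximizing cross-validated AUC instead of AIC/BIC.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    X, y = make_classification(n_samples=150, n_features=1000, n_informative=8,
                               random_state=0)
    lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", max_iter=2000)
    search = GridSearchCV(lasso_logit,
                          param_grid={"C": np.logspace(-2, 1, 10)},
                          scoring="roc_auc", cv=5).fit(X, y)
    best = search.best_estimator_
    print("best C:", search.best_params_["C"],
          "| CV-AUC:", round(search.best_score_, 3),
          "| variables selected:", int(np.count_nonzero(best.coef_)))
    ```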

  17. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    PubMed

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Analysis of Sediment Transport for Rivers in South Korea based on Data Mining technique

    NASA Astrophysics Data System (ADS)

    Jang, Eun-kyung; Ji, Un; Yeo, Woonkwang

    2017-04-01

    The purpose of this study is to calculate of sediment discharge assessment using data mining in South Korea. The Model Tree was selected for this study which is the most suitable technique to explicitly analyze the relationship between input and output variables in various and diverse databases among the Data Mining. In order to derive the sediment discharge equation using the Model Tree of Data Mining used the dimensionless variables used in Engelund and Hansen, Ackers and White, Brownlie and van Rijn equations as the analytical condition. In addition, total of 14 analytical conditions were set considering the conditions dimensional variables and the combination conditions of the dimensionless variables and the dimensional variables according to the relationship between the flow and the sediment transport. For each case, the analysis results were analyzed by mean of discrepancy ratio, root mean square error, mean absolute percent error, correlation coefficient. The results showed that the best fit was obtained by using five dimensional variables such as velocity, depth, slope, width and Median Diameter. And closest approximation to the best goodness-of-fit was estimated from the depth, slope, width, main grain size of bed material and dimensionless tractive force and except for the slope in the single variable. In addition, the three types of Model Tree that are most appropriate are compared with the Ackers and White equation which is the best fit among the existing equations, the mean discrepancy ration and the correlation coefficient of the Model Tree are improved compared to the Ackers and White equation.

  19. Data re-arranging techniques leading to proper variable selections in high energy physics

    NASA Astrophysics Data System (ADS)

    Kůs, Václav; Bouř, Petr

    2017-12-01

    We introduce a new data based approach to homogeneity testing and variable selection carried out in high energy physics experiments, where one of the basic tasks is to test the homogeneity of weighted samples, mainly the Monte Carlo simulations (weighted) and real data measurements (unweighted). This technique is called ’data re-arranging’ and it enables variable selection performed by means of the classical statistical homogeneity tests such as Kolmogorov-Smirnov, Anderson-Darling, or Pearson’s chi-square divergence test. P-values of our variants of homogeneity tests are investigated and the empirical verification through 46 dimensional high energy particle physics data sets is accomplished under newly proposed (equiprobable) quantile binning. Particularly, the procedure of homogeneity testing is applied to re-arranged Monte Carlo samples and real DATA sets measured at the particle accelerator Tevatron in Fermilab at DØ experiment originating from top-antitop quark pair production in two decay channels (electron, muon) with 2, 3, or 4+ jets detected. Finally, the variable selections in the electron and muon channels induced by the re-arranging procedure for homogeneity testing are provided for Tevatron top-antitop quark data sets.

  20. High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis

    PubMed Central

    Daye, Z. John; Chen, Jinbo; Li, Hongzhe

    2011-01-01

    Summary We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in and apply our method to an expression quantitative trait loci (eQTLs) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and lead to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833

  1. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    PubMed

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.

  2. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property

    PubMed Central

    Storlie, Curtis B.; Bondell, Howard D.; Reich, Brian J.; Zhang, Hao Helen

    2010-01-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting. PMID:21603586

  3. Relationships between convective storms and their environment in AVE IV determined from a three-dimensional subsynoptic-scale, trajectory model

    NASA Technical Reports Server (NTRS)

    Wilson, G. S.

    1977-01-01

    The paper describes interrelationships between synoptic-scale and convective-scale systems obtained by following individual air parcels as they traveled within the convective storm environment of AVE IV. (NASA's fourth Atmospheric Variability Experiment, AVE IV, was a 36-hour study in April 1975 of the atmospheric variability and structure in regions of convective storms.) A three-dimensional trajectory model was used to calculate parcel paths, and manually digitized radar was employed to locate convective activity of various intensities and to determine those trajectories that traversed the storm environment. Spatial and temporal interrelationships are demonstrated by reference to selected time periods of AVE IV which contain the development and movement of the squall line in which the Neosho tornado was created.

  4. Variable dimensionality in the uranium fluoride/2-methyl-piperazine system: Synthesis and structures of UFO-5, -6, and -7; Zero-, one-, and two-dimensional materials with unprecedented topologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Francis, R.J.; Halasyamani, P.S.; Bee, J.S.

    Recently, low temperature (T < 300 C) hydrothermal reactions of inorganic precursors in the presence of organic cations have proven highly productive for the synthesis of novel solid-state materials. Interest in these materials is driven by the astonishingly diverse range of structures produced, as well as by their many potential materials chemistry applications. This report describes the high yield, phase pure hydrothermal syntheses of three new uranium fluoride phases with unprecedented structure types. Through the systematic control of the synthesis conditions the authors have successfully controlled the architecture and dimensionality of the phase formed and selectively synthesized novel zero-, one-,more » and two-dimensional materials.« less

  5. Random forest feature selection approach for image segmentation

    NASA Astrophysics Data System (ADS)

    Lefkovits, László; Lefkovits, Szidónia; Emerich, Simina; Vaida, Mircea Florin

    2017-03-01

    In the field of image segmentation, discriminative models have shown promising performance. Generally, every such model begins with the extraction of numerous features from annotated images. Most authors create their discriminative model by using many features without using any selection criteria. A more reliable model can be built by using a framework that selects the important variables, from the point of view of the classification, and eliminates the unimportant once. In this article we present a framework for feature selection and data dimensionality reduction. The methodology is built around the random forest (RF) algorithm and its variable importance evaluation. In order to deal with datasets so large as to be practically unmanageable, we propose an algorithm based on RF that reduces the dimension of the database by eliminating irrelevant features. Furthermore, this framework is applied to optimize our discriminative model for brain tumor segmentation.

  6. Rank-based estimation in the {ell}1-regularized partly linear model for censored outcomes with application to integrated analyses of clinical predictors and gene expression data.

    PubMed

    Johnson, Brent A

    2009-10-01

    We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative model to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables but the clinical effects are assumed nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.

  7. Dimensional control of die castings

    NASA Astrophysics Data System (ADS)

    Karve, Aniruddha Ajit

    The demand for net shape die castings, which require little or no machining, is steadily increasing. Stringent customer requirements are forcing die casters to deliver high quality castings in increasingly short lead times. Dimensional conformance to customer specifications is an inherent part of die casting quality. The dimensional attributes of a die casting are essentially dependent upon many factors--the quality of the die and the degree of control over the process variables being the two major sources of dimensional error in die castings. This study focused on investigating the nature and the causes of dimensional error in die castings. The two major components of dimensional error i.e., dimensional variability and die allowance were studied. The major effort of this study was to qualitatively and quantitatively study the effects of casting geometry and process variables on die casting dimensional variability and die allowance. This was accomplished by detailed dimensional data collection at production die casting sites. Robust feature characterization schemes were developed to describe complex casting geometry in quantitative terms. Empirical modeling was utilized to quantify the effects of the casting variables on dimensional variability and die allowance for die casting features. A number of casting geometry and process variables were found to affect dimensional variability in die castings. The dimensional variability was evaluated by comparisons with current published dimensional tolerance standards. The casting geometry was found to play a significant role in influencing the die allowance of the features measured. The predictive models developed for dimensional variability and die allowance were evaluated to test their effectiveness. Finally, the relative impact of all the components of dimensional error in die castings was put into perspective, and general guidelines for effective dimensional control in the die casting plant were laid out. The results of this study will contribute to enhancement of dimensional quality and lead time compression in the die casting industry, thus making it competitive with other net shape manufacturing processes.

  8. Path Finding on High-Dimensional Free Energy Landscapes

    NASA Astrophysics Data System (ADS)

    Díaz Leines, Grisell; Ensing, Bernd

    2012-07-01

    We present a method for determining the average transition path and the free energy along this path in the space of selected collective variables. The formalism is based upon a history-dependent bias along a flexible path variable within the metadynamics framework but with a trivial scaling of the cost with the number of collective variables. Controlling the sampling of the orthogonal modes recovers the average path and the minimum free energy path as the limiting cases. The method is applied to resolve the path and the free energy of a conformational transition in alanine dipeptide.

  9. Development of custom measurement system for biomechanical evaluation of independent wheelchair transfers.

    PubMed

    Koontz, Alicia M; Lin, Yen-Sheng; Kankipati, Padmaja; Boninger, Michael L; Cooper, Rory A

    2011-01-01

    This study describes a new custom measurement system designed to investigate the biomechanics of sitting-pivot wheelchair transfers and assesses the reliability of selected biomechanical variables. Variables assessed include horizontal and vertical reaction forces underneath both hands and three-dimensional trunk, shoulder, and elbow range of motion. We examined the reliability of these measures between 5 consecutive transfer trials for 5 subjects with spinal cord injury and 12 nondisabled subjects while they performed a self-selected sitting pivot transfer from a wheelchair to a level bench. A majority of the biomechanical variables demonstrated moderate to excellent reliability (r > 0.6). The transfer measurement system recorded reliable and valid biomechanical data for future studies of sitting-pivot wheelchair transfers.We recommend a minimum of five transfer trials to obtain a reliable measure of transfer technique for future studies.

  10. Variable Selection for Support Vector Machines in Moderately High Dimensions

    PubMed Central

    Zhang, Xiang; Wu, Yichao; Wang, Lan; Li, Runze

    2015-01-01

    Summary The support vector machine (SVM) is a powerful binary classification tool with high accuracy and great flexibility. It has achieved great success, but its performance can be seriously impaired if many redundant covariates are included. Some efforts have been devoted to studying variable selection for SVMs, but asymptotic properties, such as variable selection consistency, are largely unknown when the number of predictors diverges to infinity. In this work, we establish a unified theory for a general class of nonconvex penalized SVMs. We first prove that in ultra-high dimensions, there exists one local minimizer to the objective function of nonconvex penalized SVMs possessing the desired oracle property. We further address the problem of nonunique local minimizers by showing that the local linear approximation algorithm is guaranteed to converge to the oracle estimator even in the ultra-high dimensional setting if an appropriate initial estimator is available. This condition on initial estimator is verified to be automatically valid as long as the dimensions are moderately high. Numerical examples provide supportive evidence. PMID:26778916

  11. Sparse partial least squares regression for simultaneous dimension reduction and variable selection

    PubMed Central

    Chun, Hyonho; Keleş, Sündüz

    2010-01-01

    Partial least squares regression has been an alternative to ordinary least squares for handling multicollinearity in several areas of scientific research since the 1960s. It has recently gained much attention in the analysis of high dimensional genomic data. We show that known asymptotic consistency of the partial least squares estimator for a univariate response does not hold with the very large p and small n paradigm. We derive a similar result for a multivariate response regression with partial least squares. We then propose a sparse partial least squares formulation which aims simultaneously to achieve good predictive performance and variable selection by producing sparse linear combinations of the original predictors. We provide an efficient implementation of sparse partial least squares regression and compare it with well-known variable selection and dimension reduction approaches via simulation experiments. We illustrate the practical utility of sparse partial least squares regression in a joint analysis of gene expression and genomewide binding data. PMID:20107611

  12. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    PubMed Central

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low-and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129

  13. Variable Selection for Confounder Control, Flexible Modeling and Collaborative Targeted Minimum Loss-Based Estimation in Causal Inference.

    PubMed

    Schnitzer, Mireille E; Lok, Judith J; Gruber, Susan

    2016-05-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010 [27]) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios.

  14. Combining techniques for screening and evaluating interaction terms on high-dimensional time-to-event data.

    PubMed

    Sariyar, Murat; Hoffmann, Isabell; Binder, Harald

    2014-02-26

    Molecular data, e.g. arising from microarray technology, is often used for predicting survival probabilities of patients. For multivariate risk prediction models on such high-dimensional data, there are established techniques that combine parameter estimation and variable selection. One big challenge is to incorporate interactions into such prediction models. In this feasibility study, we present building blocks for evaluating and incorporating interactions terms in high-dimensional time-to-event settings, especially for settings in which it is computationally too expensive to check all possible interactions. We use a boosting technique for estimation of effects and the following building blocks for pre-selecting interactions: (1) resampling, (2) random forests and (3) orthogonalization as a data pre-processing step. In a simulation study, the strategy that uses all building blocks is able to detect true main effects and interactions with high sensitivity in different kinds of scenarios. The main challenge are interactions composed of variables that do not represent main effects, but our findings are also promising in this regard. Results on real world data illustrate that effect sizes of interactions frequently may not be large enough to improve prediction performance, even though the interactions are potentially of biological relevance. Screening interactions through random forests is feasible and useful, when one is interested in finding relevant two-way interactions. The other building blocks also contribute considerably to an enhanced pre-selection of interactions. We determined the limits of interaction detection in terms of necessary effect sizes. Our study emphasizes the importance of making full use of existing methods in addition to establishing new ones.

  15. Sufficient Statistics for Divergence and the Probability of Misclassification

    NASA Technical Reports Server (NTRS)

    Quirein, J.

    1972-01-01

    One particular aspect is considered of the feature selection problem which results from the transformation x=Bz, where B is a k by n matrix of rank k and k is or = to n. It is shown that in general, such a transformation results in a loss of information. In terms of the divergence, this is equivalent to the fact that the average divergence computed using the variable x is less than or equal to the average divergence computed using the variable z. A loss of information in terms of the probability of misclassification is shown to be equivalent to the fact that the probability of misclassification computed using variable x is greater than or equal to the probability of misclassification computed using variable z. First, the necessary facts relating k-dimensional and n-dimensional integrals are derived. Then the mentioned results about the divergence and probability of misclassification are derived. Finally it is shown that if no information is lost (in x = Bz) as measured by the divergence, then no information is lost as measured by the probability of misclassification.

  16. Visions of visualization aids - Design philosophy and observations

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    1989-01-01

    Aids for the visualization of high-dimensional scientific or other data must be designed. Simply casting multidimensional data into a two-dimensional or three-dimensional spatial metaphor does not guarantee that the presentation will provide insight or a parsimonious description of phenomena implicit in the data. Useful visualization, in contrast to glitzy, high-tech, computer-graphics imagery, is generally based on preexisting theoretical beliefs concerning the underlying phenomena. These beliefs guide selection and formatting of the plotted variables. Visualization tools are useful for understanding naturally three-dimensional data bases such as those used by pilots or astronauts. Two examples of such aids for spatial maneuvering illustrate that informative geometric distortion may be introduced to assist visualization and that visualization of complex dynamics alone may not be adequate to provide the necessary insight into the underlying processes.

  17. Concave 1-norm group selection

    PubMed Central

    Jiang, Dingfeng; Huang, Jian

    2015-01-01

    Grouping structures arise naturally in many high-dimensional problems. Incorporation of such information can improve model fitting and variable selection. Existing group selection methods, such as the group Lasso, require correct membership. However, in practice it can be difficult to correctly specify group membership of all variables. Thus, it is important to develop group selection methods that are robust against group mis-specification. Also, it is desirable to select groups as well as individual variables in many applications. We propose a class of concave \\documentclass[12pt]{minimal} \\usepackage{amsmath} \\usepackage{wasysym} \\usepackage{amsfonts} \\usepackage{amssymb} \\usepackage{amsbsy} \\usepackage{upgreek} \\usepackage{mathrsfs} \\setlength{\\oddsidemargin}{-69pt} \\begin{document} }{}$1$\\end{document}-norm group penalties that is robust to grouping structure and can perform bi-level selection. A coordinate descent algorithm is developed to calculate solutions of the proposed group selection method. Theoretical convergence of the algorithm is proved under certain regularity conditions. Comparison with other methods suggests the proposed method is the most robust approach under membership mis-specification. Simulation studies and real data application indicate that the \\documentclass[12pt]{minimal} \\usepackage{amsmath} \\usepackage{wasysym} \\usepackage{amsfonts} \\usepackage{amssymb} \\usepackage{amsbsy} \\usepackage{upgreek} \\usepackage{mathrsfs} \\setlength{\\oddsidemargin}{-69pt} \\begin{document} }{}$1$\\end{document}-norm concave group selection approach achieves better control of false discovery rates. An R package grppenalty implementing the proposed method is available at CRAN. PMID:25417206

  18. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    NASA Astrophysics Data System (ADS)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has been a very popular ongoing research direction. Especially with the availability of very high resolution (VHR) satellite images, a quite detailed damage map based on building scale has been produced, and various studies have also been conducted in the literature. As the spatial resolution of satellite images increases, distinguishability of damage patterns becomes more cruel especially in case of using only the spectral information during classification. In order to overcome this difficulty, textural information needs to be involved to the classification to improve the visual quality and reliability of damage map. There are many kinds of textural information which can be derived from VHR satellite images depending on the algorithm used. However, extraction of textural information and evaluation of them have been generally a time consuming process especially for the large areas affected from the earthquake due to the size of VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns needs to be known in advance as well as the redundant features. In this study, a very high resolution satellite image after Iran, Bam earthquake was used to identify the earthquake damage. Not only the spectral information, textural information was also used during the classification. For textural information, second order Haralick features were extracted from the panchromatic image for the area of interest using gray level co-occurrence matrix with different size of windows and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristic were selected with a novel feature selection method based on high dimensional model representation (HDMR) giving sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input-output relationships in high-dimensional systems for many problems in science and engineering. The HDMR method is developed to improve the efficiency of the deducing high dimensional behaviors. The method is formed by a particular organization of low dimensional component functions, in which each function is the contribution of one or more input variables to the output variables.

  19. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001

  20. tICA-Metadynamics: Accelerating Metadynamics by Using Kinetically Selected Collective Variables.

    PubMed

    M Sultan, Mohammad; Pande, Vijay S

    2017-06-13

    Metadynamics is a powerful enhanced molecular dynamics sampling method that accelerates simulations by adding history-dependent multidimensional Gaussians along selective collective variables (CVs). In practice, choosing a small number of slow CVs remains challenging due to the inherent high dimensionality of biophysical systems. Here we show that time-structure based independent component analysis (tICA), a recent advance in Markov state model literature, can be used to identify a set of variationally optimal slow coordinates for use as CVs for Metadynamics. We show that linear and nonlinear tICA-Metadynamics can complement existing MD studies by explicitly sampling the system's slowest modes and can even drive transitions along the slowest modes even when no such transitions are observed in unbiased simulations.

  1. Development of custom measurement system for biomechanical evaluation of independent wheelchair transfers

    PubMed Central

    Koontz, Alicia M.; Lin, Yen-Sheng; Kankipati, Padmaja; Boninger, Michael L.; Cooper, Rory A.

    2017-01-01

    This study describes a new custom measurement system designed to investigate the biomechanics of sitting-pivot wheelchair transfers and assesses the reliability of selected biomechanical variables. Variables assessed include horizontal and vertical reaction forces underneath both hands and three-dimensional trunk, shoulder, and elbow range of motion. We examined the reliability of these measures between 5 consecutive transfer trials for 5 subjects with spinal cord injury and 12 non-disabled subjects while they performed a self-selected sitting pivot transfer from a wheelchair to a level bench. A majority of the biomechanical variables demonstrated moderate to excellent reliability (r > 0.6). The transfer measurement system recorded reliable and valid biomechanical data for future studies of sitting-pivot wheelchair transfers. We recommend a minimum of five transfer trials to obtain a reliable measure of transfer technique for future studies. PMID:22068376

  2. Bayesian feature selection for high-dimensional linear regression via the Ising approximation with applications to genomics.

    PubMed

    Fisher, Charles K; Mehta, Pankaj

    2015-06-01

    Feature selection, identifying a subset of variables that are relevant for predicting a response, is an important and challenging component of many methods in statistics and machine learning. Feature selection is especially difficult and computationally intensive when the number of variables approaches or exceeds the number of samples, as is often the case for many genomic datasets. Here, we introduce a new approach--the Bayesian Ising Approximation (BIA)-to rapidly calculate posterior probabilities for feature relevance in L2 penalized linear regression. In the regime where the regression problem is strongly regularized by the prior, we show that computing the marginal posterior probabilities for features is equivalent to computing the magnetizations of an Ising model with weak couplings. Using a mean field approximation, we show it is possible to rapidly compute the feature selection path described by the posterior probabilities as a function of the L2 penalty. We present simulations and analytical results illustrating the accuracy of the BIA on some simple regression problems. Finally, we demonstrate the applicability of the BIA to high-dimensional regression by analyzing a gene expression dataset with nearly 30 000 features. These results also highlight the impact of correlations between features on Bayesian feature selection. An implementation of the BIA in C++, along with data for reproducing our gene expression analyses, are freely available at http://physics.bu.edu/∼pankajm/BIACode. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Parsimony and goodness-of-fit in multi-dimensional NMR inversion

    NASA Astrophysics Data System (ADS)

    Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos

    2017-01-01

    Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used for study of molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve a high-dimensional measurement dataset with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inverse algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. The interpretation of such variability of multiple solutions and selection of the most appropriate solution could be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy to interpret and unique NMR distribution with the finite number of the principal parameter values, we introduce a new method for NMR inversion. The method is constructed based on the trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using the forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.

  4. Model-based Clustering of High-Dimensional Data in Astrophysics

    NASA Astrophysics Data System (ADS)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of the measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in mass or stream. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show a disappointing behavior in high-dimensional spaces which is mainly due to their dramatical over-parametrization. The recent developments in model-based classification overcome these drawbacks and allow to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.

  5. Statistical Analysis of Big Data on Pharmacogenomics

    PubMed Central

    Fan, Jianqing; Liu, Han

    2013-01-01

    This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrix for understanding correlation structure, inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differently expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecule mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905

  6. New bandwidth selection criterion for Kernel PCA: approach to dimensionality reduction and classification problems.

    PubMed

    Thomas, Minta; De Brabanter, Kris; De Moor, Bart

    2014-05-10

    DNA microarrays are potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main input to this class predictors are high dimensional data with many variables and few observations. Dimensionality reduction of these features set significantly speeds up the prediction task. Feature selection and feature transformation methods are well known preprocessing steps in the field of bioinformatics. Several prediction tools are available based on these techniques. Studies show that a well tuned Kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least squares cross-validation for kernel density estimation. We propose a new prediction model with a well tuned KPCA and Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model based on 9 case studies. Then, we compare its performances (in terms of test set Area Under the ROC Curve (AUC) and computational time) with other well known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM) and Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy with an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques, which apply feature transformation/selection for subsequent classification, and consider its application in medical diagnostics. Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier, which predicts classes of future samples. Moreover, the proposed strategy enjoys a distinctive advantage with its relatively lesser time complexity.

  7. A novel device to stretch multiple tissue samples with variable patterns: application for mRNA regulation in tissue-engineered constructs.

    PubMed

    Imsirovic, Jasmin; Derricks, Kelsey; Buczek-Thomas, Jo Ann; Rich, Celeste B; Nugent, Matthew A; Suki, Béla

    2013-01-01

    A broad range of cells are subjected to irregular time varying mechanical stimuli within the body, particularly in the respiratory and circulatory systems. Mechanical stretch is an important factor in determining cell function; however, the effects of variable stretch remain unexplored. In order to investigate the effects of variable stretch, we designed, built and tested a uniaxial stretching device that can stretch three-dimensional tissue constructs while varying the strain amplitude from cycle to cycle. The device is the first to apply variable stretching signals to cells in tissues or three dimensional tissue constructs. Following device validation, we applied 20% uniaxial strain to Gelfoam samples seeded with neonatal rat lung fibroblasts with different levels of variability (0%, 25%, 50% and 75%). RT-PCR was then performed to measure the effects of variable stretch on key molecules involved in cell-matrix interactions including: collagen 1α, lysyl oxidase, α-actin, β1 integrin, β3 integrin, syndecan-4, and vascular endothelial growth factor-A. Adding variability to the stretching signal upregulated, downregulated or had no effect on mRNA production depending on the molecule and the amount of variability. In particular, syndecan-4 showed a statistically significant peak at 25% variability, suggesting that an optimal variability of strain may exist for production of this molecule. We conclude that cycle-by-cycle variability in strain influences the expression of molecules related to cell-matrix interactions and hence may be used to selectively tune the composition of tissue constructs.

  8. Design Optimization of a Centrifugal Fan with Splitter Blades

    NASA Astrophysics Data System (ADS)

    Heo, Man-Woong; Kim, Jin-Hyuk; Kim, Kwang-Yong

    2015-05-01

    Multi-objective optimization of a centrifugal fan with additionally installed splitter blades was performed to simultaneously maximize the efficiency and pressure rise using three-dimensional Reynolds-averaged Navier-Stokes equations and hybrid multi-objective evolutionary algorithm. Two design variables defining the location of splitter, and the height ratio between inlet and outlet of impeller were selected for the optimization. In addition, the aerodynamic characteristics of the centrifugal fan were investigated with the variation of design variables in the design space. Latin hypercube sampling was used to select the training points, and response surface approximation models were constructed as surrogate models of the objective functions. With the optimization, both the efficiency and pressure rise of the centrifugal fan with splitter blades were improved considerably compared to the reference model.

  9. Dimensionality reduction in epidemic spreading models

    NASA Astrophysics Data System (ADS)

    Frasca, M.; Rizzo, A.; Gallo, L.; Fortuna, L.; Porfiri, M.

    2015-09-01

    Complex dynamical systems often exhibit collective dynamics that are well described by a reduced set of key variables in a low-dimensional space. Such a low-dimensional description offers a privileged perspective to understand the system behavior across temporal and spatial scales. In this work, we propose a data-driven approach to establish low-dimensional representations of large epidemic datasets by using a dimensionality reduction algorithm based on isometric features mapping (ISOMAP). We demonstrate our approach on synthetic data for epidemic spreading in a population of mobile individuals. We find that ISOMAP is successful in embedding high-dimensional data into a low-dimensional manifold, whose topological features are associated with the epidemic outbreak. Across a range of simulation parameters and model instances, we observe that epidemic outbreaks are embedded into a family of closed curves in a three-dimensional space, in which neighboring points pertain to instants that are close in time. The orientation of each curve is unique to a specific outbreak, and the coordinates correlate with the number of infected individuals. A low-dimensional description of epidemic spreading is expected to improve our understanding of the role of individual response on the outbreak dynamics, inform the selection of meaningful global observables, and, possibly, aid in the design of control and quarantine procedures.

  10. Analysis of Binary Multivariate Longitudinal Data via 2-Dimensional Orbits: An Application to the Agincourt Health and Socio-Demographic Surveillance System in South Africa

    PubMed Central

    Visaya, Maria Vivien; Sherwell, David; Sartorius, Benn; Cromieres, Fabien

    2015-01-01

    We analyse demographic longitudinal survey data of South African (SA) and Mozambican (MOZ) rural households from the Agincourt Health and Socio-Demographic Surveillance System in South Africa. In particular, we determine whether absolute poverty status (APS) is associated with selected household variables pertaining to socio-economic determination, namely household head age, household size, cumulative death, adults to minor ratio, and influx. For comparative purposes, households are classified according to household head nationality (SA or MOZ) and APS (rich or poor). The longitudinal data of each of the four subpopulations (SA rich, SA poor, MOZ rich, and MOZ poor) is a five-dimensional space defined by binary variables (questions), subjects, and time. We use the orbit method to represent binary multivariate longitudinal data (BMLD) of each household as a two-dimensional orbit and to visualise dynamics and behaviour of the population. At each time step, a point (x, y) from the orbit of a household corresponds to the observation of the household, where x is a binary sequence of responses and y is an ordering of variables. The ordering of variables is dynamically rearranged such that clusters and holes associated to least and frequently changing variables in the state space respectively, are exposed. Analysis of orbits reveals information of change at both individual- and population-level, change patterns in the data, capacity of states in the state space, and density of state transitions in the orbits. Analysis of household orbits of the four subpopulations show association between (i) households headed by older adults and rich households, (ii) large household size and poor households, and (iii) households with more minors than adults and poor households. Our results are compared to other methods of BMLD analysis. PMID:25919116

  11. Constructing Compact Takagi-Sugeno Rule Systems: Identification of Complex Interactions in Epidemiological Data

    PubMed Central

    Zhou, Shang-Ming; Lyons, Ronan A.; Brophy, Sinead; Gravenor, Mike B.

    2012-01-01

    The Takagi-Sugeno (TS) fuzzy rule system is a widely used data mining technique, and is of particular use in the identification of non-linear interactions between variables. However the number of rules increases dramatically when applied to high dimensional data sets (the curse of dimensionality). Few robust methods are available to identify important rules while removing redundant ones, and this results in limited applicability in fields such as epidemiology or bioinformatics where the interaction of many variables must be considered. Here, we develop a new parsimonious TS rule system. We propose three statistics: R, L, and ω-values, to rank the importance of each TS rule, and a forward selection procedure to construct a final model. We use our method to predict how key components of childhood deprivation combine to influence educational achievement outcome. We show that a parsimonious TS model can be constructed, based on a small subset of rules, that provides an accurate description of the relationship between deprivation indices and educational outcomes. The selected rules shed light on the synergistic relationships between the variables, and reveal that the effect of targeting specific domains of deprivation is crucially dependent on the state of the other domains. Policy decisions need to incorporate these interactions, and deprivation indices should not be considered in isolation. The TS rule system provides a basis for such decision making, and has wide applicability for the identification of non-linear interactions in complex biomedical data. PMID:23272108

  12. Constructing compact Takagi-Sugeno rule systems: identification of complex interactions in epidemiological data.

    PubMed

    Zhou, Shang-Ming; Lyons, Ronan A; Brophy, Sinead; Gravenor, Mike B

    2012-01-01

    The Takagi-Sugeno (TS) fuzzy rule system is a widely used data mining technique, and is of particular use in the identification of non-linear interactions between variables. However the number of rules increases dramatically when applied to high dimensional data sets (the curse of dimensionality). Few robust methods are available to identify important rules while removing redundant ones, and this results in limited applicability in fields such as epidemiology or bioinformatics where the interaction of many variables must be considered. Here, we develop a new parsimonious TS rule system. We propose three statistics: R, L, and ω-values, to rank the importance of each TS rule, and a forward selection procedure to construct a final model. We use our method to predict how key components of childhood deprivation combine to influence educational achievement outcome. We show that a parsimonious TS model can be constructed, based on a small subset of rules, that provides an accurate description of the relationship between deprivation indices and educational outcomes. The selected rules shed light on the synergistic relationships between the variables, and reveal that the effect of targeting specific domains of deprivation is crucially dependent on the state of the other domains. Policy decisions need to incorporate these interactions, and deprivation indices should not be considered in isolation. The TS rule system provides a basis for such decision making, and has wide applicability for the identification of non-linear interactions in complex biomedical data.

  13. Feature weight estimation for gene selection: a local hyperlinear learning approach

    PubMed Central

    2014-01-01

    Background Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments, nevertheless,it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes, buried in high-dimensional irrelevant noises. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than global measurement, which is typically used in existing methods. The weights obtained by our method are very robust in terms of degradation of noisy features, even those with vast dimensions. To demonstrate the performance of our method, extensive experiments involving classification tests have been carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability using to various classification algorithms. PMID:24625071

  14. Sparse Zero-Sum Games as Stable Functional Feature Selection

    PubMed Central

    Sokolovska, Nataliya; Teytaud, Olivier; Rizkalla, Salwa; Clément, Karine; Zucker, Jean-Daniel

    2015-01-01

    In large-scale systems biology applications, features are structured in hidden functional categories whose predictive power is identical. Feature selection, therefore, can lead not only to a problem with a reduced dimensionality, but also reveal some knowledge on functional classes of variables. In this contribution, we propose a framework based on a sparse zero-sum game which performs a stable functional feature selection. In particular, the approach is based on feature subsets ranking by a thresholding stochastic bandit. We provide a theoretical analysis of the introduced algorithm. We illustrate by experiments on both synthetic and real complex data that the proposed method is competitive from the predictive and stability viewpoints. PMID:26325268

  15. The Effect of Biological Movement Variability on the Performance of the Golf Swing in High- and Low-Handicapped Players

    ERIC Educational Resources Information Center

    Bradshaw, Elizabeth J.; Keogh, Justin W. L.; Hume, Patria A.; Maulder, Peter S.; Nortje, Jacques; Marnewick, Michel

    2009-01-01

    The purpose of this study was to examine the role of neuromotor noise in golf swing performance in high- and low-handicap players. Selected two-dimensional kinematic measures of 20 male golfers (n = 10 per high- or low-handicap group) performing 10 golf swings with a 5-iron club were obtained through video analysis. Neuromotor noise was calculated…

  16. Single mode variable-sensitivity fiber optic sensors

    NASA Technical Reports Server (NTRS)

    Murphy, K. A.; Fogg, B. R.; Gunther, M. F.; Claus, R. O.

    1992-01-01

    We review spatially weighted optical fiber sensors that filter specific vibration modes from one-dimensional beams placed in clamped-free and clamped-clamped configurations. The sensitivity of the sensor is varied along the length of the fiber by tapering circular-core, dual-mode optical fibers. Selective vibration mode suppression on the order of 10 dB was obtained. We describe experimental results and propose future extensions to single mode sensor applications.

  17. Application of the Galerkin/least-squares formulation to the analysis of hypersonic flows. I - Flow over a two-dimensional ramp

    NASA Technical Reports Server (NTRS)

    Chalot, F.; Hughes, T. J. R.; Johan, Z.; Shakib, F.

    1991-01-01

    An FEM for the compressible Navier-Stokes equations is introduced. The discretization is based on entropy variables. The methodology is developed within the framework of a Galerkin/least-squares formulation to which a discontinuity-capturing operator is added. Results for three test cases selected among those of the Workshop on Hypersonic Flows for Reentry Problems are presented.

  18. Piezoelectric Nanogenerators for Self-Powered Nanosystems and Nanosensors

    DTIC Science & Technology

    2013-05-15

    mechanical triggering applied onto the nanogenerator. The structure and general working principle of the spring-substrated nanogenerator (SNG) are... schematically shown in Fig. 3a–c. Compressive springs with variable sizes were selected as the skeletons of the SNG devices. The helix-shaped spring surface... In the measurement for the output performance of the SNG, one end of the spring was fixed onto a three-dimensional stage; meanwhile a mechanical

  19. Identification of material constants for piezoelectric transformers by three-dimensional, finite-element method and a design-sensitivity method.

    PubMed

    Joo, Hyun-Woo; Lee, Chang-Hwan; Rho, Jong-Seok; Jung, Hyun-Kyo

    2003-08-01

    In this paper, an inversion scheme for the piezoelectric constants of piezoelectric transformers is proposed. The impedance of piezoelectric transducers is calculated using a three-dimensional finite element method, and the validity of this calculation is confirmed experimentally. The effects of material coefficients on piezoelectric transformers are investigated numerically. Six material coefficient variables for piezoelectric transformers were selected, and a design sensitivity method was adopted as the inversion scheme. The validity of the proposed method was confirmed by step-up ratio calculations. The proposed method is applied to the analysis of a sample piezoelectric transformer, and its resonance characteristics are obtained by a numerically combined equivalent circuit method.

  20. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    PubMed

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  1. Comparison of MPEG-1 digital videotape with digitized sVHS videotape for quantitative echocardiographic measurements

    NASA Technical Reports Server (NTRS)

    Garcia, M. J.; Thomas, J. D.; Greenberg, N.; Sandelski, J.; Herrera, C.; Mudd, C.; Wicks, J.; Spencer, K.; Neumann, A.; Sankpal, B.

    2001-01-01

    Digital format is rapidly emerging as a preferred method for displaying and retrieving echocardiographic studies. The qualitative diagnostic accuracy of Moving Pictures Experts Group (MPEG-1) compressed digital echocardiographic studies has been previously reported. The goals of the present study were to compare quantitative measurements derived from MPEG-1 recordings with the super-VHS (sVHS) videotape clinical standard. Six reviewers performed blinded measurements from still-frame images selected from 20 echocardiographic studies that were simultaneously acquired in sVHS and MPEG-1 formats. Measurements were obtainable in 1401 (95%) of 1486 MPEG-1 variables compared with 1356 (91%) of 1486 sVHS variables (P <.001). Excellent agreement existed between MPEG-1 and sVHS 2-dimensional linear measurements (r = 0.97; MPEG-1 = 0.95[sVHS] + 1.1 mm; P <.001; Delta = 9% +/- 10%), 2-dimensional area measurements (r = 0.89), color jet areas (r = 0.87, p <.001), and Doppler velocities (r = 0.92, p <.001). Interobserver variability was similar for both sVHS and MPEG-1 readings. Our results indicate that quantitative off-line measurements from MPEG-1 digitized echocardiographic studies are feasible and comparable to those obtained from sVHS.

  2. Information Gain Based Dimensionality Selection for Classifying Text Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic-algorithm-based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic-algorithm-based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
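
    A minimal sketch of the core idea, assuming scikit-learn is available: information gain (estimated here with mutual information) is computed once, then used to bias per-feature mutation so that informative dimensions are more likely to be toggled. The GA details (selection, crossover, fitness) are omitted; function and variable names are illustrative, not from the paper.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    def mutation_probabilities(X, y, base_rate=0.02):
        """Scale a base mutation rate by each feature's (normalized) information gain."""
        gain = mutual_info_classif(X, y)            # computed once, a priori
        gain = gain / (gain.max() + 1e-12)
        return base_rate * (0.5 + gain)             # informative dimensions mutate more often

    def mutate(chromosome, probs, rng):
        """Flip inclusion bits with feature-specific probabilities."""
        flips = rng.random(chromosome.shape) < probs
        return np.where(flips, 1 - chromosome, chromosome)
    ```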

  3. Bayesian Analysis of High Dimensional Classification

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Subhadeep; Liang, Faming

    2009-12-01

    Modern data mining and bioinformatics have presented an important playground for statistical learning techniques, where the number of input variables may be much larger than the sample size of the training data. In supervised learning, logistic regression or probit regression can be used to model a binary output and form perceptron classification rules based on Bayesian inference. In these cases, there is considerable interest in searching for sparse models in the high-dimensional regression/classification setup. We first discuss two common challenges in analyzing high-dimensional data. The first is the curse of dimensionality: the complexity of many existing algorithms scales exponentially with the dimensionality of the space, so that the algorithms soon become computationally intractable and therefore inapplicable in many real applications. The second is multicollinearity among the predictors, which severely slows down the algorithms. To make Bayesian analysis operational in high dimensions, we propose a novel Hierarchical Stochastic Approximation Monte Carlo (HSAMC) algorithm, which overcomes the curse of dimensionality and the multicollinearity of predictors in high dimensions, and possesses a self-adjusting mechanism to avoid local minima separated by high energy barriers. Models and methods are illustrated by simulations inspired by the field of genomics. Numerical results indicate that HSAMC can work as a general model selection sampler in high-dimensional, complex model spaces.

  4. Finite Adaptation and Multistep Moves in the Metropolis-Hastings Algorithm for Variable Selection in Genome-Wide Association Analysis

    PubMed Central

    Peltola, Tomi; Marttinen, Pekka; Vehtari, Aki

    2012-01-01

    High-dimensional datasets with large amounts of redundant information are nowadays available for hypothesis-free exploration of scientific questions. A particular case is genome-wide association analysis, where variations in the genome are searched for effects on disease or other traits. Bayesian variable selection has been demonstrated as a possible analysis approach, which can account for the multifactorial nature of the genetic effects in a linear regression model. Yet, the computation presents a challenge and application to large-scale data is not routine. Here, we study aspects of the computation using the Metropolis-Hastings algorithm for the variable selection: finite adaptation of the proposal distributions, multistep moves for changing the inclusion state of multiple variables in a single proposal and multistep move size adaptation. We also experiment with a delayed rejection step for the multistep moves. Results on simulated and real data show an increase in sampling efficiency. We also demonstrate that with application-specific proposals, the approach can overcome a specific mixing problem in real data with 3822 individuals and 1,051,811 single nucleotide polymorphisms and uncover a variant pair with a synergistic effect on the studied trait. Moreover, we illustrate multimodality in the real dataset related to a restrictive prior distribution on the genetic effect sizes and advocate a more flexible alternative. PMID:23166669
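
    The basic single-variable flip move can be sketched compactly, assuming a simple BIC-based model score in place of the full Bayesian marginal likelihood used in the paper; the multistep moves, finite adaptation, and delayed rejection are not shown.

    ```python
    import numpy as np

    def bic_score(X, y, gamma):
        """Negative half-BIC of least squares on the included columns (a stand-in score)."""
        idx = np.flatnonzero(gamma)
        n = len(y)
        if idx.size == 0:
            rss = np.sum((y - y.mean()) ** 2)
        else:
            beta, rss_arr, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
            rss = rss_arr[0] if rss_arr.size else np.sum((y - X[:, idx] @ beta) ** 2)
        return -0.5 * (n * np.log(rss / n) + idx.size * np.log(n))

    def mh_variable_selection(X, y, n_iter=5000, seed=0):
        """Metropolis-Hastings over inclusion indicators with symmetric flip proposals."""
        rng = np.random.default_rng(seed)
        p = X.shape[1]
        gamma = np.zeros(p, dtype=int)
        score = bic_score(X, y, gamma)
        freq = np.zeros(p)
        for _ in range(n_iter):
            prop = gamma.copy()
            j = rng.integers(p)
            prop[j] = 1 - prop[j]                         # flip one inclusion state
            new_score = bic_score(X, y, prop)
            if np.log(rng.random()) < new_score - score:  # symmetric proposal: plain MH ratio
                gamma, score = prop, new_score
            freq += gamma
        return freq / n_iter                              # approximate inclusion frequencies
    ```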

  5. Optical phase conjugation assisted scattering lens: variable focusing and 3D patterning

    PubMed Central

    Ryu, Jihee; Jang, Mooseok; Eom, Tae Joong; Yang, Changhuei; Chung, Euiheon

    2016-01-01

    Variable light focusing is the ability to flexibly select the focal distance of a lens. This feature presents technical challenges, but is significant for optical interrogation of three-dimensional objects. Numerous lens designs have been proposed to provide flexible light focusing, including zoom, fluid, and liquid-crystal lenses. Although these lenses are useful for macroscale applications, they have limited utility in micron-scale applications due to restricted modulation range and exacting requirements for fabrication and control. Here, we present a holographic focusing method that enables variable light focusing without any physical modification to the lens element. In this method, a scattering layer couples low-angle (transverse wave vector) components into a full angular spectrum, and a digital optical phase conjugation (DOPC) system characterizes and plays back the wavefront that focuses through the scattering layer. We demonstrate micron-scale light focusing and patterning over a wide range of focal distances of 22–51 mm. The interferometric nature of the focusing scheme also enables an aberration-free scattering lens. The proposed method provides a unique variable focusing capability for imaging thick specimens or selective photoactivation of neuronal networks. PMID:27049442

  6. Biomotor structures in elite female handball players.

    PubMed

    Katić, Ratko; Cavala, Marijana; Srhoj, Vatromir

    2007-09-01

    In order to identify biomotor structures in elite female handball players, factor structures of the morphological characteristics and basic motor abilities of elite female handball players (N = 53) were determined first, followed by determination of the relations between the morphological-motor space factors obtained and a set of criterion variables evaluating situation motor abilities in handball. Factor analysis of 14 morphological measures produced three morphological factors, i.e. a factor of absolute voluminosity (mesoendomorph), a factor of longitudinal skeleton dimensionality, and a factor of transverse hand dimensionality. Factor analysis of 15 motor variables yielded five basic motor dimensions, i.e. a factor of agility, a factor of jumping explosive strength, a factor of throwing explosive strength, a factor of movement frequency rate, and a factor of running explosive strength (sprint). Four significant canonical correlations, i.e. linear combinations, explained the correlation between the set of eight latent variables of the morphological and basic motor space and five variables of situation motoricity. The first canonical linear combination is based on the positive effect of the agility/coordination factors on the ability of fast movement without the ball. The second linear combination is based on the effect of jumping explosive strength and transverse hand dimensionality on ball manipulation, throw precision, and speed of movement with the ball. The third linear combination is based on the determination of running explosive strength by the speed of movement with the ball, whereas the fourth combination is determined by throwing and jumping explosive strength, and agility in ball passing. The results obtained were consistent with the previously proposed model of selection in female handball (Srhoj et al., 2006), showing the speed of movement without the ball and the ability of ball manipulation to be the predominant specific abilities, as indicated by the first and second linear combinations.

  7. Respiratory gating during stereotactic body radiotherapy for lung cancer reduces tumor position variability.

    PubMed

    Saito, Tetsuo; Matsuyama, Tomohiko; Toya, Ryo; Fukugawa, Yoshiyuki; Toyofuku, Takamasa; Semba, Akiko; Oya, Natsuo

    2014-01-01

    We evaluated the effects of respiratory gating on treatment accuracy in lung cancer patients undergoing lung stereotactic body radiotherapy by using electronic portal imaging device (EPID) images. Our study population consisted of 30 lung cancer patients treated with stereotactic body radiotherapy (48 Gy/4 fractions/4 to 9 days). Of these, 14 were treated with gating (group A) and 16 without (group B); typically, patients whose tumors showed three-dimensional respiratory motion ≧5 mm were selected for gating. Tumor respiratory motion was estimated using four-dimensional computed tomography images acquired during treatment simulation. Tumor position variability during all treatment sessions was assessed by measuring the standard deviation (SD) and range of tumor displacement on EPID images. The two groups were compared for tumor respiratory motion and position variability using the Mann-Whitney U test. The median three-dimensional tumor motion during simulation was greater in group A than group B (9 mm, range 3-30 mm vs. 2 mm, range 0-4 mm; p<0.001). In groups A and B the median SD of the tumor position was 1.1 mm and 0.9 mm in the craniocaudal direction (p = 0.24) and 0.7 mm and 0.6 mm in the mediolateral direction (p = 0.89), respectively. The median range of the tumor position was 4.0 mm and 3.0 mm in the craniocaudal direction (p = 0.21) and 2.0 mm and 1.5 mm in the mediolateral direction (p = 0.20), respectively. Although patients treated with respiratory gating exhibited greater respiratory tumor motion during treatment simulation, tumor position variability in the EPID images was low and comparable to that of patients treated without gating. This demonstrates the benefit of respiratory gating.

  8. Multicomponent Supramolecular Systems: Self-Organization in Coordination-Driven Self-Assembly

    PubMed Central

    Zheng, Yao-Rong; Yang, Hai-Bo; Ghosh, Koushik; Zhao, Liang; Stang, Peter J.

    2009-01-01

    The self-organization of multicomponent supramolecular systems involving a variety of two-dimensional (2-D) polygons and three-dimensional (3-D) cages is presented. Nine self-organizing systems, SS1–SS9, have been studied, each involving the simultaneous mixing of organoplatinum acceptors and pyridyl donors of varying geometry and their selective self-assembly into three to four specific 2-D (rectangular, triangular, and rhomboid) and/or 3-D (triangular prism and distorted and nondistorted trigonal bipyramidal) supramolecules. The formation of these discrete structures is characterized using NMR spectroscopy and electrospray ionization mass spectrometry (ESI-MS). In all cases, the self-organization process is directed by: (1) the geometric information encoded within the molecular subunits and (2) a thermodynamically driven dynamic self-correction process. The result is the selective self-assembly of multiple discrete products from a randomly formed complex. The influence of key experimental variables – temperature and solvent – on the self-correction process and the fidelity of the resulting self-organization systems is also described. PMID:19544512

  9. Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.

    PubMed

    Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack

    2017-06-01

    In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to conflate the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
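
    A rough sketch of how cross-validation and subsampling-based stability can be combined, assuming scikit-learn: the penalty is chosen by CV, then features must survive a selection-frequency threshold across random half-subsamples. The exact PROMISE criterion for trading prediction accuracy against false positives is not reproduced here, and the 0.6 threshold is an arbitrary illustrative choice.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso, LassoCV

    def cv_plus_stability(X, y, n_subsamples=50, frac=0.5, threshold=0.6, seed=0):
        """Pick the penalty by cross-validation, then keep features selected in at
        least `threshold` of random half-subsamples (stability-selection style)."""
        rng = np.random.default_rng(seed)
        alpha = LassoCV(cv=5).fit(X, y).alpha_          # CV-chosen penalty
        n = X.shape[0]
        counts = np.zeros(X.shape[1])
        for _ in range(n_subsamples):
            idx = rng.choice(n, size=int(frac * n), replace=False)
            coef = Lasso(alpha=alpha, max_iter=10000).fit(X[idx], y[idx]).coef_
            counts += coef != 0
        stable = counts / n_subsamples >= threshold
        return np.flatnonzero(stable), alpha
    ```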

  10. Network-based regularization for matched case-control analysis of high-dimensional DNA methylation data.

    PubMed

    Sun, Hokeun; Wang, Shuang

    2013-05-30

    The matched case-control designs are commonly used to control for potential confounding factors in genetic epidemiology studies especially epigenetic studies with DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, there have been few variable selection methods for matched sets. In an earlier paper, we proposed the penalized logistic regression model for the analysis of unmatched DNA methylation data using a network-based penalty. However, for popularly applied matched designs in epigenetic studies that compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression ignoring matching is known to bring serious bias in estimation. In this paper, we developed a penalized conditional logistic model using the network-based penalty that encourages a grouping effect of (1) linked Cytosine-phosphate-Guanine (CpG) sites within a gene or (2) linked genes within a genetic pathway for analysis of matched DNA methylation data. In our simulation studies, we demonstrated the superiority of using conditional logistic model over unconditional logistic model in high-dimensional variable selection problems for matched case-control data. We further investigated the benefits of utilizing biological group or graph information for matched case-control data. We applied the proposed method to a genome-wide DNA methylation study on hepatocellular carcinoma (HCC) where we investigated the DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients by using the Illumina Infinium HumanMethylation27 Beadchip. Several new CpG sites and genes known to be related to HCC were identified but were missed by the standard method in the original paper. Copyright © 2012 John Wiley & Sons, Ltd.
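
    As a point of reference, for 1:1 matched pairs the conditional likelihood depends only on within-pair covariate differences, and a network (graph-Laplacian) penalty can be added to it. The display below is a generic formulation under that assumption, not necessarily the exact penalty used in the paper.

    ```latex
    % Penalized conditional logistic regression for 1:1 matched pairs, where
    % d_i = x_i^{\mathrm{case}} - x_i^{\mathrm{control}} and L is a graph Laplacian
    % encoding links between CpG sites or genes.
    \ell_\lambda(\beta) \;=\; \sum_{i=1}^{n} \log\frac{1}{1+\exp(-d_i^{\top}\beta)}
      \;-\; \lambda_1 \lVert \beta \rVert_1 \;-\; \lambda_2\, \beta^{\top} L\, \beta
    ```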

  11. Ceramic component reliability with the restructured NASA/CARES computer program

    NASA Technical Reports Server (NTRS)

    Powers, Lynn M.; Starlinger, Alois; Gyekenyesi, John P.

    1992-01-01

    The Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design program on statistical fast fracture reliability and monolithic ceramic components is enhanced to include the use of a neutral data base, two-dimensional modeling, and variable problem size. The data base allows for the efficient transfer of element stresses, temperatures, and volumes/areas from the finite element output to the reliability analysis program. Elements are divided to insure a direct correspondence between the subelements and the Gaussian integration points. Two-dimensional modeling is accomplished by assessing the volume flaw reliability with shell elements. To demonstrate the improvements in the algorithm, example problems are selected from a round-robin conducted by WELFEP (WEakest Link failure probability prediction by Finite Element Postprocessors).

  12. Teaching a Machine to Feel Postoperative Pain: Combining High-Dimensional Clinical Data with Machine Learning Algorithms to Forecast Acute Postoperative Pain

    PubMed Central

    Tighe, Patrick J.; Harle, Christopher A.; Hurley, Robert W.; Aytug, Haldun; Boezaart, Andre P.; Fillingim, Roger B.

    2015-01-01

    Background Given their ability to process highly dimensional datasets with hundreds of variables, machine learning algorithms may offer one solution to the vexing challenge of predicting postoperative pain. Methods Here, we report on the application of machine learning algorithms to predict postoperative pain outcomes in a retrospective cohort of 8071 surgical patients using 796 clinical variables. Five algorithms were compared in terms of their ability to forecast moderate to severe postoperative pain: Least Absolute Shrinkage and Selection Operator (LASSO), gradient-boosted decision tree, support vector machine, neural network, and k-nearest neighbor, with logistic regression included for baseline comparison. Results In forecasting moderate to severe postoperative pain for postoperative day (POD) 1, the LASSO algorithm, using all 796 variables, had the highest accuracy with an area under the receiver-operating curve (ROC) of 0.704. Next, the gradient-boosted decision tree had an ROC of 0.665 and the k-nearest neighbor algorithm had an ROC of 0.643. For POD 3, the LASSO algorithm, using all variables, again had the highest accuracy, with an ROC of 0.727. Logistic regression had a lower ROC of 0.5 for predicting pain outcomes on POD 1 and 3. Conclusions Machine learning algorithms, when combined with complex and heterogeneous data from electronic medical record systems, can forecast acute postoperative pain outcomes with accuracies similar to methods that rely only on variables specifically collected for pain outcome prediction. PMID:26031220
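
    A minimal sketch of the LASSO-style baseline, assuming scikit-learn: an L1-penalized logistic regression evaluated by cross-validated area under the ROC curve. Variable names and the penalty strength are illustrative; this is not the authors' pipeline.

    ```python
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def lasso_pain_auc(X, y, C=0.1):
        """Cross-validated ROC AUC for an L1-penalized logistic model."""
        model = make_pipeline(
            StandardScaler(),
            LogisticRegression(penalty="l1", solver="liblinear", C=C, max_iter=5000),
        )
        return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    ```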

  13. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    NASA Astrophysics Data System (ADS)

    Stults, Ian C.

    The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model uncertainty present in analyses with 4 or fewer input variables could be effectively quantified using a strategic distribution creation method; if more than 4 input variables exist, a Frontier Finding Particle Swarm Optimization should instead be used. Once model uncertainty in contributing analysis code choices has been quantified, a selection method is required to determine which of these choices should be used in simulations. Because much of the selection done for engineering problems is driven by the physics of the problem, these are poor candidate problems for testing the true fitness of a candidate selection method. Specifically moderate and high dimensional problems' variability can often be reduced to only a few dimensions and scalability often cannot be easily addressed. For these reasons a simple academic function was created for the uncertainty quantification, and a canonical form of the Fidelity Selection Problem (FSP) was created. Fifteen best- and worst-case scenarios were identified in an effort to challenge the candidate selection methods both with respect to the characteristics of the tradeoff between time cost and model uncertainty and with respect to the stringency of the constraints and problem dimensionality. The results from this experiment show that a Genetic Algorithm (GA) was able to consistently find the correct answer, but under certain circumstances, a discrete form of Particle Swarm Optimization (PSO) was able to find the correct answer more quickly. 
To better illustrate how the uncertainty quantification and discrete optimization might be conducted for a "real world" problem, an illustrative example was conducted using gas turbine engines.

  14. Exploring high dimensional data with Butterfly: a novel classification algorithm based on discrete dynamical systems.

    PubMed

    Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken

    2014-03-01

    We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game that are closely related to iterative function systems. The goal of the algorithm was to create a human readable representation of high dimensional patient data that was capable of detecting unrevealed subclusters of patients from within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate features selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification) and finally a visual representation of the top classification models are returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer dataset that comes along with the included Butterfly R package. In the included R script, a univariate feature selection method is used for the dimension reduction step, but in the future we wish to use a more powerful multivariate feature reduction method based on neural networks (Kriesel, 2007). A script written in R (designed to run on R studio) accompanies this article that implements this algorithm and is available at http://butterflygeraci.codeplex.com/. For details on the R package or for help installing the software refer to the accompanying document, Supporting Material and Appendix.
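
    The chaos-game mechanism itself is simple to illustrate: each successive (discretized) variable value pulls a 2D point halfway toward one of four fixed vertices, so a subject's whole variable profile collapses into a single trajectory and endpoint in the unit square. The sketch below shows only this generic chaos-game encoding; it is not the Butterfly algorithm or its R package.

    ```python
    import numpy as np

    # Four vertices of the unit square; quartile bins of each variable choose the vertex.
    VERTICES = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])

    def chaos_game_embedding(profile, bins):
        """Map one subject's variable profile to a 2D point via a chaos-game walk.

        `bins` holds per-variable quartile cut points (shape: n_vars x 3), so each
        value is converted to a symbol in {0, 1, 2, 3} before driving the walk."""
        point = np.array([0.5, 0.5])
        for value, cuts in zip(profile, bins):
            symbol = int(np.searchsorted(cuts, value))    # 0..3
            point = (point + VERTICES[symbol]) / 2.0      # move halfway to the chosen vertex
        return point
    ```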

  15. Evaluation of training nurses to perform semi-automated three-dimensional left ventricular ejection fraction using a customised workstation-based training protocol.

    PubMed

    Guppy-Coles, Kristyan B; Prasad, Sandhir B; Smith, Kym C; Hillier, Samuel; Lo, Ada; Atherton, John J

    2015-06-01

    We aimed to determine the feasibility of training cardiac nurses to evaluate left ventricular function utilising a semi-automated, workstation-based protocol on three dimensional echocardiography images. Assessment of left ventricular function by nurses is an attractive concept. Recent developments in three dimensional echocardiography coupled with border detection assistance have reduced inter- and intra-observer variability and analysis time. This could allow abbreviated training of nurses to assess cardiac function. A comparative, diagnostic accuracy study evaluating left ventricular ejection fraction assessment utilising a semi-automated, workstation-based protocol performed by echocardiography-naïve nurses on previously acquired three dimensional echocardiography images. Nine cardiac nurses underwent two brief lectures about cardiac anatomy, physiology and three dimensional left ventricular ejection fraction assessment, before a hands-on demonstration in 20 cases. We then selected 50 cases from our three dimensional echocardiography library based on optimal image quality with a broad range of left ventricular ejection fractions, which was quantified by two experienced sonographers and the average used as the comparator for the nurses. Nurses independently measured three dimensional left ventricular ejection fraction using the Auto lvq package with semi-automated border detection. The left ventricular ejection fraction range was 25-72% (70% with a left ventricular ejection fraction <55%). All nurses showed excellent agreement with the sonographers. Minimal intra-observer variability was noted on both short-term (same day) and long-term (>2 weeks later) retest. It is feasible to train nurses to measure left ventricular ejection fraction utilising a semi-automated, workstation-based protocol on previously acquired three dimensional echocardiography images. Further study is needed to determine the feasibility of training nurses to acquire three dimensional echocardiography images on real-world patients to measure left ventricular ejection fraction. Nurse-performed evaluation of left ventricular function could facilitate the broader application of echocardiography to allow cost-effective screening and monitoring for left ventricular dysfunction in high-risk populations. © 2014 John Wiley & Sons Ltd.

  16. [Rapid assessment of critical quality attributes of Chinese materia medica (II): strategy of NIR assignment].

    PubMed

    Pei, Yan-Ling; Wu, Zhi-Sheng; Shi, Xin-Yuan; Zhou, Lu-Wei; Qiao, Yan-Jiang

    2014-09-01

    This paper first reviews the research progress and main methods of NIR spectral assignment, together with our own research results. Principal component analysis focused on characteristic signal extraction to reflect spectral differences. The partial least squares method was concerned with variable selection to discover characteristic absorption bands. Two-dimensional correlation spectroscopy was mainly adopted for spectral assignment: autocorrelation peaks were obtained from spectral changes induced by external factors such as concentration, temperature and pressure. Density functional theory was used to calculate energies from molecular structure in order to establish the relationship between molecular energy and spectral change. Based on the methods reviewed above, and taking the NIR spectral assignment of chlorogenic acid as an example, a reliable spectral assignment for critical quality attributes of Chinese materia medica (CMM) was established using deuterium technology and spectral variable selection. The result demonstrated the consistency of the assignment between the spectral features of different concentrations of chlorogenic acid and the variable selection region of the online NIR model for the extraction process. Although the spectral assignment was initially demonstrated on a single active pharmaceutical ingredient, the approach is expected to extend to the complex components of CMM. It therefore provides a methodology for NIR spectral assignment of critical quality attributes in CMM.

  17. Decorrelation of the true and estimated classifier errors in high-dimensional settings.

    PubMed

    Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R

    2007-01-01

    The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity which refers to the precision of error estimation is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three error estimators commonly used (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.
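
    The decomposition underlying the argument can be stated in one line: writing \hat{\varepsilon} for the estimated error and \varepsilon for the true error, the variance of the deviation shows explicitly how a loss of correlation inflates imprecision. This is the standard identity for the variance of a difference, not a result specific to the paper.

    ```latex
    \operatorname{Var}(\hat{\varepsilon}-\varepsilon)
      = \operatorname{Var}(\hat{\varepsilon}) + \operatorname{Var}(\varepsilon)
        - 2\,\rho\,\sqrt{\operatorname{Var}(\hat{\varepsilon})\,\operatorname{Var}(\varepsilon)},
    ```

    where ρ is the correlation between the estimated and true errors; as ρ decreases toward zero, the deviation variance approaches the sum of the two variances.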

  18. A system of three-dimensional complex variables

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1986-01-01

    Some results of a new theory of multidimensional complex variables are reported, including analytic functions of a three-dimensional (3-D) complex variable. Three-dimensional complex numbers are defined, including vector properties and rules of multiplication. The necessary conditions for a function of a 3-D variable to be analytic are given and shown to be analogous to the 2-D Cauchy-Riemann equations. A simple example also demonstrates the analogy between the newly defined 3-D complex velocity and 3-D complex potential and the corresponding ordinary complex velocity and complex potential in two dimensions.
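
    For reference, the two-dimensional analogy invoked here is the ordinary Cauchy-Riemann system: for f(x + iy) = u(x, y) + i v(x, y), analyticity requires

    ```latex
    \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},
    \qquad
    \frac{\partial u}{\partial y} = -\,\frac{\partial v}{\partial x}.
    ```

    The paper's three-dimensional conditions generalize this pair; their exact form is given in the reference and is not reproduced here.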

  19. Men and women are from Earth: examining the latent structure of gender.

    PubMed

    Carothers, Bobbi J; Reis, Harry T

    2013-02-01

    Taxometric methods enable determination of whether the latent structure of a construct is dimensional or taxonic (nonarbitrary categories). Although sex as a biological category is taxonic, psychological gender differences have not been examined in this way. The taxometric methods of mean above minus below a cut, maximum eigenvalue, and latent mode were used to investigate whether gender is taxonic or dimensional. Behavioral measures of stereotyped hobbies and physiological characteristics (physical strength, anthropometric measurements) were examined for validation purposes, and were taxonic by sex. Psychological indicators included sexuality and mating (sexual attitudes and behaviors, mate selectivity, sociosexual orientation), interpersonal orientation (empathy, relational-interdependent self-construal), gender-related dispositions (masculinity, femininity, care orientation, unmitigated communion, fear of success, science inclination, Big Five personality), and intimacy (intimacy prototypes and stages, social provisions, intimacy with best friend). Constructs were with few exceptions dimensional, speaking to Spence's (1993) gender identity theory. Average differences between men and women are not under dispute, but the dimensionality of gender indicates that these differences are inappropriate for diagnosing gender-typical psychological variables on the basis of sex. (c) 2013 APA, all rights reserved.

  20. A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval.

    PubMed

    Cai, Jia; Tang, Yi

    2018-02-01

    Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. Its kernel generalization, kernel CCA, was proposed to describe nonlinear relationships between two sets of variables. Although kernel CCA can achieve dimensionality reduction for high-dimensional feature selection problems, it is also prone to over-fitting. In this paper, we consider a new kernel CCA algorithm based on the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed; (2) theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number; (3) a lower bound on the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results show the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
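
    The randomized Kaczmarz iteration at the heart of the proposal is easy to state for a generic linear system Ax = b: at each step one row is sampled with probability proportional to its squared norm and the iterate is projected onto that row's hyperplane. The sketch below shows this generic solver only; embedding it in the kernel CCA problem follows the paper and is not reproduced here.

    ```python
    import numpy as np

    def randomized_kaczmarz(A, b, n_iter=10000, seed=0):
        """Solve A x = b by randomized row projections (norm-proportional row sampling)."""
        rng = np.random.default_rng(seed)
        A, b = np.asarray(A, float), np.asarray(b, float)
        row_norms = np.sum(A ** 2, axis=1)
        probs = row_norms / row_norms.sum()
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            i = rng.choice(A.shape[0], p=probs)
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]   # project onto row i's hyperplane
        return x
    ```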

  1. Computer simulation of two-dimensional unsteady flows in estuaries and embayments by the method of characteristics : basic theory and the formulation of the numerical method

    USGS Publications Warehouse

    Lai, Chintu

    1977-01-01

    Two-dimensional unsteady flows of homogeneous density in estuaries and embayments can be described by hyperbolic, quasi-linear partial differential equations involving three dependent and three independent variables. A linear combination of these equations leads to a parametric equation of characteristic form, which consists of two parts: total differentiation along the bicharacteristics and partial differentiation in space. For its numerical solution, the specified-time-interval scheme has been used. The unknown partial space-derivative terms can be eliminated first by suitable combinations of difference equations, converted from the corresponding differential forms and written along four selected bicharacteristics and a streamline. The other unknowns are thus made solvable from the known variables on the current time plane. The computation is carried to second-order accuracy by using the trapezoidal rule of integration. Means to handle complex boundary conditions are developed for practical application. Computer programs have been written and a mathematical model has been constructed for flow simulation. The favorable computer outputs suggest that further exploration and development of the model are worthwhile. (Woodard-USGS)

  2. An Investigation of Bilateral Symmetry During Manual Wheelchair Propulsion.

    PubMed

    Soltau, Shelby L; Slowik, Jonathan S; Requejo, Philip S; Mulroy, Sara J; Neptune, Richard R

    2015-01-01

    Studies of manual wheelchair propulsion often assume bilateral symmetry to simplify data collection, processing, and analysis. However, the validity of this assumption is unclear. Most investigations of wheelchair propulsion symmetry have been limited by a relatively small sample size and a focus on a single propulsion condition (e.g., level propulsion at self-selected speed). The purpose of this study was to evaluate bilateral symmetry during manual wheelchair propulsion in a large group of subjects across different propulsion conditions. Three-dimensional kinematics and handrim kinetics along with spatiotemporal variables were collected and processed from 80 subjects with paraplegia while propelling their wheelchairs on a stationary ergometer during three different conditions: level propulsion at their self-selected speed (free), level propulsion at their fastest comfortable speed (fast), and propulsion on an 8% grade at their level, self-selected speed (graded). All kinematic variables had significant side-to-side differences, primarily in the graded condition. Push angle was the only spatiotemporal variable with a significant side-to-side difference, and only during the graded condition. No kinetic variables had significant side-to-side differences. The magnitudes of the kinematic differences were low, with only one difference exceeding 5°. With differences of such small magnitude, the bilateral symmetry assumption appears to be reasonable during manual wheelchair propulsion in subjects without significant upper-extremity pain or impairment. However, larger asymmetries may exist in individuals with secondary injuries and pain in their upper extremity and different etiologies of their neurological impairment.

  4. A Global Interpolation Function (GIF) boundary element code for viscous flows

    NASA Technical Reports Server (NTRS)

    Reddy, D. R.; Lafe, O.; Cheng, A. H-D.

    1995-01-01

    Using global interpolation functions (GIF's), boundary element solutions are obtained for two- and three-dimensional viscous flows. The solution is obtained in the form of a boundary integral plus a series of global basis functions. The unknown coefficients of the GIF's are determined to ensure the satisfaction of the governing equations at selected collocation points. The values of the coefficients involved in the boundary integral equations are determined by enforcing the boundary conditions. Both primitive variable and vorticity-velocity formulations are examined.

  5. Active Response Gravity Offload and Method

    NASA Technical Reports Server (NTRS)

    Dungan, Larry K. (Inventor); Lieberman, Asher P. (Inventor); Shy, Cecil (Inventor); Bankieris, Derek R. (Inventor); Valle, Paul S. (Inventor); Redden, Lee (Inventor)

    2015-01-01

    A variable gravity field simulator can be utilized to provide three dimensional simulations for simulated gravity fields selectively ranging from Moon, Mars, and micro-gravity environments and/or other selectable gravity fields. The gravity field simulator utilizes a horizontally moveable carriage with a cable extending from a hoist. The cable can be attached to a load which experiences the effects of the simulated gravity environment. The load can be a human being or robot that makes movements that induce swinging of the cable whereby a horizontal control system reduces swinging energy. A vertical control system uses a non-linear feedback filter to remove noise from a load sensor that is in the same frequency range as signals from the load sensor.

  6. A Review on Dimension Reduction

    PubMed Central

    Ma, Yanyuan; Zhu, Liping

    2013-01-01

    Summary Summarizing the effect of many covariates through a few linear combinations is an effective way of reducing covariate dimension and is the backbone of (sufficient) dimension reduction. Because the replacement of high-dimensional covariates by low-dimensional linear combinations is performed with a minimum assumption on the specific regression form, it enjoys attractive advantages as well as encounters unique challenges in comparison with the variable selection approach. We review the current literature of dimension reduction with an emphasis on the two most popular models, where the dimension reduction affects the conditional distribution and the conditional mean, respectively. We discuss various estimation and inference procedures in different levels of detail, with the intention of focusing on the underlying ideas instead of technicalities. We also discuss some unsolved problems in this area for potential future research. PMID:23794782

  7. Development of a two-dimensional skin friction balance nulling circuit using multivariable control theory

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Patek, Stephen D.

    1988-01-01

    Measurement of planar skin friction forces in aerodynamic testing currently requires installation of two perpendicularly mounted, single-axis balances; consequently, force components must be sensed at two distinct locations. A two-axis instrument developed at the Langley Research Center to overcome this disadvantage allows measurement of a two-dimensional force at one location. This paper describes a feedback-controlled nulling circuit developed for the NASA two-axis balance which, without external compensation, is inherently unstable because of its low friction mechanical design. Linear multivariable control theory is applied to an experimentally validated mathematical model of the balance to synthesize a state-variable feedback control law. Pole placement techniques and computer simulation studies are employed to select eigenvalues which provide ideal transient response with decoupled sensing dynamics.
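
    As a toy illustration of the synthesis step, assuming SciPy is available: an unstable two-state model is stabilized by placing its closed-loop eigenvalues with state feedback u = -Kx. The matrices below are arbitrary stand-ins, not the balance model from the paper.

    ```python
    import numpy as np
    from scipy.signal import place_poles

    # Hypothetical unstable 2-state plant x' = A x + B u (not the actual balance model).
    A = np.array([[0.0, 1.0],
                  [2.0, 0.0]])      # open-loop eigenvalues at +/- sqrt(2): unstable
    B = np.array([[0.0],
                  [1.0]])

    desired = [-3.0, -4.0]          # eigenvalues chosen for a fast, well-damped response
    K = place_poles(A, B, desired).gain_matrix

    # Closed-loop dynamics x' = (A - B K) x now have the desired eigenvalues.
    print(np.linalg.eigvals(A - B @ K))
    ```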

  8. Determination of statistics for any rotation of axes of a bivariate normal elliptical distribution. [of wind vector components

    NASA Technical Reports Server (NTRS)

    Falls, L. W.; Crutcher, H. L.

    1976-01-01

    Transformation of statistics from a dimensional set to another dimensional set involves linear functions of the original set of statistics. Similarly, linear functions will transform statistics within a dimensional set such that the new statistics are relevant to a new set of coordinate axes. A restricted case of the latter is the rotation of axes in a coordinate system involving any two correlated random variables. A special case is the transformation for horizontal wind distributions. Wind statistics are usually provided in terms of wind speed and direction (measured clockwise from north) or in east-west and north-south components. A direct application of this technique allows the determination of appropriate wind statistics parallel and normal to any preselected flight path of a space vehicle. Among the constraints for launching space vehicles are critical values selected from the distribution of the expected winds parallel to and normal to the flight path. These procedures are applied to space vehicle launches at Cape Kennedy, Florida.
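
    The transformation itself is a standard linear change of variables: if the wind components have covariance matrix Σ, the components parallel and normal to a flight path rotated by angle θ have covariance R Σ Rᵀ. The short sketch below illustrates this, with an arbitrary example covariance rather than Cape Kennedy data.

    ```python
    import numpy as np

    def rotate_wind_statistics(mean, cov, theta_rad):
        """Mean and covariance of wind components in axes rotated by theta (radians)."""
        c, s = np.cos(theta_rad), np.sin(theta_rad)
        R = np.array([[c, s],
                      [-s, c]])              # rotate the coordinate axes by theta
        return R @ mean, R @ cov @ R.T

    # Hypothetical east-west / north-south wind statistics (m/s), not observed data.
    mean = np.array([3.0, -1.0])
    cov = np.array([[4.0, 1.2],
                    [1.2, 2.5]])
    m_path, c_path = rotate_wind_statistics(mean, cov, np.deg2rad(30.0))
    ```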

  9. Comprehensive two-dimensional gas chromatography for the analysis of Fischer-Tropsch oil products.

    PubMed

    van der Westhuizen, Rina; Crous, Renier; de Villiers, André; Sandra, Pat

    2010-12-24

    The Fischer-Tropsch (FT) process involves a series of catalysed reactions of carbon monoxide and hydrogen, originating from coal, natural gas or biomass, leading to a variety of synthetic chemicals and fuels. The benefits of comprehensive two-dimensional gas chromatography (GC×GC) compared to one-dimensional GC (1D-GC) for the detailed investigation of the oil products of low and high temperature FT processes are presented. GC×GC provides more accurate quantitative data to construct Anderson-Schultz-Flory (ASF) selectivity models that correlate the FT product distribution with reaction variables. On the other hand, the high peak capacity and sensitivity of GC×GC allow the detailed study of components present at trace level. Analyses of the aromatic and oxygenated fractions of a high temperature FT (HT-FT) process are presented. GC×GC data have been used to optimise or tune the HT-FT process by using a lab-scale micro-FT-reactor. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. New horizons for study of the cardiopulmonary and circulatory systems. [image reconstruction techniques

    NASA Technical Reports Server (NTRS)

    Wood, E. H.

    1976-01-01

    The paper discusses the development of computer-controlled three-dimensional reconstruction techniques designed to determine the dynamic changes in the true shape and dimensions of the epi- and endocardial surfaces of the heart, along with variable time base (stop-action to real-time) displays of the transmural distribution of the coronary microcirculation and the three-dimensional anatomy of the macrovasculature in all regions of the body throughout individual cardiac and/or respiratory cycles. A technique for reconstructing a cross section of the heart from multiplanar videoroentgenograms is outlined. The capability of high spatial and high temporal resolution scanning videodensitometry makes possible measurement of the appearance, mean transit and clearance of roentgen opaque substances in three-dimensional space through the myocardium with a degree of simultaneous anatomic and temporal resolution not obtainable by current isotope techniques. The distribution of a variety of selected chemical elements or biologic materials within a body portion can also be determined.

  11. Reinforcement Learning Trees

    PubMed Central

    Zhu, Ruoqing; Zeng, Donglin; Kosorok, Michael R.

    2015-01-01

    In this paper, we introduce a new type of tree-based method, reinforcement learning trees (RLT), which exhibits significantly improved performance over traditional methods such as random forests (Breiman, 2001) under high-dimensional settings. The innovations are three-fold. First, the new method implements reinforcement learning at each selection of a splitting variable during the tree construction processes. By splitting on the variable that brings the greatest future improvement in later splits, rather than choosing the one with largest marginal effect from the immediate split, the constructed tree utilizes the available samples in a more efficient way. Moreover, such an approach enables linear combination cuts at little extra computational cost. Second, we propose a variable muting procedure that progressively eliminates noise variables during the construction of each individual tree. The muting procedure also takes advantage of reinforcement learning and prevents noise variables from being considered in the search for splitting rules, so that towards terminal nodes, where the sample size is small, the splitting rules are still constructed from only strong variables. Last, we investigate asymptotic properties of the proposed method under basic assumptions and discuss rationale in general settings. PMID:26903687
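
    The reinforcement step can be caricatured in a few lines, assuming scikit-learn: at a node, a small embedded forest is fit on the node's data, its importance scores choose the splitting variable, and the weakest variables are muted for descendant nodes. This is only a schematic of the idea; the actual RLT procedure and its theory are in the paper.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def choose_split_variable(X_node, y_node, active, mute_frac=0.2, seed=0):
        """Pick the split variable at a node via an embedded forest; mute weak variables."""
        forest = RandomForestRegressor(n_estimators=50, max_depth=3, random_state=seed)
        forest.fit(X_node[:, active], y_node)
        imp = forest.feature_importances_
        order = np.argsort(imp)                      # ascending importance
        n_mute = int(mute_frac * len(active))
        muted = [active[i] for i in order[:n_mute]]  # drop these for descendant nodes
        best = active[order[-1]]                     # split on the most promising variable
        child_active = [v for v in active if v not in muted]
        return best, child_active
    ```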

  12. One- and Two-dimensional Solitary Wave States in the Nonlinear Kramers Equation with Movement Direction as a Variable

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Hidetsugu; Ishibashi, Kazuya

    2018-06-01

    We study self-propelled particles by direct numerical simulation of the nonlinear Kramers equation for self-propelled particles. In our previous paper, we studied self-propelled particles with velocity variables in one dimension. In this paper, we consider another model in which each particle exhibits directional motion. The movement direction is expressed with a variable ϕ. We show that one-dimensional solitary wave states appear in direct numerical simulations of the nonlinear Kramers equation in one- and two-dimensional systems, which is a generalization of our previous result. Furthermore, we find two-dimensionally localized states in the case that each self-propelled particle exhibits rotational motion. The center of mass of the two-dimensionally localized state exhibits circular motion, which implies collective rotating motion. Finally, we consider a simple one-dimensional model equation to qualitatively understand the formation of the solitary wave state.

  13. Low-Dimensional Statistics of Anatomical Variability via Compact Representation of Image Deformations.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2016-10-01

    Using image-based descriptors to investigate clinical hypotheses and therapeutic implications is challenging due to the notorious "curse of dimensionality" coupled with a small sample size. In this paper, we present a low-dimensional analysis of anatomical shape variability in the space of diffeomorphisms and demonstrate its benefits for clinical studies. To combat the high dimensionality of the deformation descriptors, we develop a probabilistic model of principal geodesic analysis in a bandlimited low-dimensional space that still captures the underlying variability of image data. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than models based on the high-dimensional state-of-the-art approaches such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA).

  14. Slow-fast stochastic diffusion dynamics and quasi-stationarity for diploid populations with varying size.

    PubMed

    Coron, Camille

    2016-01-01

    We are interested in the long-time behavior of a diploid population with sexual reproduction and randomly varying population size, characterized by its genotype composition at one bi-allelic locus. The population is modeled by a 3-dimensional birth-and-death process with competition, weak cooperation and Mendelian reproduction. This stochastic process is indexed by a scaling parameter K that goes to infinity, following a large population assumption. When the individual birth and natural death rates are of order K, the sequence of stochastic processes indexed by K converges toward a new slow-fast dynamics with variable population size. We indeed prove the convergence toward 0 of a fast variable giving the deviation of the population from quasi-Hardy-Weinberg equilibrium, while the sequence of slow variables giving the respective numbers of occurrences of each allele converges toward a 2-dimensional diffusion process that reaches (0,0) almost surely in finite time. The population size and the proportion of a given allele converge toward a Wright-Fisher diffusion with stochastically varying population size and diploid selection. We emphasize differences between haploid and diploid populations due to the stochastic variability of the population size. Using a non-trivial change of variables, we study the absorption of this diffusion and its long-time behavior conditioned on non-extinction. In particular we prove that this diffusion starting from any non-trivial state and conditioned on not hitting (0,0) admits a unique quasi-stationary distribution. We give numerical approximations of this quasi-stationary behavior in three biologically relevant cases: neutrality, overdominance, and separate niches.

  15. Selective Removal of Natural Occlusal Caries by Coupling Near-infrared Imaging with a CO2 Laser

    PubMed Central

    Tao, You-Chen; Fried, Daniel

    2011-01-01

    Laser removal of dental hard tissue can be combined with optical, spectral or acoustic feedback systems to selectively ablate dental caries and restorative materials. Near-infrared (NIR) imaging has considerable potential for the optical discrimination of sound and demineralized tissue. Last year we successfully demonstrated that near-IR images can be used to guide a CO2 laser ablation system for the selective removal of artificial caries lesions on smooth surfaces. The objective of this study was to test the hypothesis that two-dimensional near-infrared images of natural occlusal caries can be used to guide a CO2 laser for selective removal. Two-dimensional NIR images were acquired at 1310 nm from extracted human molar teeth with occlusal caries. Polarization sensitive optical coherence tomography (PS-OCT) was also used to acquire depth-resolved images of the lesion areas. An image processing module was developed to analyze the NIR imaging output and generate optical maps that were used to guide a CO2 laser to selectively remove the lesions at a uniform depth. Post-ablation NIR images were acquired to verify caries removal. Based on the analysis of the NIR images, caries lesions were selectively removed with a CO2 laser while sound tissues were conserved. However, the removal rate varied markedly with the severity of decay and multiple passes were required for caries removal. These initial results are promising but indicate that the selective removal of natural caries is more challenging than the selective removal of artificial lesions due to varying tooth geometry, the highly variable organic/mineral ratio in natural lesions and more complicated lesion structure. PMID:21909225

  16. Selective removal of natural occlusal caries by coupling near-infrared imaging with a CO2 laser

    NASA Astrophysics Data System (ADS)

    Tao, You-Chen; Fried, Daniel

    2008-02-01

    Laser removal of dental hard tissue can be combined with optical, spectral or acoustic feedback systems to selectively ablate dental caries and restorative materials. Near-infrared (NIR) imaging has considerable potential for the optical discrimination of sound and demineralized tissue. Last year we successfully demonstrated that near-IR images can be used to guide a CO2 laser ablation system for the selective removal of artificial caries lesions on smooth surfaces. The objective of this study was to test the hypothesis that two-dimensional near-infrared images of natural occlusal caries can be used to guide a CO2 laser for selective removal. Two-dimensional NIR images were acquired at 1310 nm from extracted human molar teeth with occlusal caries. Polarization sensitive optical coherence tomography (PS-OCT) was also used to acquire depth-resolved images of the lesion areas. An image processing module was developed to analyze the NIR imaging output and generate optical maps that were used to guide a CO2 laser to selectively remove the lesions at a uniform depth. Post-ablation NIR images were acquired to verify caries removal. Based on the analysis of the NIR images, caries lesions were selectively removed with a CO2 laser while sound tissues were conserved. However, the removal rate varied markedly with the severity of decay and multiple passes were required for caries removal. These initial results are promising but indicate that the selective removal of natural caries is more challenging than the selective removal of artificial lesions due to varying tooth geometry, the highly variable organic/mineral ratio in natural lesions and more complicated lesion structure.

  17. Selective Removal of Natural Occlusal Caries by Coupling Near-infrared Imaging with a CO(2) Laser.

    PubMed

    Tao, You-Chen; Fried, Daniel

    2008-03-01

    Laser removal of dental hard tissue can be combined with optical, spectral or acoustic feedback systems to selectively ablate dental caries and restorative materials. Near-infrared (NIR) imaging has considerable potential for the optical discrimination of sound and demineralized tissue. Last year we successfully demonstrated that near-IR images can be used to guide a CO(2) laser ablation system for the selective removal of artificial caries lesions on smooth surfaces. The objective of this study was to test the hypothesis that two-dimensional near-infrared images of natural occlusal caries can be used to guide a CO(2) laser for selective removal. Two-dimensional NIR images were acquired at 1310 nm from extracted human molar teeth with occlusal caries. Polarization sensitive optical coherence tomography (PS-OCT) was also used to acquire depth-resolved images of the lesion areas. An image processing module was developed to analyze the NIR imaging output and generate optical maps that were used to guide a CO(2) laser to selectively remove the lesions at a uniform depth. Post-ablation NIR images were acquired to verify caries removal. Based on the analysis of the NIR images, caries lesions were selectively removed with a CO(2) laser while sound tissues were conserved. However, the removal rate varied markedly with the severity of decay and multiple passes were required for caries removal. These initial results are promising but indicate that the selective removal of natural caries is more challenging than the selective removal of artificial lesions due to varying tooth geometry, the highly variable organic/mineral ratio in natural lesions and more complicated lesion structure.

  18. Prediction of Malaysian monthly GDP

    NASA Astrophysics Data System (ADS)

    Hin, Pooi Ah; Ching, Soo Huei; Yeing, Pan Wei

    2015-12-01

    The paper uses a method based on the multivariate power-normal distribution to predict the Malaysian Gross Domestic Product (GDP) for the next month. Letting r(t) be the vector consisting of the month-t values of m selected macroeconomic variables and GDP, we model the month-(t+1) GDP to be dependent on the present and l-1 past values r(t), r(t-1),…,r(t-l+1) via a conditional distribution which is derived from a [(m+1)l+1]-dimensional power-normal distribution. The 100(α/2)% and 100(1-α/2)% points of the conditional distribution may be used to form an out-of-sample prediction interval. This interval together with the mean of the conditional distribution may be used to predict the month-(t+1) GDP. The mean absolute percentage error (MAPE), estimated coverage probability and average length of the prediction interval are used as the criteria for selecting the suitable lag value l-1 and the subset from a pool of 17 macroeconomic variables. It is found that the better models are those with 2 ≤ l ≤ 3 that involve one or two of the macroeconomic variables given by Market Indicative Yield, Oil Prices, Exchange Rate and Import Trade.
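
    The sketch below illustrates the general idea of conditional prediction with an interval, using an ordinary multivariate normal as a simplified stand-in for the paper's power-normal distribution; the toy data, lag structure and variable roles are assumptions, not the study's.

    ```python
    # Simplified sketch: predict a response (e.g. next-month GDP) from lagged predictors
    # via the conditional mean of a fitted multivariate Gaussian, plus a prediction interval.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    Z = rng.normal(size=(300, 4))                        # toy data: 3 lagged predictors + GDP
    Z[:, 3] = 0.5 * Z[:, 0] - 0.3 * Z[:, 1] + 0.2 * rng.normal(size=300)

    mu, S = Z.mean(axis=0), np.cov(Z, rowvar=False)
    idx_x, idx_y = [0, 1, 2], 3
    Sxx, Sxy, Syy = S[np.ix_(idx_x, idx_x)], S[np.ix_(idx_x, [idx_y])], S[idx_y, idx_y]

    x_new = np.array([0.2, -0.1, 0.4])                   # current lagged observations
    w = np.linalg.solve(Sxx, Sxy).ravel()
    cond_mean = mu[idx_y] + w @ (x_new - mu[idx_x])
    cond_var = Syy - w @ Sxy.ravel()

    alpha = 0.05
    lo, hi = stats.norm.ppf([alpha / 2, 1 - alpha / 2], loc=cond_mean, scale=np.sqrt(cond_var))
    print(f"point prediction {cond_mean:.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
    ```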

  19. A public dataset of running biomechanics and the effects of running speed on lower extremity kinematics and kinetics

    PubMed Central

    Fukuchi, Claudiane A.; Duarte, Marcos

    2017-01-01

    Background The goals of this study were (1) to present the set of data evaluating running biomechanics (kinematics and kinetics), including data on running habits, demographics, and levels of muscle strength and flexibility made available at Figshare (DOI: 10.6084/m9.figshare.4543435); and (2) to examine the effect of running speed on selected gait-biomechanics variables related to both running injuries and running economy. Methods The lower-extremity kinematics and kinetics data of 28 regular runners were collected using a three-dimensional (3D) motion-capture system and an instrumented treadmill while the subjects ran at 2.5 m/s, 3.5 m/s, and 4.5 m/s wearing standard neutral shoes. Results A dataset comprising raw and processed kinematics and kinetics signals pertaining to this experiment is available in various file formats. In addition, a file of metadata, including demographics, running characteristics, foot-strike patterns, and muscle strength and flexibility measurements is provided. Overall, there was an effect of running speed on most of the gait-biomechanics variables selected for this study. However, the foot-strike patterns were not affected by running speed. Discussion Several applications of this dataset can be anticipated, including testing new methods of data reduction and variable selection; for educational purposes; and answering specific research questions. This last application was exemplified in the study’s second objective. PMID:28503379

  20. A public dataset of running biomechanics and the effects of running speed on lower extremity kinematics and kinetics.

    PubMed

    Fukuchi, Reginaldo K; Fukuchi, Claudiane A; Duarte, Marcos

    2017-01-01

    The goals of this study were (1) to present the set of data evaluating running biomechanics (kinematics and kinetics), including data on running habits, demographics, and levels of muscle strength and flexibility made available at Figshare (DOI: 10.6084/m9.figshare.4543435); and (2) to examine the effect of running speed on selected gait-biomechanics variables related to both running injuries and running economy. The lower-extremity kinematics and kinetics data of 28 regular runners were collected using a three-dimensional (3D) motion-capture system and an instrumented treadmill while the subjects ran at 2.5 m/s, 3.5 m/s, and 4.5 m/s wearing standard neutral shoes. A dataset comprising raw and processed kinematics and kinetics signals pertaining to this experiment is available in various file formats. In addition, a file of metadata, including demographics, running characteristics, foot-strike patterns, and muscle strength and flexibility measurements is provided. Overall, there was an effect of running speed on most of the gait-biomechanics variables selected for this study. However, the foot-strike patterns were not affected by running speed. Several applications of this dataset can be anticipated, including testing new methods of data reduction and variable selection; for educational purposes; and answering specific research questions. This last application was exemplified in the study's second objective.

  1. Multiple-output support vector machine regression with feature selection for arousal/valence space emotion assessment.

    PubMed

    Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A

    2014-01-01

    Human emotion recognition (HER) allows the assessment of an affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a high range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables or dimensions that measure distinct aspects of the emotion. One of the most common of such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled the dimensions in these spaces independently. In this paper, we study the effect of modelling the output dimensions simultaneously and show experimentally the advantages of modelling them jointly. We consider a multimodal approach by including features from the Electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple output regressor based on support vector machines. We also include a feature selection stage that is developed within an embedded approach known as Recursive Feature Elimination (RFE), proposed initially for SVM. The results show that several features can be eliminated using the multiple output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals that are most informative for discrimination in the arousal and valence space are the EEG, Electrooculogram/Electromyogram (EOG/EMG) and the Galvanic Skin Response (GSR).
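
    A minimal single-output sketch of SVM-based Recursive Feature Elimination is given below; scikit-learn's RFE with a linear SVR stands in for the paper's multiple-output support vector regressor, and the synthetic features are placeholders for the EEG and peripheral signals.

    ```python
    # Single-output sketch of SVM-based recursive feature elimination (RFE).
    import numpy as np
    from sklearn.svm import LinearSVR
    from sklearn.feature_selection import RFE

    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 30))                                  # e.g. EEG/EOG/EMG/GSR features
    y = 1.5 * X[:, 3] - X[:, 7] + rng.normal(scale=0.3, size=150)   # toy arousal score

    selector = RFE(LinearSVR(C=1.0, max_iter=10000), n_features_to_select=5, step=1)
    selector.fit(X, y)
    print("selected feature indices:", np.flatnonzero(selector.support_))
    ```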

  2. On-shell constrained M 2 variables with applications to mass measurements and topology disambiguation

    NASA Astrophysics Data System (ADS)

    Cho, Won Sang; Gainer, James S.; Kim, Doojin; Matchev, Konstantin T.; Moortgat, Filip; Pape, Luc; Park, Myeonghun

    2014-08-01

    We consider a class of on-shell constrained mass variables that are 3+1 dimensional generalizations of the Cambridge M T2 variable and that automatically incorporate various assumptions about the underlying event topology. The presence of additional on-shell constraints causes their kinematic distributions to exhibit sharper endpoints than the usual M T2 distribution. We study the mathematical properties of these new variables, e.g., the uniqueness of the solution selected by the minimization over the invisible particle 4-momenta. We then use this solution to reconstruct the masses of various particles along the decay chain. We propose several tests for validating the assumed event topology in missing energy events from new physics. The tests are able to determine: 1) whether the decays in the event are two-body or three-body, 2) if the decay is two-body, whether the intermediate resonances in the two decay chains are the same, and 3) the exact sequence in which the visible particles are emitted from each decay chain.

  3. Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization

    NASA Astrophysics Data System (ADS)

    Li, Li

    2018-03-01

    In order to extract targets from complex backgrounds more quickly and accurately, and to further improve the detection of defects, a method of dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization was proposed. Firstly, the method of single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the Arimoto entropy dual-threshold formulae were calculated recursively to eliminate redundant computation and reduce the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved by a chaotic sequence based on the tent map. The fast search for two optimal thresholds was achieved using the improved bee colony optimization algorithm, which considerably accelerates the search. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments targets more quickly and accurately, with a superior segmentation effect. It proves to be a fast and effective method for image segmentation.
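
    As a simplified illustration of dual-threshold selection by entropy maximization, the sketch below exhaustively searches for two thresholds that maximize a class-wise Shannon entropy sum; the paper instead uses the Arimoto entropy with recursively computed intermediate variables and a chaotic bee colony search to accelerate this step.

    ```python
    # Simplified dual-threshold selection: brute-force maximization of the summed
    # Shannon entropy of the three classes induced by two thresholds (Kapur-style).
    import numpy as np

    def class_entropy(p):
        w = p.sum()
        if w <= 0:
            return 0.0
        q = p[p > 0] / w
        return -np.sum(q * np.log(q))

    def dual_threshold(hist):
        p = hist / hist.sum()
        best, best_t = -np.inf, (0, 0)
        for t1 in range(1, 254):
            for t2 in range(t1 + 1, 255):
                h = class_entropy(p[:t1]) + class_entropy(p[t1:t2]) + class_entropy(p[t2:])
                if h > best:
                    best, best_t = h, (t1, t2)
        return best_t

    rng = np.random.default_rng(0)
    img = np.concatenate([rng.normal(60, 10, 4000), rng.normal(128, 12, 4000),
                          rng.normal(200, 10, 4000)]).clip(0, 255).astype(np.uint8)
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    print("selected thresholds:", dual_threshold(hist))
    ```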

  4. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    PubMed

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

    Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection and therefore a number of feature selection procedures have been developed. Regularisation approaches extend SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds rapidly and more precisely a global optimal solution. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of median number of features selected than Elastic Net SVM and often better predicted than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above on four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning parameters. The penalized SVM classification algorithms as well as fixed grid and interval search for finding appropriate tuning parameters were implemented in our freely available R package 'penalizedSVM'. We conclude that the Elastic SCAD SVM is a flexible and robust tool for classification and feature selection tasks for high-dimensional data such as microarray data sets.

  5. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data

    PubMed Central

    2011-01-01

    Background Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection and therefore a number of feature selection procedures have been developed. Regularisation approaches extend SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds rapidly and more precisely a global optimal solution. Results Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of median number of features selected than Elastic Net SVM and often better predicted than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above on four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. Conclusions The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning parameters. The penalized SVM classification algorithms as well as fixed grid and interval search for finding appropriate tuning parameters were implemented in our freely available R package 'penalizedSVM'. We conclude that the Elastic SCAD SVM is a flexible and robust tool for classification and feature selection tasks for high-dimensional data such as microarray data sets. PMID:21554689
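
    The Elastic SCAD SVM itself is distributed in the R package 'penalizedSVM'; as a loose Python analogue of penalty-driven feature selection in SVM classification, the sketch below fits an L1-penalized linear SVM whose zero coefficients drop features. The synthetic p >> n data and the penalty strength are assumptions for illustration only.

    ```python
    # Sketch of penalty-driven feature selection for SVM classification:
    # an L1-penalized linear SVM zeroes out coefficients of unselected features.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 500))                        # microarray-like p >> n data
    y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=120) > 0).astype(int)

    clf = make_pipeline(StandardScaler(),
                        LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=20000))
    clf.fit(X, y)
    coef = clf.named_steps["linearsvc"].coef_.ravel()
    print("number of selected features:", int(np.sum(coef != 0)))
    ```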

  6. An Efficient Variable Screening Method for Effective Surrogate Models for Reliability-Based Design Optimization

    DTIC Science & Technology

    2014-04-01

    surrogate model generation is difficult for high-dimensional problems, due to the curse of dimensionality. Variable screening methods have been...a variable screening model was developed for the quasi-molecular treatment of ion-atom collision [16]. In engineering, a confidence interval of...for high-level radioactive waste [18]. Moreover, the design sensitivity method can be extended to the variable screening method because vital

  7. Three dimensional empirical mode decomposition analysis apparatus, method and article of manufacture

    NASA Technical Reports Server (NTRS)

    Gloersen, Per (Inventor)

    2004-01-01

    An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.

  8. Influence plots for LASSO

    DOE PAGES

    Jang, Dae -Heung; Anderson-Cook, Christine Michaela

    2016-11-22

    With many predictors in regression, fitting the full model can induce multicollinearity problems. Least Absolute Shrinkage and Selection Operation (LASSO) is useful when the effects of many explanatory variables are sparse in a high-dimensional dataset. Influential points can have a disproportionate impact on the estimated values of model parameters. Here, this paper describes a new influence plot that can be used to increase understanding of the contributions of individual observations and the robustness of results. This can serve as a complement to other regression diagnostics techniques in the LASSO regression setting. Using this influence plot, we can find influential points and their impact on shrinkage of model parameters and model selection. Lastly, we provide two examples to illustrate the methods.

  9. Influence plots for LASSO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, Dae -Heung; Anderson-Cook, Christine Michaela

    With many predictors in regression, fitting the full model can induce multicollinearity problems. Least Absolute Shrinkage and Selection Operation (LASSO) is useful when the effects of many explanatory variables are sparse in a high-dimensional dataset. Influential points can have a disproportionate impact on the estimated values of model parameters. Here, this paper describes a new influence plot that can be used to increase understanding of the contributions of individual observations and the robustness of results. This can serve as a complement to other regression diagnostics techniques in the LASSO regression setting. Using this influence plot, we can find influential points and their impact on shrinkage of model parameters and model selection. Lastly, we provide two examples to illustrate the methods.
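
    A minimal sketch of the underlying idea (not the authors' exact influence plot): refit the LASSO with each observation left out and record how much the coefficient vector, and hence the selection, shifts.

    ```python
    # Leave-one-out influence on LASSO estimates: the change in the coefficient
    # vector when each observation is removed flags influential points.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 20))
    y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=80)
    y[0] += 15                                        # plant one influential observation

    full = Lasso(alpha=0.1).fit(X, y)
    influence = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        fit_i = Lasso(alpha=0.1).fit(X[mask], y[mask])
        influence[i] = np.linalg.norm(fit_i.coef_ - full.coef_)   # shift in shrinkage

    print("most influential observation:", int(np.argmax(influence)))  # expected: 0
    ```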

  10. Long-range prediction of Indian summer monsoon rainfall using data mining and statistical approaches

    NASA Astrophysics Data System (ADS)

    H, Vathsala; Koolagudi, Shashidhar G.

    2017-10-01

    This paper presents a hybrid model to better predict Indian summer monsoon rainfall. The algorithm considers suitable techniques for processing dense datasets. The proposed three-step algorithm comprises closed itemset generation-based association rule mining for feature selection, cluster membership for dimensionality reduction, and a simple logistic function for prediction. The application to predicting rainfall categories (flood, excess, normal, deficit, and drought) from 36 predictors consisting of land and ocean variables is presented. Results show good accuracy over the 37-year study period considered (1969-2005).
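
    The sketch below mimics the three-step structure with stand-ins only: a simple univariate filter replaces the association-rule feature selection, k-means cluster distances provide the reduced representation, and plain logistic regression is the final predictor; all data and settings are illustrative.

    ```python
    # Three-step pipeline sketch: feature selection -> cluster-based reduction -> classifier.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(37, 36))                   # 37 years x 36 land/ocean predictors
    y = rng.integers(0, 5, size=37)                 # flood/excess/normal/deficit/drought

    model = Pipeline([
        ("select", SelectKBest(f_classif, k=12)),
        ("reduce", KMeans(n_clusters=5, n_init=10, random_state=0)),  # transform -> cluster distances
        ("classify", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X, y)
    print("training accuracy:", round(model.score(X, y), 2))
    ```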

  11. Dimensional reduction for a SIR type model

    NASA Astrophysics Data System (ADS)

    Cahyono, Edi; Soeharyadi, Yudi; Mukhsar

    2018-03-01

    Epidemic phenomena are often modeled in the form of dynamical systems. Such models have also been used to model the spread of rumor, the spread of extreme ideology, and the dissemination of knowledge. Among the simplest is the SIR (susceptible, infected and recovered) model, a model that consists of three compartments, and hence three variables. The variables are functions of time which represent the sizes of the subpopulations, namely susceptible, infected and recovered. The sum of the three is assumed to be constant. Hence, the model is actually two-dimensional, sitting in a three-dimensional ambient space. This paper deals with the reduction of a SIR-type model into two variables in a two-dimensional ambient space to understand the geometry and dynamics better. The dynamics is studied, and the phase portrait is presented. The two-dimensional model preserves the equilibria and their stability. The model has been applied to knowledge dissemination, which has been the interest of knowledge management.
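
    A minimal sketch of the reduction described above: with S + I + R = N held constant, R is eliminated and the dynamics are integrated in the two remaining variables (parameter values are illustrative).

    ```python
    # Reduced two-variable SIR dynamics; the third compartment is recovered afterwards.
    import numpy as np
    from scipy.integrate import solve_ivp

    N, beta, gamma = 1.0, 0.8, 0.2      # assumed illustrative parameter values

    def sir_2d(t, x):
        S, I = x
        return [-beta * S * I / N, beta * S * I / N - gamma * I]

    sol = solve_ivp(sir_2d, (0, 60), [0.99, 0.01])
    S, I = sol.y
    R = N - S - I                        # eliminated variable, recovered from the constraint
    print(f"final sizes: S={S[-1]:.3f}, I={I[-1]:.3f}, R={R[-1]:.3f}")
    ```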

  12. On the use of transition matrix methods with extended ensembles.

    PubMed

    Escobedo, Fernando A; Abreu, Charlles R A

    2006-03-14

    Different extended ensemble schemes for non-Boltzmann sampling (NBS) of a selected reaction coordinate lambda were formulated so that they employ (i) "variable" sampling window schemes (that include the "successive umbrella sampling" method) to comprehensively explore the lambda domain and (ii) transition matrix methods to iteratively obtain the underlying free-energy eta landscape (or "importance" weights) associated with lambda. The connection between "acceptance ratio" and transition matrix methods was first established to form the basis of the approach for estimating eta(lambda). The validity and performance of the different NBS schemes were then assessed using as lambda coordinate the configurational energy of the Lennard-Jones fluid. For the cases studied, it was found that the convergence rate in the estimation of eta is little affected by the use of data from high-order transitions, while it is noticeably improved by the use of a broader window of sampling in the variable window methods. Finally, it is shown how an "elastic" window of sampling can be used to effectively enact (nonuniform) preferential sampling over the lambda domain, and how to stitch the weights from separate one-dimensional NBS runs to produce an eta surface over a two-dimensional domain.

  13. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    PubMed Central

    Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano

    2015-01-01

    As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on the nondominated sorting is used to choose the individuals to the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The result shows that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246

  14. Comparative Variable Temperature Studies of Polyamide II with a Benchtop Fourier Transform and a Miniature Handheld Near-Infrared Spectrometer Using 2D-COS and PCMW-2D Analysis.

    PubMed

    Unger, Miriam; Pfeifer, Frank; Siesler, Heinz W

    2016-07-01

    The main objective of this communication is to compare the performance of a miniaturized handheld near-infrared (NIR) spectrometer with a benchtop Fourier transform near-infrared (FT-NIR) spectrometer. Generally, NIR spectroscopy is an extremely powerful analytical tool to study hydrogen-bonding changes of amide functionalities in solid and liquid materials and therefore variable temperature NIR measurements of polyamide II (PAII) have been selected as a case study. The information content of the measurement data has been further enhanced by exploiting the potential of two-dimensional correlation spectroscopy (2D-COS) and the perturbation correlation moving window two-dimensional (PCMW2D) evaluation technique. The data provide valuable insights not only into the changes of the hydrogen-bonding structure and the recrystallization of the hydrocarbon segments of the investigated PAII but also in their sequential order. Furthermore, it has been demonstrated that the 2D-COS and PCMW2D results derived from the spectra measured with the miniaturized NIR instrument are equivalent to the information extracted from the data obtained with the high-performance FT-NIR instrument. © The Author(s) 2016.

  15. Data-driven clustering of rain events: microphysics information derived from macro-scale observations

    NASA Astrophysics Data System (ADS)

    Djallel Dilmi, Mohamed; Mallet, Cécile; Barthes, Laurent; Chazottes, Aymeric

    2017-04-01

    Rain time series records are generally studied using rainfall rate or accumulation parameters, which are estimated for a fixed duration (typically 1 min, 1 h or 1 day). In this study we use the concept of rain events. The aim of the first part of this paper is to establish a parsimonious characterization of rain events, using a minimal set of variables selected among those normally used for the characterization of these events. A methodology is proposed, based on the combined use of a genetic algorithm (GA) and self-organizing maps (SOMs). It can be advantageous to use an SOM, since it allows a high-dimensional data space to be mapped onto a two-dimensional space while preserving, in an unsupervised manner, most of the information contained in the initial space topology. The 2-D maps obtained in this way allow the relationships between variables to be determined and redundant variables to be removed, thus leading to a minimal subset of variables. We verify that such 2-D maps make it possible to determine the characteristics of all events, on the basis of only five features (the event duration, the peak rain rate, the rain event depth, the standard deviation of the rain rate event and the absolute rain rate variation of the order of 0.5). From this minimal subset of variables, hierarchical cluster analyses were carried out. We show that clustering into two classes allows the conventional convective and stratiform classes to be determined, whereas classification into five classes allows this convective-stratiform classification to be further refined. Finally, our study made it possible to reveal the presence of some specific relationships between these five classes and the microphysics of their associated rain events.
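
    As a hedged illustration of the final clustering step, the sketch below applies hierarchical (Ward) clustering to events described by the five retained features; the feature values are synthetic, not the authors' rain data.

    ```python
    # Hierarchical clustering of rain events described by five summary features.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import AgglomerativeClustering

    rng = np.random.default_rng(0)
    # columns: duration, peak rain rate, event depth, rain-rate std, |rain-rate variation|
    convective = rng.normal([30, 40, 15, 12, 8], 3, size=(50, 5))
    stratiform = rng.normal([180, 6, 10, 2, 1], 3, size=(50, 5))
    events = StandardScaler().fit_transform(np.vstack([convective, stratiform]))

    labels2 = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(events)
    labels5 = AgglomerativeClustering(n_clusters=5, linkage="ward").fit_predict(events)
    print("2-class sizes:", np.bincount(labels2), "| 5-class sizes:", np.bincount(labels5))
    ```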

  16. Effects of B1 inhomogeneity correction for three-dimensional variable flip angle T1 measurements in hip dGEMRIC at 3 T and 1.5 T.

    PubMed

    Siversson, Carl; Chan, Jenny; Tiderius, Carl-Johan; Mamisch, Tallal Charles; Jellus, Vladimir; Svensson, Jonas; Kim, Young-Jo

    2012-06-01

    Delayed gadolinium-enhanced MRI of cartilage is a technique for studying the development of osteoarthritis using quantitative T(1) measurements. Three-dimensional variable flip angle is a promising method for performing such measurements rapidly, by using two successive spoiled gradient echo sequences with different excitation pulse flip angles. However, the three-dimensional variable flip angle method is very sensitive to inhomogeneities in the transmitted B(1) field in vivo. In this study, a method for correcting for such inhomogeneities, using an additional B(1) mapping spin-echo sequence, was evaluated. Phantom studies concluded that three-dimensional variable flip angle with B(1) correction calculates accurate T(1) values also in areas with high B(1) deviation. Retrospective analysis of in vivo hip delayed gadolinium-enhanced MRI of cartilage data from 40 subjects showed the difference between three-dimensional variable flip angle with and without B(1) correction to be generally two to three times higher at 3 T than at 1.5 T. In conclusion, the B(1) variations should always be taken into account, both at 1.5 T and at 3 T. Copyright © 2011 Wiley-Liss, Inc.
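
    The sketch below shows the standard two-point variable flip angle T1 calculation, with the measured B1 scale applied to the nominal flip angles before solving the linearised spoiled gradient echo signal equation; the flip angles, TR and T1 value are illustrative, not the study's protocol.

    ```python
    # Two-point variable flip angle (VFA) T1 estimation with an optional B1 correction.
    import numpy as np

    def vfa_t1(s1, s2, a1_deg, a2_deg, tr_ms, b1_scale=1.0):
        a1, a2 = np.deg2rad(a1_deg) * b1_scale, np.deg2rad(a2_deg) * b1_scale
        x = np.array([s1 / np.tan(a1), s2 / np.tan(a2)])
        y = np.array([s1 / np.sin(a1), s2 / np.sin(a2)])
        e1 = (y[1] - y[0]) / (x[1] - x[0])          # slope of the linearised SPGR equation
        return -tr_ms / np.log(e1)

    # simulate signals for T1 = 500 ms, TR = 15 ms, true B1 scale 0.9
    tr, t1_true, b1 = 15.0, 500.0, 0.9
    def spgr(alpha_deg):
        a = np.deg2rad(alpha_deg) * b1
        e1 = np.exp(-tr / t1_true)
        return np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

    s_low, s_high = spgr(5), spgr(26)
    print("uncorrected T1:", round(vfa_t1(s_low, s_high, 5, 26, tr), 1), "ms")
    print("B1-corrected T1:", round(vfa_t1(s_low, s_high, 5, 26, tr, b1_scale=b1), 1), "ms")
    ```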

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dayman, Ken J; Ade, Brian J; Weber, Charles F

    High-dimensional, nonlinear function estimation using large datasets is a current area of interest in the machine learning community, and applications may be found throughout the analytical sciences, where ever-growing datasets are making more information available to the analyst. In this paper, we leverage the existing relevance vector machine, a sparse Bayesian version of the well-studied support vector machine, and expand the method to include integrated feature selection and automatic function shaping. These innovations produce an algorithm that is able to distinguish variables that are useful for making predictions of a response from variables that are unrelated or confusing. We test the technology using synthetic data, conduct initial performance studies, and develop a model capable of making position-independent predictions of the core-averaged burnup using a single specimen drawn randomly from a nuclear reactor core.

  18. Use of shape-from-shading to characterize mucosal topography in celiac disease videocapsule images

    PubMed Central

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2017-01-01

    AIM To use a computerized shape-from-shading technique to characterize the topography of the small intestinal mucosa. METHODS Videoclips comprised of 100-200 images each were obtained from the distal duodenum in 8 celiac and 8 control patients. Images with high texture were selected from each videoclip and projected from two to three dimensions by using grayscale pixel brightness as the Z-axis spatial variable. The resulting images for celiac patients were then ordered using the Marsh score to estimate the degree of villous atrophy, and compared with control data. RESULTS Topographic changes in celiac patient three-dimensional constructs were often more variable as compared to controls. The mean absolute derivative in elevation was 2.34 ± 0.35 brightness units for celiacs vs 1.95 ± 0.28 for controls (P = 0.014). The standard deviation of the derivative in elevation was 4.87 ± 0.35 brightness units for celiacs vs 4.47 ± 0.36 for controls (P = 0.023). Celiac patients with Marsh IIIC villous atrophy tended to have the largest topographic changes. Plotted in two dimensions, celiac data could be separated from controls with 80% sensitivity and specificity. CONCLUSION Use of shape-from-shading to construct three-dimensional projections approximating the actual spatial geometry of the small intestinal substrate is useful to observe features not readily apparent in two-dimensional videocapsule images. This method represents a potentially helpful adjunct to detect areas of pathology during videocapsule analysis. PMID:28744343
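
    A small sketch of the summary statistics described above, assuming pixel brightness is treated as elevation: compute the image gradient and report the mean and standard deviation of its magnitude (a random stand-in frame, not videocapsule data).

    ```python
    # Brightness-as-elevation topography statistics for a single grayscale frame.
    import numpy as np

    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(256, 256)).astype(float)   # stand-in videocapsule frame

    dz_dy, dz_dx = np.gradient(frame)               # derivatives of the brightness "elevation"
    slope = np.hypot(dz_dx, dz_dy)
    print("mean |derivative|:", round(slope.mean(), 2),
          "| std of derivative:", round(slope.std(), 2))
    ```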

  19. Visions of visualization aids: Design philosophy and experimental results

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    1990-01-01

    Aids for the visualization of high-dimensional scientific or other data must be designed. Simply casting multidimensional data into a two- or three-dimensional spatial metaphor does not guarantee that the presentation will provide insight or parsimonious description of the phenomena underlying the data. Indeed, the communication of the essential meaning of some multidimensional data may be obscured by presentation in a spatially distributed format. Useful visualization is generally based on pre-existing theoretical beliefs concerning the underlying phenomena which guide selection and formatting of the plotted variables. Two examples from chaotic dynamics are used to illustrate how a visualization may be an aid to insight. Two examples of displays to aid spatial maneuvering are described. The first, a perspective format for a commercial air traffic display, illustrates how geometric distortion may be introduced to insure that an operator can understand a depicted three-dimensional situation. The second, a display for planning small spacecraft maneuvers, illustrates how the complex counterintuitive character of orbital maneuvering may be made more tractable by removing higher-order nonlinear control dynamics, and allowing independent satisfaction of velocity and plume impingement constraints on orbital changes.

  20. Predicting Viral Infection From High-Dimensional Biomarker Trajectories

    PubMed Central

    Chen, Minhua; Zaas, Aimee; Woods, Christopher; Ginsburg, Geoffrey S.; Lucas, Joseph; Dunson, David; Carin, Lawrence

    2013-01-01

    There is often interest in predicting an individual’s latent health status based on high-dimensional biomarkers that vary over time. Motivated by time-course gene expression array data that we have collected in two influenza challenge studies performed with healthy human volunteers, we develop a novel time-aligned Bayesian dynamic factor analysis methodology. The time course trajectories in the gene expressions are related to a relatively low-dimensional vector of latent factors, which vary dynamically starting at the latent initiation time of infection. Using a nonparametric cure rate model for the latent initiation times, we allow selection of the genes in the viral response pathway, variability among individuals in infection times, and a subset of individuals who are not infected. As we demonstrate using held-out data, this statistical framework allows accurate predictions of infected individuals in advance of the development of clinical symptoms, without labeled data and even when the number of biomarkers vastly exceeds the number of individuals under study. Biological interpretation of several of the inferred pathways (factors) is provided. PMID:23704802

  1. Cubic map algebra functions for spatio-temporal analysis

    USGS Publications Warehouse

    Mennis, J.; Viger, R.; Tomlin, C.D.

    2005-01-01

    We propose an extension of map algebra to three dimensions for spatio-temporal data handling. This approach yields a new class of map algebra functions that we call "cube functions." Whereas conventional map algebra functions operate on data layers representing two-dimensional space, cube functions operate on data cubes representing two-dimensional space over a third-dimensional period of time. We describe the prototype implementation of a spatio-temporal data structure and selected cube function versions of conventional local, focal, and zonal map algebra functions. The utility of cube functions is demonstrated through a case study analyzing the spatio-temporal variability of remotely sensed, southeastern U.S. vegetation character over various land covers and during different El Niño/Southern Oscillation (ENSO) phases. Like conventional map algebra, the application of cube functions may demand significant data preprocessing when integrating diverse data sets, and is subject to limitations related to data storage and algorithm performance. Solutions to these issues include extending data compression and computing strategies for calculations on very large data volumes to spatio-temporal data handling.
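
    A minimal sketch of a focal "cube function": the same focal mean evaluated layer by layer (conventional map algebra) and over a 3x3x3 space-time neighbourhood of a data cube; the array sizes are arbitrary.

    ```python
    # Focal mean over a space-time data cube (time x rows x cols).
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    cube = rng.normal(size=(24, 100, 100))          # 24 time steps of a 100x100 raster

    # conventional (2-D) focal mean, applied layer by layer
    focal_2d = ndimage.uniform_filter(cube, size=(1, 3, 3))
    # cubic focal mean, averaging over the temporal neighbourhood as well
    focal_3d = ndimage.uniform_filter(cube, size=(3, 3, 3))

    print(focal_2d.shape, focal_3d.shape)
    ```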

  2. Method of fabricating free-form, high-aspect ratio components for high-current, high-speed microelectronics

    DOEpatents

    Maxwell, James L; Rose, Chris R; Black, Marcie R; Springer, Robert W

    2014-03-11

    Microelectronic structures and devices, and a method of fabricating a three-dimensional microelectronic structure, are provided, the method comprising passing a first precursor material for a selected three-dimensional microelectronic structure into a reaction chamber at temperatures sufficient to maintain said precursor material in a predominantly gaseous state; maintaining said reaction chamber under sufficient pressures to enhance formation of a first portion of said three-dimensional microelectronic structure; applying an electric field between an electrode and said microelectronic structure at a desired point under conditions whereat said first portion of a selected three-dimensional microelectronic structure is formed from said first precursor material; positionally adjusting either said formed three-dimensional microelectronic structure or said electrode whereby further controlled growth of said three-dimensional microelectronic structure occurs; passing a second precursor material for a selected three-dimensional microelectronic structure into a reaction chamber at temperatures sufficient to maintain said precursor material in a predominantly gaseous state; maintaining said reaction chamber under sufficient pressures whereby a second portion of said three-dimensional microelectronic structure formation is enhanced; applying an electric field between an electrode and said microelectronic structure at a desired point under conditions whereat said second portion of a selected three-dimensional microelectronic structure is formed from said second precursor material; and, positionally adjusting either said formed three-dimensional microelectronic structure or said electrode whereby further controlled growth of said three-dimensional microelectronic structure occurs.

  3. Some elements of a theory of multidimensional complex variables. I - General theory. II - Expansions of analytic functions and application to fluid flows

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1989-01-01

    The paper introduces a new theory of N-dimensional complex variables and analytic functions which, for N greater than 2, is both a direct generalization and a close analog of the theory of ordinary complex variables. The algebra in the present theory is a commutative ring, not a field. Functions of a three-dimensional variable were defined and the definition of the derivative then led to analytic functions.

  4. DENSITY-DEPENDENT FLOW IN ONE-DIMENSIONAL VARIABLY-SATURATED MEDIA

    EPA Science Inventory

    A one-dimensional finite element is developed to simulate density-dependent flow of saltwater in variably saturated media. The flow and solute equations were solved in a coupled mode (iterative), in a partially coupled mode (non-iterative), and in a completely decoupled mode. P...

  5. Probabilistic modeling of anatomical variability using a low dimensional parameterization of diffeomorphisms.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2017-10-01

    We present an efficient probabilistic model of anatomical variability in a linear space of initial velocities of diffeomorphic transformations and demonstrate its benefits in clinical studies of brain anatomy. To overcome the computational challenges of the high dimensional deformation-based descriptors, we develop a latent variable model for principal geodesic analysis (PGA) based on a low dimensional shape descriptor that effectively captures the intrinsic variability in a population. We define a novel shape prior that explicitly represents principal modes as a multivariate complex Gaussian distribution on the initial velocities in a bandlimited space. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than the state-of-the-art method such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA) that operate in the high dimensional image space. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Knee joint kinetics in response to multiple three-dimensional printed, customised foot orthoses for the treatment of medial compartment knee osteoarthritis.

    PubMed

    Allan, Richard; Woodburn, James; Telfer, Scott; Abbott, Mandy; Steultjens, Martijn Pm

    2017-06-01

    The knee adduction moment is consistently used as a surrogate measure of medial compartment loading. Foot orthoses are designed to reduce knee adduction moment via lateral wedging. The 'dose' of wedging required to optimally unload the affected compartment is unknown and variable between individuals. This study explores a personalised approach via three-dimensional printed foot orthotics to assess the biomechanical response when two design variables are altered: orthotic length and lateral wedging. Foot orthoses were created for 10 individuals with symptomatic medial knee osteoarthritis and 10 controls. Computer-aided design software was used to design four full and four three-quarter-length foot orthoses per participant each with lateral posting of 0° 'neutral', 5° rearfoot, 10° rearfoot and 5° forefoot/10° rearfoot. Three-dimensional printers were used to manufacture all foot orthoses. Three-dimensional gait analyses were performed and selected knee kinetics were analysed: first peak knee adduction moment, second peak knee adduction moment, first knee flexion moment and knee adduction moment impulse. Full-length foot orthoses provided greater reductions in first peak knee adduction moment (p = 0.038), second peak knee adduction moment (p = 0.018) and knee adduction moment impulse (p = 0.022) compared to three-quarter-length foot orthoses. Dose effect of lateral wedging was found for first peak knee adduction moment (p < 0.001), second peak knee adduction moment (p < 0.001) and knee adduction moment impulse (p < 0.001) indicating greater unloading for higher wedging angles. Significant interaction effects were found for foot orthosis length and participant group in second peak knee adduction moment (p = 0.028) and knee adduction moment impulse (p = 0.036). Significant interaction effects were found between orthotic length and wedging condition for second peak knee adduction moment (p = 0.002). No significant changes in first knee flexion moment were found. Individual heterogeneous responses to foot orthosis conditions were observed for first peak knee adduction moment, second peak knee adduction moment and knee adduction moment impulse. Biomechanical response is highly variable with personalised foot orthoses. Findings indicate that the tailoring of a personalised intervention could provide an additional benefit over standard interventions and that a three-dimensional printing approach to foot orthosis manufacturing is a viable alternative to the standard methods.

  7. Corrections to the Eckhaus' stability criterion for one-dimensional stationary structures

    NASA Astrophysics Data System (ADS)

    Malomed, B. A.; Staroselsky, I. E.; Konstantinov, A. B.

    1989-01-01

    Two amendments to the well-known Eckhaus stability criterion for small-amplitude non-linear structures generated by weak instability of a spatially uniform state of a non-equilibrium one-dimensional system against small perturbations with finite wavelengths are obtained. Firstly, we evaluate small corrections to the main Eckhaus term which, in contrast to that term, do not have a universal form. Comparison of those non-universal corrections with experimental or numerical results makes it possible to select a more relevant form of an effective nonlinear evolution equation. In particular, the comparison with such results for convective rolls and Taylor vortices gives arguments in favor of the Swift-Hohenberg equation. Secondly, we derive an analog of the Eckhaus criterion for systems degenerate in the sense that in an expansion of their non-linear parts in powers of dynamical variables, the second- and third-degree terms are absent.

  8. 1-D Photochemical Modeling of the Martian Atmosphere: Seasonal Variations

    NASA Astrophysics Data System (ADS)

    Boxe, C.; Emmanuel, S.; Hafsa, U.; Griffith, E.; Moore, J.; Tam, J.; Khan, I.; Cai, Z.; Bocolod, B.; Zhao, J.; Ahsan, S.; Tang, N.; Bartholomew, J.; Rafi, R.; Caltenco, K.; Smith, K.; Rivas, M.; Ditta, H.; Alawlaqi, H.; Rowley, N.; Khatim, F.; Ketema, N.; Strothers, J.; Diallo, I.; Owens, C.; Radosavljevic, J.; Austin, S. A.; Johnson, L. P.; Zavala-Gutierrez, R.; Breary, N.; Saint-Hilaire, D.; Skeete, D.; Stock, J.; Blue, S.; Gurung, D.; Salako, O.

    2016-12-01

    High school and undergraduate students, representing academic institutions throughout the USA's Tri-State Area (New York, New Jersey, Connecticut), utilize Caltech/JPL's one-dimensional atmospheric photochemical models. These sophisticated models, built over the course of the last four decades, describe all planetary bodies in our Solar System and selected extrasolar planets. Specifically, students employed the Martian one-dimensional photochemical model to assess the seasonal variability of molecules in its atmosphere. Students learned the overall model structure, ran a baseline simulation, and varied parameters (e.g., obliquity, orbital eccentricity) that affect the incoming solar radiation on Mars and the temperature and pressure induced by seasonal variations. Students also gained a 'real-world' experience that exemplifies the level of coding competency and innovativeness needed to build an environment that can simulate observations and produce forecasts. Such skills permeate STEM-related occupations that model systems and/or predict how those systems will behave.

  9. Developments in Post-marketing Comparative Effectiveness Research

    PubMed Central

    S, Schneeweiss

    2010-01-01

    Physicians and insurers need to weigh the effectiveness of new drugs against existing therapeutics in routine care to make decisions about treatment and formularies. Because Food and Drug Administration (FDA) approval of most new drugs requires demonstrating efficacy and safety against placebo, there is limited interest by manufacturers in conducting such head-to-head trials. Comparative effectiveness research seeks to provide head-to-head comparisons of treatment outcomes in routine care. Health-care utilization databases record drug use and selected health outcomes for large populations in a timely way and reflect routine care, and therefore may be the preferred data source for comparative effectiveness research. Confounding caused by selective prescribing based on indication, severity, and prognosis threatens the validity of non-randomized database studies that often have limited details on clinical information. Several recent developments may bring the field closer to acceptable validity, including approaches that exploit the concepts of proxy variables using high-dimensional propensity scores, within-patient variation of drug exposure using crossover designs, and between-provider variation in prescribing preference using instrumental variable (IV) analyses. PMID:17554243

  10. Developments in post-marketing comparative effectiveness research.

    PubMed

    Schneeweiss, S

    2007-08-01

    Physicians and insurers need to weigh the effectiveness of new drugs against existing therapeutics in routine care to make decisions about treatment and formularies. Because Food and Drug Administration (FDA) approval of most new drugs requires demonstrating efficacy and safety against placebo, there is limited interest by manufacturers in conducting such head-to-head trials. Comparative effectiveness research seeks to provide head-to-head comparisons of treatment outcomes in routine care. Health-care utilization databases record drug use and selected health outcomes for large populations in a timely way and reflect routine care, and therefore may be the preferred data source for comparative effectiveness research. Confounding caused by selective prescribing based on indication, severity, and prognosis threatens the validity of non-randomized database studies that often have limited details on clinical information. Several recent developments may bring the field closer to acceptable validity, including approaches that exploit the concepts of proxy variables using high-dimensional propensity scores, within-patient variation of drug exposure using crossover designs, and between-provider variation in prescribing preference using instrumental variable (IV) analyses.

  11. Effect of Dimensional Salience and Salience of Variability on Problem Solving: A Developmental Study

    ERIC Educational Resources Information Center

    Zelniker, Tamar; And Others

    1975-01-01

    A matching task was presented to 120 subjects from 6 to 20 years of age to investigate the relative influence of dimensional salience and salience of variability on problem solving. The task included four dimensions: form, color, number, and position. (LLK)

  12. Estimation of effective hydrologic properties of soils from observations of vegetation density

    NASA Technical Reports Server (NTRS)

    Tellers, T. E.; Eagleson, P. S.

    1980-01-01

    A one-dimensional model of the annual water balance is reviewed. Improvements are made in the method of calculating the bare-soil component of evaporation and in the way surface retention is handled. A natural selection hypothesis, which specifies the equilibrium vegetation density for a given water-limited climate-soil system, is verified through comparisons with observed data and is used to infer effective soil hydrologic properties from observed vegetation density. Comparison of CDFs of annual basin yield derived using these soil properties with observed CDFs provides verification of the soil-selection procedure. This method of parameterizing the land surface is useful with global circulation models, enabling them to account for both the nonlinearity in the relationship between soil moisture flux and soil moisture concentration, and the variability of soil properties from place to place over the Earth's surface.

  13. A method for analytically generating three-dimensional isocomfort workspace based on perceived discomfort.

    PubMed

    Kee, Dohyung

    2002-01-01

    The purpose of this study was to develop a new method for analytically generating a three-dimensional isocomfort workspace for the upper extremities using robot kinematics. Subjective perceived-discomfort scores for varying postures used to manipulate four types of controls were used. Fifteen healthy male subjects participated in the experiment. The subjects were asked to hold the given control-manipulation postures for 60 s in the seated position, and to rate their perceived discomfort during the subsequent 60-s rest using magnitude estimation. Postures of the upper extremities set by shoulder and elbow motions, types of controls, and left versus right hand were selected as experimental variables, arranged in an L32 orthogonal array. The results showed that shoulder flexion and adduction-abduction, elbow flexion, and type of control significantly affected perceived discomfort for postures operating controls, but the hand used did not. Depending on the type of control, four regression models predicting perceived discomfort were presented. Using the models, a sweeping algorithm to generate the three-dimensional isocomfort workspace was developed, in which robot kinematics was employed to describe the translational relationships between the upper arm and the lower arm/hand. It is expected that the isocomfort workspace can be used as a valuable design guideline when ergonomically designing three-dimensional workplaces.
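
    The following is a toy sketch of the sweeping idea: sweep the joint space, predict discomfort with a regression model, and keep the hand positions whose predicted discomfort stays below an isocomfort level. The discomfort coefficients, segment lengths, and planar two-link kinematics are hypothetical placeholders, not the models or kinematics reported in the study.

```python
# Toy sketch of a sweeping algorithm for an isocomfort workspace.
# The discomfort regression coefficients below are hypothetical placeholders,
# not the study's models, and the kinematics are a simple planar 2-link arm.
import numpy as np

UPPER_ARM, FOREARM = 0.30, 0.35          # segment lengths in metres (assumed)

def discomfort(shoulder_deg, elbow_deg):
    # Hypothetical linear model: discomfort grows with shoulder and elbow flexion.
    return 1.0 + 0.030 * shoulder_deg + 0.015 * elbow_deg

def hand_position(shoulder_deg, elbow_deg):
    s, e = np.radians(shoulder_deg), np.radians(elbow_deg)
    x = UPPER_ARM * np.cos(s) + FOREARM * np.cos(s + e)
    y = UPPER_ARM * np.sin(s) + FOREARM * np.sin(s + e)
    return x, y

# Sweep the joint space and keep hand positions below a discomfort threshold.
iso_points = []
for shoulder in np.arange(0, 181, 5):
    for elbow in np.arange(0, 151, 5):
        if discomfort(shoulder, elbow) <= 3.0:        # isocomfort level (assumed)
            iso_points.append(hand_position(shoulder, elbow))

print(f"{len(iso_points)} reachable points inside the isocomfort boundary")
```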

  14. Robust learning for optimal treatment decision with NP-dimensionality

    PubMed Central

    Shi, Chengchun; Song, Rui; Lu, Wenbin

    2016-01-01

    In order to identify important variables involved in making optimal treatment decisions, Lu, Zhang and Zeng (2013) proposed a penalized least squares regression framework for a fixed number of predictors, which is robust against misspecification of the conditional mean model. Two problems arise: (i) in a world of explosively big data, effective methods are needed to handle ultra-high dimensional data sets, for example, when the dimension of the predictors is of non-polynomial (NP) order in the sample size; (ii) both the propensity score and conditional mean models need to be estimated from data under NP dimensionality. In this paper, we propose a robust procedure for estimating the optimal treatment regime under NP dimensionality. In both steps, penalized regressions are employed with a non-concave penalty function, where the conditional mean model of the response given the predictors may be misspecified. The asymptotic properties, such as weak oracle properties, selection consistency and oracle distributions, of the proposed estimators are investigated. In addition, we study the limiting distribution of the estimated value function for the obtained optimal treatment regime. The empirical performance of the proposed estimation method is evaluated by simulations and an application to a depression dataset from the STAR*D study. PMID:28781717

  15. Demonstration of new PCSD capabilities

    NASA Technical Reports Server (NTRS)

    Gough, M.

    1986-01-01

    The new, more flexible and more friendly graphics capabilities to be available in later releases of the Pilot Climate Data System were demonstrated. The LIMS-LAMAT data set was chosen to illustrate these new capabilities. Pseudocolor and animation were used to represent the third and fourth dimensions, expanding the analytical capabilities available through the traditional two-dimensional x-y plot. In the new version, variables for the axes are chosen by scrolling through viable selections. This scrolling feature is a function of the new user interface customization. The new graphics are extremely user friendly and should free the scientist to look at data and converse with it, without doing any programming. The system is designed to rapidly plot any variable versus any other variable and animate by any variable. Any one plot in itself is not extraordinary; however, the fact that a user can generate the plots instead of a programmer distinguishes the graphics capabilities of the PCDS from other software packages. In addition, with the new CDF design, the system will become more generic, and the new graphics will become much more rigorous in the area of correlative studies.

  16. Conditional screening for ultra-high dimensional covariates with survival outcomes

    PubMed Central

    Hong, Hyokyoung G.; Li, Yi

    2017-01-01

    Identifying important biomarkers that are predictive of cancer patients' prognosis is key to gaining better insight into the biological influences on the disease and has become a critical component of precision medicine. The emergence of large-scale biomedical survival studies, which typically involve an excessive number of biomarkers, has created high demand for efficient screening tools for selecting predictive biomarkers. The vast number of biomarkers defies any existing variable selection method via regularization. The recently developed variable screening methods, though powerful in many practical settings, fail to incorporate prior information on the importance of each biomarker and are less powerful in detecting marginally weak but jointly important signals. We propose a new conditional screening method for survival outcome data that computes the marginal contribution of each biomarker given a priori known biological information. This is based on the premise that some biomarkers are known to be associated with disease outcomes a priori. Our method possesses the sure screening property and a vanishing false selection rate. The utility of the proposal is further confirmed with extensive simulation studies and analysis of a diffuse large B-cell lymphoma dataset. We are pleased to dedicate this work to Jack Kalbfleisch, who has made instrumental contributions to the development of modern methods of analyzing survival data. PMID:27933468
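
    A minimal sketch of the conditional-screening idea, assuming the lifelines package is available: each candidate biomarker is ranked by its Wald statistic in a Cox model that already contains the a priori known covariates, so marginally weak but conditionally important signals can surface. The data, the "known" biomarker, and the ranking rule are illustrative stand-ins, not the authors' exact utility.

```python
# Sketch of conditional screening for survival outcomes: rank each candidate
# biomarker by its contribution *given* a set of a priori known covariates.
# Uses the lifelines package; data below are synthetic placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n, p = 300, 50
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"bm{j}" for j in range(p)])
risk = 0.8 * X["bm0"] + 0.8 * X["bm1"] + 0.6 * X["bm2"]       # bm0 is "known"
time = rng.exponential(np.exp(-risk))
event = rng.binomial(1, 0.8, size=n).astype(bool)
df = X.assign(time=time, event=event)

known = ["bm0"]                                   # a priori known biomarker(s)
candidates = [c for c in X.columns if c not in known]

def conditional_utility(j):
    cph = CoxPHFitter(penalizer=0.01)
    cph.fit(df[known + [j, "time", "event"]], duration_col="time", event_col="event")
    return abs(cph.summary.loc[j, "z"])           # Wald statistic given the known set

ranking = sorted(candidates, key=conditional_utility, reverse=True)
print("top screened biomarkers:", ranking[:5])
```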

  17. On using surface-source downhole-receiver logging to determine seismic slownesses

    USGS Publications Warehouse

    Boore, D.M.; Thompson, E.M.

    2007-01-01

    We present a method to solve for slowness models from surface-source downhole-receiver seismic travel-times. The method estimates the slownesses in a single inversion of the travel-times from all receiver depths and accounts for refractions at layer boundaries. The number and location of layer interfaces in the model can be selected based on lithologic changes or linear trends in the travel-time data. The interfaces based on linear trends in the data can be picked manually or by an automated algorithm. We illustrate the method with example sites for which geologic descriptions of the subsurface materials and independent slowness measurements are available. At each site we present slowness models that result from different interpretations of the data. The examples were carefully selected to address the reliability of interface-selection and the ability of the inversion to identify thin layers, large slowness contrasts, and slowness gradients. Additionally, we compare the models in terms of ground-motion amplification. These plots illustrate the sensitivity of site amplifications to the uncertainties in the slowness model. We show that one-dimensional site amplifications are insensitive to thin layers in the slowness models; although slowness is variable over short ranges of depth, this variability has little effect on ground-motion amplification at frequencies up to 5 Hz.

  18. Assessment of Social Vulnerability Identification at Local Level around Merapi Volcano - A Self Organizing Map Approach

    NASA Astrophysics Data System (ADS)

    Lee, S.; Maharani, Y. N.; Ki, S. J.

    2015-12-01

    The application of the Self-Organizing Map (SOM) to analyze social vulnerability and recognize the resilience within sites is a challenging task. The aim of this study is to propose a computational method to group the sites according to their similarity and to determine the most relevant variables characterizing the social vulnerability in each cluster. For this purpose, SOM is considered an effective platform for the analysis of high-dimensional data. By considering the cluster structure, the characteristics of social vulnerability underlying the site identification can be fully understood. In this study, the social vulnerability measure is constructed from 17 variables, i.e., 12 independent variables representing socio-economic concepts and 5 dependent variables representing the damage and losses due to the Merapi eruption in 2010. These variables collectively represent the local situation of the study area, based on fieldwork conducted in September 2013. By using both independent and dependent variables, we can identify whether the social vulnerability is reflected in the actual situation, in this case the 2010 Merapi eruption. However, social vulnerability analysis in local communities involves a number of variables representing their socio-economic condition, and some of the variables employed in this study may be more or less redundant. Therefore, SOM is used to reduce the redundant variable(s) by selecting representative variables using the component planes and the correlation coefficients between variables in order to obtain an effective sample size. The selected dataset was then effectively clustered according to similarity. Finally, this approach can produce reliable clustering, recognize the most significant variables, and could be useful for social vulnerability assessment, especially for stakeholders as decision makers. This research was supported by a grant 'Development of Advanced Volcanic Disaster Response System considering Potential Volcanic Risk around Korea' [MPSS-NH-2015-81] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea. Keywords: Self-organizing map, Component Planes, Correlation coefficient, Cluster analysis, Sites identification, Social vulnerability, Merapi eruption 2010
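
    A minimal sketch of the workflow, assuming the minisom package: train a SOM on the vulnerability indicators, extract the component planes, flag strongly correlated (redundant) variable pairs, and assign each site to its best-matching unit. The 17-variable data here are synthetic placeholders for the field data.

```python
# Sketch of the SOM-based workflow: train a map on vulnerability indicators,
# inspect component planes, and flag redundant variables by correlation.
# Assumes the `minisom` package; the 17-variable data here are synthetic.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(2)
n_sites, n_vars = 120, 17
data = rng.normal(size=(n_sites, n_vars))
data[:, 1] = data[:, 0] + 0.1 * rng.normal(size=n_sites)   # deliberately redundant pair

som = MiniSom(8, 8, n_vars, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(data, 5000)

# Component planes: one 8x8 weight map per variable (for visual comparison).
planes = som.get_weights()            # shape (8, 8, n_vars)

# Flag highly correlated variable pairs as candidates for removal.
corr = np.corrcoef(data, rowvar=False)
redundant = [(i, j) for i in range(n_vars) for j in range(i + 1, n_vars)
             if abs(corr[i, j]) > 0.9]
print("redundant variable pairs:", redundant)

# Cluster sites by their best-matching unit on the trained map.
bmus = [som.winner(x) for x in data]
```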

  19. Multivariate Analysis of Genotype-Phenotype Association.

    PubMed

    Mitteroecker, Philipp; Cheverud, James M; Pavlicev, Mihaela

    2016-04-01

    With the advent of modern imaging and measurement technology, complex phenotypes are increasingly represented by large numbers of measurements, which may not bear biological meaning one by one. For such multivariate phenotypes, studying the pairwise associations between all measurements and all alleles is highly inefficient and prevents insight into the genetic pattern underlying the observed phenotypes. We present a new method for identifying patterns of allelic variation (genetic latent variables) that are maximally associated, in terms of effect size, with patterns of phenotypic variation (phenotypic latent variables). This multivariate genotype-phenotype mapping (MGP) separates phenotypic features under strong genetic control from less genetically determined features and thus permits an analysis of the multivariate structure of genotype-phenotype association, including its dimensionality and the clustering of genetic and phenotypic variables within this association. Different variants of MGP maximize different measures of genotype-phenotype association: genetic effect, genetic variance, or heritability. In an application to a mouse sample, scored for 353 SNPs and 11 phenotypic traits, the first dimension of genetic and phenotypic latent variables accounted for >70% of genetic variation present in all 11 measurements; 43% of variation in this phenotypic pattern was explained by the corresponding genetic latent variable. The first three dimensions together sufficed to account for almost 90% of genetic variation in the measurements and for all the interpretable genotype-phenotype association. Each dimension can be tested as a whole against the hypothesis of no association, thereby reducing the number of statistical tests from 7766 to 3, the maximal number of meaningful independent tests. Important alleles can be selected based on their effect size (additive or nonadditive effect on the phenotypic latent variable). This low dimensionality of the genotype-phenotype map has important consequences for gene identification and may shed light on the evolvability of organisms. Copyright © 2016 by the Genetics Society of America.
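
    As an illustration of the general latent-variable idea (paired genetic and phenotypic scores), the sketch below uses scikit-learn's PLS rather than the MGP estimators described in the paper; the SNP and trait matrices are synthetic and the dimensions, sample size, and effect structure are invented for the example.

```python
# Sketch of a latent-variable genotype-phenotype association using PLS.
# This illustrates the general idea (paired latent variables maximizing
# covariance), not the exact MGP criteria of the paper; data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

rng = np.random.default_rng(3)
n, n_snps, n_traits = 400, 353, 11
G = rng.binomial(2, 0.3, size=(n, n_snps)).astype(float)      # SNP dosages
latent = G[:, :5].sum(axis=1)                                  # hidden genetic pattern
P = np.outer(latent, rng.normal(size=n_traits)) + rng.normal(size=(n, n_traits))

pls = PLSCanonical(n_components=3).fit(G, P)
g_scores, p_scores = pls.transform(G, P)                       # latent variables

# Association strength of each latent dimension (squared correlation).
for k in range(3):
    r = np.corrcoef(g_scores[:, k], p_scores[:, k])[0, 1]
    print(f"dimension {k + 1}: r^2 = {r**2:.2f}")
```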

  20. Random Survival Forest in practice: a method for modelling complex metabolomics data in time to event analysis.

    PubMed

    Dietrich, Stefan; Floegel, Anna; Troll, Martina; Kühn, Tilman; Rathmann, Wolfgang; Peters, Anette; Sookthai, Disorn; von Bergen, Martin; Kaaks, Rudolf; Adamski, Jerzy; Prehn, Cornelia; Boeing, Heiner; Schulze, Matthias B; Illig, Thomas; Pischon, Tobias; Knüppel, Sven; Wang-Sattler, Rui; Drogan, Dagmar

    2016-10-01

    The application of metabolomics in prospective cohort studies is statistically challenging. Given the importance of appropriate statistical methods for the selection of disease-associated metabolites in highly correlated complex data, we combined random survival forest (RSF) with an automated backward elimination procedure that addresses such issues. Our RSF approach was illustrated with data from the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam study, with concentrations of 127 serum metabolites as exposure variables and time to development of type 2 diabetes mellitus (T2D) as the outcome variable. An analysis of this data set based on Cox regression with a stepwise selection method was recently published. The methodological comparison (RSF versus Cox regression) was replicated in two independent cohorts. Finally, R code implementing the metabolite selection procedure within the RSF syntax is provided. The application of the RSF approach in EPIC-Potsdam resulted in the identification of 16 incident T2D-associated metabolites, which slightly improved prediction of T2D when used in addition to traditional T2D risk factors and also when used together with classical biomarkers. The identified metabolites partly agreed with previous findings using Cox regression, though RSF selected a higher number of highly correlated metabolites. The RSF method appears to be a promising approach for identification of disease-associated variables in complex data with time to event as the outcome. The demonstrated RSF approach provides findings comparable to the generally used Cox regression, but also addresses the problem of multicollinearity and is suitable for high-dimensional data. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
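
    The paper provides R code; the sketch below is a simplified Python analogue assuming scikit-survival is available: a random survival forest is refit while the least important metabolite (by permutation importance) is eliminated, a crude stand-in for the automated backward elimination described above. The data are synthetic stand-ins for metabolite concentrations.

```python
# Sketch of RSF-based variable selection with a simplified backward elimination,
# using scikit-survival (the study provides R code; this is a Python analogue).
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
n, p = 300, 20
X = rng.normal(size=(n, p))
risk = 0.7 * X[:, 0] + 0.7 * X[:, 1]
time = rng.exponential(np.exp(-risk))
event = rng.binomial(1, 0.7, size=n).astype(bool)
y = Surv.from_arrays(event=event, time=time)

features = list(range(p))
while len(features) > 5:                       # crude stopping rule for illustration
    rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=15,
                               random_state=0, n_jobs=-1).fit(X[:, features], y)
    imp = permutation_importance(rsf, X[:, features], y, n_repeats=3,
                                 random_state=0).importances_mean
    worst = int(np.argmin(imp))
    if imp[worst] > 0:                          # stop when every variable still helps
        break
    features.pop(worst)

print("selected metabolite indices:", features)
```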

  1. Computational design of the basic dynamical processes of the UCLA general circulation model

    NASA Technical Reports Server (NTRS)

    Arakawa, A.; Lamb, V. R.

    1977-01-01

    The 12-layer UCLA general circulation model encompassing the troposphere and stratosphere (and superjacent 'sponge layer') is described. Prognostic variables are: surface pressure, horizontal velocity, temperature, water vapor and ozone in each layer, planetary boundary layer (PBL) depth, temperature, moisture and momentum discontinuities at the PBL top, ground temperature and water storage, and mass of snow on the ground. The selection of spatial finite-difference schemes for homogeneous incompressible flow (with and without a free surface) and for nonlinear two-dimensional nondivergent flow, together with enstrophy-conserving schemes, momentum advection schemes, vertical and horizontal difference schemes, and time-differencing schemes, is discussed.

  2. Variance-based interaction index measuring heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom

    2016-06-01

    This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of the first-order sensitivity indices of Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower-dimensional functions which may then be analyzed separately.
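
    For orientation, the sketch below shows the classical variance-based baseline that the proposed index is compared against: first-order (S1) and total-effect (ST) Sobol' indices estimated with a pick-freeze design, where the gap ST - S1 flags interaction. This is the standard Saltelli/Jansen estimator, not the heteroscedasticity-based index of the paper, and the test function is invented.

```python
# Standard variance-based baseline: first-order (S1) and total (ST) Sobol'
# indices via a pick-freeze design; ST - S1 flags interaction effects.
import numpy as np

def model(x):                                  # example function with interaction
    return x[:, 0] + 2.0 * x[:, 1] + 3.0 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(5)
N, n = 20000, 3
A = rng.uniform(size=(N, n))
B = rng.uniform(size=(N, n))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(n):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                        # replace column i with the B sample
    fABi = model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var       # Saltelli (2010) first-order estimator
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var # Jansen total-effect estimator
    print(f"x{i + 1}: S1 = {S1:.2f}, ST = {ST:.2f}, interaction gap = {ST - S1:.2f}")
```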

  3. TPSLVM: a dimensionality reduction algorithm based on thin plate splines.

    PubMed

    Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming

    2014-10-01

    Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.

  4. Boosted structured additive regression for Escherichia coli fed-batch fermentation modeling.

    PubMed

    Melcher, Michael; Scharl, Theresa; Luchner, Markus; Striedner, Gerald; Leisch, Friedrich

    2017-02-01

    The quality of biopharmaceuticals and patients' safety are of highest priority, and there are tremendous efforts to replace empirical production process designs by knowledge-based approaches. The main challenge in this context is that real-time access to process variables related to product quality and quantity is severely limited. To date, comprehensive on- and offline monitoring platforms are used to generate process data sets that allow for the development of mechanistic and/or data-driven models for real-time prediction of these important quantities. The ultimate goal is to implement model-based feedback control loops that facilitate online control of product quality. In this contribution, we explore structured additive regression (STAR) models in combination with boosting as a variable selection tool for modeling the cell dry mass, product concentration, and optical density on the basis of online available process variables and two-dimensional fluorescence spectroscopic data. STAR models are powerful extensions of linear models allowing for the inclusion of smooth effects or interactions between predictors. Boosting constructs the final model in a stepwise manner and provides a variable importance measure via predictor selection frequencies. Our results show that the cell dry mass can be modeled with a relative error of about ±3%, the optical density with ±6%, the soluble protein with ±16%, and the insoluble product with an accuracy of ±12%. Biotechnol. Bioeng. 2017;114: 321-334. © 2016 Wiley Periodicals, Inc.
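
    Component-wise boosting of this kind is typically run in R (e.g., with mboost); the sketch below is a minimal numpy illustration of the stepwise mechanism and of selection frequencies as an importance measure. It uses simple univariate linear base learners instead of the smooth STAR base learners, and the data are synthetic.

```python
# Minimal sketch of component-wise L2 boosting with selection frequencies,
# in the spirit of boosted additive models (base learners here are plain
# univariate linear fits rather than smooth STAR components).
import numpy as np

rng = np.random.default_rng(6)
n, p = 300, 20
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)

nu, n_steps = 0.1, 200                         # learning rate and boosting steps
coef = np.zeros(p)
offset = y.mean()
resid = y - offset
selected = np.zeros(p, dtype=int)

for _ in range(n_steps):
    # Fit every univariate base learner to the current residuals ...
    betas = X.T @ resid / (X ** 2).sum(axis=0)
    sse = ((resid[:, None] - X * betas) ** 2).sum(axis=0)
    j = int(np.argmin(sse))                    # ... and update only the best one
    coef[j] += nu * betas[j]
    resid -= nu * betas[j] * X[:, j]
    selected[j] += 1

freq = selected / n_steps                      # importance via selection frequency
print("selection frequencies:", np.round(freq, 2))
```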

  5. Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition

    PubMed Central

    Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao

    2017-01-01

    Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Second, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize speeded-up robust features (SURF) to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling variation, etc. Third, based on the manifold perception characteristic of HVS, isometric mapping, a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, the support vector machine, is utilized to realize fault diagnosis. Verification data were collected from the Case Western Reserve University Bearing Data Center, and the experimental results indicate that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
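
    A heavily simplified sketch of the pipeline, assuming scikit-learn: recurrence plots are computed from delay-embedded signals and fed directly into Isomap and an SVM, with the SURF feature-extraction stage omitted. Signals, embedding settings, and class structure are synthetic placeholders, not the Case Western bearing data.

```python
# Simplified sketch: recurrence plot -> low-dimensional embedding (Isomap) -> SVM.
# The SURF step is omitted; recurrence plots are used directly as feature vectors.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

def recurrence_plot(signal, dim=3, tau=2, eps=0.5):
    # Time-delay embedding followed by thresholded pairwise distances.
    m = len(signal) - (dim - 1) * tau
    emb = np.column_stack([signal[i * tau:i * tau + m] for i in range(dim)])
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (dist < eps).astype(float)

def make_signal(fault):
    t = np.linspace(0, 4 * np.pi, 80)
    base = np.sin(3 * t) + (0.8 * np.sin(11 * t) if fault else 0.0)
    return base + 0.2 * rng.normal(size=t.size)

signals = [make_signal(fault) for fault in (0, 1) for _ in range(60)]
labels = [fault for fault in (0, 1) for _ in range(60)]
features = np.array([recurrence_plot(s).ravel() for s in signals])

low_dim = Isomap(n_neighbors=8, n_components=10).fit_transform(features)
X_tr, X_te, y_tr, y_te = train_test_split(low_dim, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```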

  6. Learning an intrinsic-variable preserving manifold for dynamic visual tracking.

    PubMed

    Qiao, Hong; Zhang, Peng; Zhang, Bo; Zheng, Suiwu

    2010-06-01

    Manifold learning is a hot topic in the field of computer science, particularly since nonlinear dimensionality reduction based on manifold learning was proposed in Science in 2000. The work has achieved great success. The main purpose of current manifold-learning approaches is to search for independent intrinsic variables underlying high dimensional inputs which lie on a low dimensional manifold. In this paper, a new manifold is built up in the training step of the process, on which the input training samples are set to be close to each other if the values of their intrinsic variables are close to each other. Then, the process of dimensionality reduction is transformed into a procedure of preserving the continuity of the intrinsic variables. By utilizing the new manifold, the dynamic tracking of a human who can move and rotate freely is achieved. From the theoretical point of view, it is the first approach to transfer the manifold-learning framework to dynamic tracking. From the application point of view, a new and low dimensional feature for visual tracking is obtained and successfully applied to the real-time tracking of a free-moving object from a dynamic vision system. Experimental results from a dynamic tracking system which is mounted on a dynamic robot validate the effectiveness of the new algorithm.

  7. Comparative analysis of 2 glenoid version measurement methods in variable axial slices on 3-dimensionally reconstructed computed tomography scans.

    PubMed

    Cunningham, Gregory; Freebody, John; Smith, Margaret M; Taha, Mohy E; Young, Allan A; Cass, Benjamin; Giuffre, Bruno

    2018-05-16

    Most glenoid version measurement methods have been validated on 3-dimensionally corrected axial computed tomography (CT) slices at the mid glenoid. Variability of the vault according to slice height and angulation has not yet been studied and is crucial for proper surgical implant positioning. The aim of this study was to analyze the variation of the glenoid vault compared with the Friedman angle according to different CT slice heights and angulations. The hypothesis was that the Friedman angle would show less variability. Sixty shoulder CT scans were retrieved from a hospital imaging database and were reconstructed in the plane of the scapula. Seven axial slices of different heights and coronal angulations were selected, and measurements were carried out by 3 observers. Mid-glenoid mean version was -8.0° (±4.9°; range, -19.6° to +7.0°) and -2.1° (±4.7°; range, -13.0° to +10.3°) using the vault method and Friedman angle, respectively. For both methods, decreasing slice height or angulation did not significantly alter version. Increasing slice height or angulation significantly increased anteversion for the vault method (P < .001). Both interobserver reliability and intraobserver reliability were significantly higher using the Friedman angle. Version at the mid and lower glenoid is similar using either method. The vault method shows less reliability and more variability according to slice height or angulation. Yet, as it significantly differs from the Friedman angle, it should still be used in situations where maximum bone purchase is sought with glenoid implants. For any other situation, the Friedman angle remains the method of choice. Copyright © 2018 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  8. Comparison of Two- and Three-Dimensional Methods for Analysis of Trunk Kinematic Variables in the Golf Swing.

    PubMed

    Smith, Aimée C; Roberts, Jonathan R; Wallace, Eric S; Kong, Pui; Forrester, Stephanie E

    2016-02-01

    Two-dimensional methods have been used to compute trunk kinematic variables (flexion/extension, lateral bend, axial rotation) and X-factor (difference in axial rotation between trunk and pelvis) during the golf swing. Recent X-factor studies advocated three-dimensional (3D) analysis due to the errors associated with two-dimensional (2D) methods, but this has not been investigated for all trunk kinematic variables. The purpose of this study was to compare trunk kinematic variables and X-factor calculated by 2D and 3D methods to examine how different approaches influenced their profiles during the swing. Trunk kinematic variables and X-factor were calculated for golfers from vectors projected onto the global laboratory planes and from 3D segment angles. Trunk kinematic variable profiles were similar in shape; however, there were statistically significant differences in trunk flexion (-6.5 ± 3.6°) at top of backswing and trunk right-side lateral bend (8.7 ± 2.9°) at impact. Differences between 2D and 3D X-factor (approximately 16°) could largely be explained by projection errors introduced to the 2D analysis through flexion and lateral bend of the trunk and pelvis segments. The results support the need to use a 3D method for kinematic data calculation to accurately analyze the golf swing.

  9. Chemometrics comparison of gas chromatography with mass spectrometry and comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry Daphnia magna metabolic profiles exposed to salinity.

    PubMed

    Parastar, Hadi; Garreta-Lara, Elba; Campos, Bruno; Barata, Carlos; Lacorte, Silvia; Tauler, Roma

    2018-06-01

    The performances of gas chromatography with mass spectrometry and of comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry are examined through the comparison of Daphnia magna metabolic profiles. Gas chromatography with mass spectrometry and comprehensive two-dimensional gas chromatography with mass spectrometry were used to compare the concentration changes of metabolites under saline conditions. In this regard, a chemometric strategy based on wavelet compression and multivariate curve resolution-alternating least squares is used to compare the performances of gas chromatography with mass spectrometry and comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry for the untargeted metabolic profiling of Daphnia magna in control and salinity-exposed samples. Examination of the results confirmed the outperformance of comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry over gas chromatography with mass spectrometry for the detection of metabolites in D. magna samples. The peak areas of multivariate curve resolution-alternating least squares resolved elution profiles in every sample analyzed by comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry were arranged in a new data matrix that was then modeled by partial least squares discriminant analysis. The control and salt-exposed daphnids samples were discriminated and the most relevant metabolites were estimated using variable importance in projection and selectivity ratio values. Salinity de-regulated 18 metabolites from metabolic pathways involved in protein translation, transmembrane cell transport, carbon metabolism, secondary metabolism, glycolysis, and osmoregulation. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Unbiased feature selection in learning random forests for high-dimensional data.

    PubMed

    Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi

    2015-01-01

    Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting, which leads to poor RF accuracy on high-dimensional data. In addition, the RF feature selection process is biased toward multi-valued features. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features when learning RFs for high-dimensional data. We first remove uninformative features using a p-value assessment, and a subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature-weighting sampling technique is used to sample features from these two subsets for building trees. This approach generates more accurate trees while reducing dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets, including image datasets. The experimental results show that RFs with the proposed approach outperform existing random forests in terms of accuracy and AUC.
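
    The sketch below is not the xRF algorithm; it illustrates the general idea of screening out uninformative features before (or for) a random forest by comparing each feature's importance with that of permuted "shadow" copies, a Boruta-style baseline built on scikit-learn with synthetic data.

```python
# Shadow-feature screening: permuted copies of every feature serve as a noise
# reference, and only features that beat the best shadow importance are kept.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)
n, p = 500, 100
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1] - X[:, 2] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Append an independently permuted copy of each column as an uninformative shadow.
X_shadow = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])
X_aug = np.hstack([X, X_shadow])

rf = RandomForestClassifier(n_estimators=500, random_state=0, n_jobs=-1).fit(X_aug, y)
imp = rf.feature_importances_
threshold = imp[p:].max()                      # best importance achieved by pure noise
selected = np.where(imp[:p] > threshold)[0]
print("selected features:", selected)
```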

  11. Evaluation of tricuspid annular plane systolic excursion measured by two-dimensional echocardiography in healthy dogs: repeatability, reference intervals, and comparison with M-mode assessment.

    PubMed

    Visser, L C; Sintov, D J; Oldach, M S

    2018-06-01

    We sought to determine the feasibility, measurement variability, and within-day repeatability of tricuspid annular plane systolic excursion (TAPSE) measured by two-dimensional echocardiography (2D TAPSE), generate reference intervals for 2D TAPSE, assess agreement and correlation between 2D TAPSE and the conventional TAPSE measured by M-mode echocardiography (MM TAPSE), and to assess the ability of 2D TAPSE to track a drug-induced decrease in right ventricular (RV) function compared with MM TAPSE. Seventy healthy privately owned dogs of varying bodyweight. All dogs underwent a single echocardiogram to quantify RV function by both TAPSE methods. Ten dogs underwent a second echocardiogram 2-3 h after the first to assess within-day repeatability, and 20 different dogs underwent a second echocardiogram 3-h after atenolol (1 mg/kg per os (PO)). Intraobserver and interobserver measurement variabilities were assessed in 12 randomly selected studies using coefficients of variation. Statistical relationships between 2D TAPSE and bodyweight, gender, heart rate, and age were explored. 2D TAPSE could be measured in all dogs. Coefficients of variation for repeatability and measurement variability were low (≤12%). Bodyweight-dependent reference intervals for 2D TAPSE were generated using allometric scaling. TAPSE methods were strongly correlated (r = 0.72; p<0.0001) but 2D TAPSE measured consistently less than MM TAPSE (-1.6 [2.2] mm) when analyzed by Bland-Altman's method. Both TAPSE methods were significantly (p≤0.014) reduced after atenolol but percent decrease in 2D TAPSE (-16.2 [9.3]%) was significantly greater (p=0.03) than MM TAPSE (-7.5 [13.8]%). Two-dimensional echocardiography TAPSE appears well suited for clinical assessment of RV function. The TAPSE methods should not be used interchangeably. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Interobserver and intraobserver variability in the identification of the Lenke classification lumbar modifier in adolescent idiopathic scoliosis.

    PubMed

    Duong, Luc; Cheriet, Farida; Labelle, Hubert; Cheung, Kenneth M C; Abel, Mark F; Newton, Peter O; McCall, Richard E; Lenke, Lawrence G; Stokes, Ian A F

    2009-08-01

    Interobserver and intraobserver reliability study for the identification of the Lenke classification lumbar modifier by a panel of experts compared with a computer algorithm. To measure the variability of the Lenke classification lumbar modifier and determine if computer assistance using 3-dimensional spine models can improve the reliability of classification. The lumbar modifier has been proposed to subclassify Lenke scoliotic curve types into A, B, and C on the basis of the relationship between the central sacral vertical line (CSVL) and the apical lumbar vertebra. Landmarks for identification of the CSVL have not been clearly defined, and the reliability of the actual CSVL position and lumbar modifier selection have never been tested independently. Therefore, the value of the lumbar modifier for curve classification remains unknown. The preoperative radiographs of 68 patients with adolescent idiopathic scoliosis presenting a Lenke type 1 curve were measured manually twice by 6 members of the Scoliosis Research Society 3-dimensional classification committee at a 6-month interval. Intraobserver and interobserver reliability were quantified using the percentage of agreement and kappa statistics. In addition, the lumbar curve of all subjects was reconstructed in three dimensions using a stereoradiographic technique and was submitted to a computer algorithm to infer the lumbar modifier according to measurements from the pedicles. Interobserver rates for the first trial showed a mean kappa value of 0.56. Second-trial rates were higher, with a mean kappa value of 0.64. Intraobserver rates were evaluated at a mean kappa value of 0.69. The computer algorithm was successful in identifying the lumbar curve type and was in agreement with the observers in up to 93% of cases. Agreement between and within observers for the Lenke lumbar modifier is only moderate to substantial with manual methods. Computer assistance with 3-dimensional models of the spine has the potential to decrease this variability.

  13. The development of a three-dimensional partially elliptic flow computer program for combustor research

    NASA Technical Reports Server (NTRS)

    Pan, Y. S.

    1978-01-01

    A three dimensional, partially elliptic, computer program was developed. Without requiring three dimensional computer storage locations for all flow variables, the partially elliptic program is capable of predicting three dimensional combustor flow fields with large downstream effects. The program requires only slight increase of computer storage over the parabolic flow program from which it was developed. A finite difference formulation for a three dimensional, fully elliptic, turbulent, reacting, flow field was derived. Because of the negligible diffusion effects in the main flow direction in a supersonic combustor, the set of finite-difference equations can be reduced to a partially elliptic form. Only the pressure field was governed by an elliptic equation and requires three dimensional storage; all other dependent variables are governed by parabolic equations. A numerical procedure which combines a marching integration scheme with an iterative scheme for solving the elliptic pressure was adopted.

  14. Three-dimensional computer-assisted study model analysis of long-term oral-appliance wear. Part 1: Methodology.

    PubMed

    Chen, Hui; Lowe, Alan A; de Almeida, Fernanda Riberiro; Wong, Mary; Fleetham, John A; Wang, Bangkang

    2008-09-01

    The aim of this study was to test a 3-dimensional (3D) computer-assisted dental model analysis system that uses selected landmarks to describe tooth movement during treatment with an oral appliance. Dental casts of 70 patients diagnosed with obstructive sleep apnea and treated with oral appliances for a mean time of 7 years 4 months were evaluated with a 3D digitizer (MicroScribe-3DX, Immersion, San Jose, Calif) compatible with the Rhinoceros modeling program (version 3.0 SR3c, Robert McNeel & Associates, Seattle, Wash). A total of 86 landmarks on each model were digitized, and 156 variables were calculated as either the linear distance between points or the distance from points to reference planes. Four study models for each patient (maxillary baseline, mandibular baseline, maxillary follow-up, and mandibular follow-up) were superimposed on 2 sets of reference points: 3 points on the palatal rugae for maxillary model superimposition, and 3 occlusal contact points for the same set of maxillary and mandibular model superimpositions. The patients were divided into 3 evaluation groups by 5 orthodontists based on the changes between baseline and follow-up study models. Digital dental measurements could be analyzed, including arch width, arch length, curve of Spee, overbite, overjet, and the anteroposterior relationship between the maxillary and mandibular arches. A method error within 0.23 mm in 14 selected variables was found for the 3D system. The statistical differences in the 3 evaluation groups verified the division criteria determined by the orthodontists. The system provides a method to record 3D measurements of study models that permits computer visualization of tooth position and movement from various perspectives.

  15. Dimensional analysis of flame angles versus wind speed

    Treesearch

    Robert E. Martin; Mark A. Finney; Domingo M. Molina; David B. Sapsis; Scott L. Stephens; Joe H. Scott; David R. Weise

    1991-01-01

    Dimensional analysis has potential to help explain and predict physical phenomena, but has been used very little in studies of wildland fire behavior. By combining variables into dimensionless groups, the number of variables to be handled and the experiments to be run is greatly reduced. A low velocity wind tunnel was constructed, and methyl, ethyl, and isopropyl...

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honorio, J.; Goldstein, R.

    We propose a simple, well-grounded classification technique suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, high noise level, high subject variability, imperfect registration, and subtle cognitive effects. We propose the threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results on two block-design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.
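
    The sketch below illustrates only the "simple classifiers + majority vote" design philosophy, not the threshold-split-region feature selection itself: one threshold stump is fitted per feature and the predictions are combined by majority vote on synthetic, fMRI-like data (few subjects, many regions).

```python
# Generic sketch of per-feature threshold stumps combined by majority vote.
import numpy as np

rng = np.random.default_rng(9)
n, p = 40, 30                                   # few subjects, many regions (fMRI-like)
X = rng.normal(size=(n, p))
y = (X[:, :3].mean(axis=1) > 0).astype(int)     # group label driven by a few regions

def fit_stump(x, y):
    # Pick the threshold (midpoint between sorted values) with best training accuracy.
    cuts = (np.sort(x)[:-1] + np.sort(x)[1:]) / 2
    accs = [max(np.mean((x > c) == y), np.mean((x <= c) == y)) for c in cuts]
    best = int(np.argmax(accs))
    sign = 1 if np.mean((x > cuts[best]) == y) >= np.mean((x <= cuts[best]) == y) else -1
    return cuts[best], sign

stumps = [fit_stump(X[:, j], y) for j in range(p)]

def predict(Xnew):
    # Each stump votes for class 1 according to its threshold and orientation.
    votes = np.array([(Xnew[:, j] > c) == (s > 0) for j, (c, s) in enumerate(stumps)])
    return (votes.mean(axis=0) > 0.5).astype(int)

print("training accuracy of the majority vote:", np.mean(predict(X) == y))
```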

  17. A numerical study of the 2- and 3-dimensional unsteady Navier-Stokes equations in velocity-vorticity variables using compact difference schemes

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.; Grosch, C. E.

    1984-01-01

    A compact finite-difference approximation to the unsteady Navier-Stokes equations in velocity-vorticity variables is used to numerically simulate a number of flows. These include two-dimensional laminar flow of a vortex evolving over a flat plate with an embedded cavity, the unsteady flow over an elliptic cylinder, and aspects of the transient dynamics of the flow over a rearward facing step. The methodology required to extend the two-dimensional formulation to three-dimensions is presented.

  18. Dynamically Intuitive and Potentially Predictable Three-Dimensional Structures in the Low Frequency Flow Variability of the Extratropical Northern Hemisphere

    NASA Astrophysics Data System (ADS)

    Wettstein, J. J.; Li, C.; Bradshaw, S.

    2016-12-01

    Canonical tropospheric climate variability patterns and their corresponding indices are ubiquitous, yet a firm dynamical interpretation has remained elusive for many of even the leading extratropical patterns. Part of the lingering difficulty in understanding and predicting atmospheric low frequency variability is the fact that the identification itself of the different patterns is indistinct. This study characterizes three-dimensional structures in the low frequency variability of the extratropical zonal wind field within the entire period of record of the ERA-Interim reanalysis and suggests the foundations for a new paradigm in identifying and predicting extratropical atmospheric low-frequency variability. In concert with previous results, there is a surprisingly rich three-dimensional structure to the variance of the zonal wind field that is not (cannot be) captured by traditional identification protocols that explore covariance of pressure in the lower troposphere, flow variability in the zonal mean or, for that matter, in any variable on any planar surface. Correspondingly, many of the pressure-based canonical indices of low frequency atmospheric variability exhibit inconsistent relationships to physically intuitive reorganizations of the subtropical and polar front jets and with other forcing mechanisms. Different patterns exhibit these inconsistencies to a greater or lesser extent. The three-dimensional variance of the zonal wind field is, by contrast, naturally organized around dynamically intuitive atmospheric redistributions that have a surprisingly large amount of physically intuitive information in the vertical. These conclusions are robust in a variety of seasons and also in intra-seasonal and inter-annual explorations. Similar results and conclusions are also derived using detrended data, other reanalyses, and state-of-the-art coupled climate model output. In addition to providing a clearer perspective on the distinct three-dimensional patterns of atmospheric low frequency variability, the time evolution and potential predictability of the resultant patterns can be explored with much greater clarity because of an intrinsic link between the patterns and the requisite conservation of momentum (i.e. to the primitive equations and candidate forcing mechanisms).

  19. An Integrative Framework for Bayesian Variable Selection with Informative Priors for Identifying Genes and Pathways

    PubMed Central

    Ander, Bradley P.; Zhang, Xiaoshuai; Xue, Fuzhong; Sharp, Frank R.; Yang, Xiaowei

    2013-01-01

    The discovery of genetic or genomic markers plays a central role in the development of personalized medicine. A notable challenge exists when dealing with the high dimensionality of the data sets, as thousands of genes or millions of genetic variants are collected on a relatively small number of subjects. Traditional gene-wise selection methods using univariate analyses face difficulty incorporating correlational, structural, or functional relationships amongst the molecular measures. For microarray gene expression data, we first summarize solutions in dealing with ‘large p, small n’ problems, and then propose an integrative Bayesian variable selection (iBVS) framework for simultaneously identifying causal or marker genes and regulatory pathways. A novel partial least squares (PLS) g-prior for iBVS is developed to allow the incorporation of prior knowledge on gene-gene interactions or functional relationships. From the point of view of systems biology, iBVS enables users to directly target the joint effects of multiple genes and pathways in a hierarchical modeling diagram to predict disease status or phenotype. The estimated posterior selection probabilities offer probabilistic and biological interpretations. Both simulated data and a set of microarray data in predicting stroke status are used in validating the performance of iBVS in a Probit model with binary outcomes. iBVS offers a general framework for effective discovery of various molecular biomarkers by combining data-based statistics and knowledge-based priors. Guidelines on making posterior inferences, determining Bayesian significance levels, and improving computational efficiencies are also discussed. PMID:23844055

  20. An integrative framework for Bayesian variable selection with informative priors for identifying genes and pathways.

    PubMed

    Peng, Bin; Zhu, Dianwen; Ander, Bradley P; Zhang, Xiaoshuai; Xue, Fuzhong; Sharp, Frank R; Yang, Xiaowei

    2013-01-01

    The discovery of genetic or genomic markers plays a central role in the development of personalized medicine. A notable challenge exists when dealing with the high dimensionality of the data sets, as thousands of genes or millions of genetic variants are collected on a relatively small number of subjects. Traditional gene-wise selection methods using univariate analyses face difficulty incorporating correlational, structural, or functional relationships amongst the molecular measures. For microarray gene expression data, we first summarize solutions in dealing with 'large p, small n' problems, and then propose an integrative Bayesian variable selection (iBVS) framework for simultaneously identifying causal or marker genes and regulatory pathways. A novel partial least squares (PLS) g-prior for iBVS is developed to allow the incorporation of prior knowledge on gene-gene interactions or functional relationships. From the point of view of systems biology, iBVS enables users to directly target the joint effects of multiple genes and pathways in a hierarchical modeling diagram to predict disease status or phenotype. The estimated posterior selection probabilities offer probabilistic and biological interpretations. Both simulated data and a set of microarray data in predicting stroke status are used in validating the performance of iBVS in a Probit model with binary outcomes. iBVS offers a general framework for effective discovery of various molecular biomarkers by combining data-based statistics and knowledge-based priors. Guidelines on making posterior inferences, determining Bayesian significance levels, and improving computational efficiencies are also discussed.
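
    A minimal sketch of the Bayesian variable selection idea behind posterior inclusion probabilities, not the PLS g-prior machinery of iBVS: all subsets of a small candidate gene set are enumerated, each model's marginal likelihood is approximated by BIC, and inclusion probabilities follow by averaging over models. Data, sample size, and the candidate set are synthetic.

```python
# Bayesian variable selection by enumeration with a BIC approximation to the
# marginal likelihood, reporting posterior inclusion probabilities.
import itertools
import numpy as np

rng = np.random.default_rng(10)
n, p = 150, 8                                    # small candidate set so enumeration is feasible
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 4] + rng.normal(size=n)

def bic(subset):
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    k = Z.shape[1]
    return n * np.log(resid @ resid / n) + k * np.log(n)

models = [tuple(s) for r in range(p + 1) for s in itertools.combinations(range(p), r)]
scores = np.array([bic(m) for m in models])
post = np.exp(-0.5 * (scores - scores.min()))
post /= post.sum()                               # approximate posterior model probabilities

inclusion = np.array([sum(post[i] for i, m in enumerate(models) if j in m)
                      for j in range(p)])
print("posterior inclusion probabilities:", np.round(inclusion, 2))
```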

  1. Feature Screening in Ultrahigh Dimensional Cox's Model.

    PubMed

    Yang, Guangren; Yu, Ye; Li, Runze; Buu, Anne

    Survival data with ultrahigh dimensional covariates such as genetic markers have been collected in medical studies and other fields. In this work, we propose a feature screening procedure for the Cox model with ultrahigh dimensional covariates. The proposed procedure is distinguished from the existing sure independence screening (SIS) procedures (Fan, Feng and Wu, 2010; Zhao and Li, 2012) in that it is based on the joint likelihood of potential active predictors, and therefore is not a marginal screening procedure. The proposed procedure can effectively identify active predictors that are jointly dependent but marginally independent of the response without performing an iterative procedure. We develop a computationally effective algorithm to carry out the proposed procedure and establish the ascent property of the proposed algorithm. We further prove that the proposed procedure possesses the sure screening property: that is, with probability tending to one, the selected variable set includes the actual active predictors. We conduct Monte Carlo simulations to evaluate the finite sample performance of the proposed procedure and further compare it with existing SIS procedures. The proposed methodology is also demonstrated through an empirical analysis of a real data example.

  2. A Fast Procedure for Optimizing Thermal Protection Systems of Re-Entry Vehicles

    NASA Astrophysics Data System (ADS)

    Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.

    The aim of the present work is to introduce a fast procedure to optimize thermal protection systems for re-entry vehicles subjected to high thermal loads. A simplified one-dimensional optimization process, performed in order to find the optimum design variables (lengths, sections, etc.), is the first step of the proposed design procedure. Simultaneously, the most suitable materials able to sustain high temperatures while meeting the weight requirements are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the reduction of the degrees of freedom. These simplified local FEM models can be useful because they are time-saving and very simple to build; they are essentially one-dimensional and can be used in optimization processes to determine the optimum configuration with regard to weight, temperature, and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize the overall weight. Full two- and three-dimensional analyses are performed in order to validate the simplified models. Thermal-structural analyses and optimizations are executed by adopting the Ansys FEM code.

  3. A Simple Algebraic Grid Adaptation Scheme with Applications to Two- and Three-dimensional Flow Problems

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.; Lytle, John K.

    1989-01-01

    An algebraic adaptive grid scheme based on the concept of arc equidistribution is presented. The scheme locally adjusts the grid density based on gradients of selected flow variables from either finite difference or finite volume calculations. A user-prescribed grid stretching can be specified such that control of the grid spacing can be maintained in areas of known flowfield behavior. For example, the grid can be clustered near a wall for boundary layer resolution and made coarse near the outer boundary of an external flow. A grid smoothing technique is incorporated into the adaptive grid routine, which is found to be more robust and efficient than the weight function filtering technique employed by other researchers. Since the present algebraic scheme requires no iteration or solution of differential equations, the computer time needed for grid adaptation is trivial, making the scheme useful for three-dimensional flow problems. Applications to two- and three-dimensional flow problems show that a considerable improvement in flowfield resolution can be achieved by using the proposed adaptive grid scheme. Although the scheme was developed with steady flow in mind, it is a good candidate for unsteady flow computations because of its efficiency.
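
    The core equidistribution idea can be shown in one dimension; the sketch below redistributes grid points so that an arc-length weight based on the solution gradient is shared equally between neighbouring points. This is a minimal illustration, not the paper's full 2D/3D scheme with user-prescribed stretching and smoothing; the weight parameter and test profile are invented.

```python
# One-dimensional sketch of algebraic grid adaptation by arc-length
# equidistribution: cluster points where the solution gradient is large.
import numpy as np

def adapt_grid(x, u, alpha=10.0):
    """Redistribute grid points x so the weight w = sqrt(1 + alpha*(du/dx)^2)
    is equidistributed between neighbouring points."""
    w = np.sqrt(1.0 + alpha * np.gradient(u, x) ** 2)
    arc = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    targets = np.linspace(0.0, arc[-1], len(x))      # equal arc-length increments
    return np.interp(targets, arc, x)                # invert the arc-length map

# Example: a steep tanh front on [0, 1] attracts points near x = 0.5.
x = np.linspace(0.0, 1.0, 41)
u = np.tanh(50.0 * (x - 0.5))
x_new = adapt_grid(x, u)
print("smallest spacing after adaptation:", np.diff(x_new).min())
```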

  4. Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach

    NASA Astrophysics Data System (ADS)

    Chowdhury, R.; Adhikari, S.

    2012-10-01

    Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software package. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
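
    The sketch below shows the first-order cut-HDMR expansion itself, the piece responsible for the polynomial scaling in the number of variables: the response is approximated by a constant plus one univariate component per input, each built from one-variable-at-a-time evaluations about a reference point. It omits the fuzzy alpha-cut analysis and the finite element coupling; the stand-in model and grid are invented.

```python
# First-order cut-HDMR surrogate: f(x) ~ f0 + sum_i f_i(x_i), with each f_i
# sampled by varying one variable at a time about a reference (cut) point.
import numpy as np

def expensive_model(x):                      # stand-in for a finite-element response
    return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.2 * x[2]

n_vars = 3
cut = np.array([0.5, 0.5, 0.5])              # reference point
grid = np.linspace(0.0, 1.0, 21)             # sampling points along each axis

f0 = expensive_model(cut)
components = []
for i in range(n_vars):
    vals = []
    for g in grid:
        x = cut.copy()
        x[i] = g
        vals.append(expensive_model(x) - f0)      # first-order component f_i(x_i)
    components.append(np.array(vals))

def surrogate(x):
    # Interpolate each 1D component; total cost was only 1 + n_vars*len(grid) runs.
    return f0 + sum(np.interp(x[i], grid, components[i]) for i in range(n_vars))

test = np.array([0.2, 0.8, 0.6])
print("true:", expensive_model(test), "HDMR surrogate:", surrogate(test))
```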

  5. Natural extension of fast-slow decomposition for dynamical systems

    NASA Astrophysics Data System (ADS)

    Rubin, J. E.; Krauskopf, B.; Osinga, H. M.

    2018-01-01

    Modeling and parameter estimation to capture the dynamics of physical systems are often challenging because many parameters can range over orders of magnitude and are difficult to measure experimentally. Moreover, selecting a suitable model complexity requires a sufficient understanding of the model's potential use, such as highlighting essential mechanisms underlying qualitative behavior or precisely quantifying realistic dynamics. We present an approach that can guide model development and tuning to achieve desired qualitative and quantitative solution properties. It relies on the presence of disparate time scales and employs techniques of separating the dynamics of fast and slow variables, which are well known in the analysis of qualitative solution features. We build on these methods to show how it is also possible to obtain quantitative solution features by imposing designed dynamics for the slow variables in the form of specified two-dimensional paths in a bifurcation-parameter landscape.

  6. Can a four-dimensional model of occupational commitment help to explain intent to leave the emergency medical service occupation?

    PubMed

    Blau, Gary; Chapman, Susan; Pred, Robert S; Lopez, Andrea

    2009-01-01

    Using a sample of 854 emergency medical service (EMS) respondents, this study supported a four-dimensional model of occupational commitment comprising affective, normative, accumulated-costs, and limited-alternatives dimensions. When personal and job-related variables were controlled, general job satisfaction emerged as a negative correlate of intent to leave. Controlling for personal, job-related, and job satisfaction variables, affective and limited-alternatives commitment were each significant negative correlates. There were small but significant interactive effects among the commitment dimensions in accounting for additional intent-to-leave variance, including a four-way interaction. "High" versus "low" cumulative commitment subgroups were created by selecting respondents who were equal to or above ("high") versus below ("low") the median on each of the four occupational commitment dimensions. A t-test indicated that low cumulative commitment EMS respondents were more likely to intend to leave than high cumulative commitment EMS respondents.

  7. Dimensionality reduction for the quantitative evaluation of a smartphone-based Timed Up and Go test.

    PubMed

    Palmerini, Luca; Mellone, Sabato; Rocchi, Laura; Chiari, Lorenzo

    2011-01-01

    The Timed Up and Go is a clinical test to assess mobility in the elderly and in Parkinson's disease. Recently, instrumented versions of the test have been considered, in which inertial sensors assess motion. To improve the pervasiveness, ease of use, and cost, we consider a smartphone's accelerometer as the measurement system. Several parameters (usually highly correlated) can be computed from the signals recorded during the test. To avoid redundancy and obtain the features that are most sensitive to locomotor performance, a dimensionality reduction was performed through principal component analysis (PCA). Forty-nine healthy subjects of different ages were tested. PCA was performed to extract new features (principal components), which are non-redundant combinations of the original parameters and account for most of the data variability. They can be useful for exploratory analysis and outlier detection. Then, a reduced set of the original parameters was selected through correlation analysis with the principal components. This set could be recommended for studies based on healthy adults. The proposed procedure could be used as a first-level feature selection in classification studies (e.g., healthy vs. Parkinson's disease, fallers vs. non-fallers) and could allow, in the future, a complete system for movement analysis to be incorporated in a smartphone.
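
    A minimal sketch of the two-step procedure just described — PCA to obtain non-redundant components, then correlation analysis to pick a reduced set of original parameters — using scikit-learn; the random matrix below merely stands in for the accelerometer-derived test parameters:

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA

        # X: n_subjects x n_parameters matrix of (possibly correlated) test parameters.
        # A random matrix stands in for the real accelerometer-derived features.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(49, 12))

        # Standardize, then keep the components explaining ~90% of the variance.
        Xs = StandardScaler().fit_transform(X)
        pca = PCA(n_components=0.90)
        scores = pca.fit_transform(Xs)          # non-redundant components per subject
        print("components kept:", pca.n_components_)

        # Correlation analysis: for each retained component, keep the original
        # parameter most correlated with it (a reduced, interpretable feature set).
        selected = set()
        for k in range(pca.n_components_):
            corr = [abs(np.corrcoef(Xs[:, j], scores[:, k])[0, 1])
                    for j in range(Xs.shape[1])]
            selected.add(int(np.argmax(corr)))
        print("selected original parameters:", sorted(selected))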

  8. Visualization of Potential Energy Function Using an Isoenergy Approach and 3D Prototyping

    ERIC Educational Resources Information Center

    Teplukhin, Alexander; Babikov, Dmitri

    2015-01-01

    In our three-dimensional world, one can plot, see, and comprehend a function of two variables at most, V(x,y). One cannot plot a function of three or more variables. For this reason, visualization of the potential energy function in its full dimensionality is impossible even for the smallest polyatomic molecules, such as triatomics. This creates…

  9. Using Dynamic Mathematics Software to Teach One-Variable Inequalities by the View of Semiotic Registers

    ERIC Educational Resources Information Center

    Kabaca, Tolga

    2013-01-01

    The solution set of any one-variable inequality or compound inequality lies on the real line, which is one-dimensional. A difficulty therefore appears when computer-assisted graphical representation is intended to be used for teaching these topics. Sketching a one-dimensional graph using computer software is not straightforward. In this…

  10. Forms of null Lagrangians in field theories of continuum mechanics

    NASA Astrophysics Data System (ADS)

    Kovalev, V. A.; Radaev, Yu. N.

    2012-02-01

    The divergence representation of a null Lagrangian that is regular in a star-shaped domain is used to obtain its general expression containing field gradients of order ≤ 1 in the case of spacetime of arbitrary dimension. It is shown that for a static three-component field in the three-dimensional space, a null Lagrangian can contain up to 15 independent elements in total. The general form of a null Lagrangian in the four-dimensional Minkowski spacetime is obtained (the number of physical field variables is assumed arbitrary). A complete theory of the null Lagrangian for the n-dimensional spacetime manifold (including the four-dimensional Minkowski spacetime as a special case) is given. Null Lagrangians are then used as a basis for solving an important variational problem of an integrating factor. This problem involves searching for factors that depend on the spacetime variables, field variables, and their gradients and, for a given system of partial differential equations, ensure the equality between the scalar product of a vector multiplier by the system vector and some divergence expression for arbitrary field variables and, hence, allow one to formulate a divergence conservation law on solutions to the system.
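
    For orientation, a null Lagrangian is one whose Euler-Lagrange operator vanishes identically; in a star-shaped domain this is equivalent to the divergence representation referred to above. A sketch of the standard statement, with u^a the field variables and D_i the total derivative, is

        E_a(L) \;\equiv\; \frac{\partial L}{\partial u^a} \;-\; D_i\!\left(\frac{\partial L}{\partial u^a_{,i}}\right) \;\equiv\; 0
        \qquad\Longleftrightarrow\qquad
        L \;=\; D_i P^i ,

    where the P^i may depend on the spacetime variables, the field variables and, in general, their gradients.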

  11. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  12. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  13. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  14. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  15. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  16. Tuning optical properties of opal photonic crystals by structural defects engineering

    NASA Astrophysics Data System (ADS)

    di Stasio, F.; Cucini, M.; Berti, L.; Comoretto, D.; Abbotto, A.; Bellotto, L.; Manfredi, N.; Marinzi, C.

    2009-06-01

    We report on the preparation and optical characterization of a three-dimensional colloidal photonic crystal (PhC) containing an engineered planar defect embedding photoactive push-pull dyes. Free-standing polystyrene films with thicknesses between 0.6 and 3 mm, doped with different dipolar chromophores, were prepared. These films were sandwiched between two artificial opals, creating a PhC structure with a planar defect. The system was characterized by reflectance at normal incidence (R), variable-angle transmittance (T), and photoluminescence spectroscopy (PL). Evidence of defect states was observed in the T and R spectra; these states allow light to propagate at selected frequencies within the pseudogap (stop band).

  17. Metallic and Ceramic Material Development Research

    DTIC Science & Technology

    2010-05-01

    Woodward and T.A. Parthasarathy, "Experiments and Three-Dimensional Dislocation Simulations of Microplasticity in Selected Materials," IUTAM Conference Proceedings.

  18. Social inequality, lifestyles and health - a non-linear canonical correlation analysis based on the approach of Pierre Bourdieu.

    PubMed

    Grosse Frie, Kirstin; Janssen, Christian

    2009-01-01

    Based on the theoretical and empirical approach of Pierre Bourdieu, a multivariate non-linear method is introduced as an alternative way to analyse the complex relationships between social determinants and health. The analysis is based on face-to-face interviews with 695 randomly selected respondents aged 30 to 59. Variables regarding socio-economic status, life circumstances, lifestyles, health-related behaviour and health were chosen for the analysis. In order to determine whether the respondents can be differentiated and described based on these variables, a non-linear canonical correlation analysis (OVERALS) was performed. The results can be described in three dimensions; the eigenvalues add up to a fit of 1.444, which can be interpreted as approximately 50% explained variance. The three-dimensional space illustrates correspondences between variables and provides a framework for interpretation based on latent dimensions, which can be described by age, education, income and gender. Using non-linear canonical correlation analysis, health characteristics can be analysed in conjunction with socio-economic conditions and lifestyles. Based on Bourdieu's theoretical approach, the complex correlations between these variables can be more substantially interpreted and presented.

  19. Clinic value of two-dimensional speckle tracking combined with adenosine stress echocardiography for assessment of myocardial viability.

    PubMed

    Ran, Hong; Zhang, Ping-Yang; Fang, Ling-Ling; Ma, Xiao-Wu; Wu, Wen-Fang; Feng, Wang-Fei

    2012-07-01

    To evaluate whether myocardial strain under adenosine stress, calculated from two-dimensional echocardiography by automatic frame-by-frame tracking of natural acoustic markers, enables objective description of myocardial viability in the clinic. Two-dimensional echocardiography and two-dimensional speckle tracking imaging (2D STI) at rest were performed first and repeated after adenosine was infused at 140 µg/kg/min over a period of 6 minutes in 36 stable patients with previous myocardial infarction. Radionuclide myocardial perfusion/metabolic imaging, which served as the "gold standard" for defining myocardial viability, was then performed in all patients within 1 day. Two-dimensional speckle tracking images were acquired at rest and after adenosine administration. An automatic frame-by-frame tracking system of natural acoustic echocardiographic markers was used to calculate 2D strain variables, including peak-systolic circumferential strain (CS(peak-sys)), radial strain (RS(peak-sys)), and longitudinal strain (LS(peak-sys)). Segments with abnormal motion on visual assessment of two-dimensional echocardiography were selected for further study. As a result, 126 regions were viable whereas 194 were nonviable among 320 abnormal-motion segments in 36 patients according to radionuclide imaging. At rest, there were no significant differences in 2D strain between the viable and nonviable myocardium. After adenosine administration (140 µg/kg/min), CS(peak-sys) changed little in the viable myocardium, while RS(peak-sys) and LS(peak-sys) increased significantly compared with the values at rest. In the nonviable group, CS(peak-sys), RS(peak-sys), and LS(peak-sys) showed no significant changes during adenosine administration. After adenosine administration, RS(peak-sys) and LS(peak-sys) in the viable group increased significantly compared with the nonviable group. The strain data obtained were highly reproducible, with small intraobserver and interobserver variability. A change of radial strain of more than 9.5% had a sensitivity of 83.9% and a specificity of 81.4% for viability, whereas a change of longitudinal strain of more than 14.6% gave a sensitivity of 86.7% and a specificity of 90.2%. 2D STI combined with adenosine stress echocardiography could provide a new and reliable method to identify myocardial viability. © 2012, Wiley Periodicals, Inc.

  20. A rocket engine design expert system

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth J.

    1989-01-01

    The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state-of-the-art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the hydrogen-oxygen coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One-dimensional equilibrium chemistry was employed in the energy release analysis of the combustion chamber and three-dimensional finite-difference analysis of the regenerative cooling channels was used to calculate the pressure drop along the channels and the coolant temperature as it exits the coolant circuit. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.

  1. Review of literature on the finite-element solution of the equations of two-dimensional surface-water flow in the horizontal plane

    USGS Publications Warehouse

    Lee, Jonathan K.; Froehlich, David C.

    1987-01-01

    Published literature on the application of the finite-element method to solving the equations of two-dimensional surface-water flow in the horizontal plane is reviewed in this report. The finite-element method is ideally suited to modeling two-dimensional flow over complex topography with spatially variable resistance. A two-dimensional finite-element surface-water flow model with depth and vertically averaged velocity components as dependent variables allows the user great flexibility in defining geometric features such as the boundaries of a water body, channels, islands, dikes, and embankments. The following topics are reviewed in this report: alternative formulations of the equations of two-dimensional surface-water flow in the horizontal plane; basic concepts of the finite-element method; discretization of the flow domain and representation of the dependent flow variables; treatment of boundary conditions; discretization of the time domain; methods for modeling bottom, surface, and lateral stresses; approaches to solving systems of nonlinear equations; techniques for solving systems of linear equations; finite-element alternatives to Galerkin's method of weighted residuals; techniques of model validation; and preparation of model input data. References are listed in the final chapter.

  2. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. It is often difficult to surmise which input parameters have the greatest impact on the model's prediction, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the greatest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.

  3. Three dimensional simulation of spatial and temporal variability of stratospheric hydrogen chloride

    NASA Technical Reports Server (NTRS)

    Kaye, Jack A.; Rood, Richard B.; Jackman, Charles H.; Allen, Dale J.; Larson, Edmund M.

    1989-01-01

    Spatial and temporal variability of atmospheric HCl columns are calculated for January 1979 using a three-dimensional chemistry-transport model designed to provide the best possible representation of stratospheric transport. Large spatial and temporal variability of the HCl columns is shown to be correlated with lower stratospheric potential vorticity and thus to be of dynamical origin. Systematic longitudinal structure is correlated with planetary wave structure. These results can help place spatially and temporally isolated column and profile measurements in a regional and/or global perspective.

  4. VALIDITY OF A TWO-DIMENSIONAL MODEL FOR VARIABLE-DENSITY HYDRODYNAMIC CIRCULATION

    EPA Science Inventory

    A three-dimensional model of temperatures and currents has been formulated to assist in the analysis and interpretation of the dynamics of stratified lakes. In this model, nonlinear eddy coefficients for viscosity and conductivities are included. A two-dimensional model (one vert...

  5. Multiple Attribute Group Decision-Making Methods Based on Trapezoidal Fuzzy Two-Dimensional Linguistic Partitioned Bonferroni Mean Aggregation Operators.

    PubMed

    Yin, Kedong; Yang, Benshuo; Li, Xuemei

    2018-01-24

    In this paper, we investigate multiple attribute group decision making (MAGDM) problems where decision makers represent their evaluation of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation, and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there are interrelationships among the attributes, we analyze the partitioned Bonferroni mean (PBM) operator in the trapezoidal fuzzy two-dimensional variable environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyze the impact of different parameters on the results of decision-making.
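
    For orientation, one common crisp form of the partitioned Bonferroni mean, on which linguistic extensions such as the TF2DLPBM are typically built, assumes the n attribute values a_1, …, a_n are split into d disjoint classes P_1, …, P_d (this is a generic textbook form, not necessarily the exact definition used in the paper):

        \mathrm{PBM}^{p,q}(a_1,\ldots,a_n) \;=\; \frac{1}{d}\sum_{h=1}^{d}\left( \frac{1}{|P_h|}\sum_{i \in P_h} a_i^{p}\left( \frac{1}{|P_h|-1}\sum_{\substack{j \in P_h \\ j \neq i}} a_j^{q} \right)\right)^{\frac{1}{p+q}}

    so that attributes interact, through the powers p and q, only with the other attributes in their own partition class.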

  6. Multiple Attribute Group Decision-Making Methods Based on Trapezoidal Fuzzy Two-Dimensional Linguistic Partitioned Bonferroni Mean Aggregation Operators

    PubMed Central

    Yin, Kedong; Yang, Benshuo

    2018-01-01

    In this paper, we investigate multiple attribute group decision making (MAGDM) problems where decision makers represent their evaluation of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation, and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there are interrelationships among the attributes, we analyze the partitioned Bonferroni mean (PBM) operator in the trapezoidal fuzzy two-dimensional variable environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyze the impact of different parameters on the results of decision-making. PMID:29364849

  7. A formal and data-based comparison of measures of motor-equivalent covariation.

    PubMed

    Verrel, Julius

    2011-09-15

    Different analysis methods have been developed for assessing motor-equivalent organization of movement variability. In the uncontrolled manifold (UCM) method, the structure of variability is analyzed by comparing goal-equivalent and non-goal-equivalent variability components at the level of elemental variables (e.g., joint angles). In contrast, in the covariation by randomization (CR) approach, motor-equivalent organization is assessed by comparing variability at the task level between empirical and decorrelated surrogate data. UCM effects can be due to both covariation among elemental variables and selective channeling of variability to elemental variables with low task sensitivity ("individual variation"), suggesting a link between the UCM and CR method. However, the precise relationship between the notion of covariation in the two approaches has not been analyzed in detail yet. Analysis of empirical and simulated data from a study on manual pointing shows that in general the two approaches are not equivalent, but the respective covariation measures are highly correlated (ρ > 0.7) for two proposed definitions of covariation in the UCM context. For one-dimensional task spaces, a formal comparison is possible and in fact the two notions of covariation are equivalent. In situations in which individual variation does not contribute to UCM effects, for which necessary and sufficient conditions are derived, this entails the equivalence of the UCM and CR analysis. Implications for the interpretation of UCM effects are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ochilov, S.; Alam, M. S.; Bal, A.

    2006-05-01

    The Fukunaga-Koontz Transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space where the feature classes have complementary eigenvectors. A dimensionality reduction technique based on this complementary eigenvector analysis can be described under two classes, desired class and background clutter, such that each basis function best represents one class while carrying the least amount of information from the second class. By selecting a few eigenvectors that are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces data size, it provides significant advantages for near-real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden via the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
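
    As an illustration of the shared-eigenvector idea behind the FKT, the following minimal NumPy sketch (synthetic data standing in for hyperspectral pixels; not the authors' implementation) whitens the summed class scatter and then ranks directions by how strongly they favor the desired class:

        import numpy as np

        # Toy stand-ins for hyperspectral pixel sets (rows = pixels, cols = bands).
        rng = np.random.default_rng(1)
        target  = rng.normal(size=(200, 30)) @ rng.normal(size=(30, 30))   # desired class
        clutter = rng.normal(size=(500, 30))                               # background

        def scatter(X):
            Xc = X - X.mean(axis=0)
            return Xc.T @ Xc / len(X)

        S1, S2 = scatter(target), scatter(clutter)

        # Whitening transform for the summed scatter S1 + S2.
        evals, evecs = np.linalg.eigh(S1 + S2)
        W = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12)))

        # In the whitened space the two class scatters share eigenvectors and their
        # eigenvalues sum to 1: large eigenvalues of W.T @ S1 @ W favor the target.
        lam, V = np.linalg.eigh(W.T @ S1 @ W)
        order = np.argsort(lam)[::-1]
        k = 5                                    # keep a few target-dominant directions
        F = W @ V[:, order[:k]]                  # reduced FKT basis (bands -> k features)
        reduced = target @ F                     # dimensionality-reduced target pixels
        print(reduced.shape)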

  9. Treatment-Related Morbidity in Prostate Cancer: A Comparison of 3-Dimensional Conformal Radiation Therapy With and Without Image Guidance Using Implanted Fiducial Markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Jasmeet, E-mail: drsingh.j@gmail.com; Greer, Peter B.; White, Martin A.

    Purpose: To estimate the prevalence of rectal and urinary dysfunctional symptoms using image guided radiation therapy (IGRT) with fiducials and magnetic resonance planning for prostate cancer. Methods and Materials: During the implementation stages of IGRT between September 2008 and March 2010, 367 consecutive patients were treated with prostatic irradiation using 3-dimensional conformal radiation therapy with and without IGRT (non-IGRT). In November 2010, these men were asked to report their bowel and bladder symptoms using a postal questionnaire. The proportions of patients with moderate to severe symptoms in these groups were compared using logistic regression models adjusted for tumor and treatment characteristic variables. Results: Of the 282 respondents, the 154 selected for IGRT had higher stage tumors, received higher prescribed doses, and had larger volumes of rectum receiving high dosage than did the 128 selected for non-IGRT. The follow-up duration was 8 to 26 months. Compared with the non-IGRT group, improvement was noted in all dysfunctional rectal symptoms using IGRT. In multivariable analyses, IGRT improved rectal pain (odds ratio [OR] 0.07 [0.009-0.7], P=.02), urgency (OR 0.27 [0.11-0.63], P<.01), diarrhea (OR 0.009 [0.02-0.35], P<.01), and change in bowel habits (OR 0.18 [0.06-0.52], P<.010). No correlation was observed between rectal symptom levels and dose-volume histogram data. Urinary dysfunctional symptoms were similar in both treatment groups. Conclusions: In comparison with men selected for non-IGRT, a significant reduction of bowel dysfunctional symptoms was confirmed in men selected for IGRT, even though they had larger volumes of rectum treated to higher doses.

  10. Three-dimensional benchmark for variable-density flow and transport simulation: matching semi-analytic stability modes for steady unstable convection in an inclined porous box

    USGS Publications Warehouse

    Voss, Clifford I.; Simmons, Craig T.; Robinson, Neville I.

    2010-01-01

    This benchmark for three-dimensional (3D) numerical simulators of variable-density groundwater flow and solute or energy transport consists of matching simulation results with the semi-analytical solution for the transition from one steady-state convective mode to another in a porous box. Previous experimental and analytical studies of natural convective flow in an inclined porous layer have shown that there are a variety of convective modes possible depending on system parameters, geometry and inclination. In particular, there is a well-defined transition from the helicoidal mode consisting of downslope longitudinal rolls superimposed upon an upslope unicellular roll to a mode consisting of purely an upslope unicellular roll. Three-dimensional benchmarks for variable-density simulators are currently (2009) lacking and comparison of simulation results with this transition locus provides an unambiguous means to test the ability of such simulators to represent steady-state unstable 3D variable-density physics.

  11. Nonlinear intrinsic variables and state reconstruction in multiscale simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dsilva, Carmeline J., E-mail: cdsilva@princeton.edu; Talmon, Ronen, E-mail: ronen.talmon@yale.edu; Coifman, Ronald R., E-mail: coifman@math.yale.edu

    2013-11-14

    Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.

  12. Nonlinear intrinsic variables and state reconstruction in multiscale simulations

    NASA Astrophysics Data System (ADS)

    Dsilva, Carmeline J.; Talmon, Ronen; Rabin, Neta; Coifman, Ronald R.; Kevrekidis, Ioannis G.

    2013-11-01

    Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.

  13. [Discrimination of varieties of brake fluid using visual-near infrared spectra].

    PubMed

    Jiang, Lu-lu; Tan, Li-hong; Qiu, Zheng-jun; Lu, Jiang-feng; He, Yong

    2008-06-01

    A new method was developed to rapidly discriminate brands of brake fluid by means of visible and near infrared spectroscopy. Five different brands of brake fluid were analyzed using a handheld near infrared spectrograph manufactured by ASD, and 60 samples were obtained from each brand of brake fluid. The sample data were pretreated using average smoothing and the standard normal variate method, and then analyzed using principal component analysis (PCA). A two-dimensional plot was drawn based on the first and second principal components, and the plot indicated that the clustering characteristics of the different brake fluids are distinct. The first 6 principal components were taken as input variables, and the brand of brake fluid as the output variable, to build the discrimination model by the stepwise discriminant analysis method. Two hundred twenty-five randomly selected samples were used to create the model, and the remaining 75 samples to verify it. The result showed that the discrimination rate was 94.67%, indicating that the method proposed in this paper performs well in classification and discrimination. It provides a new way to rapidly discriminate different brands of brake fluid.
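
    A minimal sketch of the described workflow — smoothing, standard normal variate correction, PCA scores, and a discriminant model — using scikit-learn, with random numbers standing in for the spectra and ordinary linear discriminant analysis standing in for stepwise discriminant analysis:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split

        # spectra: n_samples x n_wavelengths Vis-NIR matrix, y: brand labels (5 x 60).
        rng = np.random.default_rng(2)
        spectra = rng.normal(size=(300, 512))
        y = np.repeat(np.arange(5), 60)

        def snv(X):
            # Standard normal variate: center and scale each spectrum individually.
            return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

        def smooth(X, w=5):
            # Simple moving-average smoothing along the wavelength axis.
            kernel = np.ones(w) / w
            return np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 1, X)

        X = snv(smooth(spectra))
        scores = PCA(n_components=6).fit_transform(X)    # first 6 PCs as model inputs

        X_tr, X_te, y_tr, y_te = train_test_split(scores, y, train_size=225, random_state=0)
        clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
        print("hold-out accuracy:", clf.score(X_te, y_te))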

  14. Preventing Data Ambiguity in Infectious Diseases with Four-Dimensional and Personalized Evaluations

    PubMed Central

    Iandiorio, Michelle J.; Fair, Jeanne M.; Chatzipanagiotou, Stylianos; Ioannidis, Anastasios; Trikka-Graphakos, Eleftheria; Charalampaki, Nikoletta; Sereti, Christina; Tegos, George P.; Hoogesteijn, Almira L.; Rivas, Ariel L.

    2016-01-01

    Background Diagnostic errors can occur, in infectious diseases, when anti-microbial immune responses involve several temporal scales. When responses span from nanosecond to week and larger temporal scales, any pre-selected temporal scale is likely to miss some (faster or slower) responses. Hoping to prevent diagnostic errors, a pilot study was conducted to evaluate a four-dimensional (4D) method that captures the complexity and dynamics of infectious diseases. Methods Leukocyte-microbial-temporal data were explored in canine and human (bacterial and/or viral) infections, with: (i) a non-structured approach, which measures leukocytes or microbes in isolation; and (ii) a structured method that assesses numerous combinations of interacting variables. Four alternatives of the structured method were tested: (i) a noise-reduction oriented version, which generates a single (one data point-wide) line of observations; (ii) a version that measures complex, three-dimensional (3D) data interactions; (iii) a non-numerical version that displays temporal data directionality (arrows that connect pairs of consecutive observations); and (iv) a full 4D (single line-, complexity-, directionality-based) version. Results In all studies, the non-structured approach revealed non-interpretable (ambiguous) data: observations numerically similar expressed different biological conditions, such as recovery and lack of recovery from infections. Ambiguity was also found when the data were structured as single lines. In contrast, two or more data subsets were distinguished and ambiguity was avoided when the data were structured as complex, 3D, single lines and, in addition, temporal data directionality was determined. The 4D method detected, even within one day, changes in immune profiles that occurred after antibiotics were prescribed. Conclusions Infectious disease data may be ambiguous. Four-dimensional methods may prevent ambiguity, providing earlier, in vivo, dynamic, complex, and personalized information that facilitates both diagnostics and selection or evaluation of anti-microbial therapies. PMID:27411058

  15. Selection of optimal complexity for ENSO-EMR model by minimum description length principle

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.

    2012-12-01

    One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for construction of the evolution operator model. Since we usually deal with very high-dimensional behavior, we are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selection of the optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. Actually, finding the optimal projection is a significant part of model selection because, on the one hand, the transformation of data to some phase variable vector can be considered a required component of the model. On the other hand, such an optimization of the phase space makes sense only in relation to the parametrization of the model we use, i.e. the representation of the evolution operator, so we should find an optimal structure of the model together with the phase variable vector. In this paper we propose to use the principle of minimum description length (Molkov et al., 2009) for selecting models of optimal complexity. The proposed method is applied to optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al., 2005; Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific, and has the form of multi-level stochastic differential equations (SDE) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the order of the polynomial and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E, Vol. 80, P. 046207, 2009. Kravtsov S., Kondrashov D., Ghil M., 2005: Multilevel regression modeling of nonlinear processes: Derivation and applications to climatic variability. J. Climate, 18 (21): 4404-4424. Kondrashov D., Kravtsov S., Robertson A. W., Ghil M., 2005: A hierarchy of data-based ENSO models. J. Climate, 18, 4425-4444.

  16. LIP: The Livermore Interpolation Package, Version 1.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-07-06

    This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
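
    LIP itself is an ANSI C library; purely for orientation, the following Python/SciPy sketch reproduces the two operations described above — piecewise-bilinear interpolation on a rectangular (x, y) mesh and a crude inverse interpolation in the second independent variable — on a synthetic table (this is not LIP's API):

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator
        from scipy.optimize import brentq

        # Rectangular (x, y) mesh and tabulated values f(x, y), standing in for the
        # kind of two-dimensional table LIP works with.
        x = np.linspace(0.0, 1.0, 21)
        y = np.linspace(0.0, 2.0, 41)
        X, Y = np.meshgrid(x, y, indexing="ij")
        F = np.exp(-X) * np.sin(np.pi * Y)

        # Piecewise-bilinear interpolation object, evaluated at arbitrary points.
        interp = RegularGridInterpolator((x, y), F, method="linear")
        pts = np.array([[0.13, 0.37], [0.82, 1.91]])
        print(interp(pts))

        # Crude inverse interpolation in the second variable: given x0 and a target
        # value f0, find y such that f(x0, y) = f0 by bracketing and root finding.
        x0, f0 = 0.25, 0.5
        g = lambda yy: interp([[x0, yy]]).item() - f0
        print("y with f(x0, y) = f0:", brentq(g, y[0], 0.5))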

  17. LIP: The Livermore Interpolation Package, Version 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-01-04

    This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the ''LEOS Interpolation Package''. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a ''LIP interpolation object'' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as ''partial setup'' options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.

  18. Outcome-Dependent Sampling with Interval-Censored Failure Time Data

    PubMed Central

    Zhou, Qingning; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    Summary: Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computational difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method work well for practical situations and are more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration.

  19. Dependence of Sum Frequency Generation (SFG) Spectral Features on the Mesoscale Arrangement of SFG-Active Crystalline Domains Interspersed in SFG-Inactive Matrix: A Case Study with Cellulose in Uniaxially Aligned Control Samples and Alkali-Treated Secondary Cell Walls of Plants

    DOE PAGES

    Makarem, Mohamadamin; Sawada, Daisuke; O'Neill, Hugh M.; ...

    2017-04-21

    Vibrational sum frequency generation (SFG) spectroscopy can selectively detect not only molecules at two-dimensional (2D) interfaces but also noncentrosymmetric domains interspersed in amorphous three-dimensional (3D) matrixes. However, the SFG analysis of 3D systems is more complicated than that of 2D systems because more variables are involved. One such variable is the distance between SFG-active domains in SFG-inactive matrixes. In this study, we fabricated control samples in which SFG-active cellulose crystals were uniaxially aligned in an amorphous matrix. Assuming uniform separation distances between cellulose crystals, the relative intensities of alkyl (CH) and hydroxyl (OH) SFG peaks of cellulose could be related to the intercrystallite distance. The experimentally measured CH/OH intensity ratio as a function of the intercrystallite distance could be explained reasonably well with a model constructed using the theoretically calculated hyperpolarizabilities of cellulose and the symmetry cancellation principle of dipoles antiparallel to each other. In conclusion, this comparison revealed physical insights into the intercrystallite distance dependence of the CH/OH SFG intensity ratio of cellulose, which can be used to interpret the SFG spectral features of plant cell walls in terms of mesoscale packing of cellulose microfibrils.

  20. Restoration of four-dimensional diffeomorphism covariance in canonical general relativity: An intrinsic Hamilton-Jacobi approach

    NASA Astrophysics Data System (ADS)

    Salisbury, Donald; Renn, Jürgen; Sundermeyer, Kurt

    2016-02-01

    Classical background independence is reflected in Lagrangian general relativity through covariance under the full diffeomorphism group. We show how this independence can be maintained in a Hamilton-Jacobi approach that does not accord special privilege to any geometric structure. Intrinsic space-time curvature-based coordinates grant equal status to all geometric backgrounds. They play an essential role as a starting point for inequivalent semiclassical quantizations. The scheme calls into question Wheeler’s geometrodynamical approach and the associated Wheeler-DeWitt equation in which 3-metrics are featured geometrical objects. The formalism deals with variables that are manifestly invariant under the full diffeomorphism group. Yet, perhaps paradoxically, the liberty in selecting intrinsic coordinates is precisely as broad as is the original diffeomorphism freedom. We show how various ideas from the past five decades concerning the true degrees of freedom of general relativity can be interpreted in light of this new constrained Hamiltonian description. In particular, we show how the Kuchař multi-fingered time approach can be understood as a means of introducing full four-dimensional diffeomorphism invariants. Every choice of new phase space variables yields new Einstein-Hamilton-Jacobi constraining relations, and corresponding intrinsic Schrödinger equations. We show how to implement this freedom by canonical transformation of the intrinsic Hamiltonian. We also reinterpret and rectify significant work by Dittrich on the construction of “Dirac observables.”

  1. Computational Work to Support FAP/SRW Variable-Speed Power-Turbine Development

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2012-01-01

    The purpose of this report is to document the work done to enable a NASA CFD code to model transition on a blade, specifically to down-select a transition model that allows the flow in a Variable-Speed Power-Turbine (VSPT) to be simulated accurately. The modeling is ultimately intended to account also for blade-row interactions and their effect on transition, and therefore to account accurately for losses. The present work is limited to steady flows. The low-Reynolds-number k-omega model of Wilcox, and a modified version of the same, are used to model transition on blades for which pressure and heat transfer were measured experimentally. It is shown that the k-omega model and its modified variant fail to simulate the transition with any degree of accuracy. A case is therefore made for more accurate transition models. Three-equation models based on the work of Mayle on laminar kinetic energy were explored, and the Walters and Leylek model, which was thought to be in a more mature state of development, is introduced and implemented in the Glenn-HT code. Two-dimensional flat-plate results and three-dimensional results for flow over turbine blades, with the resulting heat transfer and its transitional behavior, are reported. It is shown that the transition simulation is much improved over the baseline k-omega model.

  2. Analysis of chaos in high-dimensional wind power system.

    PubMed

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping

    2018-01-01

    A comprehensive analysis of the chaos of a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and the chaotic parameter ranges are obtained. The existence of chaos is confirmed by calculation and analysis of the Lyapunov exponents of all state variables and of the state variable sequence diagram. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances reach a certain degree.
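
    The Lyapunov-exponent test mentioned above can be sketched numerically; since the 11-dimensional wind power model is not reproduced here, the Lorenz system is used as an illustrative stand-in (assumption: any smooth ODE right-hand side could be substituted):

        import numpy as np
        from scipy.integrate import solve_ivp

        # Lorenz system as a stand-in for the chaotic model under study.
        def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = s
            return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

        def largest_lyapunov(f, s0, dt=0.5, steps=1000, d0=1e-8):
            # Two-trajectory (Benettin-type) estimate: integrate a reference and a
            # perturbed trajectory, logging the growth of their separation and
            # renormalizing it after every interval dt.
            s = np.asarray(s0, dtype=float)
            sp = s + d0
            total = 0.0
            for _ in range(steps):
                s = solve_ivp(f, (0, dt), s, rtol=1e-9, atol=1e-9).y[:, -1]
                sp = solve_ivp(f, (0, dt), sp, rtol=1e-9, atol=1e-9).y[:, -1]
                d = np.linalg.norm(sp - s)
                total += np.log(d / d0)
                sp = s + (sp - s) * (d0 / d)     # renormalize the separation
            return total / (steps * dt)

        print("largest Lyapunov exponent ~", largest_lyapunov(lorenz, [1.0, 1.0, 1.0]))
        # A clearly positive value (about 0.9 here) is the signature of chaos;
        # chaotic parameter ranges follow from repeating this over a parameter grid.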

  3. Dimensionality Assessment of Ordered Polytomous Items with Parallel Analysis

    ERIC Educational Resources Information Center

    Timmerman, Marieke E.; Lorenzo-Seva, Urbano

    2011-01-01

    Parallel analysis (PA) is an often-recommended approach for assessment of the dimensionality of a variable set. PA is known in different variants, which may yield different dimensionality indications. In this article, the authors considered the most appropriate PA procedure to assess the number of common factors underlying ordered polytomously…

  4. Improving permafrost distribution modelling using feature selection algorithms

    NASA Astrophysics Data System (ADS)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps to simplify the number of factors required and improves the knowledge of the adopted features and their relation with the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as permafrost training data. The FS algorithms used indicated which variables appeared less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its overall operation. It operates by constructing a large collection of decorrelated classification trees, and then predicts the permafrost occurrence through a majority vote. With the so-called out-of-bag (OOB) error estimate, the classification of permafrost data can be validated and the contribution of each predictor assessed. The performance of the compared permafrost distribution models (computed on independent testing sets) increased with the application of FS algorithms to the original dataset, as irrelevant or redundant variables were removed. As a consequence, the process provided faster and more cost-effective predictors and a better understanding of the underlying structures residing in permafrost data. Our work demonstrates the usefulness of a feature selection step prior to applying a machine learning algorithm. In fact, permafrost predictors could be ranked not only based on their heuristic and subjective importance (expert knowledge), but also based on their statistical relevance in relation to the permafrost distribution.
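
    A minimal sketch of the embedded RF variant described above (scikit-learn; synthetic stand-ins for the DEM/climate predictors and the permafrost evidence), using the OOB estimate to check that a reduced predictor set preserves performance:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # X: grid cells x predictors (altitude, slope, aspect, radiation, ...),
        # y: permafrost evidence (1 = presence, 0 = absence). Random stand-ins here.
        rng = np.random.default_rng(3)
        X = rng.normal(size=(1000, 22))
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

        rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
        rf.fit(X, y)

        print("OOB accuracy:", round(rf.oob_score_, 3))
        ranking = np.argsort(rf.feature_importances_)[::-1]
        print("predictors ranked by importance:", ranking[:10])

        # A simple embedded FS step: refit using only the top-k predictors and
        # check that the OOB estimate does not degrade.
        top = ranking[:8]
        rf_small = RandomForestClassifier(n_estimators=500, oob_score=True,
                                          random_state=0).fit(X[:, top], y)
        print("OOB accuracy with reduced predictor set:", round(rf_small.oob_score_, 3))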

  5. Selection of molecular descriptors with artificial intelligence for the understanding of HIV-1 protease peptidomimetic inhibitors-activity.

    PubMed

    Sirois, S; Tsoukas, C M; Chou, Kuo-Chen; Wei, Dongqing; Boucher, C; Hatzakis, G E

    2005-03-01

    Quantitative Structure Activity Relationship (QSAR) techniques are used routinely by computational chemists in drug discovery and development to analyze datasets of compounds. Quantitative numerical methods like Partial Least Squares (PLS) and Artificial Neural Networks (ANN) have been used in QSAR to establish correlations between molecular properties and bioactivity. However, ANN may be advantageous over PLS because it considers the interrelations of the modeled variables. This study focused on HIV-1 Protease (HIV-1 Pr) inhibitors belonging to the peptidomimetic class of compounds. The main objective was to select molecular descriptors with the best predictive value for antiviral potency (Ki). PLS and ANN were used to predict the Ki activity of HIV-1 Pr inhibitors and the results were compared. To address the issue of dimensionality reduction, Genetic Algorithms (GA) were used for variable selection and their performance was compared against that of ANN. Finally, the structure of the optimum ANN achieving the highest Pearson's R coefficient was determined. On the basis of Pearson's R, PLS and ANN were compared to determine which exhibited maximum performance. Training and validation of the models were performed on 15 random split sets of the master dataset, which consisted of 231 compounds. For each compound, 192 molecular descriptors were considered. The molecular structures and inhibition constants (Ki) were taken from the NIAID database. Study findings suggested that non-covalent interactions such as hydrophobicity, shape and hydrogen bonding describe the antiviral activity of the HIV-1 Pr compounds well. The significance of lipophilicity and its relationship to HIV-1 associated hyperlipidemia and lipodystrophy syndrome warrant further investigation.
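
    A minimal sketch of the PLS-versus-ANN comparison on repeated random splits, scored by Pearson's R (scikit-learn and SciPy; synthetic descriptors stand in for the NIAID-derived dataset, and the MLP below is only a generic stand-in for the ANN architecture used in the study):

        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler

        # X: compounds x molecular descriptors, y: Ki-derived activity. Synthetic
        # stand-ins sized like the 231 x 192 dataset described above.
        rng = np.random.default_rng(4)
        X = rng.normal(size=(231, 192))
        y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=231)

        def split_score(model, seed):
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                      random_state=seed)
            sc = StandardScaler().fit(X_tr)
            model.fit(sc.transform(X_tr), y_tr)
            pred = np.ravel(model.predict(sc.transform(X_te)))
            return pearsonr(pred, y_te)[0]

        # Average Pearson's R over 15 random splits, mirroring the study design.
        for name, model in [("PLS", PLSRegression(n_components=8)),
                            ("ANN", MLPRegressor(hidden_layer_sizes=(32,),
                                                 max_iter=2000, random_state=0))]:
            r = np.mean([split_score(model, s) for s in range(15)])
            print(name, "mean Pearson R:", round(r, 3))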

  6. Model-Free Conditional Independence Feature Screening For Ultrahigh Dimensional Data.

    PubMed

    Wang, Luheng; Liu, Jingyuan; Li, Yong; Li, Runze

    2017-03-01

    Feature screening plays an important role in ultrahigh dimensional data analysis. This paper is concerned with conditional feature screening when one is interested in detecting the association between the response and ultrahigh dimensional predictors (e.g., genetic makers) given a low-dimensional exposure variable (such as clinical variables or environmental variables). To this end, we first propose a new index to measure conditional independence, and further develop a conditional screening procedure based on the newly proposed index. We systematically study the theoretical property of the proposed procedure and establish the sure screening and ranking consistency properties under some very mild conditions. The newly proposed screening procedure enjoys some appealing properties. (a) It is model-free in that its implementation does not require a specification on the model structure; (b) it is robust to heavy-tailed distributions or outliers in both directions of response and predictors; and (c) it can deal with both feature screening and the conditional screening in a unified way. We study the finite sample performance of the proposed procedure by Monte Carlo simulations and further illustrate the proposed method through two real data examples.

  7. The extraction of simple relationships in growth factor-specific multiple-input and multiple-output systems in cell-fate decisions by backward elimination PLS regression.

    PubMed

    Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya

    2013-01-01

    Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple-inputs such as MAPKs and CREB regulate multiple-outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
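    A minimal sketch of the backward-elimination idea under stated assumptions (synthetic input/output matrices, a cross-validated R2 criterion instead of the authors' exact stopping rule): repeatedly drop the input variable whose removal hurts the PLS fit the least until a target number of inputs remains:

```python
# Sketch of backward elimination wrapped around PLS regression (not the study's code).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def backward_eliminate(X, Y, n_keep=5, n_components=2):
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        scores = []
        for j in keep:
            trial = [k for k in keep if k != j]
            pls = PLSRegression(n_components=min(n_components, len(trial)))
            scores.append(cross_val_score(pls, X[:, trial], Y, cv=5, scoring="r2").mean())
        keep.remove(keep[int(np.argmax(scores))])   # drop the least informative input
    return keep

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 60))                      # e.g., 60 signaling input variables
Y = X[:, [0, 7, 30]] @ rng.normal(size=(3, 4)) + rng.normal(scale=0.2, size=(120, 4))
print("retained inputs:", backward_eliminate(X, Y, n_keep=5))
```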

  8. High dimensional model representation method for fuzzy structural dynamics

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
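    For orientation, the expansion referred to here is the standard HDMR decomposition of a response over N input variables, which is truncated after the low-order terms when higher-order variable correlations are weak:

```latex
f(\mathbf{x}) \;=\; f_0
  \;+\; \sum_{i=1}^{N} f_i(x_i)
  \;+\; \sum_{1 \le i < j \le N} f_{ij}(x_i, x_j)
  \;+\; \cdots
  \;+\; f_{12\cdots N}(x_1, x_2, \ldots, x_N)
```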

  9. Two-Dimensional Thermal Boundary Layer Corrections for Convective Heat Flux Gauges

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Haddad, George

    2007-01-01

    This work presents a CFD (Computational Fluid Dynamics) study of two-dimensional thermal boundary layer correction factors for convective heat flux gauges mounted in a flat plate subjected to a surface temperature discontinuity, with variable properties taken into account. A two-equation k-omega turbulence model is considered. Results are obtained for a wide range of Mach numbers (1 to 5), gauge radius ratio, and wall temperature discontinuity. Comparisons are made for correction factors with constant properties and variable properties. It is shown that the variable-property effects on the heat flux correction factors become significant

  10. Optical frequency selective surface design using a GPU accelerated finite element boundary integral method

    NASA Astrophysics Data System (ADS)

    Ashbach, Jason A.

    Periodic metallodielectric frequency selective surface (FSS) designs have historically seen widespread use in the microwave and radio frequency spectra. By scaling the dimensions of an FSS unit cell for use in a nano-fabrication process, these concepts have recently been adapted for use in optical applications as well. While early optical designs have been limited to well-understood geometries or optimized pixelated screens, nano-fabrication, lithographic and interconnect technology has progressed to a point where it is possible to fabricate metallic screens of arbitrary geometries featuring curvilinear or even three-dimensional characteristics that are only tens of nanometers wide. In order to design an FSS featuring such characteristics, it is important to have a robust numerical solver that features triangular elements in purely two-dimensional geometries and prismatic or tetrahedral elements in three-dimensional geometries. In this dissertation, a periodic finite element method code has been developed which features prismatic elements whose top and bottom boundaries are truncated by numerical integration of the boundary integral as opposed to an approximate representation found in a perfectly matched layer. However, since no exact solution exists for the calculation of triangular elements in a boundary integral, this process can be time consuming. To address this, these calculations were optimized for parallelization such that they may be done on a graphics processor, which provides a large increase in computational speed. Additionally, a simple geometrical representation using a Bezier surface is presented which provides generality with few variables. With a fast numerical solver coupled with a low-variable geometric representation, a heuristic optimization algorithm has been used to develop several optical designs such as an absorber, a circular polarization filter, a transparent conductive surface and an enhanced, optical modulator.

  11. A novel framework to alleviate the sparsity problem in context-aware recommender systems

    NASA Astrophysics Data System (ADS)

    Yu, Penghua; Lin, Lanfen; Wang, Jing

    2017-04-01

    Recommender systems have become indispensable for services in the era of big data. To improve accuracy and satisfaction, context-aware recommender systems (CARSs) attempt to incorporate contextual information into recommendations. Typically, valid and influential contexts are determined in advance by domain experts or feature selection approaches. Most studies have focused on utilizing the unitary context due to the differences between various contexts. Meanwhile, multi-dimensional contexts will aggravate the sparsity problem, which means that the user preference matrix would become extremely sparse. Consequently, there are not enough or even no preferences in most multi-dimensional conditions. In this paper, we propose a novel framework to alleviate the sparsity issue for CARSs, especially when multi-dimensional contextual variables are adopted. Motivated by the intuition that the overall preferences tend to show similarities among specific groups of users and conditions, we first explore to construct one contextual profile for each contextual condition. In order to further identify those user and context subgroups automatically and simultaneously, we apply a co-clustering algorithm. Furthermore, we expand user preferences in a given contextual condition with the identified user and context clusters. Finally, we perform recommendations based on expanded preferences. Extensive experiments demonstrate the effectiveness of the proposed framework.

  12. Some applications of the multi-dimensional fractional order for the Riemann-Liouville derivative

    NASA Astrophysics Data System (ADS)

    Ahmood, Wasan Ajeel; Kiliçman, Adem

    2017-01-01

    In this paper, the aim is to study a theorem for the one-dimensional space-time fractional derivative, to generalize, via a table of fractional Laplace transforms of some elementary functions, results for the one-dimensional case so that they are valid for the multi-dimensional fractional Laplace transform, and to give the definition of the multi-dimensional fractional Laplace transform. This study includes dedicating the one-dimensional fractional Laplace transform to functions of only one independent variable, and developing the one-dimensional fractional Laplace transform into the multi-dimensional fractional Laplace transform based on the modified Riemann-Liouville derivative.
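    For orientation only, one common form of the one-dimensional fractional Laplace transform in the modified Riemann-Liouville (Jumarie-type) calculus, for 0 < α ≤ 1, is

```latex
L_{\alpha}\{f(t)\}(s) \;=\; \int_{0}^{\infty} E_{\alpha}\!\left(-s^{\alpha} t^{\alpha}\right) f(t)\,(dt)^{\alpha},
```

    where E_α is the Mittag-Leffler function; a multi-dimensional version is then built by applying the transform in each independent variable. The paper's exact definition may differ in normalization.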

  13. Comparison of three-dimensional lower extremity running kinematics of young adult and elderly runners.

    PubMed

    Fukuchi, Reginaldo K; Duarte, Marcos

    2008-11-01

    The objective of this study was to compare the three-dimensional lower extremity running kinematics of young adult runners and elderly runners. Seventeen elderly adults (age 67-73 years) and 17 young adults (age 26-36 years) ran at 3.1 m/s on a treadmill while the movements of the lower extremity during the stance phase were recorded at 120 Hz using three-dimensional video. The three-dimensional kinematics of the lower limb segments and of the ankle and knee joints were determined, and selected variables were calculated to describe the movement. Our results suggest that elderly runners have a different movement pattern of the lower extremity from that of young adults during the stance phase of running. Compared with the young adults, the elderly runners had a substantial decrease in stride length (1.97 vs. 2.23 m; P = 0.01), an increase in stride frequency (1.58 vs. 1.37 Hz; P = 0.002), less knee flexion/extension range of motion (26 vs. 33 degrees; P = 0.002), less tibial internal/external rotation range of motion (9 vs. 12 degrees; P < 0.001), larger external rotation angle of the foot segment (toe-out angle) at the heel strike (-5.8 vs. -1.0 degrees; P = 0.009), and greater asynchronies between the ankle and knee movements during running. These results may help to explain why elderly individuals could be more susceptible to running-related injuries.

  14. User's manual for XTRAN2L (version 1.2): A program for solving the general-frequency unsteady transonic small-disturbance equation

    NASA Technical Reports Server (NTRS)

    Seidel, D. A.; Batina, J. T.

    1986-01-01

    The development, use and operation of the XTRAN2L program that solves the two dimensional unsteady transonic small disturbance potential equation are described. The XTRAN2L program is used to calculate steady and unsteady transonic flow fields about airfoils and is capable of performing self contained transonic flutter calculations. Operation of the XTRAN2L code is described, and tables defining all input variables, including default values, are presented. Sample cases that use various program options are shown to illustrate operation of XTRAN2L. Computer listings containing input and selected output are included as an aid to the user.

  15. Locating CVBEM collocation points for steady state heat transfer problems

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.

  16. An assessment of support vector machines for land cover classification

    USGS Publications Warehouse

    Huang, C.; Davis, L.S.; Townshend, J.R.G.

    2002-01-01

    The support vector machine (SVM) is a group of theoretically superior machine learning algorithms. It was found competitive with the best available machine learning algorithms in classifying high-dimensional data sets. This paper gives an introduction to the theoretical development of the SVM and an experimental evaluation of its accuracy, stability and training speed in deriving land cover classifications from satellite images. The SVM was compared to three other popular classifiers, including the maximum likelihood classifier (MLC), neural network classifiers (NNC) and decision tree classifiers (DTC). The impacts of kernel configuration on the performance of the SVM and of the selection of training data and input variables on the four classifiers were also evaluated in this experiment.
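    As a schematic of the comparison design only (a built-in digit dataset stands in for the satellite imagery, and the hyperparameters are arbitrary), an SVM, a neural network and a decision tree can be compared by cross-validated accuracy:

```python
# Illustrative classifier comparison by cross-validated accuracy (not the paper's data).
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)           # stand-in for multispectral pixel samples
models = {
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10)),
    "Neural network":   make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000)),
    "Decision tree":    DecisionTreeClassifier(),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```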

  17. Integrative Exploratory Analysis of Two or More Genomic Datasets.

    PubMed

    Meng, Chen; Culhane, Aedin

    2016-01-01

    Exploratory analysis is an essential step in the analysis of high throughput data. Multivariate approaches such as correspondence analysis (CA), principal component analysis, and multidimensional scaling are widely used in the exploratory analysis of a single dataset. Modern biological studies often assay multiple types of biological molecules (e.g., mRNA, protein, phosphoproteins) on the same set of biological samples, thereby creating multiple different types of omics data or multiassay data. Integrative exploratory analysis of these multiple omics data is required to leverage the potential of multiple omics studies. In this chapter, we describe the application of co-inertia analysis (CIA; for analyzing two datasets) and multiple co-inertia analysis (MCIA; for three or more datasets) to address this problem. These methods are powerful yet simple multivariate approaches that represent samples using a lower number of variables, allowing easier identification of the correlated structure within and between multiple high dimensional datasets. Graphical representations can be employed for this purpose. In addition, the methods simultaneously project samples and variables (genes, proteins) onto the same lower dimensional space, so the most variant variables from each dataset can be selected and associated with samples, which can be further used to facilitate biological interpretation and pathway analysis. We applied CIA to explore the concordance between mRNA and protein expression in a panel of 60 tumor cell lines from the National Cancer Institute. In the same 60 cell lines, we used MCIA to perform a cross-platform comparison of mRNA gene expression profiles obtained on four different microarray platforms. Last, as an example of integrative analysis of multiassay or multi-omics data we analyzed transcriptomic, proteomic, and phosphoproteomic data from pluripotent (iPS) and embryonic stem (ES) cell lines.

  18. Aggression, Suicidality, and Intermittent Explosive Disorder: Serotonergic Correlates in Personality Disorder and Healthy Control Subjects

    PubMed Central

    Coccaro, Emil F; Lee, Royce; Kavoussi, Richard J

    2010-01-01

    Central serotonergic (5-HT) activity has long been implicated in the regulation of impulsive aggressive behavior. This study was performed to use a highly selective agent for 5-HT (d-Fenfluramine, d-FEN) in a large group of human subjects to further explore this relationship dimensionally and categorically. One hundred and fifty subjects (100 with personality disorder, PD, and 50 healthy volunteer controls, HV) underwent d-FEN challenge studies. Residual peak delta prolactin (ΔPRL[d-FEN]-R; ie, after the removal of potentially confounding variables) was used as the primary 5-HT response variable. Composite measures of aggression and impulsivity were used as dimensional measures, and history of suicidal/self-injurious behavior as well as the presence of intermittent explosive disorder (IED) were used as categorical variables. ΔPRL[d-FEN]-R responses correlated inversely with composite aggression, but not composite impulsivity, in all subjects and in males and females examined separately. The correlation with composite aggression was strongest in male PD subjects. ΔPRL[d-FEN]-R values were reduced in PD subjects with a history of suicidal behavior but not self-injurious behavior. ΔPRL[d-FEN]-R values were also reduced in patients meeting Research Criteria for IED. Physiologic responses to 5-HT stimulation are reduced as a function of aggression (but not generalized impulsivity) in human subjects. The same is true for personality disordered subjects with a history of suicidal, but not self-injurious, behavior and for subjects with a diagnosis of IED by research criteria. These data have particular relevance to the notion of impulsive aggression and the biological validity of IED. PMID:19776731

  19. Insights on beer volatile profile: Optimization of solid-phase microextraction procedure taking advantage of the comprehensive two-dimensional gas chromatography structured separation.

    PubMed

    Martins, Cátia; Brandão, Tiago; Almeida, Adelaide; Rocha, Sílvia M

    2015-06-01

    The aroma profile of beer is crucial for its quality and consumer acceptance, which is modulated by a network of variables. The main goal of this study was to optimize solid-phase microextraction experimental parameters (fiber coating, extraction temperature, and time), taking advantage of the comprehensive two-dimensional gas chromatography structured separation. As far as we know, it is the first time that this approach was applied to the untargeted and comprehensive study of the beer volatile profile. Decarbonation is a critical sample preparation step, and two conditions were tested, static and under ultrasonic treatment; the static condition was selected. Considering the conditions that promoted the highest extraction efficiency, the following parameters were selected: poly(dimethylsiloxane)/divinylbenzene fiber coating, at 40ºC, using 10 min of pre-equilibrium followed by 30 min of extraction. Around 700-800 compounds per sample were detected, corresponding to the beer volatile profile. An exploratory application was performed with commercial beers, using a set of 32 compounds with reported impact on beer aroma, in which different patterns can be observed through the structured chromatogram. In summary, the obtained results emphasize the potential of this methodology to allow an in-depth study of volatile molecular composition of beer. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Optimal dimensionality reduction of complex dynamics: the chess game as diffusion on a free-energy landscape.

    PubMed

    Krivov, Sergei V

    2011-07-01

    Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game--the archetype of complex dynamics. A variable that provides complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.

  1. Three dimensional elements with Lagrange multipliers for the modified couple stress theory

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Rok; Lee, Byung-Chai

    2018-07-01

    Three dimensional mixed elements for the modified couple stress theory are proposed. The C1 continuity for the displacement field, which is required because of the curvature term in the variational form of the theory, is satisfied weakly by introducing a supplementary rotation as an independent variable and constraining the relation between the rotation and the displacement with a Lagrange multiplier vector. An additional constraint about the deviatoric curvature is also considered for three dimensional problems. Weak forms with one constraint and two constraints are derived, and four elements satisfying convergence criteria are developed by applying different approximations to each field of independent variables. The elements pass a patch test for three dimensional problems. Numerical examples show that the additional constraint could be considered essential for the three dimensional elements, and one of the elements is recommended for practical applications based on a comparison of the performance of the elements. In addition, all the proposed elements can represent the size effect well.

  2. Optimal dimensionality reduction of complex dynamics: The chess game as diffusion on a free-energy landscape

    NASA Astrophysics Data System (ADS)

    Krivov, Sergei V.

    2011-07-01

    Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game—the archetype of complex dynamics. A variable that provides complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.

  3. Path analysis of the energy density of wood in eucalyptus clones.

    PubMed

    Couto, A M; Teodoro, P E; Trugilho, P F

    2017-03-16

    Path analysis has been used for establishing selection criteria in genetic breeding programs for several crops. However, it has not been used in eucalyptus breeding programs yet. In the present study, we aimed to identify the wood technology traits that could be used as the criteria for direct and indirect selection of eucalyptus genotypes with high energy density of wood. Twenty-four eucalyptus clones were evaluated in a completely randomized design with five replications. The following traits were assessed: basic wood density, total extractives, lignin content, ash content, nitrogen content, carbon content, hydrogen content, sulfur content, oxygen content, higher calorific power, holocellulose, and energy density. After verifying the variability of all evaluated traits among the clones, a two-dimensional correlation network was used to determine the phenotypic patterns among them. The obtained coefficient of determination (0.94) was high relative to the effect of the residual variable, and the model explained well the genetic effects related to the variation observed in the energy density of wood across the eucalyptus clones. However, for future studies, we recommend evaluating other traits, especially the morphological traits, because of the greater ease in their measurement. Selecting clones with high basic density is the most promising strategy for eucalyptus breeding programs that aim to increase the energy density of wood because of its high heritability and magnitude of the cause-and-effect relationship with this trait.

  4. Comparative study of feature selection with ensemble learning using SOM variants

    NASA Astrophysics Data System (ADS)

    Filali, Ameni; Jlassi, Chiraz; Arous, Najet

    2017-03-01

    Ensemble learning has improved the stability and accuracy of clustering, but its runtime prevents it from scaling up to real-world applications. This study deals with the problem of selecting a subset of the most pertinent features for every cluster from a dataset. The proposed method is another extension of the Random Forests approach, using self-organizing map (SOM) variants for unlabeled data, that estimates the out-of-bag feature importance from a set of partitions. Every partition is created using a different bootstrap sample and a random subset of the features. We then show that the internal estimates used to measure variable pertinence in Random Forests are also applicable to feature selection in unsupervised learning. This approach aims at dimensionality reduction, visualization and cluster characterization at the same time. Hence, we provide empirical results on nineteen benchmark data sets indicating that the proposed method (RFS) can lead to significant improvement in terms of clustering accuracy, over several state-of-the-art unsupervised methods, with a very limited subset of features. The approach shows promise for very broad domains.

  5. System and method for progressive band selection for hyperspectral images

    NASA Technical Reports Server (NTRS)

    Fisher, Kevin (Inventor)

    2013-01-01

    Disclosed herein are systems, methods, and non-transitory computer-readable storage media for progressive band selection for hyperspectral images. A system having a module configured to control a processor to practice the method calculates a virtual dimensionality of a hyperspectral image having multiple bands to determine a quantity Q of how many bands are needed for a threshold level of information, ranks each band based on a statistical measure, selects Q bands from the multiple bands to generate a subset of bands based on the virtual dimensionality, and generates a reduced image based on the subset of bands. This approach can create reduced datasets of full hyperspectral images tailored for individual applications. The system uses a metric specific to a target application to rank the image bands, and then selects the most useful bands. The number of bands selected can be specified manually or calculated from the hyperspectral image's virtual dimensionality.
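    A minimal sketch of the selection step under stated assumptions (random data, band variance as a stand-in for the application-specific ranking metric, Q supplied directly rather than derived from virtual dimensionality):

```python
# Rank hyperspectral bands by a simple statistical measure and keep the top Q bands.
import numpy as np

def progressive_band_selection(cube, Q):
    """cube: array of shape (rows, cols, bands); returns reduced cube and band indices."""
    bands = cube.reshape(-1, cube.shape[-1])
    scores = bands.var(axis=0)                  # stand-in ranking criterion
    selected = np.sort(np.argsort(scores)[::-1][:Q])
    return cube[:, :, selected], selected

cube = np.random.default_rng(4).normal(size=(64, 64, 200))
reduced, idx = progressive_band_selection(cube, Q=12)
print(reduced.shape, idx[:5])
```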

  6. Full evaporation dynamic headspace in combination with selectable one-dimensional/two-dimensional gas chromatography-mass spectrometry for the determination of suspected fragrance allergens in cosmetic products.

    PubMed

    Devos, Christophe; Ochiai, Nobuo; Sasamoto, Kikuo; Sandra, Pat; David, Frank

    2012-09-14

    Suspected fragrance allergens were determined in cosmetic products using a combination of full evaporation-dynamic headspace (FEDHS) with selectable one-dimensional/two-dimensional GC-MS. The full evaporation dynamic headspace approach allows the non-discriminating extraction and injection of both apolar and polar fragrance compounds, without contamination of the analytical system by high molecular weight non-volatile matrix compounds. The method can be applied to all classes of cosmetic samples, including water containing matrices such as shower gels or body creams. In combination with selectable 1D/2D GC-MS, consisting of a dedicated heart-cutting GC-MS configuration using capillary flow technology (CFT) and low thermal mass GC (LTM-GC), a highly flexible and easy-to-use analytical solution is offered. Depending on the complexity of the perfume fraction, analyses can be performed in one-dimensional GC-MS mode or in heart-cutting two-dimensional GC-MS mode, without the need of hardware reconfiguration. The two-dimensional mode with independent temperature control of the first and second dimension column is especially useful to confirm the presence of detected allergen compounds when mass spectral deconvolution is not possible. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. January and July global distributions of atmospheric heating for 1986, 1987, and 1988

    NASA Technical Reports Server (NTRS)

    Schaack, Todd K.; Johnson, Donald R.

    1994-01-01

    Three-dimensional global distributions of atmospheric heating are estimated for January and July of the 3-year period 1986-88 from the European Centre for Medium-Range Weather Forecasts (ECMWF) Tropical Ocean Global Atmosphere (TOGA) assimilated datasets. Emphasis is placed on the interseasonal and interannual variability of heating both locally and regionally. Large fluctuations in the magnitude of heating and the disposition of maxima/minima in the Tropics occur over the 3-year period. This variability, which is largely in accord with anomalous precipitation expected during the El Nino-Southern Oscillation (ENSO) cycle, appears realistic. In both January and July, interannual differences of 1.0-1.5 K/day in the vertically averaged heating occur over the tropical Pacific. These interannual regional differences are substantial in comparison with maximum monthly averaged heating rates of 2.0-2.5 K/day. In the extratropics, the most prominent interannual variability occurs along the wintertime North Atlantic cyclone track. Vertical profiles of heating from selected regions also reveal large interannual variability. Clearly evident is the modulation of the heating within tropical regions of deep moist convection associated with the evolution of the ENSO cycle. The heating integrated over continental and oceanic basins emphasizes the impact of land and ocean surfaces on atmospheric energy balance and depicts marked interseasonal and interannual large-scale variability.

  8. Classification of motor imagery tasks for BCI with multiresolution analysis and multiobjective feature selection.

    PubMed

    Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés

    2016-07-15

    Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with such a relatively small number of training patterns that curse of dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both temporal and spectral analysis, and has been broadly used in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, some approaches to feature selection or feature dimensionality reduction should be considered for improving the performance of the MRA based BCI. This paper investigates feature selection in the MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed with different structures of classifiers. They are evaluated by comparing with baseline methods using sparse representation of features or without feature selection. The statistical analysis, by applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the means of the Kappa values evaluated by using the test patterns in each approach, has demonstrated some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performances, with significant reduction in the number of features that need to be computed.

  9. Bearing Fault Diagnosis under Variable Speed Using Convolutional Neural Networks and the Stochastic Diagonal Levenberg-Marquardt Algorithm

    PubMed Central

    Tra, Viet; Kim, Jaeyoung; Kim, Jong-Myon

    2017-01-01

    This paper presents a novel method for diagnosing incipient bearing defects under variable operating speeds using convolutional neural networks (CNNs) trained via the stochastic diagonal Levenberg-Marquardt (S-DLM) algorithm. The CNNs utilize the spectral energy maps (SEMs) of the acoustic emission (AE) signals as inputs and automatically learn the optimal features, which yield the best discriminative models for diagnosing incipient bearing defects under variable operating speeds. The SEMs are two-dimensional maps that show the distribution of energy across different bands of the AE spectrum. It is hypothesized that the variation of a bearing's speed would not alter the overall shape of the AE spectrum; rather, it may only scale and translate it. Thus, at different speeds, the same defect would yield SEMs that are scaled and shifted versions of each other. This hypothesis is confirmed by the experimental results, where CNNs trained using the S-DLM algorithm yield significantly better diagnostic performance under variable operating speeds compared to existing methods. In this work, the performance of different training algorithms is also evaluated to select the best training algorithm for the CNNs. The proposed method is used to diagnose both single and compound defects at six different operating speeds. PMID:29211025

  10. A MacCormack-TVD finite difference method to simulate the mass flow in mountainous terrain with variable computational domain

    NASA Astrophysics Data System (ADS)

    Ouyang, Chaojun; He, Siming; Xu, Qiang; Luo, Yu; Zhang, Wencheng

    2013-03-01

    A two-dimensional mountainous mass flow dynamic procedure solver (Massflow-2D) using the MacCormack-TVD finite difference scheme is proposed. The solver is implemented in Matlab on structured meshes with variable computational domain. To verify the model, a variety of numerical test scenarios, namely, the classical one-dimensional and two-dimensional dam break, the landslide in Hong Kong in 1993 and the Nora debris flow in the Italian Alps in 2000, are executed, and the model outputs are compared with published results. It is established that the model predictions agree well with both the analytical solution as well as the field observations.
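    To show only the structure of the underlying scheme (a toy one-dimensional linear advection problem with periodic boundaries, no TVD limiter and no variable computational domain), the plain MacCormack predictor-corrector step looks like this:

```python
# Toy MacCormack predictor-corrector step for 1-D linear advection (illustration only).
import numpy as np

nx, c = 200, 1.0
dx = 1.0 / nx
dt = 0.4 * dx / c                                   # CFL-limited time step
x = np.linspace(0.0, 1.0, nx)
u0 = np.exp(-200 * (x - 0.3) ** 2)                  # initial bump
u = u0.copy()

for _ in range(100):
    # Predictor: forward difference (np.roll gives periodic boundaries)
    u_star = u - c * dt / dx * (np.roll(u, -1) - u)
    # Corrector: backward difference on the predicted field, then average
    u = 0.5 * (u + u_star - c * dt / dx * (u_star - np.roll(u_star, 1)))

print("change in total mass:", abs(u.sum() - u0.sum()))
```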

  11. Some theorems and properties of multi-dimensional fractional Laplace transforms

    NASA Astrophysics Data System (ADS)

    Ahmood, Wasan Ajeel; Kiliçman, Adem

    2016-06-01

    The aim of this work is to study theorems and properties for the one-dimensional fractional Laplace transform, to generalize some properties of the one-dimensional fractional Laplace transform so that they are valid for the multi-dimensional fractional Laplace transform, and to give the definition of the multi-dimensional fractional Laplace transform. This study includes dedicating the one-dimensional fractional Laplace transform to functions of only one independent variable, with some important theorems and properties, and developing some properties of the one-dimensional fractional Laplace transform for the multi-dimensional fractional Laplace transform. Also, we obtain a fractional Laplace inversion theorem after a short survey of fractional analysis based on the modified Riemann-Liouville derivative.

  12. Three Dimensional Variable-Wavelength X-Ray Bragg Coherent Diffraction Imaging

    DOE PAGES

    Cha, W.; Ulvestad, A.; Allain, M.; ...

    2016-11-23

    Here, we present and demonstrate a formalism by which three-dimensional (3D) Bragg x-ray coherent diffraction imaging (BCDI) can be implemented without moving the sample by scanning the energy of the incident x-ray beam. This capability is made possible by introducing a 3D Fourier transform that accounts for x-ray wavelength variability. We also demonstrate the approach by inverting coherent Bragg diffraction patterns from a gold nanocrystal measured with an x-ray energy scan. Furthermore, variable-wavelength BCDI will expand the breadth of feasible in situ 3D strain imaging experiments towards more diverse materials environments, especially where sample manipulation is difficult.

  13. Computing Shapes Of Cascade Diffuser Blades

    NASA Technical Reports Server (NTRS)

    Tran, Ken; Prueger, George H.

    1993-01-01

    Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angle for blade design based on the 65-series database of the National Advisory Committee for Aeronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.

  14. Three Dimensional Variable-Wavelength X-Ray Bragg Coherent Diffraction Imaging

    NASA Astrophysics Data System (ADS)

    Cha, W.; Ulvestad, A.; Allain, M.; Chamard, V.; Harder, R.; Leake, S. J.; Maser, J.; Fuoss, P. H.; Hruszkewycz, S. O.

    2016-11-01

    We present and demonstrate a formalism by which three-dimensional (3D) Bragg x-ray coherent diffraction imaging (BCDI) can be implemented without moving the sample by scanning the energy of the incident x-ray beam. This capability is made possible by introducing a 3D Fourier transform that accounts for x-ray wavelength variability. We demonstrate the approach by inverting coherent Bragg diffraction patterns from a gold nanocrystal measured with an x-ray energy scan. Variable-wavelength BCDI will expand the breadth of feasible in situ 3D strain imaging experiments towards more diverse materials environments, especially where sample manipulation is difficult.

  15. Three Dimensional Variable-Wavelength X-Ray Bragg Coherent Diffraction Imaging.

    PubMed

    Cha, W; Ulvestad, A; Allain, M; Chamard, V; Harder, R; Leake, S J; Maser, J; Fuoss, P H; Hruszkewycz, S O

    2016-11-25

    We present and demonstrate a formalism by which three-dimensional (3D) Bragg x-ray coherent diffraction imaging (BCDI) can be implemented without moving the sample by scanning the energy of the incident x-ray beam. This capability is made possible by introducing a 3D Fourier transform that accounts for x-ray wavelength variability. We demonstrate the approach by inverting coherent Bragg diffraction patterns from a gold nanocrystal measured with an x-ray energy scan. Variable-wavelength BCDI will expand the breadth of feasible in situ 3D strain imaging experiments towards more diverse materials environments, especially where sample manipulation is difficult.

  16. FastProject: a tool for low-dimensional analysis of single-cell RNA-Seq data.

    PubMed

    DeTomaso, David; Yosef, Nir

    2016-08-23

    A key challenge in the emerging field of single-cell RNA-Seq is to characterize phenotypic diversity between cells and visualize this information in an informative manner. A common technique when dealing with high-dimensional data is to project the data to 2 or 3 dimensions for visualization. However, there are a variety of methods to achieve this result and once projected, it can be difficult to ascribe biological significance to the observed features. Additionally, when analyzing single-cell data, the relationship between cells can be obscured by technical confounders such as variable gene capture rates. To aid in the analysis and interpretation of single-cell RNA-Seq data, we have developed FastProject, a software tool which analyzes a gene expression matrix and produces a dynamic output report in which two-dimensional projections of the data can be explored. Annotated gene sets (referred to as gene 'signatures') are incorporated so that features in the projections can be understood in relation to the biological processes they might represent. FastProject provides a novel method of scoring each cell against a gene signature so as to minimize the effect of missed transcripts as well as a method to rank signature-projection pairings so that meaningful associations can be quickly identified. Additionally, FastProject is written with a modular architecture and designed to serve as a platform for incorporating and comparing new projection methods and gene selection algorithms. Here we present FastProject, a software package for two-dimensional visualization of single cell data, which utilizes a plethora of projection methods and provides a way to systematically investigate the biological relevance of these low dimensional representations by incorporating domain knowledge.

  17. Variable Selection in the Presence of Missing Data: Imputation-based Methods.

    PubMed

    Zhao, Yize; Long, Qi

    2017-01-01

    Variable selection plays an essential role in regression analysis as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
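    A sketch of the first strategy only, on hypothetical data (multiple imputations drawn with scikit-learn's iterative imputer, the lasso as the selector, and a simple majority vote to combine results across imputations):

```python
# Strategy 1 sketch: select variables on each imputed dataset, combine by majority vote.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(5)
n, p, M = 200, 30, 5
X_full = rng.normal(size=(n, p))
y = X_full[:, 0] - 2 * X_full[:, 3] + rng.normal(scale=0.5, size=n)
X = X_full.copy()
X[rng.random((n, p)) < 0.15] = np.nan               # ~15% of values missing (assumed MAR)

counts = np.zeros(p)
for m in range(M):
    X_imp = IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X)
    coef = LassoCV(cv=5, random_state=m).fit(X_imp, y).coef_
    counts += (coef != 0)

selected = np.where(counts >= M / 2)[0]             # keep variables chosen in most imputations
print("selected variables:", selected)
```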

  18. Forecast Modelling via Variations in Binary Image-Encoded Information Exploited by Deep Learning Neural Networks.

    PubMed

    Liu, Da; Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai

    2016-01-01

    Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually get into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which input and output binary digital two-dimensional (2D) images are transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Due to the computing capability that was originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated by a power loads predicting dataset from the Global Energy Forecasting Competition 2012.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    This report proposes a method for resolving the kinematic redundancy of a serial link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations which relate a new vector variable quantifying the redundant degrees of freedom (DOF) to the joint velocities. The resulting augmented system yields a well specified solution for the joint velocities. Methods for selecting the redundant DOF quantifying variable and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum Euclidean norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which, according to the model, decouples the Cartesian space DOF and the redundant DOF.
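    A minimal numerical sketch of the augmentation idea (arbitrary random numbers, not the report's manipulator or its selection rules): one extra row relating a redundancy-quantifying variable to the joint velocities makes the velocity model square and solvable, and the pseudoinverse gives the minimum Euclidean norm solution for comparison:

```python
# Resolve joint velocities of a redundant arm: augmented square system vs. pseudoinverse.
import numpy as np

rng = np.random.default_rng(6)
J = rng.normal(size=(6, 7))             # task Jacobian: 6 task DOF, 7 joints (hypothetical)
xdot = rng.normal(size=6)               # commanded end-effector velocity
g = rng.normal(size=7)                  # row defining the redundant-DOF variable r_dot = g . qdot
r_dot = 0.0                             # commanded rate of the redundant variable

A = np.vstack([J, g])                   # augmented model, now 7 x 7 and well specified
qdot_aug = np.linalg.solve(A, np.append(xdot, r_dot))
qdot_min = np.linalg.pinv(J) @ xdot     # minimum Euclidean norm reference solution

print("task satisfied:", np.allclose(J @ qdot_aug, xdot))
print("norms (augmented vs. minimum-norm):", np.linalg.norm(qdot_aug), np.linalg.norm(qdot_min))
```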

  20. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
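    A toy sketch of the adaptive idea only, not the flight algorithm: for each block of 21 mapped pixel differences, try a few Rice/Golomb code parameters and keep the shortest encoding, so the coder adapts to changing source statistics block by block:

```python
# Per-block adaptive Rice/Golomb coding sketch (illustrative, assumed block values).
def rice_encode(value, k):
    """Golomb-Rice code: unary quotient, then k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    remainder = format(r, "b").zfill(k) if k else ""
    return "1" * q + "0" + remainder

def encode_block(block, ks=(0, 1, 2, 3)):
    candidates = {k: "".join(rice_encode(v, k) for v in block) for k in ks}
    k_best = min(candidates, key=lambda k: len(candidates[k]))
    return k_best, candidates[k_best]

# Mapped (non-negative) prediction residuals for one 21-pixel block:
block = [0, 1, 0, 2, 1, 0, 0, 3, 1, 0, 2, 0, 1, 1, 0, 0, 2, 1, 0, 0, 1]
k, bits = encode_block(block)
print("chosen code parameter:", k, "| encoded length:", len(bits), "bits")
```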

  1. Forecast Modelling via Variations in Binary Image-Encoded Information Exploited by Deep Learning Neural Networks

    PubMed Central

    Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai

    2016-01-01

    Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually get into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which input and output binary digital two-dimensional (2D) images are transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Due to the computing capability that was originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated by a power loads predicting dataset from the Global Energy Forecasting Competition 2012. PMID:27281032

  2. Scales of variability of black carbon plumes and their dependence on resolution of ECHAM6-HAM

    NASA Astrophysics Data System (ADS)

    Weigum, Natalie; Stier, Philip; Schutgens, Nick; Kipling, Zak

    2015-04-01

    Prediction of the aerosol effect on climate depends on the ability of three-dimensional numerical models to accurately estimate aerosol properties. However, a limitation of traditional grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid-boxes, which can lead to discrepancies between observations and aerosol models. The aim of this study is to understand how a global climate model's (GCM) inability to resolve sub-grid scale variability affects simulations of important aerosol features. This problem is addressed by comparing observed black carbon (BC) plume scales from the HIPPO aircraft campaign to those simulated by ECHAM-HAM GCM, and testing how model resolution affects these scales. This study additionally investigates how model resolution affects BC variability in remote and near-source regions. These issues are examined using three different approaches: comparison of observed and simulated along-flight-track plume scales, two-dimensional autocorrelation analysis, and 3-dimensional plume analysis. We find that the degree to which GCMs resolve variability can have a significant impact on the scales of BC plumes, and it is important for models to capture the scales of aerosol plume structures, which account for a large degree of aerosol variability. In this presentation, we will provide further results from the three analysis techniques along with a summary of the implication of these results on future aerosol model development.

  3. Megavoltage computed tomography image guidance with helical tomotherapy in patients with vertebral tumors: analysis of factors influencing interobserver variability.

    PubMed

    Levegrün, Sabine; Pöttgen, Christoph; Jawad, Jehad Abu; Berkovic, Katharina; Hepp, Rodrigo; Stuschke, Martin

    2013-02-01

    To evaluate megavoltage computed tomography (MVCT)-based image guidance with helical tomotherapy in patients with vertebral tumors by analyzing factors influencing interobserver variability, considered as quality criterion of image guidance. Five radiation oncologists retrospectively registered 103 MVCTs in 10 patients to planning kilovoltage CTs by rigid transformations in 4 df. Interobserver variabilities were quantified using the standard deviations (SDs) of the distributions of the correction vector components about the observers' fraction mean. To assess intraobserver variabilities, registrations were repeated after ≥4 weeks. Residual deviations after setup correction due to uncorrectable rotational errors and elastic deformations were determined at 3 craniocaudal target positions. To differentiate observer-related variations in minimizing these residual deviations across the 3-dimensional MVCT from image resolution effects, 2-dimensional registrations were performed in 30 single transverse and sagittal MVCT slices. Axial and longitudinal MVCT image resolutions were quantified. For comparison, image resolution of kilovoltage cone-beam CTs (CBCTs) and interobserver variability in registrations of 43 CBCTs were determined. Axial MVCT image resolution is 3.9 lp/cm. Longitudinal MVCT resolution amounts to 6.3 mm, assessed as full-width at half-maximum of thin objects in MVCTs with finest pitch. Longitudinal CBCT resolution is better (full-width at half-maximum, 2.5 mm for CBCTs with 1-mm slices). In MVCT registrations, interobserver variability in the craniocaudal direction (SD 1.23 mm) is significantly larger than in the lateral and ventrodorsal directions (SD 0.84 and 0.91 mm, respectively) and significantly larger compared with CBCT alignments (SD 1.04 mm). Intraobserver variabilities are significantly smaller than corresponding interobserver variabilities (variance ratio [VR] 1.8-3.1). Compared with 3-dimensional registrations, 2-dimensional registrations have significantly smaller interobserver variability in the lateral and ventrodorsal directions (VR 3.8 and 2.8, respectively) but not in the craniocaudal direction (VR 0.75). Tomotherapy image guidance precision is affected by image resolution and residual deviations after setup correction. Eliminating the effect of residual deviations yields small interobserver variabilities with submillimeter precision in the axial plane. In contrast, interobserver variability in the craniocaudal direction is dominated by the poorer longitudinal MVCT image resolution. Residual deviations after image guidance exist and need to be considered when dose gradients ultimately achievable with image guided radiation therapy techniques are analyzed. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Topic modeling for cluster analysis of large biological and medical datasets

    PubMed Central

    2014-01-01

    Background The big data moniker is nowhere better deserved than to describe the ever-increasing prodigiousness and complexity of biological and medical datasets. New methods are needed to generate and test hypotheses, foster biological interpretation, and build validated predictors. Although multivariate techniques such as cluster analysis may allow researchers to identify groups, or clusters, of related variables, the accuracies and effectiveness of traditional clustering methods diminish for large and hyper dimensional datasets. Topic modeling is an active research field in machine learning and has been mainly used as an analytical tool to structure large textual corpora for data mining. Its ability to reduce high dimensionality to a small number of latent variables makes it suitable as a means for clustering or overcoming clustering difficulties in large biological and medical datasets. Results In this study, three topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, are proposed and tested on the cluster analysis of three large datasets: Salmonella pulsed-field gel electrophoresis (PFGE) dataset, lung cancer dataset, and breast cancer dataset, which represent various types of large biological or medical datasets. All three various methods are shown to improve the efficacy/effectiveness of clustering results on the three datasets in comparison to traditional methods. A preferable cluster analysis method emerged for each of the three datasets on the basis of replicating known biological truths. Conclusion Topic modeling could be advantageously applied to the large datasets of biological or medical research. The three proposed topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, yield clustering improvements for the three different data types. Clusters more efficaciously represent truthful groupings and subgroupings in the data than traditional methods, suggesting that topic model-based methods could provide an analytic advancement in the analysis of large biological or medical datasets. PMID:25350106
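    As an illustration of the "highest probable topic assignment" idea on generic count data (a toy matrix stands in for the PFGE and cancer datasets), latent Dirichlet allocation can be fit and each sample clustered by its most probable latent topic:

```python
# Topic-model-derived clustering sketch: cluster label = most probable topic.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(7)
counts = rng.poisson(lam=1.0, size=(300, 80))          # samples x categorical features (toy)
lda = LatentDirichletAllocation(n_components=5, random_state=7)
theta = lda.fit_transform(counts)                      # per-sample topic proportions
clusters = theta.argmax(axis=1)                        # highest probable topic assignment
print("cluster sizes:", np.bincount(clusters))
```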

  5. Topic modeling for cluster analysis of large biological and medical datasets.

    PubMed

    Zhao, Weizhong; Zou, Wen; Chen, James J

    2014-01-01

    The big data moniker is nowhere better deserved than to describe the ever-increasing prodigiousness and complexity of biological and medical datasets. New methods are needed to generate and test hypotheses, foster biological interpretation, and build validated predictors. Although multivariate techniques such as cluster analysis may allow researchers to identify groups, or clusters, of related variables, the accuracies and effectiveness of traditional clustering methods diminish for large and hyper dimensional datasets. Topic modeling is an active research field in machine learning and has been mainly used as an analytical tool to structure large textual corpora for data mining. Its ability to reduce high dimensionality to a small number of latent variables makes it suitable as a means for clustering or overcoming clustering difficulties in large biological and medical datasets. In this study, three topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, are proposed and tested on the cluster analysis of three large datasets: Salmonella pulsed-field gel electrophoresis (PFGE) dataset, lung cancer dataset, and breast cancer dataset, which represent various types of large biological or medical datasets. All three various methods are shown to improve the efficacy/effectiveness of clustering results on the three datasets in comparison to traditional methods. A preferable cluster analysis method emerged for each of the three datasets on the basis of replicating known biological truths. Topic modeling could be advantageously applied to the large datasets of biological or medical research. The three proposed topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, yield clustering improvements for the three different data types. Clusters more efficaciously represent truthful groupings and subgroupings in the data than traditional methods, suggesting that topic model-based methods could provide an analytic advancement in the analysis of large biological or medical datasets.

  6. Automated body weight prediction of dairy cows using 3-dimensional vision.

    PubMed

    Song, X; Bokkers, E A M; van der Tol, P P J; Groot Koerkamp, P W G; van Mourik, S

    2018-05-01

    The objectives of this study were to quantify the error of body weight prediction using automatically measured morphological traits in a 3-dimensional (3-D) vision system and to assess the influence of various sources of uncertainty on body weight prediction. In this case study, an image acquisition setup was created in a cow selection box equipped with a top-view 3-D camera. Morphological traits of hip height, hip width, and rump length were automatically extracted from the raw 3-D images taken of the rump area of dairy cows (n = 30). These traits combined with days in milk, age, and parity were used in multiple linear regression models to predict body weight. To find the best prediction model, an exhaustive feature selection algorithm was used to build intermediate models (n = 63). Each model was validated by leave-one-out cross-validation, giving the root mean square error and mean absolute percentage error. The model consisting of hip width (measurement variability of 0.006 m), days in milk, and parity was the best model, with the lowest errors of 41.2 kg of root mean square error and 5.2% mean absolute percentage error. Our integrated system, including the image acquisition setup, image analysis, and the best prediction model, predicted the body weights with a performance similar to that achieved using semi-automated or manual methods. Moreover, the variability of our simplified morphological trait measurement showed a negligible contribution to the uncertainty of body weight prediction. We suggest that dairy cow body weight prediction can be improved by incorporating more predictive morphological traits and by improving the prediction model structure. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
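
    The exhaustive search over predictor subsets with leave-one-out validation can be sketched as below; the feature names and synthetic data are placeholders rather than the study's measurements, and the scoring (RMSE, MAPE) mirrors the metrics reported above.

```python
# Sketch: exhaustive subset selection for a linear body-weight predictor,
# scored by leave-one-out cross-validation (RMSE and MAPE).  With six
# candidate predictors this yields the 63 intermediate models mentioned above.
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
features = ["hip_height", "hip_width", "rump_length", "days_in_milk", "age", "parity"]
X = rng.normal(size=(30, len(features)))
y = 600 + 40 * X[:, 1] + 10 * X[:, 4] + rng.normal(scale=20, size=30)   # body weight, kg

best = None
for k in range(1, len(features) + 1):
    for subset in itertools.combinations(range(len(features)), k):
        pred = cross_val_predict(LinearRegression(), X[:, subset], y, cv=LeaveOneOut())
        rmse = np.sqrt(np.mean((y - pred) ** 2))
        mape = np.mean(np.abs((y - pred) / y)) * 100
        if best is None or rmse < best[0]:
            best = (rmse, mape, [features[i] for i in subset])

print("best subset:", best[2], "RMSE: %.1f kg, MAPE: %.1f%%" % (best[0], best[1]))
```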

  7. Active Subspaces of Airfoil Shape Parameterizations

    NASA Astrophysics Data System (ADS)

    Grey, Zachary J.; Constantine, Paul G.

    2018-05-01

    Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
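
    The standard recipe for estimating an active subspace is an eigendecomposition of the average outer product of sampled gradients; the sketch below applies it to a toy quantity of interest, not to a transonic lift or drag model.

```python
# Sketch: estimate an active subspace from sampled gradients of a quantity
# of interest f(x).  Eigenvectors of C = E[grad f grad f^T] with the largest
# eigenvalues span the active directions.  The toy f below is illustrative.
import numpy as np

def grad_f(x):                        # toy quantity of interest: f(x) = (a.x)^2
    a = np.linspace(1.0, 0.1, x.size)
    return 2.0 * (a @ x) * a

rng = np.random.default_rng(2)
dim, n_samples = 10, 500
grads = np.array([grad_f(rng.uniform(-1, 1, dim)) for _ in range(n_samples)])

C = grads.T @ grads / n_samples       # Monte Carlo estimate of the gradient covariance
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
active_subspace = eigvecs[:, order[:2]]   # a two-dimensional approximation, as above
print("leading eigenvalue gap:", eigvals[order[0]] / max(eigvals[order[1]], 1e-12))
```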

  8. Exploring the CAESAR database using dimensionality reduction techniques

    NASA Astrophysics Data System (ADS)

    Mendoza-Schrock, Olga; Raymer, Michael L.

    2012-06-01

    The Civilian American and European Surface Anthropometry Resource (CAESAR) database containing over 40 anthropometric measurements on over 4000 humans has been extensively explored for pattern recognition and classification purposes using the raw, original data [1-4]. However, some of the anthropometric variables would be impossible to collect in an uncontrolled environment. Here, we explore the use of dimensionality reduction methods in concert with a variety of classification algorithms for gender classification using only those variables that are readily observable in an uncontrolled environment. Several dimensionality reduction techniques are employed to learn the underlying structure of the data. These techniques include linear projections such as the classical Principal Components Analysis (PCA) and non-linear (manifold learning) techniques, such as Diffusion Maps and the Isomap technique. This paper briefly describes all three techniques, and compares three different classifiers, Naïve Bayes, Adaboost, and Support Vector Machines (SVM), for gender classification in conjunction with each of these three dimensionality reduction approaches.
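
    The pairing of a dimensionality reduction step with a classifier can be sketched as a pipeline; the CAESAR data are not bundled here, so a synthetic binary-label dataset stands in, and only two of the reducers and two of the classifiers named above are shown.

```python
# Sketch: pair dimensionality reduction (PCA, Isomap) with classifiers
# (Naive Bayes, SVM) for a binary label such as gender.  Synthetic data
# stands in for the CAESAR measurements.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=0)

for reducer in (PCA(n_components=5), Isomap(n_components=5)):
    for clf in (GaussianNB(), SVC(kernel="rbf")):
        pipe = make_pipeline(reducer, clf)
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(type(reducer).__name__, "+", type(clf).__name__, "accuracy:", round(acc, 3))
```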

  9. Integrative analysis of gene expression and copy number alterations using canonical correlation analysis.

    PubMed

    Soneson, Charlotte; Lilljebjörn, Henrik; Fioretos, Thoas; Fontes, Magnus

    2010-04-15

    With the rapid development of new genetic measurement methods, several types of genetic alterations can be quantified in a high-throughput manner. While the initial focus has been on investigating each data set separately, there is an increasing interest in studying the correlation structure between two or more data sets. Multivariate methods based on Canonical Correlation Analysis (CCA) have been proposed for integrating paired genetic data sets. The high dimensionality of microarray data imposes computational difficulties, which have been addressed for instance by studying the covariance structure of the data, or by reducing the number of variables prior to applying the CCA. In this work, we propose a new method for analyzing high-dimensional paired genetic data sets, which mainly emphasizes the correlation structure and still permits efficient application to very large data sets. The method is implemented by translating a regularized CCA to its dual form, where the computational complexity depends mainly on the number of samples instead of the number of variables. The optimal regularization parameters are chosen by cross-validation. We apply the regularized dual CCA, as well as a classical CCA preceded by a dimension-reducing Principal Components Analysis (PCA), to a paired data set of gene expression changes and copy number alterations in leukemia. Using the correlation-maximizing methods, regularized dual CCA and PCA+CCA, we show that without pre-selection of known disease-relevant genes, and without using information about clinical class membership, an exploratory analysis singles out two patient groups, corresponding to well-known leukemia subtypes. Furthermore, the variables showing the highest relevance to the extracted features agree with previous biological knowledge concerning copy number alterations and gene expression changes in these subtypes. Finally, the correlation-maximizing methods are shown to yield results which are more biologically interpretable than those resulting from a covariance-maximizing method, and provide different insight compared to when each variable set is studied separately using PCA. We conclude that regularized dual CCA as well as PCA+CCA are useful methods for exploratory analysis of paired genetic data sets, and can be efficiently implemented also when the number of variables is very large.
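
    A minimal sketch of the simpler comparator, PCA followed by classical CCA, is given below; the regularized dual CCA of the paper is not implemented here, and random matrices stand in for the expression and copy-number blocks (so the correlations are spuriously high and would need validation, e.g. by cross-validation, on real data).

```python
# Sketch of the PCA+CCA comparator: reduce each high-dimensional block with
# PCA, then extract maximally correlated components with classical CCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n = 60                                    # paired samples (patients)
expr = rng.normal(size=(n, 2000))         # stand-in for gene expression changes
cna = rng.normal(size=(n, 1500))          # stand-in for copy number alterations

expr_pc = PCA(n_components=20).fit_transform(expr)
cna_pc = PCA(n_components=20).fit_transform(cna)

cca = CCA(n_components=2).fit(expr_pc, cna_pc)
U, V = cca.transform(expr_pc, cna_pc)     # paired canonical variates
corr = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(2)]
print("canonical correlations:", np.round(corr, 3))
```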

  10. An adaptive confidence limit for periodic non-steady conditions fault detection

    NASA Astrophysics Data System (ADS)

    Wang, Tianzhen; Wu, Hao; Ni, Mengqi; Zhang, Milu; Dong, Jingjing; Benbouzid, Mohamed El Hachemi; Hu, Xiong

    2016-05-01

    System monitoring has become a major concern in batch processes because the failure rate in non-steady conditions is much higher than in steady ones. A series of approaches based on PCA have already solved problems such as data dimensionality reduction, multivariable decorrelation, and processing non-changing signals. However, if the data follow a non-Gaussian distribution or the variables contain some signal changes, the above approaches are not applicable. To deal with these concerns and to enhance performance in multiperiod data processing, this paper proposes a fault detection method using an adaptive confidence limit (ACL) in periodic non-steady conditions. The proposed ACL method achieves four main enhancements: Longitudinal-Standardization converts non-Gaussian sampling data to Gaussian ones; the multiperiod PCA algorithm reduces dimensionality, removes correlation, and improves the monitoring accuracy; the adaptive confidence limit detects faults under non-steady conditions; and the fault sections determination procedure selects the appropriate parameter of the adaptive confidence limit. The result analysis clearly shows that the proposed ACL method is superior to other fault detection approaches under periodic non-steady conditions.
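
    For context, the conventional baseline that an adaptive limit generalizes is PCA monitoring with a fixed Hotelling T² confidence limit; the sketch below shows that baseline on synthetic data, assuming the usual F-distribution form of the limit, and is not the ACL method itself.

```python
# Sketch of baseline PCA monitoring with a fixed Hotelling T^2 confidence
# limit, the kind of limit the adaptive confidence limit (ACL) generalizes
# for periodic non-steady conditions.  Data are synthetic.
import numpy as np
from scipy.stats import f as f_dist
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
X_train = rng.normal(size=(500, 12))                      # normal operating data
X_test = np.vstack([rng.normal(size=(50, 12)),
                    rng.normal(loc=3.0, size=(10, 12))])  # last 10 rows: fault

a, n = 4, X_train.shape[0]
pca = PCA(n_components=a).fit(X_train)

def t2(X):
    scores = pca.transform(X)
    return np.sum(scores ** 2 / pca.explained_variance_, axis=1)

# 99% confidence limit for T^2 with a retained components (F-distribution form)
limit = a * (n - 1) * (n + 1) / (n * (n - a)) * f_dist.ppf(0.99, a, n - a)
alarms = t2(X_test) > limit
print("flagged samples:", np.where(alarms)[0])
```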

  11. A finite-element model for simulation of two-dimensional steady-state ground-water flow in confined aquifers

    USGS Publications Warehouse

    Kuniansky, E.L.

    1990-01-01

    A computer program based on the Galerkin finite-element method was developed to simulate two-dimensional steady-state ground-water flow in either isotropic or anisotropic confined aquifers. The program may also be used for unconfined aquifers of constant saturated thickness. Constant head, constant flux, and head-dependent flux boundary conditions can be specified in order to approximate a variety of natural conditions, such as a river or lake boundary, and pumping well. The computer program was developed for the preliminary simulation of ground-water flow in the Edwards-Trinity Regional aquifer system as part of the Regional Aquifer-Systems Analysis Program. Results of the program compare well to analytical solutions and simulations from published finite-difference models. A concise discussion of the Galerkin method is presented along with a description of the program. Provided in the Supplemental Data section are a listing of the computer program, definitions of selected program variables, and several examples of data input and output used in verifying the accuracy of the program.

  12. Two novel two-dimensional copper(II) coordination polymers with 1-(4-aminobenzyl)-1,2,4-triazole: Synthesis, crystal structure, magnetic characterization and absorption of anion pollutants

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Wu, Xiang Xia; Guo, Jian-Hua; Huo, Jian-Zhong; Ding, Bin

    2017-01-01

    In this work a flexible multi-dentate 1-(4-aminobenzyl)-1,2,4-triazole (abtz) ligand has been employed, and two novel triazole-Cu(II) coordination polymers {[Cu(abtz)2(Br)2]·(H2O)2}n (1) and {[Cu(abtz)2]·(SiF6)·(H2O)2}n (2) have been isolated under solvo-thermal conditions. 1 is a 2D neutral CuII coordination polymer, while 2 is a 2D cationic micro-porous CuII coordination polymer with channel dimensionalities of 11.852(1) Å × 11.852(1) Å (metal-metal distances). Variable-temperature magnetic susceptibility data of 1 and 2 have been recorded in the 2-300 K temperature range, indicating weak anti-ferromagnetic interactions. The absorption properties of 2 toward anion pollutants have also been investigated. 2 presents a novel example of a cationic triazole-copper(II) coordination framework for effectively capturing the anion pollutant Cr2O72- in water solutions and selectively capturing Congo Red in methanol solutions.

  13. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498

  14. A stock market forecasting model combining two-directional two-dimensional principal component analysis and radial basis function neural network.

    PubMed

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.

  15. A Stock Market Forecasting Model Combining Two-Directional Two-Dimensional Principal Component Analysis and Radial Basis Function Neural Network

    PubMed Central

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J.

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron. PMID:25849483
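
    A simplified sketch of the pipeline follows, assuming a sliding window of past values as input features; ordinary PCA and an RBF-kernel ridge regressor stand in for the paper's (2D)2PCA and RBF neural network, and the series is synthetic rather than the Shanghai index.

```python
# Sketch: sliding-window features -> scaling -> PCA -> RBF-kernel regressor
# to predict the next day's value.  PCA and kernel ridge regression are
# stand-ins for (2D)2PCA and the RBF neural network; the series is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
series = np.cumsum(rng.normal(size=1200))         # stand-in for an index level

window = 36                                       # 36 technical inputs per day
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]                               # next-day value

split = 1000
model = make_pipeline(StandardScaler(), PCA(n_components=8), KernelRidge(kernel="rbf"))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test RMSE:", float(np.sqrt(np.mean((pred - y[split:]) ** 2))))
```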

  16. Phase-space finite elements in a least-squares solution of the transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drumm, C.; Fan, W.; Pautz, S.

    2013-07-01

    The linear Boltzmann transport equation is solved using a least-squares finite element approximation in the space, angular and energy phase-space variables. The method is applied to both neutral particle transport and also to charged particle transport in the presence of an electric field, where the angular and energy derivative terms are handled with the energy/angular finite elements approximation, in a manner analogous to the way the spatial streaming term is handled. For multi-dimensional problems, a novel approach is used for the angular finite elements: mapping the surface of a unit sphere to a two-dimensional planar region and using a meshing tool to generate a mesh. In this manner, much of the spatial finite-elements machinery can be easily adapted to handle the angular variable. The energy variable and the angular variable for one-dimensional problems make use of edge/beam elements, also building upon the spatial finite elements capabilities. The methods described here can make use of either continuous or discontinuous finite elements in space, angle and/or energy, with the use of continuous finite elements resulting in a smaller problem size and the use of discontinuous finite elements resulting in more accurate solutions for certain types of problems. The work described in this paper makes use of continuous finite elements, so that the resulting linear system is symmetric positive definite and can be solved with a highly efficient parallel preconditioned conjugate gradients algorithm. The phase-space finite elements capability has been built into the Sceptre code and applied to several test problems, including a simple one-dimensional problem with an analytic solution available, a two-dimensional problem with an isolated source term, showing how the method essentially eliminates ray effects encountered with discrete ordinates, and a simple one-dimensional charged-particle transport problem in the presence of an electric field. (authors)

  17. Variables in Color Perception of Young Children

    ERIC Educational Resources Information Center

    Gaines, Rosslyn

    1972-01-01

    Study investigated the effect of the stimulus variables of value, chroma, and hue in relation to sex, intelligence, and dimensional attention of kindergarten children using two reward conditions. (Author)

  18. Implementation of a Transition Model in a NASA Code and Validation Using Heat Transfer Data on a Turbine Blade

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2012-01-01

    The purpose of this report is to summarize and document the work done to enable a NASA CFD code to model the laminar-turbulent transition process on an isolated turbine blade. The ultimate purpose of the present work is to down-select a transition model that would allow the flow simulation of a variable speed power turbine to be accurately performed. The flow modeling in its final form will account for the blade row interactions and their effects on transition, which would lead to accurate accounting for losses. The present work only concerns itself with steady flows of variable inlet turbulence. The low Reynolds number k-ω model of Wilcox and a modified version of the same model will be used for modeling of transition on experimentally measured blade pressure and heat transfer. It will be shown that the k-ω model and its modified variant fail to simulate the transition with any degree of accuracy. A case is thus made for the adoption of more accurate transition models. Three-equation models based on the work of Mayle on Laminar Kinetic Energy were explored. The three-equation model of Walters and Leylek was thought to be in a relatively mature state of development and was implemented in the Glenn-HT code. Two-dimensional heat transfer predictions of flat plate flow and two-dimensional and three-dimensional heat transfer predictions on a turbine blade were performed and reported herein. Surface heat transfer rate serves as a sensitive indicator of transition. With the newly implemented model, it was shown that the simulation of the transition process is much improved over the baseline k-ω model for the single Reynolds number and pressure ratio attempted, while agreement with heat transfer data became more satisfactory. Armed with the new transition model, total-pressure losses of the computed three-dimensional flow of the E3 tip section cascade were compared to the experimental data for a range of incidence angles. The results obtained form a partial loss bucket for the chosen blade. In time the loss bucket will be populated with losses at additional incidences. Results obtained thus far will be discussed herein.

  19. Electrochemical state and internal variables estimation using a reduced-order physics-based model of a lithium-ion cell and an extended Kalman filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stetzel, KD; Aldrich, LL; Trimboli, MS

    2015-03-15

    This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction flux and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables. (C) 2014 Elsevier B.V. All rights reserved.
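
    A generic extended Kalman filter predict/update step of the kind used for such state estimation is sketched below; the scalar state-of-charge toy model is purely illustrative and is not the reduced-order electrochemical model of the paper.

```python
# Generic extended Kalman filter skeleton for estimating internal states from
# voltage/current measurements.  The scalar toy model is NOT the paper's
# reduced-order electrochemical model.
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """One EKF predict/update step.
    x, P : state estimate and covariance;  u : input (current);  z : measurement (voltage)
    f, h : state-transition and measurement functions;  F, H : their Jacobians
    Q, R : process and measurement noise covariances."""
    x_pred = f(x, u)
    P_pred = F(x, u) @ P @ F(x, u).T + Q
    y = z - h(x_pred, u)                                   # innovation
    S = H(x_pred, u) @ P_pred @ H(x_pred, u).T + R
    K = P_pred @ H(x_pred, u).T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H(x_pred, u)) @ P_pred
    return x_new, P_new

# Toy example: state = state of charge (SOC), input = current, output = voltage.
dt, cap = 1.0, 3600.0
f = lambda x, u: x + np.array([-u * dt / cap])             # coulomb counting
h = lambda x, u: np.array([3.2 + 0.9 * x[0] - 0.01 * u])   # crude OCV-resistance model
F = lambda x, u: np.array([[1.0]])
H = lambda x, u: np.array([[0.9]])

x, P = np.array([0.5]), np.array([[0.05]])
x, P = ekf_step(x, P, u=1.0, z=np.array([3.60]), f=f, h=h, F=F, H=H,
                Q=np.array([[1e-7]]), R=np.array([[1e-3]]))
print("estimated SOC:", float(x[0]))
```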

  20. Periodic, complexiton solutions and stability for a (2+1)-dimensional variable-coefficient Gross-Pitaevskii equation in the Bose-Einstein condensation

    NASA Astrophysics Data System (ADS)

    Yin, Hui-Min; Tian, Bo; Zhao, Xin-Chao

    2018-06-01

    This paper presents an investigation of a (2 + 1)-dimensional variable-coefficient Gross-Pitaevskii equation in the Bose-Einstein condensation. Periodic and complexiton solutions are obtained. Soliton solutions are also obtained from the periodic solutions. Numerical solutions via the split step method are stable. Effects of the weak and strong modulation instability on the solitons are shown: the weak modulation instability permits an observable soliton, and the strong one overwhelms its development.
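
    The split-step scheme mentioned above can be illustrated on a simpler equation; the sketch below applies it to the one-dimensional focusing cubic Schrödinger equation i u_t + 0.5 u_xx + |u|^2 u = 0, which has a sech-shaped soliton, and does not reproduce the (2+1)-dimensional variable-coefficient equation of the paper.

```python
# Sketch of the split-step Fourier method for the 1D focusing cubic NLS
# equation  i u_t + 0.5 u_xx + |u|^2 u = 0  with a sech initial profile.
import numpy as np

N, L, dt, steps = 256, 40.0, 1e-3, 2000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
u = 1.0 / np.cosh(x)                                  # bright-soliton initial profile

for _ in range(steps):
    u = u * np.exp(1j * np.abs(u) ** 2 * dt / 2)      # half nonlinear step
    u = np.fft.ifft(np.exp(-1j * 0.5 * k ** 2 * dt) * np.fft.fft(u))   # full linear step
    u = u * np.exp(1j * np.abs(u) ** 2 * dt / 2)      # half nonlinear step

print("final L2 norm (initial value is 2):", float(np.trapz(np.abs(u) ** 2, x)))
```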

  1. Continuous-variable gate decomposition for the Bose-Hubbard model

    NASA Astrophysics Data System (ADS)

    Kalajdzievski, Timjan; Weedbrook, Christian; Rebentrost, Patrick

    2018-06-01

    In this work, we decompose the time evolution of the Bose-Hubbard model into a sequence of logic gates that can be implemented on a continuous-variable photonic quantum computer. We examine the structure of the circuit that represents this time evolution for one-dimensional and two-dimensional lattices. The elementary gates needed for the implementation are counted as a function of lattice size. We also include the contribution of the leading dipole interaction term which may be added to the Hamiltonian and its corresponding circuit.

  2. Prediction of thoracic injury severity in frontal impacts by selected anatomical morphomic variables through model-averaged logistic regression approach.

    PubMed

    Zhang, Peng; Parenteau, Chantal; Wang, Lu; Holcombe, Sven; Kohoyda-Inglis, Carla; Sullivan, June; Wang, Stewart

    2013-11-01

    This study resulted in a model-averaging methodology that predicts crash injury risk using vehicle, demographic, and morphomic variables and assesses the importance of individual predictors. The effectiveness of this methodology was illustrated through analysis of occupant chest injuries in frontal vehicle crashes. The crash data were obtained from the International Center for Automotive Medicine (ICAM) database for calendar year 1996 to 2012. The morphomic data are quantitative measurements of variations in human body 3-dimensional anatomy. Morphomics are obtained from imaging records. In this study, morphomics were obtained from chest, abdomen, and spine CT using novel patented algorithms. A NASS-trained crash investigator with over thirty years of experience collected the in-depth crash data. There were 226 cases available with occupants involved in frontal crashes and morphomic measurements. Only cases with complete recorded data were retained for statistical analysis. Logistic regression models were fitted using all possible configurations of vehicle, demographic, and morphomic variables. Different models were ranked by the Akaike Information Criteria (AIC). An averaged logistic regression model approach was used due to the limited sample size relative to the number of variables. This approach is helpful when addressing variable selection, building prediction models, and assessing the importance of individual variables. The final predictive results were developed using this approach, based on the top 100 models in the AIC ranking. Model-averaging minimized model uncertainty, decreased the overall prediction variance, and provided an approach to evaluating the importance of individual variables. There were 17 variables investigated: four vehicle, four demographic, and nine morphomic. More than 130,000 logistic models were investigated in total. The models were characterized into four scenarios to assess individual variable contribution to injury risk. Scenario 1 used vehicle variables; Scenario 2, vehicle and demographic variables; Scenario 3, vehicle and morphomic variables; and Scenario 4 used all variables. AIC was used to rank the models and to address over-fitting. In each scenario, the results based on the top three models and the averages of the top 100 models were presented. The AIC and the area under the receiver operating characteristic curve (AUC) were reported in each model. The models were re-fitted after removing each variable one at a time. The increases of AIC and the decreases of AUC were then assessed to measure the contribution and importance of the individual variables in each model. The importance of the individual variables was also determined by their weighted frequencies of appearance in the top 100 selected models. Overall, the AUC was 0.58 in Scenario 1, 0.78 in Scenario 2, 0.76 in Scenario 3 and 0.82 in Scenario 4. The results showed that morphomic variables are as accurate at predicting injury risk as demographic variables. The results of this study emphasize the importance of including morphomic variables when assessing injury risk. The results also highlight the need for morphomic data in the development of human mathematical models when assessing restraint performance in frontal crashes, since morphomic variables are more "tangible" measurements compared to demographic variables such as age and gender. Copyright © 2013 Elsevier Ltd. All rights reserved.
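
    The core of the approach, fitting logistic models over all predictor subsets, ranking them by AIC, and averaging the top-ranked models, can be sketched as below; the predictor names, the synthetic data, and the use of Akaike weights for the averaging are illustrative assumptions, not the study's exact procedure.

```python
# Sketch: all-subsets logistic regression ranked by AIC, with predictions
# averaged over the top-ranked models using Akaike weights.  Predictors and
# data are synthetic placeholders.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
names = ["delta_v", "age", "bmi", "chest_depth", "spine_angle"]
X = rng.normal(size=(226, len(names)))
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.6 * X[:, 3] - 0.5)))
y = rng.binomial(1, p)

fits = []
for k in range(1, len(names) + 1):
    for subset in itertools.combinations(range(len(names)), k):
        Xd = sm.add_constant(X[:, subset])
        res = sm.Logit(y, Xd).fit(disp=0)
        fits.append((res.aic, subset, res))

fits.sort(key=lambda t: t[0])
top = fits[:10]                                        # top-ranked models
w = np.exp(-0.5 * (np.array([t[0] for t in top]) - top[0][0]))
w /= w.sum()                                           # Akaike weights

x_new = rng.normal(size=(1, len(names)))
risk = sum(wi * m.predict(sm.add_constant(x_new[:, s], has_constant="add"))[0]
           for wi, (_, s, m) in zip(w, top))
print("model-averaged injury risk:", round(float(risk), 3))
```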

  3. Modeling seasonal variability of carbonate system parameters at the sediment-water interface in the Baltic Sea (Gdansk Deep)

    NASA Astrophysics Data System (ADS)

    Protsenko, Elizaveta; Yakubov, Shamil; Lessin, Gennady; Yakushev, Evgeniy; Sokołowski, Adam

    2017-04-01

    A one-dimensional fully-coupled benthic-pelagic biogeochemical model BROM (Bottom RedOx Model) was used for simulations of the seasonal variability of biogeochemical parameters in the upper sediment, the Bottom Boundary Layer, and the water column in the Gdansk Deep of the Baltic Sea. This model represents key biogeochemical processes of transformation of C, N, P, Si, O, S, Mn, Fe and the processes of vertical transport in the water column and the sediments. The hydrophysical block of BROM was forced by the output calculated with the model GETM (General Estuarine Transport Model). In this study we focused on the carbonate system parameters of the Baltic Sea, mainly on their distributions near the sediment-water interface. For validation of BROM we used field data (concentrations of the main nutrients in the water column and in the porewater of the upper sediment) from the Gulf of Gdansk. The model allowed us to simulate the baseline ranges of seasonal variability of pH, alkalinity, TIC and calcite/aragonite saturation, as well as vertical fluxes of carbon, in a region potentially selected for CCS storage. This work was supported by project EEA CO2MARINE and STEMM-CCS.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J.; Moon, T.J.; Howell, J.R.

    This paper presents an analysis of the heat transfer occurring during an in-situ curing process for which infrared energy is provided on the surface of the polymer composite during winding. The material system is Hercules prepreg AS4/3501-6. Thermoset composites have an exothermic chemical reaction during the curing process. An Eulerian thermochemical model is developed for the heat transfer analysis of helical winding. The model incorporates heat generation due to the chemical reaction. Several assumptions are made leading to a two-dimensional, thermochemical model. For simplicity, 360° heating around the mandrel is considered. In order to generate the appropriate process windows, the developed heat transfer model is combined with a simple winding time model. The process windows allow for a proper selection of process variables such as infrared energy input and winding velocity to give a desired end-product state. Steady-state temperatures are found for each combination of the process variables. A regression analysis is carried out to relate the process variables to the resulting steady-state temperatures. Using regression equations, process windows for a wide range of cylinder diameters are found. A general procedure to find process windows for Hercules AS4/3501-6 prepreg tape is coded in a FORTRAN program.

  5. Submorphotypes of the maxillary first molar and their effects on alignment and rotation.

    PubMed

    Kim, Hong-Kyun; Kwon, Ho Beom; Hyun, Hong-Keun; Jung, Min-Ho; Han, Seong Ho; Park, Young-Seok

    2014-09-01

    The aim of this study was to explore the shape differences in maxillary first molars with orthographic measurements using 3-dimensional virtual models to assess whether there is variability in morphology that could affect the alignment results when treated by straight-wire appliance systems. A total of 175 maxillary first molars with 4 cusps were selected for classification. With 3-dimensional laser scanning and reconstruction software, virtual casts were constructed. After performing several linear and angular measurements on the virtual occlusal plane, the teeth were clustered into 2 groups by the method of partitioning around medoids. To visualize the 2 groups, occlusal polygons were constructed using the average data of these groups. The resultant 2 clusters showed statistically significant differences in the measurements describing the cusp locations and the buccal and lingual outlines. The rotation along the centers made the 2 cluster polygons look similar, but there was a difference in the direction of the midsagittal lines. There was considerable variability in morphology according to 2 clusters in the population of this study. The occlusal polygons showed that the outlines of the 2 clusters were similar, but the midsagittal line directions and inner geometries were different. The difference between the morphologies of the 2 clusters could result in occlusal contact differences, which might be considered for better alignment of the maxillary posterior segment. Copyright © 2014 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  6. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique.

    PubMed

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young

    2014-03-01

    This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.

  7. Investigation of the Effect of Tool Edge Geometry upon Cutting Variables, Tool Wear and Burr Formation Using Finite Element Simulation — A Progress Report

    NASA Astrophysics Data System (ADS)

    Sartkulvanich, Partchapol; Al-Zkeri, Ibrahim; Yen, Yung-Chang; Altan, Taylan

    2004-06-01

    This paper summarizes some of the progress made on FEM simulations of metal cutting processes conducted at the Engineering Research Center (ERC/NSM). The presented research focuses on the performance of various cutting edge geometries (hone and chamfer edges) for different tool materials and specifically on: 1) the effect of round and chamfer edge geometries on the cutting variables in machining carbon steels and 2) the effect of the edge hone size upon the flank wear and burr formation behavior in face milling of A356-T6 aluminum alloy. In the second task, an innovative design of edge preparation with varying hone size around the tool nose is also explored using FEM. In order to model three-dimensional conventional turning and face milling with two-dimensional orthogonal cutting simulations, 2D simulation cross-sections consisting of the cutting speed direction and chip flow direction are selected at different locations along the tool nose radius. Then the geometries of the hone and chamfer edges and their associated tool angles as well as uncut chip thickness are determined on these planes and employed in cutting simulations. The chip flow direction on the tool rake face is obtained by examining the wear grooves on the experimental inserts or estimated by using Oxley's approximation theory of oblique cutting. Simulation results are compared with the available experimental results (e.g. cutting forces) both qualitatively and quantitatively.

  8. Analysis and Design of High-Order Parallel Resonant Converters

    NASA Astrophysics Data System (ADS)

    Batarseh, Issa Eid

    1990-01-01

    In this thesis, a special state variable transformation technique has been derived for the analysis of high order dc-to-dc resonant converters. Converters comprised of high order resonant tanks have the advantage of utilizing the parasitic elements by making them part of the resonant tank. A new set of state variables is defined in order to make use of two-dimensional state-plane diagrams in the analysis of high order converters. Such a method has been successfully used for the analysis of the conventional Parallel Resonant Converters (PRC). Consequently, two-dimensional state-plane diagrams are used to analyze the steady state response for third and fourth order PRC's when these converters are operated in the continuous conduction mode. Based on this analysis, a set of control characteristic curves for the LCC-, LLC- and LLCC-type PRC are presented from which various converter design parameters are obtained. Various design curves for component value selections and device ratings are given. This analysis of high order resonant converters shows that the addition of the reactive components to the resonant tank results in converters with better performance characteristics when compared with the conventional second order PRC. Complete design procedure along with design examples for 2nd, 3rd and 4th order converters are presented. Practical power supply units, normally used for computer applications, were built and tested by using the LCC-, LLC- and LLCC-type commutation schemes. In addition, computer simulation results are presented for these converters in order to verify the theoretical results.

  9. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and an increase in the variance of these parameters. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed, its efficiency, and results that are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; then, the dependent variables are regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set by comparing the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.
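
    For orientation, classical (non-robust) PLS regression with the number of components chosen by cross-validated RMSE can be sketched as below; scikit-learn provides classical PLS but not the robust RPCR/RSIMPLS estimators discussed above, and the multicollinear data are synthetic.

```python
# Sketch: classical PLS regression with the number of components selected by
# cross-validated RMSE (a non-robust analogue of RMSECV model selection).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(7)
n, p = 80, 15
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)        # induce multicollinearity
y = 2 * X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=n)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for a in range(1, 6):
    pred = cross_val_predict(PLSRegression(n_components=a), X, y, cv=cv).ravel()
    rmsecv = np.sqrt(np.mean((y - pred) ** 2))
    print(f"{a} components: RMSECV = {rmsecv:.3f}")
```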

  10. Three-dimensional Hadamard-encoded proton spectroscopic imaging in the human brain using time-cascaded pulses at 3 Tesla.

    PubMed

    Cohen, Ouri; Tal, Assaf; Gonen, Oded

    2014-10-01

    To reduce the specific-absorption-rate (SAR) and chemical shift displacement (CSD) of three-dimensional (3D) Hadamard spectroscopic imaging (HSI) and maintain its point spread function (PSF) benefits. A 3D hybrid of 2D longitudinal, 1D transverse HSI (L-HSI, T-HSI) sequence is introduced and demonstrated in a phantom and the human brain at 3 Tesla (T). Instead of superimposing each of the selective Hadamard radiofrequency (RF) pulses with its N single-slice components, they are cascaded in time, allowing N-fold stronger gradients, reducing the CSD. A spatially refocusing 180° RF pulse following the T-HSI encoding block provides variable, arbitrary echo time (TE) to eliminate undesirable short T2 species' signals, e.g., lipids. The sequence yields 10-15% better signal-to-noise ratio (SNR) and 8-16% less signal bleed than 3D chemical shift imaging of equal repetition time, spatial resolution and grid size. The 13 ± 6, 22 ± 7, 24 ± 8, and 31 ± 14 in vivo SNRs for myo-inositol, choline, creatine, and N-acetylaspartate were obtained in 21 min from 1 cm(3) voxels at TE ≈ 20 ms. Maximum CSD was 0.3 mm/ppm in each direction. The new hybrid HSI sequence offers a better localized PSF at reduced CSD and SAR at 3T. The short and variable TE permits acquisition of short T2 and J-coupled metabolites with higher SNR. Copyright © 2013 Wiley Periodicals, Inc.

  11. Variables separation and superintegrability of the nine-dimensional MICZ-Kepler problem

    NASA Astrophysics Data System (ADS)

    Phan, Ngoc-Hung; Le, Dai-Nam; Thoi, Tuan-Quoc N.; Le, Van-Hoang

    2018-03-01

    The nine-dimensional MICZ-Kepler problem is of recent interest. This is a system describing a charged particle moving in the Coulomb field plus the field of a SO(8) monopole in a nine-dimensional space. Interestingly, this problem is equivalent to a 16-dimensional harmonic oscillator via the Hurwitz transformation. In the present paper, we report on the multiseparability, a common property of superintegrable systems, and the superintegrability of the problem. First, we show the solvability of the Schrödinger equation of the problem by the variables separation method in different coordinates. Second, based on the SO(10) symmetry algebra of the system, we construct explicitly a set of seventeen invariant operators, which are all in the second order of the momentum components, satisfying the condition of superintegrability. The number 17 coincides with the prediction of the (2n - 1) law for the maximal order of superintegrability in the case n = 9. Until now, this law has been accepted to apply only to scalar Hamiltonian eigenvalue equations in n-dimensional space; therefore, our results can be treated as evidence that this definition of superintegrability may also apply to some vector equations such as the Schrödinger equation for the nine-dimensional MICZ-Kepler problem.

  12. Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap

    NASA Astrophysics Data System (ADS)

    Spiwok, Vojtěch; Králová, Blanka

    2011-12-01

    Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and corresponding transition structures inaccessible by an unbiased simulation. This scheme allows essentially any parameter of the system to be used as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general purpose mapping for dimensionality reduction, beyond the context of molecular modeling.
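
    The dimensionality-reduction step alone can be sketched with scikit-learn's Isomap, including out-of-sample mapping of new conformations into the same low-dimensional space; the metadynamics bias itself requires an MD engine and is not shown, and the 72-dimensional "conformations" below are synthetic stand-ins.

```python
# Sketch: embed high-dimensional conformational coordinates into 3D with
# Isomap; the embedding coordinates could serve as collective variables.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(8)
conformations = rng.normal(size=(1000, 72))          # stand-in for Cartesian coordinates

embedding = Isomap(n_neighbors=12, n_components=3)
cv = embedding.fit_transform(conformations)          # three collective-variable candidates
print("embedded shape:", cv.shape)

# Out-of-sample conformations can be mapped into the same 3D space:
new_frames = rng.normal(size=(5, 72))
print(embedding.transform(new_frames))
```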

  13. Efficient Variable Selection Method for Exposure Variables on Binary Data

    NASA Astrophysics Data System (ADS)

    Ohno, Manabu; Tarumi, Tomoyuki

    In this paper, we propose a new variable selection method for "robust" exposure variables. We define "robust" as the property that the same variable is selected from both the original data and perturbed data. There are few studies of effective methods for this selection problem. The problem of selecting exposure variables is almost the same as that of extracting correlation rules, apart from the robustness requirement. [Brin 97] suggested that correlation rules can be extracted efficiently using the chi-squared statistic of a contingency table, which has a monotone property on binary data. However, the chi-squared value itself does not have the monotone property, so the method easily judges a variable set to be dependent as the dimension increases even when the set is completely independent, and it is therefore not usable for selecting robust exposure variables. To select robust independent variables, we assume an anti-monotone property for independent variables and use the apriori algorithm, one of the algorithms for finding association rules in market basket data. The apriori algorithm exploits the anti-monotone property of the support defined for association rules. Independence does not completely satisfy the anti-monotone property with respect to the AIC of the independence probability model, but the tendency to satisfy it is strong; therefore, variables selected under the anti-monotone assumption on the AIC have robustness. Our method judges whether a certain variable is an exposure variable for an independent variable set by comparing AIC values. Our numerical experiments show that our method can select robust exposure variables efficiently and precisely.
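
    The core ingredient, comparing the AIC of an independence model against a saturated contingency-table model for a set of binary variables and pruning supersets apriori-style, can be sketched as below; this is only an illustration of the idea, and the details of the original method may differ.

```python
# Sketch: score a set of binary variables by AIC(independence) - AIC(saturated)
# and prune supersets apriori-style (a superset is tested only if all its
# subsets were already judged independent).  Synthetic binary data.
import itertools
import numpy as np

def aic_gap(data, cols):
    """AIC(independence model) - AIC(saturated model) for the binary columns in cols."""
    sub = data[:, cols]
    n, k = sub.shape
    # saturated model: one probability per observed cell of the contingency table
    cells, counts = np.unique(sub, axis=0, return_counts=True)
    ll_sat = np.sum(counts * np.log(counts / n))
    aic_sat = -2 * ll_sat + 2 * (len(cells) - 1)
    # independence model: product of marginal Bernoulli probabilities
    p = sub.mean(axis=0)
    ll_ind = np.sum(sub * np.log(p) + (1 - sub) * np.log(1 - p))
    aic_ind = -2 * ll_ind + 2 * k
    return aic_ind - aic_sat              # <= 0 suggests independence is adequate

rng = np.random.default_rng(9)
data = rng.binomial(1, 0.5, size=(500, 6))
data[:, 5] = data[:, 0] ^ rng.binomial(1, 0.1, 500)   # variable 5 depends on variable 0

independent_sets = [frozenset([j]) for j in range(data.shape[1])]   # seed frontier
for size in range(2, 4):
    for combo in itertools.combinations(range(data.shape[1]), size):
        subsets_ok = all(frozenset(s) in independent_sets
                         for s in itertools.combinations(combo, size - 1))
        if subsets_ok and aic_gap(data, list(combo)) <= 0:
            independent_sets.append(frozenset(combo))

print("example sets judged independent:",
      [tuple(sorted(s)) for s in independent_sets if len(s) > 1][:5])
```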

  14. Temporal dynamics of selective attention and conflict resolution during cross-dimensional Go-NoGo decisions.

    PubMed

    Kopp, Bruno; Tabeling, Sandra; Moschner, Carsten; Wessel, Karl

    2007-08-17

    Decision-making is a fundamental capacity which is crucial to many higher-order psychological functions. We recorded event-related potentials (ERPs) during a visual target-identification task that required go-nogo choices. Targets were identified on the basis of cross-dimensional conjunctions of particular colors and forms. Color discriminability was manipulated in three conditions to determine the effects of color distinctiveness on component processes of decision-making. Target identification was accompanied by the emergence of prefrontal P2a and P3b. Selection negativity (SN) revealed that target-compatible features captured attention more than target-incompatible features, suggesting that intra-dimensional attentional capture was goal-contingent. No changes of cross-dimensional selection priorities were measurable when color discriminability was altered. Peak latencies of the color-related SN provided a chronometric measure of the duration of attention-related neural processing. ERPs recorded over the frontocentral scalp (N2c, P3a) revealed that color-overlap distractors, more than form-overlap distractors, required additional late selection. The need for additional response selection induced by color-overlap distractors was severely reduced when color discriminability decreased. We propose a simple model of cross-dimensional perceptual decision-making. The temporal synchrony of separate color-related and form-related choices determines whether or not distractor processing includes post-perceptual stages. ERP measures contribute to a comprehensive explanation of the temporal dynamics of component processes of perceptual decision-making.

  15. Effects of band selection on endmember extraction for forestry applications

    NASA Astrophysics Data System (ADS)

    Karathanassi, Vassilia; Andreou, Charoula; Andronis, Vassilis; Kolokoussis, Polychronis

    2014-10-01

    In spectral unmixing theory, data reduction techniques play an important role as hyperspectral imagery contains an immense amount of data, posing many challenging problems such as data storage, computational efficiency, and the so called "curse of dimensionality". Feature extraction and feature selection are the two main approaches for dimensionality reduction. Feature extraction techniques reduce the dimensionality of the hyperspectral data by applying transforms to the data. Feature selection techniques retain the physical meaning of the data by selecting a set of bands from the input hyperspectral dataset, which mainly contain the information needed for spectral unmixing. Although feature selection techniques are well known for their dimensionality reduction potential, they are rarely used in the unmixing process. The majority of the existing state-of-the-art dimensionality reduction methods set criteria on the spectral information, derived from the whole wavelength range, in order to define the optimum spectral subspace. These criteria are not associated with any particular application but with the data statistics, such as correlation and entropy values. However, each application is associated with specific land cover materials, whose spectral characteristics present variations in specific wavelengths. In forestry for example, many applications focus on tree leaves, in which specific pigments such as chlorophyll, xanthophyll, etc. determine the wavelengths where tree species, diseases, etc., can be detected. For such applications, when the unmixing process is applied, the tree species, diseases, etc., are considered as the endmembers of interest. This paper focuses on investigating the effects of band selection on endmember extraction by exploiting the information of the vegetation absorbance spectral zones. More precisely, it is explored whether endmember extraction can be optimized when specific sets of initial bands related to leaf spectral characteristics are selected. Experiments comprise application of well-known signal subspace estimation and endmember extraction methods on hyperspectral imagery of a forest area. Evaluation of the extracted endmembers showed that more forest species can be extracted as endmembers using selected bands.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levegruen, Sabine, E-mail: sabine.levegruen@uni-due.de; Poettgen, Christoph; Abu Jawad, Jehad

    Purpose: To evaluate megavoltage computed tomography (MVCT)-based image guidance with helical tomotherapy in patients with vertebral tumors by analyzing factors influencing interobserver variability, considered as quality criterion of image guidance. Methods and Materials: Five radiation oncologists retrospectively registered 103 MVCTs in 10 patients to planning kilovoltage CTs by rigid transformations in 4 df. Interobserver variabilities were quantified using the standard deviations (SDs) of the distributions of the correction vector components about the observers' fraction mean. To assess intraobserver variabilities, registrations were repeated after ≥4 weeks. Residual deviations after setup correction due to uncorrectable rotational errors and elastic deformations were determined at 3 craniocaudal target positions. To differentiate observer-related variations in minimizing these residual deviations across the 3-dimensional MVCT from image resolution effects, 2-dimensional registrations were performed in 30 single transverse and sagittal MVCT slices. Axial and longitudinal MVCT image resolutions were quantified. For comparison, image resolution of kilovoltage cone-beam CTs (CBCTs) and interobserver variability in registrations of 43 CBCTs were determined. Results: Axial MVCT image resolution is 3.9 lp/cm. Longitudinal MVCT resolution amounts to 6.3 mm, assessed as full-width at half-maximum of thin objects in MVCTs with finest pitch. Longitudinal CBCT resolution is better (full-width at half-maximum, 2.5 mm for CBCTs with 1-mm slices). In MVCT registrations, interobserver variability in the craniocaudal direction (SD 1.23 mm) is significantly larger than in the lateral and ventrodorsal directions (SD 0.84 and 0.91 mm, respectively) and significantly larger compared with CBCT alignments (SD 1.04 mm). Intraobserver variabilities are significantly smaller than corresponding interobserver variabilities (variance ratio [VR] 1.8-3.1). Compared with 3-dimensional registrations, 2-dimensional registrations have significantly smaller interobserver variability in the lateral and ventrodorsal directions (VR 3.8 and 2.8, respectively) but not in the craniocaudal direction (VR 0.75). Conclusion: Tomotherapy image guidance precision is affected by image resolution and residual deviations after setup correction. Eliminating the effect of residual deviations yields small interobserver variabilities with submillimeter precision in the axial plane. In contrast, interobserver variability in the craniocaudal direction is dominated by the poorer longitudinal MVCT image resolution. Residual deviations after image guidance exist and need to be considered when dose gradients ultimately achievable with image guided radiation therapy techniques are analyzed.

  17. Boundary Conditions for Infinite Conservation Laws

    NASA Astrophysics Data System (ADS)

    Rosenhaus, V.; Bruzón, M. S.; Gandarias, M. L.

    2016-12-01

    Regular soliton equations (KdV, sine-Gordon, NLS) are known to possess infinite sets of local conservation laws. Some other classes of nonlinear PDE possess infinite-dimensional symmetries parametrized by arbitrary functions of independent or dependent variables; among them are Zabolotskaya-Khokhlov, Kadomtsev-Petviashvili, Davey-Stewartson equations and Born-Infeld equation. Boundary conditions were shown to play an important role for the existence of local conservation laws associated with infinite-dimensional symmetries. In this paper, we analyze boundary conditions for the infinite conserved densities of regular soliton equations: KdV, potential KdV, Sine-Gordon equation, and nonlinear Schrödinger equation, and compare them with boundary conditions for the conserved densities obtained from infinite-dimensional symmetries with arbitrary functions of independent and dependent variables.

  18. Thrust performance of a variable-geometry, divergent exhaust nozzle on a turbojet engine at altitude

    NASA Technical Reports Server (NTRS)

    Straight, D. M.; Collom, R. R.

    1983-01-01

    A variable geometry, low aspect ratio, nonaxisymmetric, two dimensional, convergent-divergent exhaust nozzle was tested at simulated altitude on a turbojet engine to obtain baseline axial, dry thrust performance over wide ranges of operating nozzle pressure ratios, throat areas, and internal expansion area ratios. The thrust data showed good agreement with theory and scale model test results after the data were corrected for seal leakage and coolant losses. Wall static pressure profile data were also obtained and compared with one dimensional theory and scale model data. The pressure data indicate greater three dimensional flow effects in the full scale tests than with models. The leakage and coolant penalties were substantial, and the method to determine them is included.

  19. Measuring monotony in two-dimensional samples

    NASA Astrophysics Data System (ADS)

    Kachapova, Farida; Kachapov, Ilias

    2010-04-01

    This note introduces a monotony coefficient as a new measure of the monotone dependence in a two-dimensional sample. Some properties of this measure are derived. In particular, it is shown that the absolute value of the monotony coefficient for a two-dimensional sample is between |r| and 1, where r is the Pearson's correlation coefficient for the sample; that the monotony coefficient equals 1 for any monotone increasing sample and equals -1 for any monotone decreasing sample. This article contains a few examples demonstrating that the monotony coefficient is a more accurate measure of the degree of monotone dependence for a non-linear relationship than the Pearson's, Spearman's and Kendall's correlation coefficients. The monotony coefficient is a tool that can be applied to samples in order to find dependencies between random variables; it is especially useful in finding couples of dependent variables in a big dataset of many variables. Undergraduate students in mathematics and science would benefit from learning and applying this measure of monotone dependence.
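
    The contrast described above can be illustrated numerically: on a monotone but nonlinear sample, the rank-based coefficients reach 1 while Pearson's r does not. The monotony coefficient itself is not reproduced here, since its formula is not given in the abstract.

```python
# Illustration: Pearson's r understates a monotone nonlinear relationship,
# while Spearman's and Kendall's rank correlations equal 1 for it.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

rng = np.random.default_rng(10)
x = np.sort(rng.uniform(0, 3, 200))
y = np.exp(x)                        # monotone increasing, strongly nonlinear

print("Pearson r :", round(pearsonr(x, y)[0], 3))    # < 1 for nonlinear relations
print("Spearman  :", round(spearmanr(x, y)[0], 3))   # 1 for any monotone increase
print("Kendall   :", round(kendalltau(x, y)[0], 3))  # 1 for any monotone increase
```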

  20. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time‐to‐Event Analysis

    PubMed Central

    Gong, Xiajing; Hu, Meng

    2018-01-01

    Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. PMID:29536640
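
    The baseline side of such a comparison, a Cox proportional hazards fit evaluated by the concordance index, can be sketched with the lifelines package on its bundled Rossi recidivism dataset; the ML comparators (e.g. random survival forests) would require additional packages and are omitted here.

```python
# Sketch: fit a Cox proportional hazards model and report the concordance
# index, the metric used above to compare methods.  Uses the Rossi dataset
# bundled with the lifelines package as a stand-in for simulated data.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                      # columns: week (time), arrest (event), covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
print("Cox concordance index:", round(cph.concordance_index_, 3))
```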

  1. Optimizing separations in online comprehensive two‐dimensional liquid chromatography

    PubMed Central

    Gargano, Andrea F.G.; Schoenmakers, Peter J.

    2017-01-01

    Abstract Online comprehensive two‐dimensional liquid chromatography has become an attractive option for the analysis of complex nonvolatile samples found in various fields (e.g. environmental studies, food, life, and polymer sciences). Two‐dimensional liquid chromatography complements the highly popular hyphenated systems that combine liquid chromatography with mass spectrometry. Two‐dimensional liquid chromatography is also applied to the analysis of samples that are not compatible with mass spectrometry (e.g. high‐molecular‐weight polymers), providing important information on the distribution of the sample components along chemical dimensions (molecular weight, charge, lipophilicity, stereochemistry, etc.). Also, in comparison with conventional one‐dimensional liquid chromatography, two‐dimensional liquid chromatography provides a greater separation power (peak capacity). Because of the additional selectivity and higher peak capacity, the combination of two‐dimensional liquid chromatography with mass spectrometry allows for simpler mixtures of compounds to be introduced in the ion source at any given time, improving quantitative analysis by reducing matrix effects. In this review, we summarize the rationale and principles of two‐dimensional liquid chromatography experiments, describe advantages and disadvantages of combining different selectivities and discuss strategies to improve the quality of two‐dimensional liquid chromatography separations. PMID:29027363

  2. Design and prediction of new anticoagulants as a selective Factor IXa inhibitor via three-dimensional quantitative structure-property relationships of amidinobenzothiophene derivatives.

    PubMed

    Gao, Jia-Suo; Tong, Xu-Peng; Chang, Yi-Qun; He, Yu-Xuan; Mei, Yu-Dan; Tan, Pei-Hong; Guo, Jia-Liang; Liao, Guo-Chao; Xiao, Gao-Keng; Chen, Wei-Min; Zhou, Shu-Feng; Sun, Ping-Hua

    2015-01-01

    Factor IXa (FIXa), a blood coagulation factor, is specifically inhibited at the initiation stage of the coagulation cascade, promising an excellent approach for developing selective and safe anticoagulants. Eighty-four amidinobenzothiophene antithrombotic derivatives targeting FIXa were selected to establish three-dimensional quantitative structure-activity relationship (3D-QSAR) and three-dimensional quantitative structure-selectivity relationship (3D-QSSR) models using comparative molecular field analysis and comparative similarity indices analysis methods. Internal and external cross-validation techniques were investigated as well as region focusing and bootstrapping. The satisfactory q2 values of 0.753 and 0.770, and r2 values of 0.940 and 0.965 for 3D-QSAR and 3D-QSSR, respectively, indicated that the models are available to predict both the inhibitory activity and selectivity on FIXa against Factor Xa, the activated status of Factor X. This work revealed that the steric, hydrophobic, and H-bond factors should appropriately be taken into account in future rational design, especially the modifications at the 2'-position of the benzene and the 6-position of the benzothiophene in the R group, providing helpful clues to design more active and selective FIXa inhibitors for the treatment of thrombosis. On the basis of the three-dimensional quantitative structure-property relationships, 16 new potent molecules have been designed and are predicted to be more active and selective than Compound 33, which has the best activity as reported in the literature.

  3. Design and prediction of new anticoagulants as a selective Factor IXa inhibitor via three-dimensional quantitative structure-property relationships of amidinobenzothiophene derivatives

    PubMed Central

    Gao, Jia-Suo; Tong, Xu-Peng; Chang, Yi-Qun; He, Yu-Xuan; Mei, Yu-Dan; Tan, Pei-Hong; Guo, Jia-Liang; Liao, Guo-Chao; Xiao, Gao-Keng; Chen, Wei-Min; Zhou, Shu-Feng; Sun, Ping-Hua

    2015-01-01

    Factor IXa (FIXa), a blood coagulation factor, is specifically inhibited at the initiation stage of the coagulation cascade, promising an excellent approach for developing selective and safe anticoagulants. Eighty-four amidinobenzothiophene antithrombotic derivatives targeting FIXa were selected to establish three-dimensional quantitative structure–activity relationship (3D-QSAR) and three-dimensional quantitative structure–selectivity relationship (3D-QSSR) models using comparative molecular field analysis and comparative similarity indices analysis methods. Internal and external cross-validation techniques were investigated as well as region focusing and bootstrapping. The satisfactory q2 values of 0.753 and 0.770, and r2 values of 0.940 and 0.965 for 3D-QSAR and 3D-QSSR, respectively, indicated that the models are available to predict both the inhibitory activity and selectivity on FIXa against Factor Xa, the activated status of Factor X. This work revealed that the steric, hydrophobic, and H-bond factors should appropriately be taken into account in future rational design, especially the modifications at the 2′-position of the benzene and the 6-position of the benzothiophene in the R group, providing helpful clues to design more active and selective FIXa inhibitors for the treatment of thrombosis. On the basis of the three-dimensional quantitative structure–property relationships, 16 new potent molecules have been designed and are predicted to be more active and selective than Compound 33, which has the best activity as reported in the literature. PMID:25848211

  4. Using crown condition variables as indicators of forest health

    Treesearch

    Stanley J. Zarnoch; William A. Bechtold; K.W. Stolte

    2004-01-01

    Indicators of forest health used in previous studies have focused on crown variables analyzed individually at the tree level by summarizing over all species. This approach has the virtue of simplicity but does not account for the three-dimensional attributes of a tree crown, the multivariate nature of the crown variables, or variability among species. To alleviate...

  5. Influence of the Quantity of Aortic Valve Calcium on the Agreement Between Automated 3-Dimensional Transesophageal Echocardiography and Multidetector Row Computed Tomography for Aortic Annulus Sizing.

    PubMed

    Podlesnikar, Tomaz; Prihadi, Edgard A; van Rosendael, Philippe J; Vollema, E Mara; van der Kley, Frank; de Weger, Arend; Ajmone Marsan, Nina; Naji, Franjo; Fras, Zlatko; Bax, Jeroen J; Delgado, Victoria

    2018-01-01

    Accurate aortic annulus sizing is key for selection of appropriate transcatheter aortic valve implantation (TAVI) prosthesis size. The present study compared novel automated 3-dimensional (3D) transesophageal echocardiography (TEE) software and multidetector row computed tomography (MDCT) for aortic annulus sizing and investigated the influence of the quantity of aortic valve calcium (AVC) on the selection of TAVI prosthesis size. A total of 83 patients with severe aortic stenosis undergoing TAVI were evaluated. Maximal and minimal aortic annulus diameter, perimeter, and area were measured. AVC was assessed with computed tomography. The low and high AVC burden groups were defined according to the median AVC score. Overall, 3D TEE measurements slightly underestimated the aortic annulus dimensions as compared with MDCT (mean differences between maximum, minimum diameter, perimeter, and area: -1.7 mm, 0.5 mm, -2.7 mm, and -13 mm², respectively). The agreement between 3D TEE and MDCT on aortic annulus dimensions was superior among patients with low AVC burden (<3,025 arbitrary units) compared with patients with high AVC burden (≥3,025 arbitrary units). The interobserver variability was excellent for both methods. 3D TEE and MDCT led to the same prosthesis size selection in 88%, 95%, and 81% of patients in the total population, the low, and the high AVC burden group, respectively. In conclusion, the novel automated 3D TEE imaging software allows accurate and highly reproducible measurements of the aortic annulus dimensions and shows excellent agreement with MDCT to determine the TAVI prosthesis size, particularly in patients with low AVC burden. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  6. New Patterns of the Two-Dimensional Rogue Waves: (2+1)-Dimensional Maccari System

    NASA Astrophysics Data System (ADS)

    Wang, Gai-Hua; Wang, Li-Hong; Rao, Ji-Guang; He, Jing-Song

    2017-06-01

    The ocean rogue wave is a puzzling and destructive phenomenon that is not yet thoroughly understood. The two-dimensional nature of this wave has inspired extensive efforts to recognize new patterns of rogue waves based on dynamical equations with two spatial variables and one temporal variable, a crucial step toward preventing this disastrous event at the earliest stage. Along this line, we present twelve new patterns of two-dimensional rogue waves, which are reduced from a rational and explicit formula of the solutions for a (2+1)-dimensional Maccari system. The extreme points (lines) of the first-order lumps (rogue waves) are discussed according to their analytical formulas. For the lower-order rogue waves, we show explicitly in the formulas that the parameter b2 plays a significant role in controlling these patterns. Supported by the National Natural Science Foundation of China under Grant No. 11671219 and the K. C. Wong Magna Fund in Ningbo University; Gai-Hua Wang is also supported by the Scientific Research Foundation of the Graduate School of Ningbo University.

  7. Design of efficient circularly symmetric two-dimensional variable digital FIR filters.

    PubMed

    Bindima, Thayyil; Elias, Elizabeth

    2016-05-01

    Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently the design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using a Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using generalized McClellan transformation resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability.

  8. Design of efficient circularly symmetric two-dimensional variable digital FIR filters

    PubMed Central

    Bindima, Thayyil; Elias, Elizabeth

    2016-01-01

    Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently the design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using a Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using generalized McClellan transformation resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability. PMID:27222739
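
    The Farrow idea of tuning a filter without redesign can be sketched as follows: a bank of 1D prototype lowpass filters is designed over a grid of cut-off frequencies, each tap is fitted with a low-order polynomial in the cut-off, and a filter for any in-range cut-off is then obtained by polynomial evaluation alone. This is only a schematic illustration of the 1D VDF step; the CSD quantization, ABC optimization and McClellan mapping to 2D described above are omitted, and all parameter values are illustrative.

```python
import numpy as np
from scipy.signal import firwin

taps = 31
cutoffs = np.linspace(0.1, 0.4, 9)                     # grid of normalized cut-offs (Nyquist = 1)
bank = np.array([firwin(taps, fc) for fc in cutoffs])  # one prototype filter per cut-off

# Farrow idea: fit each tap as a cubic polynomial in the tuning parameter, so that
# any in-range cut-off needs only a polynomial evaluation, not a redesign.
order = 3
polys = [np.polyfit(cutoffs, bank[:, k], order) for k in range(taps)]

def tunable_lowpass(fc):
    """Return FIR coefficients for cut-off fc by evaluating the fitted polynomials."""
    return np.array([np.polyval(p, fc) for p in polys])

h = tunable_lowpass(0.27)   # coefficients for a cut-off not on the design grid
print(h.shape, h.sum())     # DC gain should stay close to 1
```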

  9. A firefly algorithm for optimum design of new-generation beams

    NASA Astrophysics Data System (ADS)

    Erdal, F.

    2017-06-01

    This research addresses the minimum weight design of new-generation steel beams with sinusoidal openings using a metaheuristic search technique, namely the firefly method. The proposed algorithm is also used to compare the optimum design results of sinusoidal web-expanded beams with steel castellated and cellular beams. Optimum design problems of all beams are formulated according to the design limitations stipulated by the Steel Construction Institute. The design methods adopted in these publications are consistent with BS 5950 specifications. The formulation of the design problem considering the above-mentioned limitations turns out to be a discrete programming problem. The design algorithms based on the technique select the optimum universal beam sections, dimensional properties of sinusoidal, hexagonal and circular holes, and the total number of openings along the beam as design variables. Furthermore, this selection is also carried out such that the behavioural limitations are satisfied. Numerical examples are presented, where the suggested algorithm is implemented to achieve the minimum weight design of these beams subjected to loading combinations.
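
    A minimal, generic sketch of the firefly search used above, applied to a toy continuous objective; the beam-design variables, discrete section catalogues and SCI/BS 5950 constraint checks of the paper are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def firefly_minimize(f, bounds, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Generic firefly algorithm minimizing f over box bounds [(lo, hi), ...]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n, lo.size))
    fit = np.array([f(p) for p in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:                      # move firefly i towards the brighter j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(lo.size) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    fit[i] = f(x[i])
        alpha *= 0.97                                    # gradually damp the random walk
    best = np.argmin(fit)
    return x[best], fit[best]

# Toy usage: minimize a shifted sphere function in 4 variables.
x_best, f_best = firefly_minimize(lambda v: np.sum((v - 1.2) ** 2), [(-5, 5)] * 4)
print(x_best, f_best)
```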

  10. Igs expressed by chronic lymphocytic leukemia B cells show limited binding-site structure variability.

    PubMed

    Marcatili, Paolo; Ghiotto, Fabio; Tenca, Claudya; Chailyan, Anna; Mazzarello, Andrea N; Yan, Xiao-Jie; Colombo, Monica; Albesiano, Emilia; Bagnara, Davide; Cutrona, Giovanna; Morabito, Fortunato; Bruno, Silvia; Ferrarini, Manlio; Chiorazzi, Nicholas; Tramontano, Anna; Fais, Franco

    2013-06-01

    Ag selection has been suggested to play a role in chronic lymphocytic leukemia (CLL) pathogenesis, but no large-scale analysis has been performed so far on the structure of the Ag-binding sites (ABSs) of leukemic cell Igs. We sequenced both H and L chain V(D)J rearrangements from 366 CLL patients and modeled their three-dimensional structures. The resulting ABS structures were clustered into a small number of discrete sets, each containing ABSs with similar shapes and physicochemical properties. This structural classification correlates well with other known prognostic factors such as Ig mutation status and recurrent (stereotyped) receptors, but it shows a better prognostic value, at least in the case of one structural cluster for which clinical data were available. These findings suggest, for the first time, to our knowledge, on the basis of a structural analysis of the Ab-binding sites, that selection by a finite quota of antigenic structures operates on most CLL cases, whether mutated or unmutated.

  11. Estimation of effective hydrologic properties of soils from observations of vegetation density. M.S. Thesis; [water balance of watersheds in Clinton, Maine and Santa Paula, California

    NASA Technical Reports Server (NTRS)

    Tellers, T. E.

    1980-01-01

    An existing one-dimensional model of the annual water balance is reviewed. Slight improvements are made in the method of calculating the bare soil component of evaporation, and in the way surface retention is handled. A natural selection hypothesis, which specifies the equilibrium vegetation density for a given, water limited, climate-soil system, is verified through comparisons with observed data and is employed in the annual water balance of watersheds in Clinton, Ma., and Santa Paula, Ca., to estimate effective areal average soil properties. Comparison of CDF's of annual basin yield derived using these soil properties with observed CDF's provides excellent verification of the soil-selection procedure. This method of parameterization of the land surface should be useful with present global circulation models, enabling them to account for both the non-linearity in the relationship between soil moisture flux and soil moisture concentration, and the variability of soil properties from place to place over the Earth's surface.

  12. Three-dimensional marginal separation

    NASA Technical Reports Server (NTRS)

    Duck, Peter W.

    1988-01-01

    The three dimensional marginal separation of a boundary layer along a line of symmetry is considered. The key equation governing the displacement function is derived, and found to be a nonlinear integral equation in two space variables. This is solved iteratively using a pseudo-spectral approach, based partly in double Fourier space, and partly in physical space. Qualitatively, the results are similar to previously reported two dimensional results (which are also computed to test the accuracy of the numerical scheme); however quantitatively the three dimensional results are much different.

  13. Parallel Planes Information Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian

    2015-12-26

    This software presents a user-provided multivariate dataset as an interactive three dimensional visualization so that the user can explore the correlation between variables in the observations and the distribution of observations among the variables.

  14. Delineation of recharge areas for selected wells in the St. Peter-Prairie du Chien-Jordan Aquifer, Rochester, Minnesota

    USGS Publications Warehouse

    Delin, G.N.; Almendinger, James Edward

    1991-01-01

    Hydrogeologic mapping and numerical modeling were used to delineate zones of contribution to wells, defined as all parts of a ground-water-flow system that could supply water to a well. The zones of contribution delineated by use of numerical modeling have similar orientation (parallel to regional flow directions) but significantly different areas than the zones of contribution delineated by use of hydrogeologic mapping. Differences in computed areas of recharge are attributed to the capability of the numerical model to more accurately represent (1) the three-dimensional flow system, (2) hydrologic boundaries like streams, (3) variable recharge, and (4) the influence of nearby pumped wells, compared to the analytical models.

  15. Delineation of recharge areas for selected wells in the St. Peter-Prairie du Chien-Jordan aquifer, Rochester, Minnesota

    USGS Publications Warehouse

    Delin, G.N.; Almendinger, James Edward

    1993-01-01

    Hydrogeologic mapping and numerical modeling were used to delineate zones of contribution to wells, defined as all parts of a ground-water-flow system that could supply water to a well. The zones of contribution delineated by use of numerical modeling have similar orientation (parallel to regional flow directions) but significantly different areas than the zones of contribution delineated by use of hydrogeologic mapping. Differences in computed areas of recharge are attributed to the capability of the numerical model to more accurately represent (1) the three-dimensional flow system, (2) hydrologic boundaries such as streams, (3) variable recharge, and (4) the influence of nearby pumped wells, compared to the analytical models.

  16. Convective dynamics - Panel report

    NASA Technical Reports Server (NTRS)

    Carbone, Richard; Foote, G. Brant; Moncrieff, Mitch; Gal-Chen, Tzvi; Cotton, William; Heymsfield, Gerald

    1990-01-01

    Aspects of highly organized forms of deep convection at midlatitudes are reviewed. Past emphasis in field work and cloud modeling has been directed toward severe weather as evidenced by research on tornadoes, hail, and strong surface winds. A number of specific issues concerning future thrusts, tactics, and techniques in convective dynamics are presented. These subjects include: convective modes and parameterization, global structure and scale interaction, convective energetics, transport studies, anvils and scale interaction, and scale selection. Also discussed are analysis workshops, four-dimensional data assimilation, matching models with observations, network Doppler analyses, mesoscale variability, and high-resolution/high-performance Doppler. It is also noted that classical surface measurements and soundings, flight-level research aircraft data, passive satellite data, and traditional photogrammetric studies are examples of datasets that require assimilation and integration.

  17. Sensitivity Analysis of Stability Problems of Steel Structures using Shell Finite Elements and Nonlinear Computation Methods

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk; Kala, Jiří

    2011-09-01

    The main focus of the paper is the analysis of the influence of residual stress on the ultimate limit state of a hot-rolled member in compression. The member was modelled using thin-walled elements of type SHELL 181 and meshed in the programme ANSYS. Geometrical and material non-linear analysis was used. The influence of residual stress was studied using variance-based sensitivity analysis. In order to obtain more general results, the non-dimensional slenderness was selected as a study parameter. Comparison of the influence of the residual stress with the influence of other dominant imperfections is illustrated in the conclusion of the paper. All input random variables were considered according to results of experimental research.
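
    The variance-based sensitivity analysis mentioned above can be illustrated with a small pick-freeze (Saltelli-type) estimator of first-order Sobol indices; the toy response function and variable roles below are placeholders, not the shell-element ANSYS model of the paper.

```python
import numpy as np

def first_order_sobol(model, sampler, n=20000, d=3, seed=0):
    """Estimate first-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y) by pick-freeze."""
    rng = np.random.default_rng(seed)
    A = sampler(rng, n, d)
    B = sampler(rng, n, d)
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                              # freeze all columns except column i
        S[i] = np.mean(yB * (model(ABi) - yA)) / var     # Saltelli (2010) first-order estimator
    return S

# Toy response: the first input dominates, the second matters less, the third barely.
def toy_member(X):
    return 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * X[:, 2] ** 2

uniform = lambda rng, n, d: rng.uniform(-1.0, 1.0, size=(n, d))
print(first_order_sobol(toy_member, uniform))   # roughly each term's share of the output variance
```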

  18. Multi-Dimensional Sensors and Sensing Systems

    NASA Technical Reports Server (NTRS)

    Stetter, Joseph R. (Inventor); Shirke, Amol G. (Inventor)

    2014-01-01

    A universal microelectromechanical (MEMS) nano-sensor platform having a substrate and conductive layer deposited in a pattern on the surface to make several devices at the same time, a patterned insulation layer, wherein the insulation layer is configured to expose one or more portions of the conductive layer, and one or more functionalization layers deposited on the exposed portions of the conductive layer to make multiple sensing capability on a single MEMS fabricated device. The functionalization layers are adapted to provide one or more transducer sensor classes selected from the group consisting of: radiant, electrochemical, electronic, mechanical, magnetic, and thermal sensors for chemical and physical variables and producing more than one type of sensor for one or more significant parameters that need to be monitored.

  19. Existence of Lipschitz selections of the Steiner map

    NASA Astrophysics Data System (ADS)

    Bednov, B. B.; Borodin, P. A.; Chesnokova, K. V.

    2018-02-01

    This paper is concerned with the problem of the existence of Lipschitz selections of the Steiner map St_n, which associates with n points of a Banach space X the set of their Steiner points. The answer to this problem depends on the geometric properties of the unit sphere S(X) of X, its dimension, and the number n. For n ≥ 4, general conditions are obtained on the space X under which St_n admits no Lipschitz selection. When X is finite dimensional it is shown that, if n ≥ 4 is even, the map St_n has a Lipschitz selection if and only if S(X) is a finite polytope; this is not true if n ≥ 3 is odd. For n = 3, the (single-valued) map St_3 is shown to be Lipschitz continuous in any smooth strictly-convex two-dimensional space; this ceases to be true in three-dimensional spaces. Bibliography: 21 titles.

  20. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    NASA Astrophysics Data System (ADS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
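
    A simplified sketch of this kind of pipeline, using scikit-learn's mutual-information ranking as a stand-in for the PMI step and a support vector regressor as the data-driven model; the synthetic data and the number of retained inputs are illustrative assumptions, and the GA-ANN and tree-based IIS selectors are not reproduced.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 8))                       # 8 candidate flowmeter-derived inputs
y = 2.0 * X[:, 0] + np.sin(X[:, 2]) + 0.1 * rng.normal(size=400)   # only inputs 0 and 2 matter

# Rank candidate inputs by estimated mutual information with the target.
mi = mutual_info_regression(X, y, random_state=0)
keep = np.argsort(mi)[::-1][:3]                     # retain the three most informative inputs
print("selected inputs:", keep, "MI scores:", np.round(mi, 3))

# Compare SVM models built on the selected subset versus the full candidate set.
for cols, label in [(keep, "selected"), (np.arange(X.shape[1]), "all")]:
    score = cross_val_score(SVR(C=10.0), X[:, cols], y, cv=5, scoring="r2").mean()
    print(f"{label:8s} R^2 = {score:.3f}")
```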

  1. System for generating two-dimensional masks from a three-dimensional model using topological analysis

    DOEpatents

    Schiek, Richard [Albuquerque, NM

    2006-06-20

    A method of generating two-dimensional masks from a three-dimensional model comprises providing a three-dimensional model representing a micro-electro-mechanical structure for manufacture and a description of process mask requirements, reducing the three-dimensional model to a topological description of unique cross sections, and selecting candidate masks from the unique cross sections and the cross section topology. The method further can comprise reconciling the candidate masks based on the process mask requirements description to produce two-dimensional process masks.

  2. Computer language for identifying chemicals with comprehensive two-dimensional gas chromatography and mass spectrometry.

    PubMed

    Reichenbach, Stephen E; Kottapalli, Visweswara; Ni, Mingtian; Visvanathan, Arvind

    2005-04-15

    This paper describes a language for expressing criteria for chemical identification with comprehensive two-dimensional gas chromatography paired with mass spectrometry (GC×GC-MS) and presents computer-based tools implementing the language. The Computer Language for Identifying Chemicals (CLIC) allows expressions that describe rules (or constraints) for selecting chemical peaks or data points based on multi-dimensional chromatographic properties and mass spectral characteristics. CLIC offers chromatographic functions of retention times, functions of mass spectra, numbers for quantitative and relational evaluation, and logical and arithmetic operators. The language is demonstrated with the compound-class selection rules described by Welthagen et al. [W. Welthagen, J. Schnelle-Kreis, R. Zimmermann, J. Chromatogr. A 1019 (2003) 233-249]. A software implementation of CLIC provides a calculator-like graphical user interface (GUI) for building and applying selection expressions. From the selection calculator, expressions can be used to select chromatographic peaks that meet the criteria or create selection chromatograms that mask data points inconsistent with the criteria. Selection expressions can be combined with graphical, geometric constraints in the retention-time plane as a powerful component for chemical identification with template matching or used to speed and improve mass spectrum library searches.

  3. Asymptotic and spectral analysis of the gyrokinetic-waterbag integro-differential operator in toroidal geometry

    NASA Astrophysics Data System (ADS)

    Besse, Nicolas; Coulette, David

    2016-08-01

    Achieving plasmas with good stability and confinement properties is a key research goal for magnetic fusion devices. The underlying equations are the Vlasov-Poisson and Vlasov-Maxwell (VPM) equations in three space variables, three velocity variables, and one time variable. Even in those somewhat academic cases where global equilibrium solutions are known, studying their stability requires the analysis of the spectral properties of the linearized operator, a daunting task. We have identified a model, for which not only equilibrium solutions can be constructed, but many of their stability properties are amenable to rigorous analysis. It uses a class of solution to the VPM equations (or to their gyrokinetic approximations) known as waterbag solutions which, in particular, are piecewise constant in phase-space. It also uses, not only the gyrokinetic approximation of fast cyclotronic motion around magnetic field lines, but also an asymptotic approximation regarding the magnetic-field-induced anisotropy: the spatial variation along the field lines is taken much slower than across them. Together, these assumptions result in a drastic reduction in the dimensionality of the linearized problem, which becomes a set of two nested one-dimensional problems: an integral equation in the poloidal variable, followed by a one-dimensional complex Schrödinger equation in the radial variable. We show here that the operator associated to the poloidal variable is meromorphic in the eigenparameter, the pulsation frequency. We also prove that, for all but a countable set of real pulsation frequencies, the operator is compact and thus behaves mostly as a finite-dimensional one. The numerical algorithms based on such ideas have been implemented in a companion paper [D. Coulette and N. Besse, "Numerical resolution of the global eigenvalue problem for gyrokinetic-waterbag model in toroidal geometry" (submitted)] and were found to be surprisingly close to those for the original gyrokinetic-Vlasov equations. The purpose of the present paper is to make these new ideas accessible to two readerships: applied mathematicians and plasma physicists.

  4. Advancing three-dimensional MEMS by complimentary laser micro manufacturing

    NASA Astrophysics Data System (ADS)

    Palmer, Jeremy A.; Williams, John D.; Lemp, Tom; Lehecka, Tom M.; Medina, Francisco; Wicker, Ryan B.

    2006-01-01

    This paper describes improvements that enable engineers to create three-dimensional MEMS in a variety of materials. It also provides a means for selectively adding three-dimensional, high aspect ratio features to pre-existing PMMA micro molds for subsequent LIGA processing. This complimentary method involves in situ construction of three-dimensional micro molds in a stand-alone configuration or directly adjacent to features formed by x-ray lithography. Three-dimensional micro molds are created by micro stereolithography (MSL), an additive rapid prototyping technology. Alternatively, three-dimensional features may be added by direct femtosecond laser micro machining. Parameters for optimal femtosecond laser micro machining of PMMA at 800 nanometers are presented. The technical discussion also includes strategies for enhancements in the context of material selection and post-process surface finish. This approach may lead to practical, cost-effective 3-D MEMS with the surface finish and throughput advantages of x-ray lithography. Accurate three-dimensional metal microstructures are demonstrated. Challenges remain in process planning for micro stereolithography and development of buried features following femtosecond laser micro machining.

  5. A one-dimensional model for gas-solid heat transfer in pneumatic conveying

    NASA Astrophysics Data System (ADS)

    Smajstrla, Kody Wayne

    A one-dimensional ODE model, reduced from a higher-dimensional two-fluid model, is developed to study dilute, two-phase (air and solid particles) flows with heat transfer in a horizontal pneumatic conveying pipe. Instead of using constant air properties (e.g., density, viscosity, thermal conductivity) evaluated at the initial flow temperature and pressure, this model uses an iteration approach to couple the air properties with flow pressure and temperature. Multiple studies comparing the use of constant or variable air density, viscosity, and thermal conductivity are conducted to study the impact of the changing properties on system performance. The results show that the fully constant property calculation will overestimate the results of the fully variable calculation by 11.4%, while the constant density with variable viscosity and thermal conductivity calculation resulted in an 8.7% overestimation, the constant viscosity with variable density and thermal conductivity overestimated by 2.7%, and the constant thermal conductivity with variable density and viscosity calculation resulted in a 1.2% underestimation. These results demonstrate that gas properties varying with gas temperature can have a significant impact on a conveying system and that the varying density accounts for the majority of that impact. The accuracy of the model is also validated by comparing the simulation results to the experimental values found in the literature.

  6. Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis

    USGS Publications Warehouse

    Hong, Y.-S.T.; Rosen, Michael R.; Bhamidimarri, R.

    2003-01-01

    This paper addresses the problem of how to capture the complex relationships that exist between process variables and to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms and the highly time-varying, multivariable nature of a real WTP, diagnosis of the WTP is still difficult in practice. The application of intelligent techniques, which can analyse the multi-dimensional process data using a sophisticated visualisation technique, can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Maps (KSOFM) neural network is applied to analyse the multi-dimensional process data, and to diagnose the inter-relationship of the process variables in a real activated-sludge WTP. By using component planes, some detailed local relationships between the process variables, e.g., responses of the process variables under different operating conditions, as well as the global information, are discovered. The operating condition and the inter-relationship among the process variables in the WTP have been diagnosed and extracted by the information obtained from the clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysing and diagnosing tool to understand the system behaviour and to extract knowledge contained in multi-dimensional data of a large-scale WTP. © 2003 Elsevier Science Ltd. All rights reserved.

  7. Prediction of Incident Diabetes in the Jackson Heart Study Using High-Dimensional Machine Learning

    PubMed Central

    Casanova, Ramon; Saldana, Santiago; Simpson, Sean L.; Lacy, Mary E.; Subauste, Angela R.; Blackshear, Chad; Wagenknecht, Lynne; Bertoni, Alain G.

    2016-01-01

    Statistical models to predict incident diabetes are often based on limited variables. Here we pursued two main goals: 1) investigate the relative performance of a machine learning method such as Random Forests (RF) for detecting incident diabetes in a high-dimensional setting defined by a large set of observational data, and 2) uncover potential predictors of diabetes. The Jackson Heart Study collected data at baseline and in two follow-up visits from 5,301 African Americans. We excluded those with baseline diabetes and no follow-up, leaving 3,633 individuals for analyses. Over a mean 8-year follow-up, 584 participants developed diabetes. The full RF model evaluated 93 variables including demographic, anthropometric, blood biomarker, medical history, and echocardiogram data. We also used RF metrics of variable importance to rank variables according to their contribution to diabetes prediction. We implemented other models based on logistic regression and RF where features were preselected. The RF full model performance was similar (AUC = 0.82) to those more parsimonious models. The top-ranked variables according to RF included hemoglobin A1C, fasting plasma glucose, waist circumference, adiponectin, c-reactive protein, triglycerides, leptin, left ventricular mass, high-density lipoprotein cholesterol, and aldosterone. This work shows the potential of RF for incident diabetes prediction while dealing with high-dimensional data. PMID:27727289
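
    A minimal scikit-learn sketch of the approach described above: a random forest fitted to a high-dimensional table, AUC as the performance metric, and impurity-based importances to rank predictors. The synthetic data below merely stand in for the Jackson Heart Study variables.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in: 93 candidate predictors, only a handful informative, imbalanced outcome.
X, y = make_classification(n_samples=3633, n_features=93, n_informative=10,
                           weights=[0.84], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0, n_jobs=-1).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"test AUC = {auc:.2f}")

# Rank variables by impurity-based importance, analogous to ranking the JHS predictors.
top = np.argsort(rf.feature_importances_)[::-1][:10]
print("top-ranked predictor indices:", top)
```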

  8. Three-Dimensional Flow of an Oldroyd-B Fluid with Variable Thermal Conductivity and Heat Generation/Absorption

    PubMed Central

    Shehzad, Sabir Ali; Alsaedi, Ahmed; Hayat, Tasawar; Alhuthali, M. Shahab

    2013-01-01

    This paper looks at the series solutions of three dimensional boundary layer flow. An Oldroyd-B fluid with variable thermal conductivity is considered. The flow is induced due to stretching of a surface. Analysis has been carried out in the presence of heat generation/absorption. Homotopy analysis is implemented in developing the series solutions to the governing flow and energy equations. Graphs are presented and discussed for various parameters of interest. Comparison of present study with the existing limiting solution is shown and examined. PMID:24223780

  9. Absolute judgment for one- and two-dimensional stimuli embedded in Gaussian noise

    NASA Technical Reports Server (NTRS)

    Kvalseth, T. O.

    1977-01-01

    This study examines the effect on human performance of adding Gaussian noise or disturbance to the stimuli in absolute judgment tasks involving both one- and two-dimensional stimuli. For each selected stimulus value (both an X-value and a Y-value were generated in the two-dimensional case), 10 values (or 10 pairs of values in the two-dimensional case) were generated from a zero-mean Gaussian variate, added to the selected stimulus value and then served as the coordinate values for the 10 points that were displayed sequentially on a CRT. The results show that human performance, in terms of the information transmitted and rms error as functions of stimulus uncertainty, was significantly reduced as the noise variance increased.

  10. Data mining and statistical inference in selective laser melting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamath, Chandrika

    Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.

  11. Data mining and statistical inference in selective laser melting

    DOE PAGES

    Kamath, Chandrika

    2016-01-11

    Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.

  12. Non-ignorable missingness item response theory models for choice effects in examinee-selected items.

    PubMed

    Liu, Chen-Wei; Wang, Wen-Chung

    2017-11-01

    Examinee-selected item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set, always yields incomplete data (i.e., when only the selected items are answered, data are missing for the others) that are likely non-ignorable in likelihood inference. Standard item response theory (IRT) models become infeasible when ESI data are missing not at random (MNAR). To solve this problem, the authors propose a two-dimensional IRT model that posits one unidimensional IRT model for observed data and another for nominal selection patterns. The two latent variables are assumed to follow a bivariate normal distribution. In this study, the mirt freeware package was adopted to estimate parameters. The authors conduct an experiment to demonstrate that ESI data are often non-ignorable and to determine how to apply the new model to the data collected. Two follow-up simulation studies are conducted to assess the parameter recovery of the new model and the consequences for parameter estimation of ignoring MNAR data. The results of the two simulation studies indicate good parameter recovery of the new model and poor parameter recovery when non-ignorable missing data were mistakenly treated as ignorable. © 2017 The British Psychological Society.

  13. Analysis of the GRNs Inference by Using Tsallis Entropy and a Feature Selection Approach

    NASA Astrophysics Data System (ADS)

    Lopes, Fabrício M.; de Oliveira, Evaldo A.; Cesar, Roberto M.

    An important problem in the bioinformatics field is to understand how genes are regulated and interact through gene networks. This knowledge can be helpful for many applications, such as disease treatment design and drug development. For this reason, it is very important to uncover the functional relationship among genes and then to construct the gene regulatory network (GRN) from temporal expression data. However, this task usually involves data with a large number of variables and a small number of observations. In this way, there is a strong motivation to use pattern recognition and dimensionality reduction approaches. In particular, feature selection is especially important in order to select the most important predictor genes that can explain some phenomena associated with the target genes. This work presents a first study of the sensitivity of entropy methods regarding the entropy functional form, applied to the problem of topology recovery of GRNs. The generalized entropy proposed by Tsallis is used to study this sensitivity. The inference process is based on a feature selection approach, which is applied to simulated temporal expression data generated by an artificial gene network (AGN) model. The inferred GRNs are validated in terms of global network measures. Some interesting conclusions can be drawn from the experimental results, as reported for the first time in the present paper.
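
    For reference, the generalized (Tsallis) entropy used above has the form S_q = (1 - sum_i p_i^q) / (q - 1) and reduces to the Shannon entropy as q tends to 1; a small sketch of the functional form is given below (the GRN inference pipeline itself is not reproduced).

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1); Shannon entropy in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))      # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

dist = [0.5, 0.25, 0.25]
for q in (0.5, 1.0, 2.0):
    print(q, round(tsallis_entropy(dist, q), 4))
```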

  14. Selection for low or high primary dormancy in Lolium rigidum Gaud seeds results in constitutive differences in stress protein expression and peroxidase activity

    PubMed Central

    Goggin, Danica E.; Powles, Stephen B.; Steadman, Kathryn J.

    2011-01-01

    Seed dormancy in wild Lolium rigidum Gaud (annual ryegrass) populations is highly variable and not well characterized at the biochemical level. To identify some of the determinants of dormancy level in these seeds, the proteomes of subpopulations selected for low and high levels of primary dormancy were compared by two-dimensional polyacrylamide gel electrophoresis of extracts from mature, dry seeds. High-dormancy seeds showed higher expression of small heat shock proteins, enolase, and glyoxalase I than the low-dormancy seeds. The functional relevance of these differences in protein expression was confirmed by the fact that high-dormancy seeds were more tolerant to high temperatures imposed at imbibition and had consistently higher glyoxalase I activity over 0–42 d dark stratification. Higher expression of a putative glutathione peroxidase in low-dormancy seeds was not accompanied by higher activity, but these seeds had a slightly more oxidized glutathione pool and higher total peroxidase activity. Overall, these biochemical and physiological differences suggest that L. rigidum seeds selected for low dormancy are more prepared for rapid germination via peroxidase-mediated cell wall weakening, whilst seeds selected for high dormancy are constitutively prepared to survive environmental stresses, even in the absence of stress during seed development. PMID:20974739

  15. Three-variable solution in the (2+1)-dimensional null-surface formulation

    NASA Astrophysics Data System (ADS)

    Harriott, Tina A.; Williams, J. G.

    2018-04-01

    The null-surface formulation of general relativity (NSF) describes gravity by using families of null surfaces instead of a spacetime metric. Despite the fact that the NSF is (to within a conformal factor) equivalent to general relativity, the equations of the NSF are exceptionally difficult to solve, even in 2+1 dimensions. The present paper gives the first exact (2+1)-dimensional solution that depends nontrivially upon all three of the NSF's intrinsic spacetime variables. The metric derived from this solution is shown to represent a spacetime whose source is a massless scalar field that satisfies the general relativistic wave equation and the Einstein equations with minimal coupling. The spacetime is identified as one of a family of (2+1)-dimensional general relativistic spacetimes discovered by Cavaglià.

  16. Identification of solid state fermentation degree with FT-NIR spectroscopy: Comparison of wavelength variable selection methods of CARS and SCARS.

    PubMed

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-01-01

    The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by FT-NIR spectroscopy technique was investigated in this study. Two wavelength variable selection methods including competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS) were employed to select the important wavelengths. PLS-DA was applied to calibrate identification models using the wavelength variables selected by CARS and SCARS for identification of solid state fermentation degree. Experimental results showed that the numbers of wavelength variables selected by CARS and SCARS were 58 and 47, respectively, from the 1557 original wavelength variables. Compared with the results of full-spectrum PLS-DA, the two wavelength variable selection methods both could enhance the performance of identified models. Meanwhile, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results with the identification rate of 91.43% in the validation process. The overall results sufficiently demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper wavelength variable selection method enables more accurate identification of solid state fermentation degree. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Identification of solid state fermentation degree with FT-NIR spectroscopy: Comparison of wavelength variable selection methods of CARS and SCARS

    NASA Astrophysics Data System (ADS)

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-10-01

    The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by FT-NIR spectroscopy technique was investigated in this study. Two wavelength variable selection methods including competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS) were employed to select the important wavelengths. PLS-DA was applied to calibrate identification models using the wavelength variables selected by CARS and SCARS for identification of solid state fermentation degree. Experimental results showed that the numbers of wavelength variables selected by CARS and SCARS were 58 and 47, respectively, from the 1557 original wavelength variables. Compared with the results of full-spectrum PLS-DA, the two wavelength variable selection methods both could enhance the performance of identified models. Meanwhile, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results with the identification rate of 91.43% in the validation process. The overall results sufficiently demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper wavelength variable selection method enables more accurate identification of solid state fermentation degree.
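
    A much-simplified sketch of the wavelength-selection idea: PLS-DA is implemented as PLS regression on a 0/1 class label, wavelengths are ranked by absolute regression coefficient, and the model is refit on a reduced subset. Only this coefficient-ranking core is shown; the Monte Carlo resampling and exponentially decreasing retention schedule that define CARS and SCARS are omitted, and the synthetic spectra are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p = 160, 1557                                    # samples x wavelength variables
X = rng.normal(size=(n, p))
y = (X[:, 40] + X[:, 900] - X[:, 1200] > 0).astype(float)   # two fermentation "degrees"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def plsda_accuracy(cols):
    # PLS-DA: regress the 0/1 label on the spectra and threshold the prediction at 0.5.
    model = PLSRegression(n_components=5).fit(X_tr[:, cols], y_tr)
    pred = (model.predict(X_te[:, cols]).ravel() > 0.5).astype(float)
    return np.mean(pred == y_te)

# Rank wavelengths by absolute PLS regression coefficient and keep a small subset.
full = PLSRegression(n_components=5).fit(X_tr, y_tr)
keep = np.argsort(np.abs(full.coef_).ravel())[::-1][:50]

print("full spectrum accuracy  :", round(plsda_accuracy(np.arange(p)), 3))
print("selected subset accuracy:", round(plsda_accuracy(keep), 3))
```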

  18. A THREE-DIMENSIONAL AIR FLOW MODEL FOR SOIL VENTING: SUPERPOSITION OF ANLAYTICAL FUNCTIONS

    EPA Science Inventory

    A three-dimensional computer model was developed for the simulation of the soil-air pressure distribution at steady state and specific discharge vectors during soil venting with multiple wells in unsaturated soil. The Kirchhoff transformation of dependent variables and coordinate...

  19. A reconstruction algorithm for three-dimensional object-space data using spatial-spectral multiplexing

    NASA Astrophysics Data System (ADS)

    Wu, Zhejun; Kudenov, Michael W.

    2017-05-01

    This paper presents a reconstruction algorithm for the Spatial-Spectral Multiplexing (SSM) optical system. The goal of this algorithm is to recover the three-dimensional spatial and spectral information of a scene, given that a one-dimensional spectrometer array is used to sample the pupil of the spatial-spectral modulator. The challenge of the reconstruction is that the non-parametric representation of the three-dimensional spatial and spectral object requires a large number of variables, thus leading to an underdetermined linear system that is hard to uniquely recover. We propose to reparameterize the spectrum using B-spline functions to reduce the number of unknown variables. Our reconstruction algorithm then solves the improved linear system via a least-squares optimization of such B-spline coefficients with additional spatial smoothness regularization. The ground truth object and the optical model for the measurement matrix are simulated with both spatial and spectral assumptions according to a realistic field of view. In order to test the robustness of the algorithm, we add Poisson noise to the measurement and test on both two-dimensional and three-dimensional spatial and spectral scenes. Our analysis shows that the root mean square error of the recovered results is within 5.15%.
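
    The reconstruction step described above amounts to a regularized linear least-squares problem. A minimal sketch with a generic forward matrix and a second-difference smoothness penalty is given below; the actual SSM forward model, B-spline basis and problem dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 300, 120                        # measurements x unknown coefficients
A = rng.normal(size=(m, n))            # placeholder forward (measurement) matrix
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + rng.poisson(5.0, size=m) - 5.0      # measurements with Poisson-like noise

# Second-difference operator enforcing smoothness of the recovered coefficients.
D = np.diff(np.eye(n), n=2, axis=0)
lam = 10.0

# Solve min ||A x - b||^2 + lam * ||D x||^2 via the normal equations.
x_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)
rmse = np.sqrt(np.mean((x_hat - x_true) ** 2)) / np.ptp(x_true)
print(f"relative RMSE = {rmse:.3%}")
```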

  20. Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap.

    PubMed

    Spiwok, Vojtěch; Králová, Blanka

    2011-12-14

    Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and corresponding transition structures inaccessible by an unbiased simulation. This scheme allows essentially any parameter of the system to be used as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general purpose mapping for dimensionality reduction, beyond the context of molecular modeling. © 2011 American Institute of Physics.
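
    The dimensionality-reduction step can be sketched with scikit-learn's Isomap on placeholder conformer coordinates; the metadynamics bias itself (and the out-of-sample mapping needed during a biased run) is not shown, and the array shapes are assumptions chosen to match the 72-dimensional example in the abstract.

```python
import numpy as np
from sklearn.manifold import Isomap

# Placeholder "conformations": 2000 snapshots x 72 Cartesian coordinates (24 atoms x 3).
rng = np.random.default_rng(4)
conformations = rng.normal(size=(2000, 72))

# Nonlinear reduction to three collective variables, as in the Isomap/metadynamics scheme.
embedding = Isomap(n_neighbors=12, n_components=3).fit_transform(conformations)
print(embedding.shape)        # (2000, 3) -> candidate collective variables for the bias
```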

  1. A Three-Dimensional Finite-Element Model for Simulating Water Flow in Variably Saturated Porous Media

    NASA Astrophysics Data System (ADS)

    Huyakorn, Peter S.; Springer, Everett P.; Guvanasen, Varut; Wadsworth, Terry D.

    1986-12-01

    A three-dimensional finite-element model for simulating water flow in variably saturated porous media is presented. The model formulation is general and capable of accommodating complex boundary conditions associated with seepage faces and infiltration or evaporation on the soil surface. Included in this formulation is an improved Picard algorithm designed to cope with severely nonlinear soil moisture relations. The algorithm is formulated for both rectangular and triangular prism elements. The element matrices are evaluated using an "influence coefficient" technique that avoids costly numerical integration. Spatial discretization of a three-dimensional region is performed using a vertical slicing approach designed to accommodate complex geometry with irregular boundaries, layering, and/or lateral discontinuities. Matrix solution is achieved using a slice successive overrelaxation scheme that permits a fairly large number of nodal unknowns (on the order of several thousand) to be handled efficiently on small minicomputers. Six examples are presented to verify and demonstrate the utility of the proposed finite-element model. The first four examples concern one- and two-dimensional flow problems used as sample problems to benchmark the code. The remaining examples concern three-dimensional problems. These problems are used to illustrate the performance of the proposed algorithm in three-dimensional situations involving seepage faces and anisotropic soil media.

  2. Sparse modeling of spatial environmental variables associated with asthma

    PubMed Central

    Chang, Timothy S.; Gangnon, Ronald E.; Page, C. David; Buckingham, William R.; Tandias, Aman; Cowan, Kelly J.; Tomasallo, Carrie D.; Arndt, Brian G.; Hanrahan, Lawrence P.; Guilbert, Theresa W.

    2014-01-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s home address was geocoded to one of 3,456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin’s geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. PMID:25533437

  3. Sparse modeling of spatial environmental variables associated with asthma.

    PubMed

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3,456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. A 100 electrode intracortical array: structural variability.

    PubMed

    Campbell, P K; Jones, K E; Normann, R A

    1990-01-01

    A technique has been developed for fabricating three dimensional "hair brush" electrode arrays from monocrystalline silicon blocks. Arrays consist of a square pattern of 100 penetrating electrodes, with 400 microns interelectrode spacing. Each electrode is 1.5 mm in length and tapers from about 100 microns at its base to a sharp point at the tip. The tips of each electrode are coated with platinum and the entire structure, with the exception of the tips, is insulated with polyimide. Electrical connection to selected electrodes is made by wire bonding polyimide insulated 25 microns diameter gold lead wires to bonding pads on the rear surface of the array. As the geometrical characteristics of the electrodes in such an array will influence their electrical properties (such as impedance, capacitance, spreading resistance in an electrolyte, etc.) it is desirable that such an array have minimal variability in geometry from electrode to electrode. A study was performed to determine the geometrical variability resulting from our micromachining techniques. Measurements of the diameter of each of the 100 electrodes were made at various planes above the silicon substrate of the array. For the array that was measured, the standard deviation of the diameters was approximately 9% of the mean diameter near the tip, 8% near the middle, and 6% near the base. We describe fabrication techniques which should further reduce these variabilities.

  5. Optimizing separations in online comprehensive two-dimensional liquid chromatography.

    PubMed

    Pirok, Bob W J; Gargano, Andrea F G; Schoenmakers, Peter J

    2018-01-01

    Online comprehensive two-dimensional liquid chromatography has become an attractive option for the analysis of complex nonvolatile samples found in various fields (e.g. environmental studies, food, life, and polymer sciences). Two-dimensional liquid chromatography complements the highly popular hyphenated systems that combine liquid chromatography with mass spectrometry. Two-dimensional liquid chromatography is also applied to the analysis of samples that are not compatible with mass spectrometry (e.g. high-molecular-weight polymers), providing important information on the distribution of the sample components along chemical dimensions (molecular weight, charge, lipophilicity, stereochemistry, etc.). Also, in comparison with conventional one-dimensional liquid chromatography, two-dimensional liquid chromatography provides a greater separation power (peak capacity). Because of the additional selectivity and higher peak capacity, the combination of two-dimensional liquid chromatography with mass spectrometry allows for simpler mixtures of compounds to be introduced in the ion source at any given time, improving quantitative analysis by reducing matrix effects. In this review, we summarize the rationale and principles of two-dimensional liquid chromatography experiments, describe advantages and disadvantages of combining different selectivities and discuss strategies to improve the quality of two-dimensional liquid chromatography separations. © 2017 The Authors. Journal of Separation Science published by WILEY-VCH Verlag GmbH & Co. KGaA.

  6. Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2012-01-01

    The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric to conduct sensor placement to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to expected variability using model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.

  7. Separation of variables in Maxwell equations in Plebański-Demiański spacetime

    NASA Astrophysics Data System (ADS)

    Frolov, Valeri P.; Krtouš, Pavel; Kubizňák, David

    2018-05-01

    A new method for separating variables in the Maxwell equations in four- and higher-dimensional Kerr-(A)dS spacetimes proposed recently by Lunin is generalized to any off-shell metric that admits a principal Killing-Yano tensor. The key observation is that Lunin's ansatz for the vector potential can be formulated in a covariant form—in terms of the principal tensor. In particular, focusing on the four-dimensional case we demonstrate separability of Maxwell's equations in the Kerr-NUT-(A)dS and the Plebański-Demiański family of spacetimes. The new method of separation of variables is quite different from the standard approach based on the Newman-Penrose formalism.

  8. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Cliff

    2015-01-01

    Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.

  9. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2016-01-01

    Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
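
    A hedged sketch of the type of fit described, with the attenuation modeled as a linear function of non-dimensional surface position and of the logarithm of Strouhal number at a single observer angle; the arrays below are synthetic stand-ins, not the NASA dataset:

        # Least-squares fit of a shielding-type empirical model at one observer angle:
        # delta_SPL ~ c0 + c1*(x/D) + c2*(h/D) + c3*log10(St).
        # All data are synthetic placeholders for the measured spectra.
        import numpy as np

        rng = np.random.default_rng(1)
        x_over_D = rng.uniform(0.0, 10.0, 200)   # non-dimensional axial surface position
        h_over_D = rng.uniform(0.5, 3.0, 200)    # non-dimensional radial stand-off
        St = rng.uniform(0.05, 5.0, 200)         # Strouhal number
        delta_SPL = (1.0 - 0.4 * x_over_D + 0.8 * h_over_D
                     - 2.0 * np.log10(St) + rng.normal(scale=0.3, size=200))

        A = np.column_stack([np.ones_like(St), x_over_D, h_over_D, np.log10(St)])
        coeffs, *_ = np.linalg.lstsq(A, delta_SPL, rcond=None)
        print(coeffs)   # one coefficient set per observer angle in the full model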

  10. A latent class distance association model for cross-classified data with a categorical response variable.

    PubMed

    Vera, José Fernando; de Rooij, Mark; Heiser, Willem J

    2014-11-01

    In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented. © 2014 The British Psychological Society.

  11. Emotional Variability and Clarity in Depression and Social Anxiety

    PubMed Central

    Thompson, Renee J.; Boden, Matthew Tyler; Gotlib, Ian H.

    2016-01-01

    Recent research has underscored the importance of elucidating specific patterns of emotion that characterize mental disorders. We examined two emotion traits, emotional variability and emotional clarity, in relation to both categorical (diagnostic interview) and dimensional (self-report) measures of Major Depressive Disorder (MDD) and Social Anxiety Disorder (SAD) in women diagnosed with MDD only (n=35), SAD only (n=31), MDD and SAD (n=26), or no psychiatric disorder (n=38). Results of the categorical analyses suggest that elevated emotional variability and diminished emotional clarity are transdiagnostic of MDD and SAD. More specifically, emotional variability was elevated for MDD and SAD diagnoses compared to no diagnosis, showing an additive effect for co-occurring MDD and SAD. Similarly diminished levels of emotional clarity characterized all three clinical groups compared to the healthy control group. Dimensional findings suggest that whereas emotional variability is associated more consistently with depression than with social anxiety, emotional clarity is associated more consistently with social anxiety than with depression. Results are interpreted using a threshold- and dose-response framework. PMID:26371579

  12. Evolutionary algorithms for multi-objective optimization: fuzzy preference aggregation and multisexual EAs

    NASA Astrophysics Data System (ADS)

    Bonissone, Stefano R.

    2001-11-01

    There are many approaches to solving multi-objective optimization problems using evolutionary algorithms. We need to select methods for representing and aggregating preferences, as well as choosing strategies for searching in multi-dimensional objective spaces. First, we suggest the use of linguistic variables to represent preferences and the use of fuzzy rule systems to implement tradeoff aggregations. After a review of alternative EA methods for multi-objective optimization, we explore the use of multi-sexual genetic algorithms (MSGA). In using an MSGA, we need to modify certain parts of the GAs, namely the selection and crossover operations. The selection operator groups solutions according to their gender tag to prepare them for crossover. The crossover is modified by appending a gender tag at the end of the chromosome. We use single and double point crossovers. We determine the gender of the offspring by the amount of genetic material provided by each parent. The parent that contributed the most to the creation of a specific offspring determines the gender that the offspring will inherit. This is still a work in progress, and in the conclusion we examine many future extensions and experiments.
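
    A small sketch of the multisexual recombination idea described above (gender-tagged chromosomes, gender-grouped selection, offspring gender set by the parent contributing more genetic material); the representation and the population are illustrative, not the paper's implementation:

        # Gender-tagged single-point crossover for a multisexual GA sketch.
        # An individual is (genes, gender); the offspring inherits the gender of the
        # parent that contributes more genes (ties go to the first parent).
        import random

        random.seed(0)

        def crossover(p1, p2):
            cut = random.randint(1, len(p1[0]) - 1)
            child_genes = p1[0][:cut] + p2[0][cut:]
            child_gender = p1[1] if cut >= len(p1[0]) - cut else p2[1]
            return (child_genes, child_gender)

        # Two-gender population; selection pairs individuals across gender groups.
        pop = [([random.random() for _ in range(8)], g) for g in ("A", "B") for _ in range(5)]
        group_a = [ind for ind in pop if ind[1] == "A"]
        group_b = [ind for ind in pop if ind[1] == "B"]
        child = crossover(random.choice(group_a), random.choice(group_b))
        print(child[1], [round(x, 2) for x in child[0]])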

  13. Capturing pair-wise epistatic effects associated with three agronomic traits in barley.

    PubMed

    Xu, Yi; Wu, Yajun; Wu, Jixiang

    2018-04-01

    Genetic association mapping has been widely applied to determine genetic markers favorably associated with a trait of interest and provide information for marker-assisted selection. Many association mapping studies commonly focus on main effects due to intolerable computing intensity. This study aims to select several sets of DNA markers with potential epistasis to maximize genetic variations of some key agronomic traits in barley. To do so, we integrated an MDR (multifactor dimensionality reduction) method with a forward variable selection approach. This integrated approach was used to determine single nucleotide polymorphism (SNP) pairs with epistatic effects associated with three agronomic traits: heading date, plant height, and grain yield in barley from the barley Coordinated Agricultural Project. Our results showed that four, seven, and five SNP pairs accounted for 51.06%, 45.66%, and 40.42% of the variation in heading date, plant height, and grain yield, respectively, when epistasis was considered, while the corresponding contributions without epistasis were 45.32%, 31.39%, and 31.31%. The results suggest that the epistasis model was more effective than the non-epistasis model in this study and may be preferred in other applications.
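
    A simplified sketch of the forward-selection idea, greedily adding the SNP pair that most increases the variance explained for a trait; synthetic genotypes and a simple product coding of pair effects stand in for the MDR machinery used in the study:

        # Greedy forward selection of SNP pairs by incremental R^2 for a quantitative trait.
        # Genotypes and the trait are synthetic; pair interactions are coded as products.
        import numpy as np
        from itertools import combinations
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        geno = rng.integers(0, 3, size=(300, 20)).astype(float)   # 300 lines, 20 SNPs (0/1/2)
        trait = (geno[:, 0] * geno[:, 1] + 0.5 * geno[:, 2] * geno[:, 3]
                 + rng.normal(scale=1.0, size=300))

        pairs = list(combinations(range(geno.shape[1]), 2))
        selected, X_cols = [], []
        for _ in range(4):                         # pick up to four pairs, as in the paper
            best_pair, best_r2 = None, -np.inf
            for p in pairs:
                if p in selected:
                    continue
                cols = X_cols + [geno[:, p[0]] * geno[:, p[1]]]
                X = np.column_stack(cols)
                r2 = LinearRegression().fit(X, trait).score(X, trait)
                if r2 > best_r2:
                    best_pair, best_r2 = p, r2
            selected.append(best_pair)
            X_cols.append(geno[:, best_pair[0]] * geno[:, best_pair[1]])
            print(best_pair, round(best_r2, 3))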

  14. Selection of meteorological conditions to apply in an Ecotron facility

    NASA Astrophysics Data System (ADS)

    Leemans, Vincent; De Cruz, Lesley; Dumont, Benjamin; Hamdi, Rafiq; Delaplace, Pierre; Heinesh, Bernard; Garré, Sarah; Verheggen, François; Theodorakopoulos, Nicolas; Longdoz, Bernard

    2017-04-01

    This presentation aims to propose a generic method to produce meteorological input data that is useful for climate research infrastructures such as an Ecotron, where researchers will face the need to generate representative actual or future climatic conditions. Depending on the experimental objectives and the research purposes, typical conditions or more extreme values such as dry or wet climatic scenarios might be requested. Four variables were considered here: the near-surface air temperature, the near-surface relative humidity, the cloud cover and precipitation. The meteorological datasets, among which a specific meteorological year can be picked, are produced by the ALARO-0 model from the RMIB (Royal Meteorological Institute of Belgium). Two future climate scenarios (RCP 4.5 and 8.5) and two time periods (2041-2070 and 2071-2100) were used as well as a historical run of the model (1981-2010) which is used as a reference. When the data from the historical run were compared to the observed historical data, biases were noticed. A linear correction was proposed for all the variables except for precipitation, for which a non-linear correction (using a power function) was chosen to maintain zero-precipitation occurrences. These transformations were able to remove most of the differences between the observations and the historical run of the model for the means and for the standard deviations. For the relative humidity, because of non-linearities, only one half of the average bias was corrected and a different path might have to be chosen. For the selection of a meteorological year, a position and a dispersion parameter have been proposed to characterise each meteorological year for each variable. For precipitation, a third parameter quantifying the importance of dry and wet periods has been defined. In order to select a specific climate, for each of these nine parameters the experimenter should provide a percentile and a weight to prioritize the importance of each variable in the process of a global climate selection. The proposed algorithm computes the weighted distance, for each year, between its parameters and the point representing the position of the selected percentiles in the nine-dimensional space. The five closest years are then selected and represented in different graphs. The proposed method is able to provide a decision aid in the selection of the meteorological conditions to be generated within an Ecotron. However, with a limited number of years available in each case (thirty years for each RCP and each time period), there is no perfect match and the ultimate trade-off will be the responsibility of the researcher. For typical years, close to the median, the relative frequency is higher and the trade-off is easier than for more extreme years where the relative frequency is low.
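
    A minimal sketch of the selection step described above: each candidate year is summarized by a parameter vector, the experimenter supplies target percentiles and weights, and the years with the smallest weighted distance to the target point are returned; the data and the nine-parameter layout are placeholders only:

        # Weighted-distance selection of candidate meteorological years.
        # 'params' holds the nine summary parameters per year (synthetic here);
        # the target is given as a percentile of each parameter across the years.
        import numpy as np

        rng = np.random.default_rng(3)
        years = np.arange(2041, 2071)
        params = rng.normal(size=(len(years), 9))       # nine parameters per year

        target_percentiles = np.full(9, 50.0)           # e.g. a "typical" year
        weights = np.ones(9)                            # experimenter priorities

        targets = np.array([np.percentile(params[:, j], target_percentiles[j])
                            for j in range(params.shape[1])])

        # Standardize so the weights act on comparable scales, then rank by distance.
        scale = params.std(axis=0)
        dist = np.sqrt((((params - targets) / scale) ** 2 * weights).sum(axis=1))
        closest = years[np.argsort(dist)[:5]]           # five closest candidate years
        print(closest)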

  15. Quantifying Variability of Avian Colours: Are Signalling Traits More Variable?

    PubMed Central

    Delhey, Kaspar; Peters, Anne

    2008-01-01

    Background Increased variability in sexually selected ornaments, a key assumption of evolutionary theory, is thought to be maintained through condition-dependence. Condition-dependent handicap models of sexual selection predict that (a) sexually selected traits show amplified variability compared to equivalent non-sexually selected traits, and since males are usually the sexually selected sex, that (b) males are more variable than females, and (c) sexually dimorphic traits more variable than monomorphic ones. So far these predictions have only been tested for metric traits. Surprisingly, they have not been examined for bright coloration, one of the most prominent sexual traits. This omission stems from computational difficulties: different types of colours are quantified on different scales precluding the use of coefficients of variation. Methodology/Principal Findings Based on physiological models of avian colour vision we develop an index to quantify the degree of discriminable colour variation as it can be perceived by conspecifics. A comparison of variability in ornamental and non-ornamental colours in six bird species confirmed (a) that those coloured patches that are sexually selected or act as indicators of quality show increased chromatic variability. However, we found no support for (b) that males generally show higher levels of variability than females, or (c) that sexual dichromatism per se is associated with increased variability. Conclusions/Significance We show that it is currently possible to realistically estimate variability of animal colours as perceived by them, something difficult to achieve with other traits. Increased variability of known sexually-selected/quality-indicating colours in the studied species, provides support to the predictions borne from sexual selection theory but the lack of increased overall variability in males or dimorphic colours in general indicates that sexual differences might not always be shaped by similar selective forces. PMID:18301766

  16. Applications of an exponential finite difference technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Handschuh, R.F.; Keith, T.G. Jr.

    1988-07-01

    An exponential finite difference scheme first presented by Bhattacharya for one dimensional unsteady heat conduction problems in Cartesian coordinates was extended. The finite difference algorithm developed was used to solve the unsteady diffusion equation in one dimensional cylindrical coordinates and was applied to two and three dimensional conduction problems in Cartesian coordinates. Heat conduction involving variable thermal conductivity was also investigated. The method was used to solve nonlinear partial differential equations in one and two dimensional Cartesian coordinates. Predicted results are compared to exact solutions where available or to results obtained by other numerical methods.
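
    A hedged sketch of an exponential-type explicit update for the 1-D heat equation, assuming the form u_new = u*exp(Fo*(u_{i-1} - 2*u_i + u_{i+1})/u_i), which reduces to the ordinary explicit scheme when the exponent is small; the grid, material values, and temperatures are illustrative and not the report's cases:

        # Exponential-type explicit finite-difference update for u_t = alpha*u_xx.
        # Fo is the grid Fourier number; boundary temperatures are held fixed.
        import numpy as np

        alpha, dx, dt, steps = 1.0e-5, 0.01, 2.0, 200
        Fo = alpha * dt / dx**2                      # grid Fourier number (0.2 here)
        u = np.full(51, 300.0)                       # initial temperature field (K)
        u[0], u[-1] = 400.0, 350.0                   # fixed boundary temperatures

        for _ in range(steps):
            lap = u[:-2] - 2.0 * u[1:-1] + u[2:]     # discrete second difference
            u[1:-1] = u[1:-1] * np.exp(Fo * lap / u[1:-1])
        print(u[::10])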

  17. Impact of interannual variability (1979-1986) of transport and temperature on ozone as computed using a two-dimensional photochemical model

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Douglass, Anne R.; Chandra, Sushil; Stolarski, Richard S.; Rosenfield, Joan E.; Kaye, Jack A.

    1991-01-01

    Values of the monthly mean heating rates and the residual circulation characteristics were calculated using NMC data for temperature and the solar backscattered UV ozone for the period between 1979 and 1986. The results were used in a two-dimensional photochemical model in order to examine the effects of temperature and residual circulation on the interannual variability of ozone. It was found that the calculated total ozone was more sensitive to variations in interannual residual circulation than in the interannual temperature. The magnitude of the modeled ozone variability was found to be similar to the observed variability, but the observed and modeled year-to-year deviations were, for the most part, uncorrelated, due to the fact that the model did not account for most of the QBO forcing and for some of the observed tropospheric changes.

  18. Dimensional reduction in sensorimotor systems: A framework for understanding muscle coordination of posture

    PubMed Central

    Ting, Lena H.

    2014-01-01

    The simple act of standing up is an important and essential motor behavior that most humans and animals achieve with ease. Yet, maintaining standing balance involves complex sensorimotor transformations that must continually integrate a large array of sensory inputs and coordinate multiple motor outputs to muscles throughout the body. Multiple, redundant local sensory signals are integrated to form an estimate of a few global, task-level variables important to postural control, such as body center of mass position and body orientation with respect to Earth-vertical. Evidence suggests that a limited set of muscle synergies, reflecting preferential sets of muscle activation patterns, are used to move task variables such as center of mass position in a predictable direction following a postural perturbation. We propose a hierarchical feedback control system that allows the nervous system the simplicity of performing goal-directed computations in task-variable space, while maintaining the robustness afforded by redundant sensory and motor systems. We predict that modulation of postural actions occurs in task-variable space, and in the associated transformations between the low-dimensional task-space and high-dimensional sensor and muscle spaces. Development of neuromechanical models that reflect these neural transformations between low and high-dimensional representations will reveal the organizational principles and constraints underlying sensorimotor transformations for balance control, and perhaps motor tasks in general. This framework and accompanying computational models could be used to formulate specific hypotheses about how specific sensory inputs and motor outputs are generated and altered following neural injury, sensory loss, or rehabilitation. PMID:17925254

  19. The Units Tell You What to Do

    ERIC Educational Resources Information Center

    Brown, Simon

    2009-01-01

    Many students have some difficulty with calculations. Simple dimensional analysis provides a systematic means of checking for errors and inconsistencies and for developing both new insight and new relationships between variables. Teaching dimensional analysis at even the most basic level strengthens the insight and confidence of students, and…

  20. Generalized Lie symmetry approach for fractional order systems of differential equations. III

    NASA Astrophysics Data System (ADS)

    Singla, Komal; Gupta, R. K.

    2017-06-01

    The generalized Lie symmetry technique is proposed for the derivation of point symmetries for systems of fractional differential equations with an arbitrary number of independent as well as dependent variables. The efficiency of the method is illustrated by its application to three higher dimensional nonlinear systems of fractional order partial differential equations consisting of the (2 + 1)-dimensional asymmetric Nizhnik-Novikov-Veselov system, (3 + 1)-dimensional Burgers system, and (3 + 1)-dimensional Navier-Stokes equations. With the help of derived Lie point symmetries, the corresponding invariant solutions transform each of the considered systems into a system of lower-dimensional fractional partial differential equations.

  1. Resolving the Conflict Between Associative Overdominance and Background Selection

    PubMed Central

    Zhao, Lei; Charlesworth, Brian

    2016-01-01

    In small populations, genetic linkage between a polymorphic neutral locus and loci subject to selection, either against partially recessive mutations or in favor of heterozygotes, may result in an apparent selective advantage to heterozygotes at the neutral locus (associative overdominance) and a retardation of the rate of loss of variability by genetic drift at this locus. In large populations, selection against deleterious mutations has previously been shown to reduce variability at linked neutral loci (background selection). We describe analytical, numerical, and simulation studies that shed light on the conditions under which retardation vs. acceleration of loss of variability occurs at a neutral locus linked to a locus under selection. We consider a finite, randomly mating population initiated from an infinite population in equilibrium at a locus under selection. With mutation and selection, retardation occurs only when S, the product of twice the effective population size and the selection coefficient, is of order 1. With S >> 1, background selection always causes an acceleration of loss of variability. Apparent heterozygote advantage at the neutral locus is, however, always observed when mutations are partially recessive, even if there is an accelerated rate of loss of variability. With heterozygote advantage at the selected locus, loss of variability is nearly always retarded. The results shed light on experiments on the loss of variability at marker loci in laboratory populations and on the results of computer simulations of the effects of multiple selected loci on neutral variability. PMID:27182952

  2. The underlying structure of diagnostic systems of schizophrenia: a comprehensive polydiagnostic approach.

    PubMed

    Peralta, Victor; Cuesta, Manuel J

    2005-11-15

    The objective was to ascertain the underlying factor structure of alternative definitions of schizophrenia, and to examine the distribution of schizophrenia-related variables against the resulting factor solution. Twenty-three diagnostic schemes of schizophrenia were applied to 660 patients presenting with psychotic symptoms regardless of the specific diagnosis of psychotic disorder. Factor analysis of the 23 diagnostic schemes yielded three interpretable factors explaining 58% of the variance, the first factor (general schizophrenia factor) accounting for most of the variance (36%). On the basis of the general schizophrenia factor score, the sample was divided in quintile groups representing 5 levels of schizophrenia definition (absent, doubtful, very broad, broad and narrow) and the distribution of a number of schizophrenia-related variables was examined across the groups. This grouping procedure was used for examining the comparative validity of alternative levels of categorically defined schizophrenia and an ordinal (i.e. dimensional) definition. Overall, schizophrenia-related variables displayed a dose-response relationship with level of schizophrenia definition. Logistic regression analyses revealed that the dimensional definition explained more variance in the schizophrenia-related variables than the alternative levels for defining schizophrenia categorically. These results are consistent with a unitary and dimensional construct of schizophrenia with no clear "points of rarity" at its boundaries, thus supporting the continuum hypothesis of the psychotic illness.

  3. A four-dimensional virtual hand brain-machine interface using active dimension selection.

    PubMed

    Rouse, Adam G

    2016-06-01

    Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. ADS utilizes a two stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits s(-1) for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
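
    A schematic sketch of a two-stage active-dimension-selection decoder: one model picks which dimension is currently being controlled from the neural features, and a second model decodes velocity along only that dimension; synthetic firing rates stand in for the recorded units, and this shows the structure described rather than the authors' decoder:

        # Two-stage decoder sketch: (i) classify the active dimension,
        # (ii) decode velocity along the selected dimension only.
        import numpy as np
        from sklearn.linear_model import LogisticRegression, LinearRegression

        rng = np.random.default_rng(4)
        n, n_units, n_dims = 2000, 16, 4
        rates = rng.poisson(5.0, size=(n, n_units)).astype(float)   # binned firing rates
        active_dim = rng.integers(0, n_dims, size=n)                 # which dimension is driven
        velocity = rng.normal(size=n)                                # velocity along that dimension

        dim_classifier = LogisticRegression(max_iter=2000).fit(rates, active_dim)
        vel_decoders = [LinearRegression().fit(rates[active_dim == d],
                                               velocity[active_dim == d])
                        for d in range(n_dims)]

        # Online use: pick a dimension from the current features, then move only along it.
        d_hat = int(dim_classifier.predict(rates[:1])[0])
        v_hat = float(vel_decoders[d_hat].predict(rates[:1])[0])
        print(d_hat, v_hat)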

  4. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.

    PubMed

    Gong, Xiajing; Hu, Meng; Zhao, Liang

    2018-05-01

    Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
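
    A minimal sketch of the comparison framework on simulated time-to-event data, shown here only for the Cox baseline scored by concordance index; an ML-based survival model (for example, a random survival forest) would be fit on the same split and scored with the same index. All data and column names are synthetic:

        # Cox baseline on simulated time-to-event data with a nonlinear covariate effect,
        # evaluated by concordance index on a held-out split.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter
        from lifelines.utils import concordance_index

        rng = np.random.default_rng(5)
        n = 1000
        x1, x2 = rng.normal(size=n), rng.normal(size=n)
        hazard = np.exp(0.7 * x1 + np.sin(x2))            # x2 acts nonlinearly
        time = rng.exponential(1.0 / hazard)
        censor = rng.exponential(2.0, size=n)
        df = pd.DataFrame({"x1": x1, "x2": x2,
                           "duration": np.minimum(time, censor),
                           "event": (time <= censor).astype(int)})

        train, test = df.iloc[:700], df.iloc[700:]
        cph = CoxPHFitter().fit(train, duration_col="duration", event_col="event")
        risk = cph.predict_partial_hazard(test)
        # Higher partial hazard means shorter expected survival, hence the sign flip.
        print(concordance_index(test["duration"], -risk, test["event"]))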

  5. Modeling variably saturated multispecies reactive groundwater solute transport with MODFLOW-UZF and RT3D

    USGS Publications Warehouse

    Bailey, Ryan T.; Morway, Eric D.; Niswonger, Richard G.; Gates, Timothy K.

    2013-01-01

    A numerical model was developed that is capable of simulating multispecies reactive solute transport in variably saturated porous media. This model consists of a modified version of the reactive transport model RT3D (Reactive Transport in 3 Dimensions) that is linked to the Unsaturated-Zone Flow (UZF1) package and MODFLOW. Referred to as UZF-RT3D, the model is tested against published analytical benchmarks as well as other published contaminant transport models, including HYDRUS-1D, VS2DT, and SUTRA, and the coupled flow and transport modeling system of CATHY and TRAN3D. Comparisons in one-dimensional, two-dimensional, and three-dimensional variably saturated systems are explored. While several test cases are included to verify the correct implementation of variably saturated transport in UZF-RT3D, other cases are included to demonstrate the usefulness of the code in terms of model run-time and handling the reaction kinetics of multiple interacting species in variably saturated subsurface systems. As UZF1 relies on a kinematic-wave approximation for unsaturated flow that neglects the diffusive terms in Richards equation, UZF-RT3D can be used for large-scale aquifer systems for which the UZF1 formulation is reasonable, that is, capillary-pressure gradients can be neglected and soil parameters can be treated as homogeneous. Decreased model run-time and the ability to include site-specific chemical species and chemical reactions make UZF-RT3D an attractive model for efficient simulation of multispecies reactive transport in variably saturated large-scale subsurface systems.

  6. Modeling multivariate time series on manifolds with skew radial basis functions.

    PubMed

    Jamshidi, Arta A; Kirby, Michael J

    2011-01-01

    We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters--in particular, the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies using several illustrative problems, including modeling data on manifolds and the prediction of chaotic time series.

  7. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823

  8. Kinematic gait patterns in healthy runners: A hierarchical cluster analysis.

    PubMed

    Phinyomark, Angkoon; Osis, Sean; Hettinga, Blayne A; Ferber, Reed

    2015-11-05

    Previous studies have demonstrated distinct clusters of gait patterns in both healthy and pathological groups, suggesting that different movement strategies may be represented. However, these studies have used discrete time point variables and usually focused on only one specific joint and plane of motion. Therefore, the first purpose of this study was to determine if running gait patterns for healthy subjects could be classified into homogeneous subgroups using three-dimensional kinematic data from the ankle, knee, and hip joints. The second purpose was to identify differences in joint kinematics between these groups. The third purpose was to investigate the practical implications of clustering healthy subjects by comparing these kinematics with runners experiencing patellofemoral pain (PFP). A principal component analysis (PCA) was used to reduce the dimensionality of the entire gait waveform data and then a hierarchical cluster analysis (HCA) determined group sets of similar gait patterns and homogeneous clusters. The results show two distinct running gait patterns were found with the main between-group differences occurring in frontal and sagittal plane knee angles (P<0.001), independent of age, height, weight, and running speed. When these two groups were compared to PFP runners, one cluster exhibited greater while the other exhibited reduced peak knee abduction angles (P<0.05). The variability observed in running patterns across this sample could be the result of different gait strategies. These results suggest care must be taken when selecting samples of subjects in order to investigate the pathomechanics of injured runners. Copyright © 2015 Elsevier Ltd. All rights reserved.
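
    A brief sketch of the pipeline described above, namely PCA on concatenated kinematic waveforms followed by hierarchical (Ward) clustering of the component scores; the waveform matrix is a synthetic stand-in for the ankle, knee, and hip angle curves:

        # PCA + hierarchical cluster analysis of gait waveform data (sketch).
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(6)
        n_subjects, n_points = 80, 9 * 101            # 9 angle waveforms x 101 samples each
        waveforms = rng.normal(size=(n_subjects, n_points))

        scores = PCA(n_components=10).fit_transform(waveforms)   # reduce dimensionality
        Z = linkage(scores, method="ward")                        # hierarchical clustering
        labels = fcluster(Z, t=2, criterion="maxclust")           # cut the tree into 2 groups
        print(np.bincount(labels))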

  9. Alterations of papilla dimensions after orthodontic closure of the maxillary midline diastema: a retrospective longitudinal study.

    PubMed

    Jeong, Jin-Seok; Lee, Seung-Youp; Chang, Moontaek

    2016-06-01

    The aim of this study was to evaluate alterations of papilla dimensions after orthodontic closure of the diastema between maxillary central incisors. Sixty patients who had a visible diastema between maxillary central incisors that had been closed by orthodontic approximation were selected for this study. Various papilla dimensions were assessed on clinical photographs and study models before the orthodontic treatment and at the follow-up examination after closure of the diastema. Influences of the variables assessed before orthodontic treatment on the alterations of papilla height (PH) and papilla base thickness (PBT) were evaluated by univariate regression analysis. To analyze potential influences of the 3-dimensional papilla dimensions before orthodontic treatment on the alterations of PH and PBT, a multiple regression model was formulated including the 3-dimensional papilla dimensions as predictor variables. On average, PH decreased by 0.80 mm and PBT increased after orthodontic closure of the diastema (P<0.01). Univariate regression analysis revealed that the PH (P=0.002) and PBT (P=0.047) before orthodontic treatment influenced the alteration of PH. With respect to the alteration of PBT, the diastema width (P=0.045) and PBT (P=0.000) were found to be influential factors. PBT before the orthodontic treatment significantly influenced the alteration of PBT in the multiple regression model. PH decreased but PBT increased after orthodontic closure of the diastema. The papilla dimensions before orthodontic treatment influenced the alterations of PH and PBT after closure of the diastema. The PBT increased more when the diastema width before the orthodontic treatment was larger.

  10. Comparison of local grid refinement methods for MODFLOW

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.; Leake, S.A.

    2006-01-01

    Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions. Copyright © 2006 The Author(s).

  11. Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds

    NASA Astrophysics Data System (ADS)

    Abdo, Mohammad Gamal Mohammad Mostafa

    This thesis develops robust reduced order modeling (ROM) techniques to achieve the needed efficiency to render feasible the use of high fidelity tools for routine engineering analyses. Markedly different from the state-of-the-art ROM techniques, our work focuses only on techniques which can quantify the credibility of the reduction which can be measured with the reduction errors upper-bounded for the envisaged range of ROM model application. Our objective is two-fold. First, further developments of ROM techniques are proposed when conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM model predictions over the full range of model application conditions are upper-bounded in a probabilistic sense with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction, because it offers a rigorous approach by which reduction errors can be quantified via upper-bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bound by the error estimated from the fitting residual. Dimensionality reduction techniques however employ a different philosophy to render the reduction, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshots variations. Once determined, the ROM model application involves constraining the variables to the active subspaces. In doing so, the contribution from the variables discarded components can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators. The thesis extends this theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace determined using a given set of snapshots, generated either using the full high fidelity model, or other models with lower fidelity, can be assessed, which provides insight to the analyst on the type of snapshots required to reach a reduction that can satisfy user-defined preset tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus will be on reducing the effective dimensionality of the various data streams such as the cross-section data and the neutron flux. 
The developed methods will be applied to representative assembly level calculations, where the size of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation. (Abstract shortened by ProQuest.).
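
    A minimal sketch of snapshot-based reduction in the spirit described above: an active subspace is taken from the SVD of training snapshots so that a user-defined fraction of the snapshot variation is captured, and the truncation error is then checked on snapshots held out of the construction; all quantities below are synthetic:

        # Snapshot/SVD dimensionality reduction with a held-out error check (sketch).
        import numpy as np

        rng = np.random.default_rng(7)
        n_state, n_snap = 2000, 60
        modes = rng.normal(size=(n_state, 5))             # hidden low-rank structure
        snapshots = (modes @ rng.normal(size=(5, n_snap))
                     + 0.01 * rng.normal(size=(n_state, n_snap)))

        train, test = snapshots[:, :40], snapshots[:, 40:]
        U, s, _ = np.linalg.svd(train, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.999) + 1)       # rank meeting the tolerance
        Ur = U[:, :r]                                     # active subspace basis

        # Upper-bound-style check: relative error of projecting unseen snapshots.
        err = np.linalg.norm(test - Ur @ (Ur.T @ test)) / np.linalg.norm(test)
        print(r, err)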

  12. In vivo single-shot three-dimensionally localized multiple quantum spectroscopy of GABA in the human brain with improved spectral selectivity

    NASA Astrophysics Data System (ADS)

    Choi, In-Young; Lee, Sang-Pil; Shen, Jun

    2005-01-01

    A single-shot multiple quantum filtering method is developed that uses two double-band frequency selective pulses for enhanced spectral selectivity in combination with a slice-selective 90°, a slice-selective universal rotator 90°, and a spectral-spatial pulse composed of two slice-selective universal rotator 45° pulses for single-shot three-dimensional localization. The use of this selective multiple quantum filtering method for C3 and C4 methylene protons of GABA resulted in improved spectral selectivity for GABA and effective suppression of overlapping signals such as creatine and glutathione in each single scan, providing reliable measurements of the GABA doublet in all subjects. The concentration of GABA was measured to be 0.7 ± 0.2 μmol/g (means ± SD, n = 15) in the fronto-parietal region of the human brain in vivo.

  13. Differentiating Categories and Dimensions: Evaluating the Robustness of Taxometric Analyses

    ERIC Educational Resources Information Center

    Ruscio, John; Kaczetow, Walter

    2009-01-01

    Interest in modeling the structure of latent variables is gaining momentum, and many simulation studies suggest that taxometric analysis can validly assess the relative fit of categorical and dimensional models. The generation and parallel analysis of categorical and dimensional comparison data sets reduces the subjectivity required to interpret…

  14. Complexity as a Reflection of the Dimensionality of a Task.

    ERIC Educational Resources Information Center

    Spilsbury, Georgina

    1992-01-01

    The hypothesis that a task that increases in complexity (increasing its correlation with a central measure of intelligence) does so by increasing its dimensionality by tapping individual differences or another variable was supported by findings from 46 adults aged 20-70 years performing a mental counting task. (SLD)

  15. The assessment of pi-pi selective stationary phases for two-dimensional HPLC analysis of foods: application to the analysis of coffee.

    PubMed

    Mnatsakanyan, Mariam; Stevenson, Paul G; Shock, David; Conlan, Xavier A; Goodie, Tiffany A; Spencer, Kylie N; Barnett, Neil W; Francis, Paul S; Shalliker, R Andrew

    2010-09-15

    Differences between alkyl, dipole-dipole, hydrogen bonding, and pi-pi selective surfaces represented by non-resonance and resonance pi-stationary phases have been assessed for the separation of 'Ristretto' café espresso by employing 2DHPLC techniques with C18 phase selectivity detection. Geometric approach to factor analysis (GAFA) was used to measure the detected peaks (N), spreading angle (beta), correlation, practical peak capacity (n(p)) and percentage usage of the separations space, as an assessment of selectivity differences between regional quadrants of the two-dimensional separation plane. Although all tested systems were correlated to some degree to the C18 dimension, regional measurement of separation divergence revealed that performance of specific systems was better for certain sample components. The results illustrate that because of the complexity of the 'real' sample obtaining a truly orthogonal two-dimensional system for complex samples of natural origin may be practically impossible. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  16. Separation of variables in the special diagonal Hamilton-Jacobi equation: Application to the dynamical problem of a particle constrained on a moving surface

    NASA Technical Reports Server (NTRS)

    Blanchard, D. L.; Chan, F. K.

    1973-01-01

    For a time-dependent, n-dimensional, special diagonal Hamilton-Jacobi equation a necessary and sufficient condition for the separation of variables to yield a complete integral of the form was established by specifying the admissible forms in terms of arbitrary functions. A complete integral was then expressed in terms of these arbitrary functions and also the n irreducible constants. As an application of the results obtained for the two-dimensional Hamilton-Jacobi equation, analysis was made for a comparatively wide class of dynamical problems involving a particle moving in Euclidean three-dimensional space under the action of external forces but constrained on a moving surface. All the possible cases in which this equation had a complete integral of the form were obtained and these are tabulated for reference.

  17. Application of SEAWAT to select variable-density and viscosity problems

    USGS Publications Warehouse

    Dausman, Alyssa M.; Langevin, Christian D.; Thorne, Danny T.; Sukop, Michael C.

    2010-01-01

    SEAWAT is a combined version of MODFLOW and MT3DMS, designed to simulate three-dimensional, variable-density, saturated groundwater flow. The most recent version of the SEAWAT program, SEAWAT Version 4 (or SEAWAT_V4), supports equations of state for fluid density and viscosity. In SEAWAT_V4, fluid density can be calculated as a function of one or more MT3DMS species, and optionally, fluid pressure. Fluid viscosity is calculated as a function of one or more MT3DMS species, and the program also includes additional functions for representing the dependence of fluid viscosity on temperature. This report documents testing of and experimentation with SEAWAT_V4 with six previously published problems that include various combinations of density-dependent flow due to temperature variations and/or concentration variations of one or more species. Some of the problems also include variations in viscosity that result from temperature differences in water and oil. Comparisons between the results of SEAWAT_V4 and other published results are generally consistent with one another, with minor differences considered acceptable.

  18. Simulations of Control Schemes for Inductively Coupled Plasma Sources

    NASA Astrophysics Data System (ADS)

    Ventzek, P. L. G.; Oda, A.; Shon, J. W.; Vitello, P.

    1997-10-01

    Process control issues are becoming increasingly important in plasma etching. Numerical experiments are an excellent test-bench for evaluating a proposed control system. Models are generally reliable enough to provide information about controller robustness and the fitness of diagnostics. We will present results from a two dimensional plasma transport code with a multi-species plasma chemistry obtained from a global model. [1-2] We will show a correlation of external etch parameters (e.g. input power) with internal plasma parameters (e.g. species fluxes) which in turn are correlated with etch results (etch rate, uniformity, and selectivity) either by comparison to experiment or by using a phenomenological etch model. After process characterization, a control scheme can be evaluated since the variable to be controlled (e.g. uniformity) is related to the measurable variable (e.g. a density) and the external parameter (e.g. coil current). We will present an evaluation using the HBr-Cl2 system as an example. [1] E. Meeks and J. W. Shon, IEEE Trans. on Plasma Sci., 23, 539, 1995. [2] P. Vitello, et al., IEEE Trans. on Plasma Sci., 24, 123, 1996.

  19. A web system of virtual morphometric globes for Mars and the Moon

    NASA Astrophysics Data System (ADS)

    Florinsky, I. V.; Garov, A. S.; Karachevtseva, I. P.

    2018-09-01

    We developed a web system of virtual morphometric globes for Mars and the Moon. As the initial data, we used 15-arc-minutes gridded global digital elevation models (DEMs) extracted from the Mars Orbiter Laser Altimeter (MOLA) and the Lunar Orbiter Laser Altimeter (LOLA) gridded archives. We derived global digital models of sixteen morphometric variables including horizontal, vertical, minimal, and maximal curvatures, as well as catchment area and topographic index. The morphometric models were integrated into the web system developed as a distributed application consisting of a client front-end and a server back-end. The following main functions are implemented in the system: (1) selection of a morphometric variable; (2) two-dimensional visualization of a calculated global morphometric model; (3) 3D visualization of a calculated global morphometric model on the sphere surface; (4) change of a globe scale; and (5) globe rotation by an arbitrary angle. Free, real-time web access to the system is provided. The web system of virtual morphometric globes can be used for geological and geomorphological studies of Mars and the Moon at the global, continental, and regional scales.

  20. Different nano-particles volume fraction and Hartmann number effects on flow and heat transfer of water-silver nanofluid under the variable heat flux

    NASA Astrophysics Data System (ADS)

    Forghani-Tehrani, Pezhman; Karimipour, Arash; Afrand, Masoud; Mousavi, Sayedali

    2017-01-01

    Nanofluid flow and heat transfer composed of water-silver nanoparticles is investigated numerically inside a microchannel. A finite volume approach (FVM) is applied and the effects of gravity are ignored. The whole length of the microchannel is considered in three sections as l1 = l3 = 0.15l and l2 = 0.7l. The linear variable heat flux affects the microchannel wall along the length l2, while a magnetic field with strength B0 is applied over the whole domain. The influences of different values of Hartmann number (Ha = 0, 10, 20), volume fraction of the nanoparticles (ɸ = 0, 0.02, 0.04) and Reynolds number (Re = 10, 50, 200) on the hydrodynamic and thermal properties of the flow are reported. The variation of slip velocity under the effects of a magnetic field is presented for the first time (to the best of the authors' knowledge), while the non-dimensional slip coefficient is selected as B = 0.01, 0.05, 0.1 at different states.

  1. Input relegation control for gross motion of a kinematically redundant manipulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    1992-10-01

    This report proposes a method for resolving the kinematic redundancy of a serial link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations which relate a new vector variable quantifying the redundant degrees of freedom (DOF) to the joint velocities. The resulting augmented system yields a well specified solution for the joint velocities. Methods for selecting the redundant DOF quantifying variable and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum Euclidean norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which according to the model, decouples the Cartesian space DOF and the redundant DOF.

  2. Communication: On the diffusion tensor in macroscopic theory of cavitation

    NASA Astrophysics Data System (ADS)

    Shneidman, Vitaly A.

    2017-08-01

    The classical description of nucleation of cavities in a stretched fluid relies on a one-dimensional Fokker-Planck equation (FPE) in the space of their sizes r, with the diffusion coefficient D(r) constructed for all r from macroscopic hydrodynamics and thermodynamics, as shown by Zeldovich. When additional variables (e.g., vapor pressure) are required to describe the state of a bubble, a similar approach to construct a diffusion tensor D̂ generally works only in the direct vicinity of the thermodynamic saddle point corresponding to the critical nucleus. It is shown, nevertheless, that "proper" kinetic variables to describe a cavity can be selected, allowing one to introduce D̂ in the entire domain of parameters. In this way, for the first time, complete FPEs are constructed for viscous volatile and inertial fluids. In the former case, the FPE with symmetric D̂ is solved numerically. Alternatively, in the case of an inertial fluid, an equivalent Langevin equation is considered; results are compared with analytics. The suggested approach is quite general and can be applied beyond the cavitation problem.

  3. The effect of biological movement variability on the performance of the golf swing in high- and low-handicapped players.

    PubMed

    Bradshaw, Elizabeth J; Keogh, Justin W L; Hume, Patria A; Maulder, Peter S; Nortje, Jacques; Marnewick, Michel

    2009-06-01

    The purpose of this study was to examine the role of neuromotor noise on golf swing performance in high- and low-handicap players. Selected two-dimensional kinematic measures of 20 male golfers (n=10 per high- or low-handicap group) performing 10 golf swings with a 5-iron club were obtained through video analysis. Neuromotor noise was calculated by deducting the standard error of the measurement from the coefficient of variation obtained from intra-individual analysis. Statistical methods included linear regression analysis and one-way analysis of variance using SPSS. Absolute invariance in the key technical positions (e.g., at the top of the backswing) of the golf swing appears to be a more favorable technique for skilled performance.

  4. Numerical model for learning concepts of streamflow simulation

    USGS Publications Warehouse

    DeLong, L.L.; ,

    1993-01-01

    Numerical models are useful for demonstrating principles of open-channel flow. Such models can allow experimentation with cause-and-effect relations, testing concepts of physics and numerical techniques. Four PT is a numerical model written primarily as a teaching supplement for a course in one-dimensional stream-flow modeling. Four PT options particularly useful in training include selection of governing equations, boundary-value perturbation, and user-programmable constraint equations. The model can simulate non-trivial concepts such as flow in complex interconnected channel networks, meandering channels with variable effective flow lengths, hydraulic structures defined by unique three-parameter relations, and density-driven flow. The model is coded in FORTRAN 77, and data encapsulation is used extensively to simplify maintenance and modification and to enhance the use of Four PT modules by other programs and programmers.

  5. Second generation spectrograph for the Hubble Space Telescope

    NASA Astrophysics Data System (ADS)

    Woodgate, B. E.; Boggess, A.; Gull, T. R.; Heap, S. R.; Krueger, V. L.; Maran, S. P.; Melcher, R. W.; Rebar, F. J.; Vitagliano, H. D.; Green, R. F.; Wolff, S. C.; Hutchings, J. B.; Jenkins, E. B.; Linsky, J. L.; Moos, H. W.; Roesler, F.; Shine, R. A.; Timothy, J. G.; Weistrop, D. E.; Bottema, M.; Meyer, W.

    1986-01-01

    The preliminary design for the Space Telescope Imaging Spectrograph (STIS), which has been selected by NASA for definition study for future flight as a second-generation instrument on the Hubble Space Telescope (HST), is presented. STIS is a two-dimensional spectrograph that will operate from 1050 A to 11,000 A at the limiting HST resolution of 0.05 arcsec FWHM, with spectral resolutions of 100, 1200, 20,000, and 100,000 and a maximum field-of-view of 50 x 50 arcsec. Its basic operating modes include echelle mode, long slit mode, slitless spectrograph mode, coronagraphic spectroscopy, photon time-tagging, and direct imaging. Research objectives are active galactic nuclei, the intergalactic medium, global properties of galaxies, the origin of stellar systems, stellar spectral variability, and spectrographic mapping of solar system processes.

  6. Tracer water transport and subgrid precipitation variation within atmospheric general circulation models

    NASA Astrophysics Data System (ADS)

    Koster, Randal D.; Eagleson, Peter S.; Broecker, Wallace S.

    1988-03-01

    A capability is developed for monitoring tracer water movement in the three-dimensional Goddard Institute for Space Science Atmospheric General Circulation Model (GCM). A typical experiment with the tracer water model follows water evaporating from selected grid squares and determines where this water first returns to the Earth's surface as precipitation or condensate, thereby providing information on the lateral scales of hydrological transport in the GCM. Through a comparison of model results with observations in nature, inferences can be drawn concerning real world water transport. Tests of the tracer water model include a comparison of simulated and observed vertically-integrated vapor flux fields and simulations of atomic tritium transport from the stratosphere to the oceans. The inter-annual variability of the tracer water model results is also examined.

  7. The Myth of Optimality in Clinical Neuroscience.

    PubMed

    Holmes, Avram J; Patrick, Lauren M

    2018-03-01

    Clear evidence supports a dimensional view of psychiatric illness. Within this framework the expression of disorder-relevant phenotypes is often interpreted as a breakdown or departure from normal brain function. Conversely, health is reified, conceptualized as possessing a single ideal state. We challenge this concept here, arguing that there is no universally optimal profile of brain functioning. The evolutionary forces that shape our species select for a staggering diversity of human behaviors. To support our position we highlight pervasive population-level variability within large-scale functional networks and discrete circuits. We propose that, instead of examining behaviors in isolation, psychiatric illnesses can be best understood through the study of domains of functioning and associated multivariate patterns of variation across distributed brain systems. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Tracer water transport and subgrid precipitation variation within atmospheric general circulation models

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Eagleson, Peter S.; Broecker, Wallace S.

    1988-01-01

    A capability is developed for monitoring tracer water movement in the three-dimensional Goddard Institute for Space Science Atmospheric General Circulation Model (GCM). A typical experiment with the tracer water model follows water evaporating from selected grid squares and determines where this water first returns to the Earth's surface as precipitation or condensate, thereby providing information on the lateral scales of hydrological transport in the GCM. Through a comparison of model results with observations in nature, inferences can be drawn concerning real world water transport. Tests of the tracer water model include a comparison of simulated and observed vertically-integrated vapor flux fields and simulations of atomic tritium transport from the stratosphere to the oceans. The inter-annual variability of the tracer water model results is also examined.

  9. A modified estimation distribution algorithm based on extreme elitism.

    PubMed

    Gao, Shujun; de Silva, Clarence W

    2016-12-01

    An existing estimation distribution algorithm (EDA) with a univariate marginal Gaussian model was improved by designing and incorporating an extreme elitism selection method. This selection method highlighted the effect of a few top-ranked solutions in the evolution and advanced the EDA to form a primary evolution direction and obtain a fast convergence rate. At the same time, this selection also preserved population diversity, helping the EDA avoid premature convergence. The modified EDA was then tested on benchmark low-dimensional and high-dimensional optimization problems to illustrate the gains from using this extreme elitism selection. In addition, the no-free-lunch theorem was considered in the analysis of the effect of this new selection on EDAs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
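
    A minimal sketch of a univariate marginal Gaussian EDA with an elitism-weighted model update, in the spirit of the extreme elitism selection described above. The weight given to the top solutions, the population sizes, and the sphere benchmark are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def sphere(x):
    """Benchmark objective (minimization)."""
    return np.sum(x ** 2, axis=-1)

def eda_extreme_elitism(f, dim=10, pop=100, n_sel=30, n_elite=5, iters=200, seed=0):
    """Univariate marginal Gaussian EDA with an elitism-weighted model update.

    The top `n_elite` of the selected solutions receive extra weight when the
    Gaussian model is re-estimated (a stand-in for the paper's 'extreme
    elitism' selection; the weight value is an assumption).
    """
    rng = np.random.default_rng(seed)
    mu = rng.uniform(-5.0, 5.0, dim)
    sigma = np.full(dim, 2.0)
    best_x, best_f = None, np.inf
    for _ in range(iters):
        X = rng.normal(mu, sigma, size=(pop, dim))    # sample the current model
        fx = f(X)
        order = np.argsort(fx)                        # best solutions first
        sel = X[order[:n_sel]]
        w = np.ones(n_sel)
        w[:n_elite] = 5.0                             # assumed elite weight
        w /= w.sum()
        mu = w @ sel                                  # weighted mean per variable
        sigma = np.sqrt(w @ (sel - mu) ** 2) + 1e-12  # weighted std per variable
        if fx[order[0]] < best_f:
            best_f, best_x = fx[order[0]], X[order[0]]
    return best_x, best_f

if __name__ == "__main__":
    x_best, f_best = eda_extreme_elitism(sphere)
    print("best objective value:", f_best)
```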

  10. Commercial launch systems: A risky investment?

    NASA Astrophysics Data System (ADS)

    Dupnick, Edwin; Skratt, John

    1996-03-01

    A myriad of evolutionary paths connect the current state of government-dominated space launch operations to true commercial access to space. Every potential path requires the investment of private capital sufficient to fund the commercial venture with a perceived risk/return ratio acceptable to the investors. What is the private sector willing to invest? Does government participation reduce financial risk? How viable is a commercial launch system without government participation and support? We examine the interplay between various forms of government participation in commercial launch system development, alternative launch system designs, life cycle cost estimates, and typical industry risk aversion levels. The boundaries of this n-dimensional envelope are examined with an ECON-developed business financial model which provides for the parametric assessment and interaction of SSTO design variables (including various operational scenarios) with financial variables (including debt/equity assumptions and commercial enterprise burden rates on various functions). We overlay this structure with observations from previous ECON research which characterize financial risk aversion levels for selected industrial sectors in terms of acceptable initial lump-sum investments, cumulative investments, probability of failure, payback periods, and ROI. The financial model allows the construction of parametric tradeoffs based on ranges of variables which can be said to actually encompass the "true" cost of operations and determine what level of "true" costs can be tolerated by private capitalization.

  11. Dynamic Task Assignment and Path Planning of Multi-AUV System Based on an Improved Self-Organizing Map and Velocity Synthesis Method in Three-Dimensional Underwater Workspace.

    PubMed

    Zhu, Daqi; Huang, Huan; Yang, S X

    2013-04-01

    For a 3-D underwater workspace with a variable ocean current, an integrated multiple autonomous underwater vehicle (AUV) dynamic task assignment and path planning algorithm is proposed by combining the improved self-organizing map (SOM) neural network and a novel velocity synthesis approach. The goal is to control a team of AUVs to reach all appointed target locations only once, on the premise of workload balance and energy sufficiency, while guaranteeing the least total and individual consumption in the presence of the variable ocean current. First, the SOM neural network is developed to assign a team of AUVs to achieve multiple target locations in a 3-D ocean environment. The working process involves special definition of the initial neural weights of the SOM network, the rule to select the winner, the computation of the neighborhood function, and the method to update weights. Then, the velocity synthesis approach is applied to plan the shortest path for each AUV to visit the corresponding target in a dynamic environment subject to the ocean current being variable and targets being movable. Lastly, to demonstrate the effectiveness of the proposed approach, simulation results are given in this paper.
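
    A minimal sketch of a SOM-style assignment of AUVs to 3-D target locations: the AUV positions act as neuron weights, the nearest AUV to a presented target is the winner, and the winner and its index-neighbours are pulled toward the target through a Gaussian neighbourhood function. The paper's workload-balance, energy-sufficiency, and velocity-synthesis rules are not reproduced; the decay schedules below are assumptions.

```python
import numpy as np

def som_task_assignment(auv_pos, targets, iters=50, lr=0.5, sigma0=1.0, seed=0):
    """Assign AUVs (treated as SOM neuron weights) to 3-D target locations."""
    rng = np.random.default_rng(seed)
    w = auv_pos.astype(float).copy()           # neuron weights = AUV positions
    n = len(w)
    for t in range(iters):
        sigma = sigma0 * np.exp(-t / iters)    # shrinking neighbourhood width
        eta = lr * np.exp(-t / iters)          # decaying learning rate
        for tgt in rng.permutation(targets):
            d = np.linalg.norm(w - tgt, axis=1)
            win = int(np.argmin(d))            # winner selection rule (nearest AUV)
            h = np.exp(-((np.arange(n) - win) ** 2) / (2 * sigma ** 2))
            w += eta * h[:, None] * (tgt - w)  # weight update toward the target
    # final assignment: each target is claimed by its nearest updated neuron
    assign = [int(np.argmin(np.linalg.norm(w - tgt, axis=1))) for tgt in targets]
    return w, assign

if __name__ == "__main__":
    auvs = np.array([[0., 0., -5.], [10., 0., -5.], [5., 8., -10.]])
    tgts = np.array([[1., 1., -6.], [9., 2., -7.], [6., 7., -12.], [2., 0., -4.]])
    _, assignment = som_task_assignment(auvs, tgts)
    print("target -> AUV index:", assignment)
```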

  12. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured through just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach can be used to implement dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
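
    For context, a minimal sketch of the classical (original) spectral representation that the hybrid approach starts from: a stationary scalar sample is synthesized from a one-sided power spectral density with N random phases. The paper's random-function constraints, which reduce these N variables to two elementary random variables, are not reproduced here; the PSD below is hypothetical.

```python
import numpy as np

def spectral_representation(S, omega_max, N=256, t=None, seed=0):
    """Classical spectral-representation sample of a stationary process.

    X(t) = sum_k sqrt(2 S(w_k) dw) * cos(w_k t + phi_k), with phi_k ~ U(0, 2*pi).
    """
    rng = np.random.default_rng(seed)
    dw = omega_max / N
    w = (np.arange(N) + 0.5) * dw                 # midpoint frequencies
    phi = rng.uniform(0.0, 2.0 * np.pi, N)        # N independent random phases
    if t is None:
        t = np.linspace(0.0, 60.0, 2000)
    A = np.sqrt(2.0 * S(w) * dw)                  # component amplitudes
    return t, (A[None, :] * np.cos(np.outer(t, w) + phi)).sum(axis=1)

if __name__ == "__main__":
    S = lambda w: 1.0 / (1.0 + w ** 2)            # hypothetical one-sided PSD
    t, x = spectral_representation(S, omega_max=10.0)
    grid = np.linspace(0.0, 10.0, 1000)
    print("sample variance:", x.var(), " target variance:", np.trapz(S(grid), grid))
```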

  13. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim to improve the polynomial derivative term series ability to approximate complicated periodic functions, as simple low order polynomials are not able to fully make up for the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Regularized matrix regression

    PubMed Central

    Zhou, Hua; Li, Lexin

    2014-01-01

    Summary Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
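
    A minimal sketch of spectrally regularized matrix regression: a scalar response is regressed on a matrix covariate with a nuclear-norm penalty, solved by proximal gradient descent with singular-value soft-thresholding. The step size, penalty level, and rank-1 test signal are assumptions; the paper's algorithm and degrees-of-freedom formula are more refined than this.

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding (prox operator of tau * nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def matrix_regression(X, y, lam=0.05, iters=500):
    """Nuclear-norm regularized least squares for matrix covariates.

    Model: y_i = <B, X_i> + noise, estimated by proximal gradient descent.
    """
    n, p, q = X.shape
    Xf = X.reshape(n, -1)                          # flatten covariates to n x (p*q)
    L = np.linalg.norm(Xf, 2) ** 2 / n             # Lipschitz constant of the gradient
    step = 1.0 / L
    B = np.zeros((p, q))
    for _ in range(iters):
        grad = (Xf.T @ (Xf @ B.ravel() - y)).reshape(p, q) / n
        B = svt(B - step * grad, step * lam)       # gradient step, then soft-threshold
    return B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p, q = 200, 10, 10
    B_true = np.outer(rng.normal(size=p), rng.normal(size=q))   # low-rank (rank-1) signal
    X = rng.normal(size=(n, p, q))
    y = X.reshape(n, -1) @ B_true.ravel() + 0.1 * rng.normal(size=n)
    B_hat = matrix_regression(X, y, lam=0.05)
    print("numerical rank of estimate:", np.linalg.matrix_rank(B_hat, tol=1e-3))
```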

  15. Majorization Minimization by Coordinate Descent for Concave Penalized Generalized Linear Models

    PubMed Central

    Jiang, Dingfeng; Huang, Jian

    2013-01-01

    Recent studies have demonstrated theoretical attractiveness of a class of concave penalties in variable selection, including the smoothly clipped absolute deviation and minimax concave penalties. The computation of the concave penalized solutions in high-dimensional models, however, is a difficult task. We propose a majorization minimization by coordinate descent (MMCD) algorithm for computing the concave penalized solutions in generalized linear models. In contrast to the existing algorithms that use local quadratic or local linear approximation to the penalty function, the MMCD seeks to majorize the negative log-likelihood by a quadratic loss, but does not use any approximation to the penalty. This strategy makes it possible to avoid the computation of a scaling factor in each update of the solutions, which improves the efficiency of coordinate descent. Under certain regularity conditions, we establish theoretical convergence property of the MMCD. We implement this algorithm for a penalized logistic regression model using the SCAD and MCP penalties. Simulation studies and a data example demonstrate that the MMCD works sufficiently fast for the penalized logistic regression in high-dimensional settings where the number of covariates is much larger than the sample size. PMID:25309048
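
    A hedged sketch of the MMCD idea for MCP-penalized logistic regression: the negative log-likelihood is majorized by a quadratic with the uniform curvature bound v = 1/4, so each coordinate update is a closed-form MCP thresholding step without recomputing a scaling factor. Column standardization, the unpenalized intercept, and the lambda and gamma values are assumptions; the published MMCD algorithm includes further refinements.

```python
import numpy as np

def mcp_threshold(z, v, lam, gamma):
    """Minimizer of v/2 * b**2 - z*b + MCP(|b|; lam, gamma), assuming gamma > 1/v."""
    if abs(z) <= gamma * lam * v:
        return np.sign(z) * max(abs(z) - lam, 0.0) / (v - 1.0 / gamma)
    return z / v

def mmcd_logistic_mcp(X, y, lam=0.08, gamma=8.0, sweeps=100):
    """Coordinate descent for MCP-penalized logistic regression using a
    quadratic majorization of the negative log-likelihood (curvature v = 1/4).
    Columns of X are assumed standardized; the intercept is unpenalized."""
    n, p = X.shape
    v = 0.25
    b0, b = 0.0, np.zeros(p)
    eta = np.full(n, b0)
    for _ in range(sweeps):
        prob = 1.0 / (1.0 + np.exp(-eta))
        d0 = np.mean(y - prob) / v                 # majorized intercept step
        b0 += d0
        eta += d0
        for j in range(p):
            prob = 1.0 / (1.0 + np.exp(-eta))
            z = X[:, j] @ (y - prob) / n + v * b[j]
            new = mcp_threshold(z, v, lam, gamma)
            if new != b[j]:
                eta += X[:, j] * (new - b[j])      # keep the linear predictor in sync
                b[j] = new
    return b0, b

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, p = 200, 500                                # p >> n, as in the paper's setting
    X = rng.normal(size=(n, p))
    X = (X - X.mean(0)) / X.std(0)
    beta = np.zeros(p)
    beta[:5] = 1.0
    prob = 1.0 / (1.0 + np.exp(-(X @ beta)))
    y = (rng.uniform(size=n) < prob).astype(float)
    b0, b = mmcd_logistic_mcp(X, y)
    print("indices of nonzero coefficients:", np.flatnonzero(b))
```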

  16. Large Scale Geologic Controls on Hydraulic Stimulation

    NASA Astrophysics Data System (ADS)

    McLennan, J. D.; Bhide, R.

    2014-12-01

    When simulating hydraulic fracturing, the analyst has historically prescribed a single planar fracture. Originally (in the 1950s through the 1970s) this was necessitated by computational restrictions. In the latter part of the twentieth century, hydraulic fracture simulation evolved to incorporate vertical propagation controlled by modulus, fluid loss, and the minimum principal stress. With improvements in software, computational capacity, and recognition that in-situ discontinuities are relevant, fully three-dimensional hydraulic simulation is now becoming possible. Advances in simulation capabilities enable coupling structural geologic data (three-dimensional representation of stresses, natural fractures, and stratigraphy) with decision making processes for stimulation - volumes, rates, fluid types, completion zones. Without this interaction between simulation capabilities and geological information, low permeability formation exploitation may linger on the fringes of real economic viability. Comparative simulations have been undertaken in varying structural environments where the stress contrast and the frequency of natural discontinuities cause varying patterns of multiple, hydraulically generated or reactivated flow paths. Stress conditions and the nature of the discontinuities are selected as variables and are used to simulate how fracturing can vary in different structural regimes. The basis of the simulations is commercial distinct element software (Itasca Corporation's 3DEC).

  17. Characterization of the spatial variability of channel morphology

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    2002-01-01

    The spatial variability of two fundamental morphological variables is investigated for rivers having a wide range of discharge (five orders of magnitude). The variables, water-surface width and average depth, were measured at 58 to 888 equally spaced cross-sections in channel links (river reaches between major tributaries). These measurements provide data to characterize the two-dimensional structure of a channel link, which is the fundamental unit of a channel network. The morphological variables have nearly log-normal probability distributions. A general relation was determined which relates the means of the log-transformed variables to the logarithm of discharge, similar to previously published downstream hydraulic geometry relations. The spatial variability of the variables is described by two properties: (1) the coefficient of variation, which was nearly constant (0.13-0.42) over a wide range of discharge; and (2) the integral length scale in the downstream direction, which was approximately equal to one to two mean channel widths. The joint probability distribution of the morphological variables in the downstream direction was modelled as a first-order, bivariate autoregressive process. This model accounted for up to 76 per cent of the total variance. The two-dimensional morphological variables can be scaled such that the channel width-depth process is independent of discharge. The scaling properties will be valuable to modellers of both basin and channel dynamics. Published in 2002 by John Wiley and Sons, Ltd.
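
    A minimal sketch of the kind of first-order bivariate autoregressive model described above, simulating log(width) and log(depth) along equally spaced cross-sections. The coefficient matrix, means, and noise levels are illustrative assumptions, not the values fitted in the study.

```python
import numpy as np

def simulate_log_width_depth(n_sections=500, mu=(4.0, 0.5), rho=0.75, cross=0.1,
                             sd=(0.25, 0.20), seed=0):
    """First-order bivariate AR model of log(width), log(depth) downstream.

    z_i = mu + A (z_{i-1} - mu) + e_i, with A the lag-1 coefficient matrix.
    """
    rng = np.random.default_rng(seed)
    A = np.array([[rho, cross], [cross, rho]])           # eigenvalues < 1: stationary
    mu = np.asarray(mu, dtype=float)
    noise_sd = np.asarray(sd) * np.sqrt(1.0 - rho ** 2)  # rough marginal-sd scaling
    z = np.empty((n_sections, 2))
    z[0] = mu
    for i in range(1, n_sections):
        z[i] = mu + A @ (z[i - 1] - mu) + noise_sd * rng.normal(size=2)
    return np.exp(z[:, 0]), np.exp(z[:, 1])              # width, depth

if __name__ == "__main__":
    width, depth = simulate_log_width_depth()
    print("CV of width: %.2f  CV of depth: %.2f"
          % (width.std() / width.mean(), depth.std() / depth.mean()))
```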

  18. An alternative view of continuous forest inventories

    Treesearch

    Francis A. Roesch

    2008-01-01

    A generalized three-dimensional concept of continuous forest inventories applicable to all common forest sample designs is presented and discussed. The concept recognizes the forest through time as a three-dimensional population, two dimensions in land area and the third in time. The sample is selected from a finite three-dimensional partitioning of the population. The...

  19. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    PubMed

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of the large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce its storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise. Discarding them using feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and less computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
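
    A hedged sketch of dimension selection followed by 1-bit quantization for FV/VLAD-style vectors. The per-dimension importance score used here (between-class mean spread over within-class spread) is only a stand-in for the paper's importance sorting criterion, and the data below are synthetic.

```python
import numpy as np

def select_and_binarize(X_train, y_train, X_test, keep=1024):
    """Dimension selection + 1-bit quantization for high-dimensional features."""
    classes = np.unique(y_train)
    means = np.stack([X_train[y_train == c].mean(0) for c in classes])
    within = np.stack([X_train[y_train == c].std(0) for c in classes]).mean(0)
    score = means.std(0) / (within + 1e-12)        # per-dimension importance (assumed)
    keep_idx = np.argsort(score)[::-1][:keep]      # importance sorting, keep the top dims
    # 1-bit quantization: keep only the sign of each retained dimension
    Xtr = np.sign(X_train[:, keep_idx]).astype(np.int8)
    Xte = np.sign(X_test[:, keep_idx]).astype(np.int8)
    return Xtr, Xte, keep_idx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 8192                                       # a small FV/VLAD-like dimensionality
    y = rng.integers(0, 10, size=500)
    X = rng.normal(size=(500, d)) + 0.5 * np.eye(10)[y] @ rng.normal(size=(10, d))
    Xtr, Xte, idx = select_and_binarize(X[:400], y[:400], X[400:], keep=512)
    print("compressed shape:", Xtr.shape, "dtype:", Xtr.dtype)
```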

  20. Automatic Aircraft Collision Avoidance System and Method

    NASA Technical Reports Server (NTRS)

    Skoog, Mark (Inventor); Hook, Loyd (Inventor); McWherter, Shaun (Inventor); Willhite, Jaimie (Inventor)

    2014-01-01

    The invention is a system and method of compressing a DTM to be used in an Auto-GCAS system using a semi-regular geometric compression algorithm. In general, the invention operates by first selecting the boundaries of the three dimensional map to be compressed and dividing the three dimensional map data into regular areas. Next, a type of free-edged, flat geometric surface is selected which will be used to approximate terrain data of the three dimensional map data. The flat geometric surface is used to approximate terrain data for each regular area. The approximations are checked to determine if they fall within selected tolerances. If the approximation for a specific regular area is within specified tolerance, the data is saved for that specific regular area. If the approximation for a specific area falls outside the specified tolerances, the regular area is divided and a flat geometric surface approximation is made for each of the divided areas. This process is recursively repeated until all of the regular areas are approximated by flat geometric surfaces. Finally, the compressed three dimensional map data is provided to the automatic ground collision system for an aircraft.
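
    A hedged sketch of the recursive idea: fit a flat (planar) surface to each regular area of the terrain grid, keep it if the worst-case error is within tolerance, and otherwise split the area and recurse. The least-squares plane fit, the quad split, and the tolerance are illustrative assumptions; the patented free-edged surface type and tolerance rules may differ.

```python
import numpy as np

def fit_plane(z_tile):
    """Least-squares plane z = a*x + b*y + c over a rectangular tile."""
    ny, nx = z_tile.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(nx * ny)])
    coef, *_ = np.linalg.lstsq(A, z_tile.ravel(), rcond=None)
    fit = (A @ coef).reshape(ny, nx)
    return coef, np.max(np.abs(fit - z_tile))       # worst-case vertical error

def compress_tile(z, r0, c0, rows, cols, tol, out):
    """Recursively approximate a tile by a plane; split it if the error exceeds tol."""
    coef, err = fit_plane(z[r0:r0 + rows, c0:c0 + cols])
    if err <= tol or (rows <= 2 and cols <= 2):
        out.append((r0, c0, rows, cols, coef))       # store the flat surface
        return
    rh, ch = max(rows // 2, 1), max(cols // 2, 1)
    for dr, dc, nr, nc in [(0, 0, rh, ch), (0, ch, rh, cols - ch),
                           (rh, 0, rows - rh, ch), (rh, ch, rows - rh, cols - ch)]:
        if nr > 0 and nc > 0:
            compress_tile(z, r0 + dr, c0 + dc, nr, nc, tol, out)

if __name__ == "__main__":
    n = 64
    x = np.linspace(0.0, 1.0, n)
    # synthetic terrain: a single smooth hill on a flat plain
    terrain = 500.0 * np.exp(-((x[:, None] - 0.5) ** 2 + (x[None, :] - 0.5) ** 2) / 0.05)
    surfaces = []
    compress_tile(terrain, 0, 0, n, n, tol=25.0, out=surfaces)
    print(f"{n * n} elevation posts compressed to {len(surfaces)} flat surfaces")
```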

  1. Selective separation of fluorinated compounds from complex organic mixtures by pyrolysis-comprehensive two-dimensional gas chromatography coupled to high-resolution time-of-flight mass spectrometry.

    PubMed

    Nakajima, Yoji; Arinami, Yuko; Yamamoto, Kiyoshi

    2014-12-29

    The usefulness of comprehensive two-dimensional gas chromatography (GC×GC) was demonstrated for the selective separation of fluorinated compounds from organic mixtures, such as kerosene/perfluorokerosene mixtures, pyrolysis products derived from polyethylene/ethylene-tetrafluoroethylene alternating copolymer mixture and poly[2-(perfluorohexyl)ethyl acrylate]. Perfluorocarbons were completely separated from hydrocarbons in the two-dimensional chromatogram. Fluorohydrocarbons in the pyrolysis products of polyethylene/ethylene-tetrafluoroethylene alternating copolymer mixture were selectively isolated from their hydrocarbon counterparts and regularly arranged according to their chain length and fluorine content in the two-dimensional chromatogram. A reliable structural analysis of the fluorohydrocarbons was achieved by combining effective GC×GC positional information with accurate mass spectral data obtained by high-resolution time-of-flight mass spectrometry (HRTOF-MS). 2-(Perfluorohexyl)ethyl acrylate monomer, dimer, and trimer as well as 2-(perfluorohexyl)ethyl alcohol in poly[2-(perfluorohexyl)ethyl acrylate] pyrolysis products were detected in the bottommost part of the two-dimensional chromatogram with separation from hydrocarbons possessing terminal structure information about the polymer, such as α-methylstyrene. Pyrolysis-GC×GC/HRTOF-MS appeared particularly suitable for the characterization of fluorinated polymer microstructures, such as monomer sequences and terminal groups. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Improving Mixed Variable Optimization of Computational and Model Parameters Using Multiple Surrogate Functions

    DTIC Science & Technology

    2008-03-01

    multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the ... correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space then a space mapping transformation is ... required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space

  3. Variable Selection through Correlation Sifting

    NASA Astrophysics Data System (ADS)

    Huang, Jim C.; Jojic, Nebojsa

    Many applications of computational biology require a variable selection procedure to sift through a large number of input variables and select some smaller number that influence a target variable of interest. For example, in virology, only some small number of viral protein fragments influence the nature of the immune response during viral infection. Due to the large number of variables to be considered, a brute-force search for the subset of variables is in general intractable. To approximate this, methods based on ℓ1-regularized linear regression have been proposed and have been found to be particularly successful. It is well understood however that such methods fail to choose the correct subset of variables if these are highly correlated with other "decoy" variables. We present a method for sifting through sets of highly correlated variables which leads to higher accuracy in selecting the correct variables. The main innovation is a filtering step that reduces correlations among variables to be selected, making the ℓ1-regularization effective for datasets on which many methods for variable selection fail. The filtering step changes both the values of the predictor variables and output values by projections onto components obtained through a computationally-inexpensive principal components analysis. In this paper we demonstrate the usefulness of our method on synthetic datasets and on novel applications in virology. These include HIV viral load analysis based on patients' HIV sequences and immune types, as well as the analysis of seasonal variation in influenza death rates based on the regions of the influenza genome that undergo diversifying selection in the previous season.
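
    A hedged sketch of the filtering-plus-ℓ1 idea: remove a few leading principal components (the shared, correlation-inducing directions) from both the predictors and the response, then run an ℓ1-regularized regression on the filtered data. The exact projection used in the paper may differ; the number of removed components and the penalty level are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

def sift_then_lasso(X, y, n_remove=2, alpha=0.05):
    """Correlation-sifting-style variable selection (sketch)."""
    Xc = X - X.mean(0)
    yc = y - y.mean()
    pca = PCA(n_components=n_remove).fit(Xc)
    scores = pca.transform(Xc)                       # n x n_remove component scores
    X_f = Xc - scores @ pca.components_              # predictors with top PCs removed
    # remove the same directions from the response by regressing y on the scores
    coef, *_ = np.linalg.lstsq(scores, yc, rcond=None)
    y_f = yc - scores @ coef
    model = Lasso(alpha=alpha).fit(X_f, y_f)         # l1-regularization on filtered data
    return np.flatnonzero(model.coef_), model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 200, 50
    shared = rng.normal(size=(n, 1))                 # common factor -> highly correlated decoys
    X = 0.9 * shared + 0.4 * rng.normal(size=(n, p))
    y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.3 * rng.normal(size=n)
    selected, _ = sift_then_lasso(X, y)
    print("selected variables:", selected)
```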

  4. Two-dimensional NMR spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrar, T.C.

    1987-06-01

    This article is the second in a two-part series. In part one (ANALYTICAL CHEMISTRY, May 15) the authors discussed one-dimensional nuclear magnetic resonance (NMR) spectra and some relatively advanced nuclear spin gymnastics experiments that provide a capability for selective sensitivity enhancements. In this article an overview and some applications of two-dimensional NMR experiments are presented. These powerful experiments are important complements to the one-dimensional experiments. As in the more sophisticated one-dimensional experiments, the two-dimensional experiments involve three distinct time periods: a preparation period, t₀; an evolution period, t₁; and a detection period, t₂.

  5. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K

    2015-01-01

    Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similar to a design of experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space will be discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.

  6. Combination of longitudinal and circumferential three-dimensional esophageal dose distribution predicts acute esophagitis in hypofractionated reirradiation of patients with non-small-cell lung cancer treated in stereotactic body frame.

    PubMed

    Poltinnikov, Igor M; Fallon, Kevin; Xiao, Yian; Reiff, Jay E; Curran, Walter J; Werner-Wasik, Maria

    2005-07-01

    To evaluate dosimetric predictors of acute esophagitis (AE) and clinical outcome of patients with non-small-cell lung cancer (NSCLC) receiving reirradiation. Seventeen patients with NSCLC received reirradiation to the lung tumors/mediastinum, while immobilized in stereotactic body frame (SBF). CT simulation and hypofractionated three-dimensional radiotherapy were used. Two axial segments of esophagus contours merged together were defined as esophagus disc (ED). For each ED, the percentage (%) of the volume of esophageal circumference treated to % of prescribed dose (PD) was assessed. Numbers of EDs with 50% or any % of volume (V) of esophageal circumference receiving greater than or equal to (≥) 50%, 80%, and 100% of PD (50% V ≥50% PD; 50% V ≥80% PD; any % V ≥100% PD) were calculated. These dosimetric variables and the length of the esophagus within the radiation therapy (RT) port were correlated with AE using the exact Wilcoxon test. The median RT dose was 32 Gy with a median fraction size of 4 Gy. Eleven of 13 patients presenting with pain and/or shortness of breath had complete or partial resolution of symptoms. Median survival time from the start of reirradiation in SBF until death was 5.5 months. AE was observed in 7 patients and resolved within 3 months of RT completion. No Grade 3 or higher events were noticed. The length of the esophagus within the RT port did not predict for AE (p = 0.71). However, an increased number of EDs predicted for AE for the following dosimetric variables: 50% V ≥50% PD (p = 0.023), 50% V ≥80% PD (p = 0.047), and any % V ≥100% PD (p = 0.004). Patients with at least 2 EDs receiving ≥100% PD to any % V of circumference had AE compared to those with zero EDs. Reirradiation using hypofractionated three-dimensional radiotherapy and SBF immobilization is an effective strategy for palliation of symptoms in selected patients with recurrent NSCLC. The length of the esophagus in the RT field does not predict for AE. However, an increasing number of EDs displaying the combination of longitudinal and circumferential three-dimensional dose distribution along the esophagus is a valuable predictor for AE.

  7. The dimension split element-free Galerkin method for three-dimensional potential problems

    NASA Astrophysics Data System (ADS)

    Meng, Z. J.; Cheng, H.; Ma, L. D.; Cheng, Y. M.

    2018-06-01

    This paper presents the dimension split element-free Galerkin (DSEFG) method for three-dimensional potential problems, and the corresponding formulae are obtained. The main idea of the DSEFG method is that a three-dimensional potential problem can be transformed into a series of two-dimensional problems. For these two-dimensional problems, the improved moving least-squares (IMLS) approximation is applied to construct the shape function, which uses an orthogonal function system with a weight function as the basis functions. The Galerkin weak form is applied to obtain a discretized system equation, and the penalty method is employed to impose the essential boundary condition. The finite difference method is selected in the splitting direction. For the purposes of demonstration, some selected numerical examples are solved using the DSEFG method. The convergence study and error analysis of the DSEFG method are presented. The numerical examples show that the DSEFG method has greater computational precision and computational efficiency than the IEFG method.

  8. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    PubMed

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than both in terms of mean squared error, when a bias-based analysis is used.
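
    A hedged sketch of one machine-learning confounder-selection strategy of the kind compared in the paper: an ℓ1-penalized logistic regression of the outcome on the candidate claims-based proxies selects a subset, which then enters a conventional propensity-score model for treatment. This is not the high-dimensional propensity score algorithm itself, and the penalty strength and simulated data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_proxies_and_ps(X_proxies, treatment, outcome, C=0.1):
    """l1-penalized outcome model selects proxy covariates, which are then
    used in a logistic propensity-score model for treatment."""
    sel = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    sel.fit(X_proxies, outcome)
    keep = np.flatnonzero(sel.coef_.ravel())
    if keep.size == 0:                                  # fall back to all proxies
        keep = np.arange(X_proxies.shape[1])
    ps_model = LogisticRegression(max_iter=1000).fit(X_proxies[:, keep], treatment)
    return keep, ps_model.predict_proba(X_proxies[:, keep])[:, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 2000, 200
    X = (rng.uniform(size=(n, p)) < 0.2).astype(float)  # binary claims codes (proxies)
    conf = X[:, :5].sum(axis=1)                         # confounding acts through a few proxies
    trt = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(conf - 1.0)))).astype(int)
    out = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(0.5 * trt + conf - 1.5)))).astype(int)
    keep, ps = select_proxies_and_ps(X, trt, out)
    print("selected proxies:", keep.size, " mean propensity score:", round(ps.mean(), 3))
```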

  9. Purification of flavonoids from licorice using an off-line preparative two-dimensional normal-phase liquid chromatography/reversed-phase liquid chromatography method.

    PubMed

    Fan, Yunpeng; Fu, Yanhui; Fu, Qing; Cai, Jianfeng; Xin, Huaxia; Dai, Mei; Jin, Yu

    2016-07-01

    An orthogonal (71.9%) off-line preparative two-dimensional normal-phase liquid chromatography/reversed-phase liquid chromatography method coupled with effective sample pretreatment was developed for separation and purification of flavonoids from licorice. Most of the nonflavonoids were firstly removed using a self-made Click TE-Cys (60 μm) solid-phase extraction. In the first dimension, an industrial grade preparative chromatography was employed to purify the crude flavonoids. Click TE-Cys (10 μm) was selected as the stationary phase that provided an excellent separation with high reproducibility. Ethyl acetate/ethanol was selected as the mobile phase owing to their excellent solubility for flavonoids. Flavonoids co-eluted in the first dimension were selected for further purification using reversed-phase liquid chromatography. Multiple compounds could be isolated from one normal-phase fraction and some compounds with bad resolution in one-dimensional liquid chromatography could be prepared in this two-dimensional system owing to the orthogonal separation. Moreover, this two-dimensional liquid chromatography method was beneficial for the preparation of relatively trace flavonoid compounds, which were enriched in the first dimension and further purified in the second dimension. Totally, 24 flavonoid compounds with high purity were obtained. The results demonstrated that the off-line two-dimensional liquid chromatography method was effective for the preparative separation and purification of flavonoids from licorice. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Data driven analysis of rain events: feature extraction, clustering, microphysical /macro physical relationship

    NASA Astrophysics Data System (ADS)

    Djallel Dilmi, Mohamed; Mallet, Cécile; Barthes, Laurent; Chazottes, Aymeric

    2017-04-01

    The study of rain time series records is mainly carried out using rainfall rate or rain accumulation parameters estimated over a fixed integration time (typically 1 min, 1 hour or 1 day). In this study we use the concept of a rain event. In fact, the discrete and intermittent nature of rain processes makes the definition of some features inadequate when defined over a fixed duration. Long integration times (hour, day) mix rainy and clear-air periods in the same sample. Short integration times (seconds, minutes) lead to noisy data with a great sensitivity to detector characteristics. Analysis of whole rain events, instead of individual short samples of fixed duration, clarifies relationships between features, in particular between macrophysical and microphysical ones. This approach suppresses the intra-event variability partly due to measurement uncertainties and allows focusing on physical processes. An algorithm based on a Genetic Algorithm (GA) and Self-Organising Maps (SOM) is developed to obtain a parsimonious characterisation of rain events using a minimal set of variables. The use of the self-organizing map (SOM) is justified by the fact that it maps a high-dimensional data space into a two-dimensional space while preserving as much as possible the initial space topology, in an unsupervised way. The obtained SOM provides the dependencies between variables and consequently allows removing redundant variables, leading to a minimal subset of only five features (the event duration, the rain rate peak, the rain event depth, the event rain rate standard deviation and the absolute rain rate variation of order 0.5). To confirm the relevance of the five selected features, the corresponding SOM is analyzed. This analysis clearly shows the existence of relationships between features. It also shows the independence of the inter-event time (IETp) feature and the weak dependence of the Dry percentage in event (Dd%e) feature. This confirms that a rain time series can be considered as an alternation of independent rain events and no-rain periods. The five selected features are used to perform a hierarchical clustering of the events. The well-known division between stratiform and convective events appears clearly. This classification into two classes is then refined into 5 fairly homogeneous subclasses. The data-driven analysis performed on whole rain events instead of fixed-length samples allows identifying strong relationships between macrophysical (based on rain rate) and microphysical (based on raindrops) features. We show that some of the 5 identified subclasses have specific microphysical characteristics. Obtaining information on the microphysical characteristics of rainfall events from rain gauge measurements has many implications for the development of quantitative precipitation estimation (QPE) and for the improvement of rain-rate retrieval algorithms in a remote sensing context.

  11. Asymptotic and spectral analysis of the gyrokinetic-waterbag integro-differential operator in toroidal geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besse, Nicolas, E-mail: Nicolas.Besse@oca.eu; Institut Jean Lamour, UMR CNRS/UL 7198, Université de Lorraine, BP 70239 54506 Vandoeuvre-lès-Nancy Cedex; Coulette, David, E-mail: David.Coulette@ipcms.unistra.fr

    2016-08-15

    Achieving plasmas with good stability and confinement properties is a key research goal for magnetic fusion devices. The underlying equations are the Vlasov–Poisson and Vlasov–Maxwell (VPM) equations in three space variables, three velocity variables, and one time variable. Even in those somewhat academic cases where global equilibrium solutions are known, studying their stability requires the analysis of the spectral properties of the linearized operator, a daunting task. We have identified a model, for which not only equilibrium solutions can be constructed, but many of their stability properties are amenable to rigorous analysis. It uses a class of solutions to the VPM equations (or to their gyrokinetic approximations) known as waterbag solutions which, in particular, are piecewise constant in phase-space. It also uses, not only the gyrokinetic approximation of fast cyclotronic motion around magnetic field lines, but also an asymptotic approximation regarding the magnetic-field-induced anisotropy: the spatial variation along the field lines is taken much slower than across them. Together, these assumptions result in a drastic reduction in the dimensionality of the linearized problem, which becomes a set of two nested one-dimensional problems: an integral equation in the poloidal variable, followed by a one-dimensional complex Schrödinger equation in the radial variable. We show here that the operator associated to the poloidal variable is meromorphic in the eigenparameter, the pulsation frequency. We also prove that, for all but a countable set of real pulsation frequencies, the operator is compact and thus behaves mostly as a finite-dimensional one. The numerical algorithms based on such ideas have been implemented in a companion paper [D. Coulette and N. Besse, “Numerical resolution of the global eigenvalue problem for gyrokinetic-waterbag model in toroidal geometry” (submitted)] and were found to be surprisingly close to those for the original gyrokinetic-Vlasov equations. The purpose of the present paper is to make these new ideas accessible to two readerships: applied mathematicians and plasma physicists.

  12. Approximate furrow infiltration model for time-variable ponding depth

    USDA-ARS?s Scientific Manuscript database

    A methodology is proposed for estimating furrow infiltration under time-variable ponding depth conditions. The methodology approximates the solution to the two-dimensional Richards equation, and is a modification of a procedure that was originally proposed for computing infiltration under constant ...

  13. Interfacing sensory input with motor output: does the control architecture converge to a serial process along a single channel?

    PubMed Central

    van de Kamp, Cornelis; Gawthrop, Peter J.; Gollee, Henrik; Lakie, Martin; Loram, Ian D.

    2013-01-01

    Modular organization in control architecture may underlie the versatility of human motor control; but the nature of the interface relating sensory input through task-selection in the space of performance variables to control actions in the space of the elemental variables is currently unknown. Our central question is whether the control architecture converges to a serial process along a single channel? In discrete reaction time experiments, psychologists have firmly associated a serial single channel hypothesis with refractoriness and response selection [psychological refractory period (PRP)]. Recently, we developed a methodology and evidence identifying refractoriness in sustained control of an external single degree-of-freedom system. We hypothesize that multi-segmental whole-body control also shows refractoriness. Eight participants controlled their whole body to ensure a head marker tracked a target as fast and accurately as possible. Analysis showed enhanced delays in response to stimuli with close temporal proximity to the preceding stimulus. Consistent with our preceding work, this evidence is incompatible with control as a linear time invariant process. This evidence is consistent with a single-channel serial ballistic process within the intermittent control paradigm with an intermittent interval of around 0.5 s. A control architecture reproducing intentional human movement control must reproduce refractoriness. Intermittent control is designed to provide computational time for an online optimization process and is appropriate for flexible adaptive control. For human motor control we suggest that parallel sensory input converges to a serial, single channel process involving planning, selection, and temporal inhibition of alternative responses prior to low dimensional motor output. Such design could aid robots to reproduce the flexibility of human control. PMID:23675342

  14. De Finetti representation theorem for infinite-dimensional quantum systems and applications to quantum cryptography.

    PubMed

    Renner, R; Cirac, J I

    2009-03-20

    We show that the quantum de Finetti theorem holds for states on infinite-dimensional systems, provided they satisfy certain experimentally verifiable conditions. This result can be applied to prove the security of quantum key distribution based on weak coherent states or other continuous variable states against general attacks.

  15. Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method

    ERIC Educational Resources Information Center

    Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev

    2018-01-01

    The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…

  16. Social Inferences from Faces: Ambient Images Generate a Three-Dimensional Model

    ERIC Educational Resources Information Center

    Sutherland, Clare A. M.; Oldmeadow, Julian A.; Santos, Isabel M.; Towler, John; Burt, D. Michael; Young, Andrew W.

    2013-01-01

    Three experiments are presented that investigate the two-dimensional valence/trustworthiness by dominance model of social inferences from faces (Oosterhof & Todorov, 2008). Experiment 1 used image averaging and morphing techniques to demonstrate that consistent facial cues subserve a range of social inferences, even in a highly variable sample of…

  17. Bright-dark soliton solutions for the (2+1)-dimensional variable-coefficient coupled nonlinear Schrödinger system in a graded-index waveguide

    NASA Astrophysics Data System (ADS)

    Yuan, Yu-Qiang; Tian, Bo; Xie, Xi-Yang; Chai, Jun; Liu, Lei

    2017-04-01

    Under investigation in this paper is the (2+1)-dimensional coupled nonlinear Schrödinger (NLS) system with variable coefficients, which describes the propagation of an optical beam inside the two-dimensional graded-index waveguide amplifier with polarization effects. Through a similarity transformation, we convert that system into a set of integrable defocusing (1+1)-dimensional coupled NLS equations, and subsequently construct the bright-dark soliton solutions for the original system, which are converted from those of the latter set. With graphic analysis, we discuss the soliton propagation and collision with respect to r(t), which is related to the nonlinear, profile and gain/loss coefficients. When r(t) is a constant, one soliton propagates with its amplitude unvaried while its velocity and width can be affected, and two solitons undergo an elastic collision; when r(t) is a linear function, the velocity and width of the one soliton vary as t increases, and the collision of the two solitons is altered. Besides, bound-state solitons are seen.

  18. A three-dimensional Dirichlet-to-Neumann operator for water waves over topography

    NASA Astrophysics Data System (ADS)

    Andrade, D.; Nachbin, A.

    2018-06-01

    Surface water waves are considered propagating over highly variable non-smooth topographies. For this three dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed reducing the numerical modeling and evolution to the two dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care and a Galerkin method is provided accordingly. One dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large amplitude, rapidly varying topography. An alternative, conformal mapping based, method is used for benchmarking. A two dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three dimensional DtN operator.

  19. Reduction of tablet weight variability by optimizing paddle speed in the forced feeder of a high-speed rotary tablet press.

    PubMed

    Peeters, Elisabeth; De Beer, Thomas; Vervaet, Chris; Remon, Jean-Paul

    2015-04-01

    Tableting is a complex process due to the large number of process parameters that can be varied. Knowledge and understanding of the influence of these parameters on the final product quality is of great importance for the industry, allowing economic efficiency and parametric release. The aim of this study was to investigate the influence of paddle speeds and fill depth at different tableting speeds on the weight and weight variability of tablets. Two excipients possessing different flow behavior, microcrystalline cellulose (MCC) and dibasic calcium phosphate dihydrate (DCP), were selected as model powders. Tablets were manufactured via a high-speed rotary tablet press using design of experiments (DoE). During each experiment, the volume of powder in the forced feeder was also measured. Analysis of the DoE revealed that paddle speeds are of minor importance for tablet weight but significantly affect the volume of powder inside the feeder in the case of powders with excellent flowability (DCP). The opposite effect of paddle speed was observed for fairly flowing powders (MCC). Tableting speed played a role in weight and weight variability, whereas changing fill depth exclusively influenced tablet weight. The DoE approach allowed predicting the optimum combination of process parameters leading to minimum tablet weight variability. Monte Carlo simulations allowed assessing the probability of exceeding the acceptable response limits if factor settings were varied around their optimum. This multi-dimensional combination and interaction of input variables leading to response criteria with acceptable probability reflected the design space.
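
    A hedged sketch of the Monte Carlo step described above: given a fitted response-surface model for tablet weight variability, factor settings are sampled around their optimum and the probability of exceeding an acceptance limit is estimated. The response model, optimum, variation levels, and limit below are all hypothetical placeholders, not the study's fitted values.

```python
import numpy as np

def weight_rsd_model(paddle_speed, fill_depth, tableting_speed):
    """Hypothetical quadratic response-surface model for tablet weight RSD (%).
    Coefficients are made up for illustration; a real study would use the
    model fitted from the DoE."""
    return (1.2
            + 0.0008 * (paddle_speed - 40.0) ** 2
            + 0.020 * (fill_depth - 10.0) ** 2
            + 0.010 * (tableting_speed - 50.0))

def prob_out_of_spec(optimum, spread, limit=2.0, n=100_000, seed=0):
    """Monte Carlo probability that weight RSD exceeds `limit` when factor
    settings vary normally around their optimum (a design-space check)."""
    rng = np.random.default_rng(seed)
    samples = {k: rng.normal(optimum[k], spread[k], n) for k in optimum}
    rsd = weight_rsd_model(samples["paddle_speed"],
                           samples["fill_depth"],
                           samples["tableting_speed"])
    return float(np.mean(rsd > limit))

if __name__ == "__main__":
    optimum = {"paddle_speed": 40.0, "fill_depth": 10.0, "tableting_speed": 50.0}
    spread = {"paddle_speed": 5.0, "fill_depth": 1.0, "tableting_speed": 5.0}
    print("P(weight RSD > 2%):", prob_out_of_spec(optimum, spread))
```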

  20. Increased Anterior Pelvic Angle Characterizes the Gait of Children with Attention Deficit/Hyperactivity Disorder (ADHD).

    PubMed

    Naruse, Hiroaki; Fujisawa, Takashi X; Yatsuga, Chiho; Kubota, Masafumi; Matsuo, Hideaki; Takiguchi, Shinichiro; Shimada, Seiichiro; Imai, Yuto; Hiratani, Michio; Kosaka, Hirotaka; Tomoda, Akemi

    2017-01-01

    Children with attention deficit/hyperactivity disorder (ADHD) frequently have motor problems. Previous studies have reported that the characteristic gait in children with ADHD is immature and that subjects demonstrate higher levels of variability in gait characteristics for the lower extremities than healthy controls. However, little is known about body movement during gait in children with ADHD. The purpose of this study was to identify the characteristic body movements associated with ADHD symptoms in children with ADHD. Using a three-dimensional motion analysis system, we compared gait variables in boys with ADHD (n = 19; mean age, 9.58 years) and boys with typical development (TD) (n = 21; mean age, 10.71 years) to determine the specific gait characteristics related to ADHD symptoms. We assessed spatiotemporal gait variables (i.e. speed, stride length, and cadence), and kinematic gait variables (i.e. angle of pelvis, hip, knee, and ankle) to measure body movement when walking at a self-selected pace. In comparison with the TD group, the ADHD group demonstrated significantly higher values in cadence (t = 3.33, p = 0.002) and anterior pelvic angle (t = 3.08, p = 0.004). In multiple regression analysis, anterior pelvic angle was associated with the ADHD rating scale hyperactive/impulsive scores (β = 0.62, t = 2.58, p = 0.025), but not other psychiatric symptoms in the ADHD group. Our results suggest that anterior pelvic angle represents a specific gait variable related to ADHD symptoms. Our kinematic findings could have potential implications for evaluating the body movement in boys with ADHD.

  1. From Metaphors to Formalism: A Heuristic Approach to Holistic Assessments of Ecosystem Health.

    PubMed

    Fock, Heino O; Kraus, Gerd

    2016-01-01

    Environmental policies employ metaphoric objectives such as ecosystem health, resilience and sustainable provision of ecosystem services, which influence corresponding sustainability assessments by means of normative settings such as assumptions on system description, indicator selection, aggregation of information and target setting. A heuristic approach is developed for sustainability assessments to avoid ambiguity, and applications to the EU Marine Strategy Framework Directive (MSFD) and OSPAR assessments are presented. For MSFD, nineteen different assessment procedures have been proposed, but at present no agreed assessment procedure is available. The heuristic assessment framework is a functional-holistic approach comprising an ex-ante/ex-post assessment framework with specifically defined normative and systemic dimensions (EAEPNS). The outer normative dimension defines the ex-ante/ex-post framework, of which the latter branch delivers one measure of ecosystem health based on indicators and the former allows the multi-dimensional nature of sustainability (social, economic, ecological) to be accounted for in terms of modeling approaches. For MSFD, the ex-ante/ex-post framework replaces the current distinction between assessments based on pressure and state descriptors. The ex-ante and the ex-post branch each comprise an inner normative and a systemic dimension. The inner normative dimension in the ex-post branch considers additive utility models and likelihood functions to standardize variables normalized with Bayesian modeling. Likelihood functions allow precautionary target setting. The ex-post systemic dimension considers a posteriori indicator selection by means of analysis of indicator space to avoid redundant indicator information, as opposed to a priori indicator selection in deconstructive-structural approaches. Indicator information is expressed in terms of ecosystem variability by means of multivariate analysis procedures. The application to the OSPAR assessment for the southern North Sea showed that the selected 36 indicators explained 48% of ecosystem variability. Tools for the ex-ante branch are risk and ecosystem models with the capability to analyze trade-offs, generating model output for each of the pressure chains to allow for a phasing-out of human pressures. The Bayesian measure of ecosystem health is sensitive to trends in environmental features, but robust to ecosystem variability, in line with state space models. The combination of the ex-ante and ex-post branch is essential to evaluate ecosystem resilience and to adopt adaptive management. Based on requirements of the heuristic approach, three possible developments of this concept can be envisioned, i.e. a governance driven approach built upon participatory processes, a science driven functional-holistic approach requiring extensive monitoring to analyze complete ecosystem variability, and an approach with emphasis on ex-ante modeling and ex-post assessment of well-studied subsystems.

  2. From Metaphors to Formalism: A Heuristic Approach to Holistic Assessments of Ecosystem Health

    PubMed Central

    Kraus, Gerd

    2016-01-01

    Environmental policies employ metaphoric objectives such as ecosystem health, resilience and sustainable provision of ecosystem services, which influence corresponding sustainability assessments by means of normative settings such as assumptions on system description, indicator selection, aggregation of information and target setting. A heuristic approach is developed for sustainability assessments to avoid ambiguity, and applications to the EU Marine Strategy Framework Directive (MSFD) and OSPAR assessments are presented. For MSFD, nineteen different assessment procedures have been proposed, but at present no agreed assessment procedure is available. The heuristic assessment framework is a functional-holistic approach comprising an ex-ante/ex-post assessment framework with specifically defined normative and systemic dimensions (EAEPNS). The outer normative dimension defines the ex-ante/ex-post framework, of which the latter branch delivers one measure of ecosystem health based on indicators and the former allows the multi-dimensional nature of sustainability (social, economic, ecological) to be accounted for in terms of modeling approaches. For MSFD, the ex-ante/ex-post framework replaces the current distinction between assessments based on pressure and state descriptors. The ex-ante and the ex-post branch each comprise an inner normative and a systemic dimension. The inner normative dimension in the ex-post branch considers additive utility models and likelihood functions to standardize variables normalized with Bayesian modeling. Likelihood functions allow precautionary target setting. The ex-post systemic dimension considers a posteriori indicator selection by means of analysis of indicator space to avoid redundant indicator information, as opposed to a priori indicator selection in deconstructive-structural approaches. Indicator information is expressed in terms of ecosystem variability by means of multivariate analysis procedures. The application to the OSPAR assessment for the southern North Sea showed that the selected 36 indicators explained 48% of ecosystem variability. Tools for the ex-ante branch are risk and ecosystem models with the capability to analyze trade-offs, generating model output for each of the pressure chains to allow for a phasing-out of human pressures. The Bayesian measure of ecosystem health is sensitive to trends in environmental features, but robust to ecosystem variability, in line with state space models. The combination of the ex-ante and ex-post branch is essential to evaluate ecosystem resilience and to adopt adaptive management. Based on requirements of the heuristic approach, three possible developments of this concept can be envisioned, i.e. a governance driven approach built upon participatory processes, a science driven functional-holistic approach requiring extensive monitoring to analyze complete ecosystem variability, and an approach with emphasis on ex-ante modeling and ex-post assessment of well-studied subsystems. PMID:27509185

  3. Ensemble of sparse classifiers for high-dimensional biological data.

    PubMed

    Kim, Sunghan; Scalzo, Fabien; Telesca, Donatello; Hu, Xiao

    2015-01-01

    Biological data are often high in dimension while the number of samples is small. In such cases, the performance of classification can be improved by reducing the dimension of the data, which is referred to as feature selection. Recently, a novel feature selection method has been proposed utilising the sparsity of high-dimensional biological data, where a small subset of features accounts for most of the variance of the dataset. In this study we propose a new classification method for high-dimensional biological data, which performs both feature selection and classification within a single framework. Our proposed method utilises a sparse linear solution technique and the bootstrap aggregating algorithm. We tested its performance on four public mass spectrometry cancer datasets and compared it with two conventional classification techniques, Support Vector Machines and Adaptive Boosting. The results demonstrate that our proposed method performs more accurate classification across the various cancer datasets than these conventional classification techniques.
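
    The combination of a sparse linear base learner with bootstrap aggregating can be sketched as follows. The abstract does not name a specific sparse solver, so the L1-penalized logistic regression used here is an illustrative stand-in, and the synthetic data merely mimic a many-features/few-samples setting; the `estimator` keyword assumes scikit-learn 1.2 or later.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic "high-dimensional, few-samples" data as a stand-in for
# mass spectrometry features (many variables, few informative ones).
X, y = make_classification(n_samples=100, n_features=2000,
                           n_informative=20, random_state=0)

# Sparse base learner: L1-penalized logistic regression performs implicit
# feature selection; bagging aggregates many such sparse models.
base = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model = BaggingClassifier(estimator=base, n_estimators=50,
                          max_samples=0.8, random_state=0)

print(cross_val_score(model, X, y, cv=5).mean())
```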

  4. Mining nutrigenetics patterns related to obesity: use of parallel multifactor dimensionality reduction.

    PubMed

    Karayianni, Katerina N; Grimaldi, Keith A; Nikita, Konstantina S; Valavanis, Ioannis K

    2015-01-01

    This paper aims to elucidate the complex etiology underlying obesity by analysing data from a large nutrigenetics study, in which nutritional and genetic factors associated with obesity were recorded for around two thousand individuals. In our previous work, these data were analysed using artificial neural network methods, which identified optimised subsets of factors to predict an individual's obesity status. However, these methods did not reveal how the selected factors interact with each other in the obtained predictive models. For that reason, parallel Multifactor Dimensionality Reduction (pMDR) was used here to further analyse the pre-selected subsets of nutrigenetic factors. Within pMDR, predictive models using up to eight factors were constructed, further reducing the input dimensionality, while rules describing the interactive effects of the selected factors were derived. In this way, it was possible to identify specific genetic variations and their interactive effects with particular nutritional factors, which are now under further study.

  5. A new adaptive L1-norm for optimal descriptor selection of high-dimensional QSAR classification model for anti-hepatitis C virus activity of thiourea derivatives.

    PubMed

    Algamal, Z Y; Lee, M H

    2017-01-01

    A high-dimensional quantitative structure-activity relationship (QSAR) classification model typically contains a large number of irrelevant and redundant descriptors. In this paper, a new descriptor selection method for QSAR classification model estimation is proposed by adding a new weight inside the L1-norm. The experimental results of classifying the anti-hepatitis C virus activity of thiourea derivatives demonstrate that the proposed descriptor selection method in the QSAR classification model performs effectively and competitively compared with other existing penalized methods in terms of classification performance on both the training and the testing datasets. Moreover, it is noteworthy that the results obtained in terms of the stability test and the applicability domain provide a robust QSAR classification model. It is evident from the results that the developed QSAR classification model could conceivably be employed for further high-dimensional QSAR classification studies.
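
    A generic way to place a weight inside the L1-norm is the adaptive-lasso-style reparameterization sketched below: weights derived from an initial fit rescale the descriptors, so an ordinary L1 solver effectively applies a per-descriptor penalty sum_j w_j |beta_j|. The weighting scheme and the synthetic data are illustrative assumptions, not the authors' specific design.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a high-dimensional QSAR descriptor matrix.
X, y = make_classification(n_samples=120, n_features=500,
                           n_informative=15, random_state=0)

# Step 1: initial ridge (L2) fit gives preliminary coefficient magnitudes.
ridge = LogisticRegression(penalty="l2", C=1.0, max_iter=5000).fit(X, y)
w = 1.0 / (np.abs(ridge.coef_.ravel()) + 1e-6)   # adaptive per-descriptor weights

# Step 2: a weighted L1 penalty sum_j w_j |beta_j| is equivalent to an
# ordinary L1 fit on features rescaled by 1 / w_j.
X_scaled = X / w
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_scaled, y)

beta = lasso.coef_.ravel() / w                    # coefficients on the original scale
selected = np.flatnonzero(beta != 0)
print(f"{selected.size} descriptors selected out of {X.shape[1]}")
```

    The rescaling trick works because substituting beta_j = gamma_j / w_j turns the weighted penalty on beta into a plain L1 penalty on gamma, so any standard L1 solver can be reused.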

  6. 3D fluorescence anisotropy imaging using selective plane illumination microscopy.

    PubMed

    Hedde, Per Niklas; Ranjit, Suman; Gratton, Enrico

    2015-08-24

    Fluorescence anisotropy imaging is a popular method to visualize changes in organization and conformation of biomolecules within cells and tissues. In such an experiment, depolarization effects resulting from differences in orientation, proximity and rotational mobility of fluorescently labeled molecules are probed with high spatial resolution. Fluorescence anisotropy is typically imaged using laser scanning and epifluorescence-based approaches. Unfortunately, those techniques are limited in either axial resolution, image acquisition speed, or by photobleaching. In the last decade, however, selective plane illumination microscopy has emerged as the preferred choice for three-dimensional time lapse imaging combining axial sectioning capability with fast, camera-based image acquisition, and minimal light exposure. We demonstrate how selective plane illumination microscopy can be utilized for three-dimensional fluorescence anisotropy imaging of live cells. We further examined the formation of focal adhesions by three-dimensional time lapse anisotropy imaging of CHO-K1 cells expressing an EGFP-paxillin fusion protein.

  7. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.

    PubMed

    Hero, Alfred O; Rajaratnam, Bala

    2016-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

  8. 'SOSORT consensus paper on brace action: TLSO biomechanics of correction (investigating the rationale for force vector selection)'

    PubMed Central

    Rigo, M; Negrini, S; Weiss, HR; Grivas, TB; Maruyama, T; Kotwicki, T

    2006-01-01

    Background The effectiveness of orthotic treatment continues to be controversial in the international medical literature due to differences in the reported results and conclusions of various studies. Heterogeneity of the samples has been suggested as a reason for the conflicting results. Besides the obvious theoretical differences between the brace concepts, the variability in technical factors can also explain the contradictory results between the same brace types. This paper investigates the degree of variability among responses of scoliosis specialists from the Brace Study Group of the International Society on Scoliosis Orthopedic and Rehabilitation Treatment (SOSORT). Ultimately, this information could be a foundation for establishing a consensus and framework for future prospective controlled studies. Methods A preliminary questionnaire on the topic of 'brace action', relative to the theory of three-dimensional scoliosis correction and brace treatment, was developed and circulated to specialists interested in the conservative treatment of adolescent idiopathic scoliosis. A particular case was presented (main thoracic curve with minor lumbar). Several key points emerged and were used to develop a second questionnaire, which was discussed and completed after the SOSORT consensus meeting (Milano, Italy, January 2005). Results Twenty-one questionnaires were completed. The Chêneau brace was the most frequently recommended. The importance of the three-point system mechanism was stressed. Opinions about proper pad placement on the thoracic convexity were divided: 50% favored the pad reaching or involving the apical vertebra and 50% favored the pad acting caudal to the apical vertebra. There was agreement about the direction of the force vector, with 85% selecting a 'dorso-lateral to ventro-medial' direction, but not about the shape of the pad needed to produce such a force. Principles related to three-dimensional correction achieved high consensus (80%–85%), but suggested methods of correction were quite diverse. Conclusion This study reveals that among participating SOSORT specialists there continue to be strongly held and conflicting, if not contentious, opinions regarding brace design and treatment. If the goal of a 'treatment consensus' is realistic and achievable, significantly more effort will be required to reconcile these differences.

  9. Comment: Spurious Correlation and Other Observations on Experimental Design for Engineering Dimensional Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.

    2013-08-01

    This article discusses the paper "Experimental Design for Engineering Dimensional Analysis" by Albrecht et al. (2013, Technometrics). That paper provides an overview of engineering dimensional analysis (DA) for use in developing DA models. The paper proposes methods for generating model-robust experimental designs to support fitting DA models. The specific approach is to develop a design that maximizes the efficiency of a specified empirical model (EM) in the original independent variables, subject to a minimum efficiency for a DA model expressed in terms of dimensionless groups (DGs). This discussion article raises several issues and makes recommendations regarding the proposed approach. Also, the concept of spurious correlation is raised and discussed. Spurious correlation results from the response DG being calculated using several independent variables that are also used to calculate predictor DGs in the DA model.
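
    The spurious-correlation effect can be illustrated with a small simulation: when the response dimensionless group and a predictor dimensionless group are both computed from a shared independent variable, otherwise unrelated quantities appear correlated. The variables and distributions below are hypothetical placeholders chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Three mutually independent positive variables.
x = rng.lognormal(sigma=0.5, size=n)
y = rng.lognormal(sigma=0.5, size=n)
z = rng.lognormal(sigma=0.5, size=n)

# The raw variables are (nearly) uncorrelated ...
print("corr(x, y)     =", np.corrcoef(x, y)[0, 1])

# ... but ratios sharing the common divisor z are spuriously correlated,
# mimicking a response DG and a predictor DG built from the same variable.
print("corr(x/z, y/z) =", np.corrcoef(x / z, y / z)[0, 1])
```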

  10. Two-dimensional analytical modeling of a linear variable filter for spectral order sorting.

    PubMed

    Ko, Cheng-Hao; Wu, Yueh-Hsun; Tsai, Jih-Run; Wang, Bang-Ji; Chakraborty, Symphony

    2016-06-10

    A two-dimensional thin-film thickness model, based on the geometry of a commercial coater, has been developed to calculate the profiles of linear variable filters (LVFs) more effectively. This is done by isolating the substrate plane as an independent (local) coordinate, while rotation and translation matrices are used to establish the coordinate transformation; the characteristic vector is combined with a step function to build a borderline that determines whether or not the local mask blocks the deposition. The height of the local mask has been increased up to 40 mm in the proposed model, and two-dimensional simulations are developed to obtain the thin-film profile deposited on the substrate inside the evaporation chamber, achieving the required LVF zone width in a more economical way than previously reported [Opt. Express 23, 5102 (2015)].

  11. Solitons interaction and integrability for a (2+1)-dimensional variable-coefficient Broer-Kaup system in water waves

    NASA Astrophysics Data System (ADS)

    Zhao, Xue-Hui; Tian, Bo; Guo, Yong-Jiang; Li, Hui-Min

    2018-03-01

    Under investigation in this paper is a (2+1)-dimensional variable-coefficient Broer-Kaup system in water waves. Via symbolic computation, Bell polynomials and the Hirota method, the Bäcklund transformation, Lax pair, bilinear forms, and one- and two-soliton solutions are derived. Propagation and interaction of the solitons are illustrated: amplitudes and shapes of the one soliton remain invariant during propagation, which implies that the transport of energy is stable for the (2+1)-dimensional water waves; inelastic interactions between the two solitons are also discussed. Elastic interactions between the two parabolic-, cubic- and periodic-type solitons are displayed, where the solitonic amplitudes and shapes remain unchanged except for certain phase shifts. However, in the inelastic case, amplitudes of the two solitons show a linear superposition after each interaction, which is called a soliton resonance phenomenon.

  12. Smoothed Particle Hydrodynamics Simulations of Ultrarelativistic Shocks with Artificial Viscosity

    NASA Astrophysics Data System (ADS)

    Siegler, S.; Riffert, H.

    2000-03-01

    We present a fully Lagrangian conservation form of the general relativistic hydrodynamic equations for perfect fluids with artificial viscosity in a given arbitrary background spacetime. This conservation formulation is achieved by choosing suitable Lagrangian time evolution variables, from which the generic fluid variables of rest-mass density, 3-velocity, and thermodynamic pressure have to be determined. We present the corresponding equations for an ideal gas and show the existence and uniqueness of the solution. On the basis of the Lagrangian formulation we have developed a three-dimensional general relativistic smoothed particle hydrodynamics (SPH) code using the standard SPH formalism as known from nonrelativistic fluid dynamics. One-dimensional simulations of a shock tube and a wall shock are presented together with a two-dimensional test calculation of an inclined shock tube. With our method we can model ultrarelativistic fluid flows including shocks with Lorentz factors of even 1000.

  13. Quantitative and qualitative measure of intralaboratory two-dimensional protein gel reproducibility and the effects of sample preparation, sample load, and image analysis.

    PubMed

    Choe, Leila H; Lee, Kelvin H

    2003-10-01

    We investigate one approach to assess the quantitative variability in two-dimensional gel electrophoresis (2-DE) separations based on gel-to-gel variability, sample preparation variability, sample load differences, and the effect of automation on image analysis. We observe that 95% of spots present in three out of four replicate gels exhibit less than a 0.52 coefficient of variation (CV) in fluorescent stain intensity (% volume) for a single sample run on multiple gels. When four parallel sample preparations are performed, this value increases to 0.57. We do not observe any significant change in quantitative value for an increase or decrease in sample load of 30% when using appropriate image analysis variables. Increasing use of automation, while necessary in modern 2-DE experiments, does change the observed level of quantitative and qualitative variability among replicate gels. The number of spots that change qualitatively for a single sample run in parallel varies from a CV = 0.03 for fully manual analysis to CV = 0.20 for a fully automated analysis. We present a systematic method by which a single laboratory can measure gel-to-gel variability using only three gel runs.
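
    The reported numbers are per-spot coefficients of variation (CV = standard deviation / mean) of stain intensity across replicate gels. A minimal sketch of the computation is shown below, using a hypothetical intensity matrix in place of real spot data; the "95% of spots exhibit less than X CV" statement corresponds to the 95th percentile of the per-spot CVs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical matrix of spot intensities (% volume):
# rows = spots, columns = four replicate gels of the same sample.
intensity = rng.gamma(shape=4.0, scale=0.05, size=(800, 4))

# Per-spot coefficient of variation across the replicate gels.
cv = intensity.std(axis=1, ddof=1) / intensity.mean(axis=1)

# The CV value below which 95% of spots fall.
print("95th percentile of per-spot CV:", np.quantile(cv, 0.95))
```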

  14. A four-dimensional virtual hand brain-machine interface using active dimension selection

    NASA Astrophysics Data System (ADS)

    Rouse, Adam G.

    2016-06-01

    Objective. Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. Approach. ADS utilizes a two-stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Main results. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits s⁻¹ for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. Significance. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
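
    The abstract does not state how the bit rate was computed; one common convention in BMI studies is the Wolpaw formula, which for eight targets at 93% accuracy gives roughly 2.4 bits per selection. The sketch below is therefore an assumption for illustration, not necessarily the calculation used in the paper, and the selection rate needed to convert to bits per second is left out.

```python
import math

def wolpaw_bits_per_selection(n_targets: int, accuracy: float) -> float:
    """Information per selection under the Wolpaw convention (an assumption here)."""
    p, n = accuracy, n_targets
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

bits = wolpaw_bits_per_selection(8, 0.93)
print(f"{bits:.2f} bits per selection")   # ~2.4, consistent with the reported rate
# Multiplying by the selection rate (selections per second) would give bits/s.
```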

  15. A four-dimensional virtual hand brain-machine interface using active dimension selection

    PubMed Central

    Rouse, Adam G.

    2018-01-01

    Objective Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. Approach ADS utilizes a two stage decoder by using neural signals to both i) select an active dimension being controlled and ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Main Results Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. Significance ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand. PMID:27171896

  16. Gene selection for microarray data classification via subspace learning and manifold regularization.

    PubMed

    Tang, Chang; Cao, Lijuan; Zheng, Xiao; Wang, Minhui

    2017-12-19

    With the rapid development of DNA microarray technology, a large amount of genomic data has been generated. Classification of these microarray data is a challenging task since gene expression data often contain thousands of genes but only a small number of samples. In this paper, an effective gene selection method is proposed to select the best subset of genes for microarray data with the irrelevant and redundant genes removed. Compared with the original data, the selected gene subset can benefit the classification task. We formulate the gene selection task as a manifold regularized subspace learning problem. In detail, a projection matrix is used to project the original high dimensional microarray data into a lower dimensional subspace, with the constraint that the original genes can be well represented by the selected genes. Meanwhile, the local manifold structure of the original data is preserved by a Laplacian graph regularization term on the low-dimensional data space. The projection matrix can serve as an importance indicator of different genes. An iterative update algorithm is developed for solving the problem. Experimental results on six publicly available microarray datasets and one clinical dataset demonstrate that the proposed method performs better when compared with other state-of-the-art methods in terms of microarray data classification.

  17. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    PubMed

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken up the challenge of identifying genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation of large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN), and several non-parametric methods, which include the set association approach, combinatorial partitioning method (CPM), restricted partitioning method (RPM), multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods for approaching association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach to select and model important predictors, but its ability to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset of predictors with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses, we conclude that to approach genetic association studies using the case-control design, the application of a combination of several methods, including the set association approach, MDR and the random forests approach, will likely be a useful strategy to find the important genes and interaction patterns involved in complex diseases.

  18. Spectroscopic properties of a two-dimensional time-dependent Cepheid model. I. Description and validation of the model

    NASA Astrophysics Data System (ADS)

    Vasilyev, V.; Ludwig, H.-G.; Freytag, B.; Lemasle, B.; Marconi, M.

    2017-10-01

    Context. Standard spectroscopic analyses of Cepheid variables are based on hydrostatic one-dimensional model atmospheres, with convection treated using various formulations of mixing-length theory. Aims: This paper aims to carry out an investigation of the validity of the quasi-static approximation in the context of pulsating stars. We check the adequacy of a two-dimensional time-dependent model of a Cepheid-like variable with focus on its spectroscopic properties. Methods: With the radiation-hydrodynamics code CO5BOLD, we construct a two-dimensional time-dependent envelope model of a Cepheid with Teff = 5600 K, log g = 2.0, solar metallicity, and a 2.8-day pulsation period. Subsequently, we perform extensive spectral syntheses of a set of artificial iron lines in local thermodynamic equilibrium. The set of lines allows us to systematically study effects of line strength, ionization stage, and excitation potential. Results: We evaluate the microturbulent velocity, line asymmetry, projection factor, and Doppler shifts. The microturbulent velocity, averaged over all lines, depends on the pulsational phase and varies between 1.5 and 2.7 km s-1. The derived projection factor lies between 1.23 and 1.27, which agrees with observational results. The mean Doppler shift is non-zero and negative, -1 km s-1, after averaging over several full periods and lines. This residual line-of-sight velocity (related to the "K-term") is primarily caused by horizontal inhomogeneities, and consequently we interpret it as the familiar convective blueshift ubiquitously present in non-pulsating late-type stars. Limited statistics prevent firm conclusions on the line asymmetries. Conclusions: Our two-dimensional model provides a reasonably accurate representation of the spectroscopic properties of a short-period Cepheid-like variable star. Some properties are primarily controlled by convective inhomogeneities rather than by the Cepheid-defining pulsations. Extended multi-dimensional modelling offers new insight into the nature of pulsating stars.

  19. Reliability of tunnel angle in ACL reconstruction: two-dimensional versus three-dimensional guide technique.

    PubMed

    Leiter, Jeff R S; de Korompay, Nevin; Macdonald, Lindsey; McRae, Sheila; Froese, Warren; Macdonald, Peter B

    2011-08-01

    To compare the reliability of tibial tunnel position and angle produced with a standard ACL guide (two-dimensional guide) or Howell 65° Guide (three-dimensional guide) in the coronal and sagittal planes. In the sagittal plane, the dependent variables were the angle of the tibial tunnel relative to the tibial plateau and the position of the tibial tunnel with respect to the most posterior aspect of the tibia. In the coronal plane, the dependent variables were the angle of the tunnel with respect to the medial joint line of the tibia and the medial and lateral placement of the tibial tunnel relative to the most medial aspect of the tibia. The position and angle of the tibial tunnel in the coronal and sagittal planes were determined from anteroposterior and lateral radiographs, respectively, taken 2-6 months postoperatively. The two-dimensional and three-dimensional guide groups included 28 and 24 sets of radiographs, respectively. Tibial tunnel position was identified, and tunnel angle measurements were completed. Multiple investigators measured the position and angle of the tunnel 3 times, at least 7 days apart. The angle of the tibial tunnel in the coronal plane using a two-dimensional guide (61.3 ± 4.8°) was more horizontal (P < 0.05) than tunnels drilled with a three-dimensional guide (64.7 ± 6.2°). The position of the tibial tunnel in the sagittal plane was more anterior (P < 0.05) in the two-dimensional (41.6 ± 2.5%) guide group compared to the three-dimensional guide group (43.3 ± 2.9%). The Howell Tibial Guide allows for reliable placement of the tibial tunnel in the coronal plane at an angle of 65°. Tibial tunnels were within the anatomical footprint of the ACL with either technique. Future studies should investigate the effects of tibial tunnel angle on knee function and patient quality of life. Case-control retrospective comparative study, Level III.

  20. Catalytic Chemistry on Oxide Nanostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asthagiri, Aravind; Dixon, David A.; Dohnalek, Zdenek

    2016-05-29

    Metal oxides represent one of the most important and widely employed classes of materials in catalysis. The extreme variability of their chemistry provides a unique opportunity to tune their properties and to utilize them for the design of highly active and selective catalysts. For bulk oxides, this can be achieved by varying their stoichiometry, phase, exposed surface facets, defect and dopant densities, and in numerous other ways. Further, properties distinct from those of bulk oxides can be attained by restricting the oxide dimensionality and preparing the oxides in the form of ultrathin films and nanoclusters, as discussed throughout this book. In this chapter we focus on demonstrating such unique catalytic properties brought about by oxide nanoscaling. In the highlighted studies, planar models are carefully designed to achieve minimal dispersion of structural motifs and to attain detailed mechanistic understanding of targeted chemical transformations. The detailed level of morphological and structural characterization necessary to achieve this goal is accomplished by employing both high-resolution imaging via scanning probe methods and ensemble-averaged surface-sensitive spectroscopic methods. Three prototypical examples illustrating different properties of nanoscaled oxides in different classes of reactions are selected.

  1. Selecting predictors for discriminant analysis of species performance: an example from an amphibious softwater plant.

    PubMed

    Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M

    2012-03-01

    Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in drawing causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
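
    A rough sketch of the best-performing pipeline (univariate Pearson chi-square screening at P < 0.05 followed by a step-wise sub-selection within a discriminant analysis) is given below. The synthetic data, the median split used to build the contingency tables, and the cross-validated forward search are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: environmental predictors and species presence/absence.
n, p = 150, 30
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

# Step 1: univariate Pearson chi-square screening at P < 0.05
# (each predictor dichotomised at its median for the contingency table).
keep = []
for j in range(p):
    xb = (X[:, j] > np.median(X[:, j])).astype(int)
    table = np.array([[np.sum((xb == a) & (y == b)) for b in (0, 1)] for a in (0, 1)])
    if chi2_contingency(table)[1] < 0.05:
        keep.append(j)

# Step 2: simple forward (step-wise) sub-selection on the screened predictors,
# scored by cross-validated discriminant-analysis accuracy.
selected, best = [], -np.inf
improved = True
while improved and keep:
    improved = False
    for j in [c for c in keep if c not in selected]:
        score = cross_val_score(LinearDiscriminantAnalysis(),
                                X[:, selected + [j]], y, cv=5).mean()
        if score > best:
            best, best_j, improved = score, j, True
    if improved:
        selected.append(best_j)

print("screened:", keep, "selected:", selected, "CV accuracy:", round(best, 3))
```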

  2. Multi-dimensional scores to predict mortality in patients with idiopathic pulmonary fibrosis undergoing lung transplantation assessment.

    PubMed

    Fisher, Jolene H; Al-Hejaili, Faris; Kandel, Sonja; Hirji, Alim; Shapera, Shane; Mura, Marco

    2017-04-01

    The heterogeneous progression of idiopathic pulmonary fibrosis (IPF) makes prognostication difficult and contributes to high mortality on the waitlist for lung transplantation (LTx). Multi-dimensional scores (Composite Physiologic Index [CPI], Gender-Age-Physiology [GAP], RIsk Stratification scorE [RISE]) demonstrated enhanced predictive power towards outcome in IPF. The lung allocation score (LAS) is a multi-dimensional tool commonly used to stratify patients assessed for LTx. We sought to investigate whether IPF-specific multi-dimensional scores predict mortality in patients with IPF assessed for LTx. The study included 302 patients with IPF who underwent a LTx assessment (2003-2014). Multi-dimensional scores were calculated. The primary outcome was 12-month mortality after assessment. LTx was considered as a competing event in all analyses. At the end of the observation period, there were 134 transplants, 63 deaths, and 105 patients were alive without LTx. Multi-dimensional scores predicted mortality with accuracy similar to LAS, and superior to that of individual variables: area under the curve (AUC) for LAS was 0.78 (sensitivity 71%, specificity 86%); CPI 0.75 (sensitivity 67%, specificity 82%); GAP 0.67 (sensitivity 59%, specificity 74%); RISE 0.78 (sensitivity 71%, specificity 84%). A separate analysis conducted only in patients actively listed for LTx (n = 247; 50 deaths) yielded similar results. In patients with IPF assessed for LTx as well as in those actually listed, multi-dimensional scores predict mortality better than individual variables, and with accuracy similar to the LAS. If validated, multi-dimensional scores may serve as inexpensive tools to guide decisions on the timing of referral and listing for LTx. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Introducing hydrological information in rainfall intensity-duration thresholds

    NASA Astrophysics Data System (ADS)

    Greco, Roberto; Bogaard, Thom

    2016-04-01

    Regional landslide hazard assessment is mainly based on empirically derived precipitation-intensity-duration (PID) thresholds. Generally, two features of rainfall events are plotted to discriminate between the observed occurrence and non-occurrence of mass movements, and a separation line is then drawn in logarithmic space. Although successfully applied in many case studies, such PID thresholds suffer from many false positives as well as limited physical process insight. One of the main limitations is indeed that they do not include any information about the hydrological processes occurring along the slopes, so that the triggering is only related to rainfall characteristics. In order to introduce such hydrological information into the definition of rainfall thresholds for shallow landslide triggering assessment, this study proposes the use of non-dimensional rainfall characteristics. In particular, rain storm depth, intensity and duration are divided by a characteristic infiltration depth, a characteristic infiltration rate and a characteristic duration, respectively. These latter variables depend on the hydraulic properties and on the moisture state of the soil cover at the beginning of the precipitation. The proposed variables are applied to the case of a slope covered with shallow pyroclastic deposits in Cervinara (southern Italy), for which experimental data of hourly rainfall and soil suction were available. Rainfall thresholds defined with the proposed non-dimensional variables perform significantly better than those defined with dimensional variables, either in the intensity-duration plane or in the depth-duration plane.
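
    The proposed non-dimensionalisation amounts to dividing the storm depth, intensity and duration by characteristic scales of the soil cover. The sketch below uses hypothetical characteristic values; in the study these would be derived from the hydraulic properties and the antecedent moisture state of the pyroclastic cover.

```python
# Minimal sketch of the proposed non-dimensionalisation (hypothetical values).
storm_depth_mm = 120.0                 # cumulative rainfall depth of the event
storm_duration_h = 24.0
storm_intensity_mm_h = storm_depth_mm / storm_duration_h

# Characteristic scales, assumed here; they would come from soil hydraulic
# properties and the moisture state at the beginning of the precipitation.
char_infiltration_depth_mm = 300.0     # e.g. storable depth before saturation
char_infiltration_rate_mm_h = 10.0     # e.g. near-saturated hydraulic conductivity
char_duration_h = char_infiltration_depth_mm / char_infiltration_rate_mm_h

# Non-dimensional storm characteristics used to define the thresholds.
depth_star = storm_depth_mm / char_infiltration_depth_mm
intensity_star = storm_intensity_mm_h / char_infiltration_rate_mm_h
duration_star = storm_duration_h / char_duration_h

print(depth_star, intensity_star, duration_star)
```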

  4. Time-dependent Models of Magnetospheric Accretion onto Young Stars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, C. E.; Espaillat, C. C.; Owen, J. E.

    Accretion onto Classical T Tauri stars is thought to take place through the action of magnetospheric processes, with gas in the inner disk being channeled onto the star’s surface by the stellar magnetic field lines. Young stars are known to accrete material in a time-variable manner, and the source of this variability remains an open problem, particularly on the shortest (∼day) timescales. Using one-dimensional time-dependent numerical simulations that follow the field line geometry, we find that for plausibly realistic young stars, steady-state transonic accretion occurs naturally in the absence of any other source of variability. However, we show that if the density in the inner disk varies smoothly in time with ∼day-long timescales (e.g., due to turbulence), this complication can lead to the development of shocks in the accretion column. These shocks propagate along the accretion column and ultimately hit the star, leading to rapid, large amplitude changes in the accretion rate. We argue that when these shocks hit the star, the observed time dependence will be a rapid increase in accretion luminosity, followed by a slower decline, and could be an explanation for some of the short-period variability observed in accreting young stars. Our one-dimensional approach bridges previous analytic work to more complicated multi-dimensional simulations and observations.

  5. Differences in aquatic habitat quality as an impact of one- and two-dimensional hydrodynamic model simulated flow variables

    NASA Astrophysics Data System (ADS)

    Benjankar, R. M.; Sohrabi, M.; Tonina, D.; McKean, J. A.

    2013-12-01

    Aquatic habitat models utilize flow variables which may be predicted with one-dimensional (1D) or two-dimensional (2D) hydrodynamic models to simulate aquatic habitat quality. Studies focusing on the effects of hydrodynamic model dimensionality on predicted aquatic habitat quality are limited. Here we present the analysis of the impact of flow variables predicted with 1D and 2D hydrodynamic models on simulated spatial distribution of habitat quality and Weighted Usable Area (WUA) for fall-spawning Chinook salmon. Our study focuses on three river systems located in central Idaho (USA), which are a straight and pool-riffle reach (South Fork Boise River), small pool-riffle sinuous streams in a large meadow (Bear Valley Creek) and a steep-confined plane-bed stream with occasional deep forced pools (Deadwood River). We consider low and high flows in simple and complex morphologic reaches. Results show that 1D and 2D modeling approaches have effects on both the spatial distribution of the habitat and WUA for both discharge scenarios, but we did not find noticeable differences between complex and simple reaches. In general, the differences in WUA were small, but depended on stream type. Nevertheless, spatially distributed habitat quality difference is considerable in all streams. The steep-confined plane bed stream had larger differences between aquatic habitat quality defined with 1D and 2D flow models compared to results for streams with well defined macro-topographies, such as pool-riffle bed forms. KEY WORDS: one- and two-dimensional hydrodynamic models, habitat modeling, weighted usable area (WUA), hydraulic habitat suitability, high and low discharges, simple and complex reaches

  6. Computation of viscous incompressible flows

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    1989-01-01

    Incompressible Navier-Stokes solution methods and their applications to three-dimensional flows are discussed. A brief review of existing methods is given followed by a detailed description of recent progress on development of three-dimensional generalized flow solvers. Emphasis is placed on primitive variable formulations which are most promising and flexible for general three-dimensional computations of viscous incompressible flows. Both steady- and unsteady-solution algorithms and their salient features are discussed. Finally, examples of real world applications of these flow solvers are given.

  7. Determination of the temperature field of shell structures

    NASA Astrophysics Data System (ADS)

    Rodionov, N. G.

    1986-10-01

    A stationary heat conduction problem is formulated for the case of shell structures, such as those found in gas-turbine and jet engines. A two-dimensional elliptic differential equation of stationary heat conduction is obtained which allows, in an approximate manner, for temperature changes along a third variable, i.e., the shell thickness. The two-dimensional problem is reduced to a series of one-dimensional problems which are then solved using efficient difference schemes. The approach proposed here is illustrated by a specific example.

  8. Hierarchical Protein Free Energy Landscapes from Variationally Enhanced Sampling.

    PubMed

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-12-13

    In recent work, we demonstrated that it is possible to obtain approximate representations of high-dimensional free energy surfaces with variationally enhanced sampling (Shaffer, P.; Valsson, O.; Parrinello, M. Proc. Natl. Acad. Sci. 2016, 113, 17). The high-dimensional spaces considered in that work were the set of backbone dihedral angles of a small peptide, Chignolin, and the high-dimensional free energy surface was approximated as the sum of many two-dimensional terms plus an additional term which represents an initial estimate. In this paper, we build on that work and demonstrate that we can calculate high-dimensional free energy surfaces of very high accuracy by incorporating additional terms. The additional terms apply to a set of collective variables which are more coarse than the base set of collective variables. In this way, it is possible to build hierarchical free energy surfaces, which are composed of terms that act on different length scales. We test the accuracy of these free energy landscapes for the proteins Chignolin and Trp-cage by constructing simple coarse-grained models and comparing results from the coarse-grained model to results from atomistic simulations. The approach described in this paper is ideally suited for problems in which the free energy surface has important features on different length scales or in which there is some natural hierarchy.

  9. Variable Neighborhood Search Heuristics for Selecting a Subset of Variables in Principal Component Analysis

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Singh, Renu; Steinley, Douglas

    2009-01-01

    The selection of a subset of variables from a pool of candidates is an important problem in several areas of multivariate statistics. Within the context of principal component analysis (PCA), a number of authors have argued that subset selection is crucial for identifying those variables that are required for correct interpretation of the…

  10. 2D VARIABLY SATURATED FLOWS: PHYSICAL SCALING AND BAYESIAN ESTIMATION

    EPA Science Inventory

    A novel dimensionless formulation for water flow in two-dimensional variably saturated media is presented. It shows that scaling physical systems requires conservation of the ratio between capillary forces and gravity forces. A direct result of this finding is that for two phys...

  11. Development for 2D pattern quantification method on mask and wafer

    NASA Astrophysics Data System (ADS)

    Matsuoka, Ryoichi; Mito, Hiroaki; Toyoda, Yasutaka; Wang, Zhigang

    2010-03-01

    We have developed an effective method for two-dimensional metrology of masks and silicon. The aim of this method is to evaluate the performance of the silicon corresponding to a hotspot on the mask. The method adopts a metrology management system based on DBM (Design Based Metrology), i.e. highly accurate contouring created by the edge detection algorithms used in mask CD-SEM and silicon CD-SEM. Currently, as semiconductor manufacturing moves towards ever smaller feature sizes, more aggressive optical proximity correction (OPC) is needed to drive resolution enhancement technology (RET). In other words, there is a trade-off between highly precise RET and mask manufacture, and this has a big impact on the semiconductor market that centers on the mask business. Two-dimensional shape quantification is important as an optimal solution to these problems. Although one-dimensional shape measurement has been performed with conventional techniques, two-dimensional shape management is needed in the mass-production line under the influence of RET. We developed a technique for analyzing the distribution of shape edge performance as a shape management technique. On the other hand, silicon shapes produced on a mass-production line exhibit roughness as well as shape variation. For this reason, quantification of the silicon shape is important in order to estimate the performance of a pattern. For quantification, repeated instances of the same shape are averaged in two dimensions, and evaluation based on the averaged shape is common. In this study, we conducted experiments on a pattern-averaging method (Measurement Based Contouring) as a two-dimensional mask and silicon evaluation technique; that is, identical positions on the mask and on the silicon are observed, so that edge variability at the same position can be analyzed with high precision. The results demonstrated the detection accuracy and the reliability of two-dimensional pattern variability measurements (mask and silicon), and the method is applicable to the following fields of mask quality management: estimation of the correlation between shape variability and process margin; determination of the two-dimensional variability of a pattern; verification of the pattern performance of various kinds of hotspots. In this report, we introduce the experimental results and their application. We expect that mask measurement and shape control in mask production will make a large contribution to mask yield enhancement, and that DFM solutions for the mask quality control process will become more important than ever. From this viewpoint, it is very important to observe the shape of the same location in the design, the mask, and the silicon.

  12. Temperature, Pressure, and Infrared Image Survey of an Axisymmetric Heated Exhaust Plume

    NASA Technical Reports Server (NTRS)

    Nelson, Edward L.; Mahan, J. Robert; Birckelbaw, Larry D.; Turk, Jeffrey A.; Wardwell, Douglas A.; Hange, Craig E.

    1996-01-01

    The focus of this research is to numerically predict an infrared image of a jet engine exhaust plume, given field variables such as temperature, pressure, and exhaust plume constituents as a function of spatial position within the plume, and to compare this predicted image directly with measured data. This work is motivated by the need to validate computational fluid dynamic (CFD) codes through infrared imaging. The technique of reducing the three-dimensional field variable domain to a two-dimensional infrared image invokes the use of an inverse Monte Carlo ray trace algorithm and an infrared band model for exhaust gases. This report describes an experiment in which the above-mentioned field variables were carefully measured. Results from this experiment, namely tables of measured temperature and pressure data, as well as measured infrared images, are given. The inverse Monte Carlo ray trace technique is described. Finally, experimentally obtained infrared images are directly compared to infrared images predicted from the measured field variables.

  13. Data-driven discovery of Koopman eigenfunctions using deep learning

    NASA Astrophysics Data System (ADS)

    Lusch, Bethany; Brunton, Steven L.; Kutz, J. Nathan

    2017-11-01

    Koopman operator theory transforms any autonomous non-linear dynamical system into an infinite-dimensional linear system. Since linear systems are well-understood, a mapping of non-linear dynamics to linear dynamics provides a powerful approach to understanding and controlling fluid flows. However, finding the correct change of variables remains an open challenge. We present a strategy to discover an approximate mapping using deep learning. Our neural networks find this change of variables, its inverse, and a finite-dimensional linear dynamical system defined on the new variables. Our method is completely data-driven and only requires measurements of the system, i.e. it does not require derivatives or knowledge of the governing equations. We find a minimal set of approximate Koopman eigenfunctions that are sufficient to reconstruct and advance the system to future states. We demonstrate the method on several dynamical systems.
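
    The described architecture can be sketched as an autoencoder whose latent variables are advanced by a learned linear operator, trained with reconstruction and linear-prediction losses. The network sizes, loss terms, and training snippet below are illustrative assumptions written in PyTorch, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class KoopmanAutoencoder(nn.Module):
    def __init__(self, state_dim: int, latent_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, state_dim))
        # Finite-dimensional linear (Koopman-like) operator on the latent variables.
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)

    def forward(self, x_t, x_next):
        z_t = self.encoder(x_t)
        z_pred = self.K(z_t)                     # advance linearly in latent space
        return {
            "recon": nn.functional.mse_loss(self.decoder(z_t), x_t),
            "linear": nn.functional.mse_loss(z_pred, self.encoder(x_next)),
            "pred": nn.functional.mse_loss(self.decoder(z_pred), x_next),
        }

# Toy usage with random snapshot pairs (x_t, x_{t+1}); real data would be
# measurements of the dynamical system at consecutive times.
model = KoopmanAutoencoder(state_dim=3, latent_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_t, x_next = torch.randn(32, 3), torch.randn(32, 3)

opt.zero_grad()
losses = model(x_t, x_next)
total = sum(losses.values())
total.backward()
opt.step()
```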

  14. Real time three dimensional sensing system

    DOEpatents

    Gordon, S.J.

    1996-12-31

    The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane. 7 figs.

  15. DNA origami nanopillars as standards for three-dimensional superresolution microscopy.

    PubMed

    Schmied, Jürgen J; Forthmann, Carsten; Pibiri, Enrico; Lalkens, Birka; Nickels, Philipp; Liedl, Tim; Tinnefeld, Philip

    2013-02-13

    Nanopillars are promising nanostructures composed of various materials that bring new functionalities for applications ranging from photovoltaics to analytics. We developed DNA nanopillars with a height of 220 nm and a diameter of ~14 nm using the DNA origami technique. Modifying the base of the nanopillars with biotins allowed selective, upright, and rigid immobilization on solid substrates. With the help of site-selective dye labels, we visualized the structure and determined the orientation of the nanopillars by three-dimensional fluorescence superresolution microscopy. Because of their rigidity and nanometer-precise addressability, DNA origami nanopillars qualify as scaffold for the assembly of plasmonic devices as well as for three-dimensional superresolution standards.

  16. Real time three dimensional sensing system

    DOEpatents

    Gordon, Steven J.

    1996-01-01

    The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane.

  17. Model selection bias and Freedman's paradox

    USGS Publications Warehouse

    Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.

    2010-01-01

In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from an (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
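
    Freedman's paradox is easy to reproduce numerically. The sketch below (NumPy/SciPy; the sample sizes are arbitrary) regresses pure noise on many unrelated predictors, counts how many appear "significant", and then refits on the screened predictors to show the inflated fit that motivates model averaging.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 100, 50                      # data points and (unrelated) predictors of similar magnitude
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)          # response is pure noise

X1 = np.column_stack([np.ones(n), X])           # add intercept
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)   # OLS fit
resid = y - X1 @ beta
sigma2 = resid @ resid / (n - p - 1)
cov = sigma2 * np.linalg.inv(X1.T @ X1)
t = beta / np.sqrt(np.diag(cov))
pvals = 2 * stats.t.sf(np.abs(t[1:]), df=n - p - 1)
print(f"{np.sum(pvals < 0.05)} of {p} unrelated predictors look 'significant' at 5%")

# Freedman's second step: refit using only the screened predictors
keep = np.where(pvals < 0.25)[0]
X2 = np.column_stack([np.ones(n), X[:, keep]])
beta2, *_ = np.linalg.lstsq(X2, y, rcond=None)
resid2 = y - X2 @ beta2
r2 = 1 - resid2 @ resid2 / ((y - y.mean()) @ (y - y.mean()))
print(f"refit on {keep.size} screened predictors: R^2 = {r2:.2f} despite no real signal")
```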

  18. Coarse analysis of collective behaviors: Bifurcation analysis of the optimal velocity model for traffic jam formation

    NASA Astrophysics Data System (ADS)

    Miura, Yasunari; Sugiyama, Yuki

    2017-12-01

We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, a dimensionality-reduction technique, to systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed by these coarse variables. We apply this method to the analysis of a traffic model, the optimal velocity model, and reveal a bifurcation structure that features a transition to the emergence of a moving cluster, i.e., a traffic jam.
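
    A minimal NumPy sketch of the diffusion-map construction that underlies such coarse variables is given below; the Gaussian-kernel bandwidth and the synthetic snapshots are illustrative assumptions.

```python
import numpy as np

def diffusion_map(X, n_coords=2, eps=1.0):
    """Return the leading non-trivial diffusion coordinates of the samples in X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-d2 / eps)                                  # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)                   # row-normalized Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # skip the trivial constant eigenvector (eigenvalue 1)
    return vecs.real[:, order[1:n_coords + 1]] * vals.real[order[1:n_coords + 1]]

# toy usage: snapshots of many-body configurations, one row per time step
snapshots = np.random.default_rng(1).standard_normal((200, 40))
coarse_vars = diffusion_map(snapshots, n_coords=2)
```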

  19. Variational analysis of the coupling between a geometrically exact Cosserat rod and an elastic continuum

    NASA Astrophysics Data System (ADS)

    Sander, Oliver; Schiela, Anton

    2014-12-01

    We formulate the static mechanical coupling of a geometrically exact Cosserat rod to a nonlinearly elastic continuum. In this setting, appropriate coupling conditions have to connect a one-dimensional model with director variables to a three-dimensional model without directors. Two alternative coupling conditions are proposed, which correspond to two different configuration trace spaces. For both, we show existence of solutions of the coupled problems, using the direct method of the calculus of variations. From the first-order optimality conditions, we also derive the corresponding conditions for the dual variables. These are then interpreted in mechanical terms.

  20. A boundary value approach for solving three-dimensional elliptic and hyperbolic partial differential equations.

    PubMed

    Biala, T A; Jator, S N

    2015-01-01

    In this article, the boundary value method is applied to solve three dimensional elliptic and hyperbolic partial differential equations. The partial derivatives with respect to two of the spatial variables (y, z) are discretized using finite difference approximations to obtain a large system of ordinary differential equations (ODEs) in the third spatial variable (x). Using interpolation and collocation techniques, a continuous scheme is developed and used to obtain discrete methods which are applied via the Block unification approach to obtain approximations to the resulting large system of ODEs. Several test problems are investigated to elucidate the solution process.
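
    A schematic version of the semi-discretization step described above is sketched below for a Poisson problem on the unit cube: finite differences in (y, z) leave a large ODE system in x, which is then solved as a boundary value problem. SciPy's generic BVP solver is used here as a stand-in for the block unification approach of the paper, and the grid size and source term are placeholders.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Semi-discretization of u_xx + u_yy + u_zz = f on the unit cube with u = 0 on the boundary:
# finite differences in (y, z) leave a system of ODEs in the remaining variable x.
m = 8                                    # interior grid points per transverse direction (assumed)
h = 1.0 / (m + 1)
T = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)) / h**2
I = np.eye(m)
L_yz = np.kron(T, I) + np.kron(I, T)     # discrete Laplacian in the (y, z) plane, size m^2
f = np.ones(m * m)                       # constant source term, for illustration

def rhs(x, Y):
    # Y stacks [u; u_x] for every transverse grid point; u_xx = f - L_yz u
    u, ux = Y[:m * m], Y[m * m:]
    return np.vstack([ux, f[:, None] - L_yz @ u])

def bc(Ya, Yb):
    # Dirichlet conditions u = 0 at x = 0 and x = 1
    return np.concatenate([Ya[:m * m], Yb[:m * m]])

x = np.linspace(0.0, 1.0, 11)
Y0 = np.zeros((2 * m * m, x.size))
sol = solve_bvp(rhs, bc, x, Y0)
print("BVP solver converged:", sol.status == 0)
```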

  1. Modeling and enhanced sampling of molecular systems with smooth and nonlinear data-driven collective variables

    NASA Astrophysics Data System (ADS)

    Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino

    2013-12-01

    Collective variables (CVs) are low-dimensional representations of the state of a complex system, which help us rationalize molecular conformations and sample free energy landscapes with molecular dynamics simulations. Given their importance, there is need for systematic methods that effectively identify CVs for complex systems. In recent years, nonlinear manifold learning has shown its ability to automatically characterize molecular collective behavior. Unfortunately, these methods fail to provide a differentiable function mapping high-dimensional configurations to their low-dimensional representation, as required in enhanced sampling methods. We introduce a methodology that, starting from an ensemble representative of molecular flexibility, builds smooth and nonlinear data-driven collective variables (SandCV) from the output of nonlinear manifold learning algorithms. We demonstrate the method with a standard benchmark molecule, alanine dipeptide, and show how it can be non-intrusively combined with off-the-shelf enhanced sampling methods, here the adaptive biasing force method. We illustrate how enhanced sampling simulations with SandCV can explore regions that were poorly sampled in the original molecular ensemble. We further explore the transferability of SandCV from a simpler system, alanine dipeptide in vacuum, to a more complex system, alanine dipeptide in explicit water.

  2. Modeling and enhanced sampling of molecular systems with smooth and nonlinear data-driven collective variables.

    PubMed

    Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino

    2013-12-07

    Collective variables (CVs) are low-dimensional representations of the state of a complex system, which help us rationalize molecular conformations and sample free energy landscapes with molecular dynamics simulations. Given their importance, there is need for systematic methods that effectively identify CVs for complex systems. In recent years, nonlinear manifold learning has shown its ability to automatically characterize molecular collective behavior. Unfortunately, these methods fail to provide a differentiable function mapping high-dimensional configurations to their low-dimensional representation, as required in enhanced sampling methods. We introduce a methodology that, starting from an ensemble representative of molecular flexibility, builds smooth and nonlinear data-driven collective variables (SandCV) from the output of nonlinear manifold learning algorithms. We demonstrate the method with a standard benchmark molecule, alanine dipeptide, and show how it can be non-intrusively combined with off-the-shelf enhanced sampling methods, here the adaptive biasing force method. We illustrate how enhanced sampling simulations with SandCV can explore regions that were poorly sampled in the original molecular ensemble. We further explore the transferability of SandCV from a simpler system, alanine dipeptide in vacuum, to a more complex system, alanine dipeptide in explicit water.
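
    The gap highlighted above — manifold learning yields coordinates only at sampled configurations, while enhanced sampling needs a differentiable map — can be illustrated with a simple surrogate: compute the embedding first, then fit a smooth regression from configurations to those coordinates so gradients are available. The use of Isomap and kernel ridge regression here is an illustrative stand-in, not the SandCV construction itself.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
configs = rng.standard_normal((500, 30))        # stand-in for molecular configurations

# Step 1: nonlinear manifold learning gives CV values only at the sampled points.
cv_samples = Isomap(n_components=2).fit_transform(configs)

# Step 2: fit a smooth, differentiable surrogate map from configuration to CV,
# so that d(CV)/d(configuration) is available to an enhanced-sampling method.
surrogate = KernelRidge(kernel="rbf", gamma=0.05, alpha=1e-3).fit(configs, cv_samples)

def cv_and_gradient(x, eps=1e-4):
    """CV value and a finite-difference gradient at a new configuration x."""
    base = surrogate.predict(x[None, :])[0]
    grad = np.empty((base.size, x.size))
    for i in range(x.size):
        xp = x.copy(); xp[i] += eps
        grad[:, i] = (surrogate.predict(xp[None, :])[0] - base) / eps
    return base, grad

value, gradient = cv_and_gradient(configs[0])
```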

  3. Scaling of Device Variability and Subthreshold Swing in Ballistic Carbon Nanotube Transistors

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Tersoff, Jerry; Han, Shu-Jen; Penumatcha, Ashish V.

    2015-08-01

    In field-effect transistors, the inherent randomness of dopants and other charges is a major cause of device-to-device variability. For a quasi-one-dimensional device such as carbon nanotube transistors, even a single charge can drastically change the performance, making this a critical issue for their adoption as a practical technology. Here we calculate the effect of the random charges at the gate-oxide surface in ballistic carbon nanotube transistors, finding good agreement with the variability statistics in recent experiments. A combination of experimental and simulation results further reveals that these random charges are also a major factor limiting the subthreshold swing for nanotube transistors fabricated on thin gate dielectrics. We then establish that the scaling of the nanotube device uniformity with the gate dielectric, fixed-charge density, and device dimension is qualitatively different from conventional silicon transistors, reflecting the very different device physics of a ballistic transistor with a quasi-one-dimensional channel. The combination of gate-oxide scaling and improved control of fixed-charge density should provide the uniformity needed for large-scale integration of such novel one-dimensional transistors even at extremely scaled device dimensions.

  4. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    PubMed Central

    Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity. PMID:23366954

  5. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    PubMed

    Cowley, Benjamin R; Kaufman, Matthew T; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2012-01-01

The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity.
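
    The core operation of such a tool — sweeping through many 2D projections of a reduced-dimensional space — can be sketched in a few lines; the latent data, number of views and orthonormal random planes below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.standard_normal((120, 8))       # e.g. trials x latent dimensions from PCA/factor analysis

def random_projection_2d(dim, rng):
    """An orthonormal pair of axes defining one 2D view of the latent space."""
    q, _ = np.linalg.qr(rng.standard_normal((dim, 2)))
    return q                                  # shape (dim, 2)

# a continuum of views can be approximated by stepping through many such planes
views = [latent @ random_projection_2d(latent.shape[1], rng) for _ in range(20)]
for i, v in enumerate(views[:3]):
    print(f"view {i}: x-range {v[:, 0].ptp():.2f}, y-range {v[:, 1].ptp():.2f}")
```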

  6. Variables selection methods in near-infrared spectroscopy.

    PubMed

    Xiaobo, Zou; Jiewen, Zhao; Povey, Malcolm J W; Holmes, Mel; Hanpin, Mao

    2010-05-14

Near-infrared (NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, such as the petrochemical, pharmaceutical, environmental, clinical, agricultural, food and biomedical sectors during the past 15 years. A NIR spectrum of a sample is typically measured by modern scanning instruments at hundreds of equally spaced wavelengths. The large number of spectral variables in most data sets encountered in NIR spectral chemometrics often renders the prediction of a dependent variable unreliable. Recently, considerable effort has been directed towards developing and evaluating different procedures that objectively identify variables which contribute useful information and/or eliminate variables containing mostly noise. This review focuses on the variable selection methods in NIR spectroscopy. Selection methods include some classical approaches, such as the manual approach (knowledge-based selection) and "Univariate" and "Sequential" selection methods; sophisticated methods such as the successive projections algorithm (SPA) and uninformative variable elimination (UVE); elaborate search-based strategies such as simulated annealing (SA), artificial neural networks (ANN) and genetic algorithms (GAs); and interval-based algorithms such as interval partial least squares (iPLS), windows PLS and iterative PLS. Wavelength selection with B-splines, Kalman filtering, Fisher's weights and Bayesian approaches is also mentioned. Finally, the websites of some variable selection software and toolboxes for non-commercial use are given. Copyright 2010 Elsevier B.V. All rights reserved.
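
    As an example from the simpler end of this toolbox, the sketch below performs interval-based wavelength selection with PLS regression: the spectrum is split into contiguous windows and each window's cross-validated error is compared (iPLS-like in spirit; the synthetic spectra and window size are assumptions).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 80, 200
spectra = rng.standard_normal((n_samples, n_wavelengths))
y = spectra[:, 90:110].mean(axis=1) + 0.05 * rng.standard_normal(n_samples)  # signal lives in one region

window = 20
scores = []
for start in range(0, n_wavelengths, window):
    Xw = spectra[:, start:start + window]
    pls = PLSRegression(n_components=2)
    # negative MSE from cross-validation; values closer to zero are better
    scores.append(cross_val_score(pls, Xw, y, cv=5, scoring="neg_mean_squared_error").mean())

best = int(np.argmax(scores))
print(f"best window: wavelengths {best * window}-{best * window + window - 1}")
```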

  7. Comparison of three-dimensional multi-segmental foot models used in clinical gait laboratories.

    PubMed

    Nicholson, Kristen; Church, Chris; Takata, Colton; Niiler, Tim; Chen, Brian Po-Jung; Lennon, Nancy; Sees, Julie P; Henley, John; Miller, Freeman

    2018-05-16

    Many skin-mounted three-dimensional multi-segmented foot models are currently in use for gait analysis. Evidence regarding the repeatability of models, including between trial and between assessors, is mixed, and there are no between model comparisons of kinematic results. This study explores differences in kinematics and repeatability between five three-dimensional multi-segmented foot models. The five models include duPont, Heidelberg, Oxford Child, Leardini, and Utah. Hind foot, forefoot, and hallux angles were calculated with each model for ten individuals. Two physical therapists applied markers three times to each individual to assess within and between therapist variability. Standard deviations were used to evaluate marker placement variability. Locally weighted regression smoothing with alpha-adjusted serial T tests analysis was used to assess kinematic similarities. All five models had similar variability, however, the Leardini model showed high standard deviations in plantarflexion/dorsiflexion angles. P-value curves for the gait cycle were used to assess kinematic similarities. The duPont and Oxford models had the most similar kinematics. All models demonstrated similar marker placement variability. Lower variability was noted in the sagittal and coronal planes compared to rotation in the transverse plane, suggesting a higher minimal detectable change when clinically considering rotation and a need for additional research. Between the five models, the duPont and Oxford shared the most kinematic similarities. While patterns of movement were very similar between all models, offsets were often present and need to be considered when evaluating published data. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Close-range laser scanning in forests: towards physically based semantics across scales.

    PubMed

    Morsdorf, F; Kükenbrink, D; Schneider, F D; Abegg, M; Schaepman, M E

    2018-04-06

Laser scanning with its unique measurement concept holds the potential to revolutionize the way we assess and quantify three-dimensional vegetation structure. Modern laser systems used at close range, be it on terrestrial, mobile or unmanned aerial platforms, provide dense and accurate three-dimensional data whose information just waits to be harvested. However, the transformation of such data to information is not as straightforward as for airborne and space-borne approaches, where typically empirical models are built using ground truth of target variables. Simpler variables, such as diameter at breast height, can be readily derived and validated. More complex variables, e.g. leaf area index, need a thorough understanding and consideration of the physical particularities of the measurement process and semantic labelling of the point cloud. Quantified structural models provide a framework for such labelling by deriving stem and branch architecture, a basis for many of the more complex structural variables. The physical information of the laser scanning process is still underused and we show how it could play a vital role in conjunction with three-dimensional radiative transfer models to shape the information retrieval methods of the future. Using such a combined forward and physically based approach will make methods robust and transferable. In addition, it avoids replacing observer bias from field inventories with instrument bias from different laser instruments. Still, an intensive dialogue with the users of the derived information is mandatory to potentially re-design structural concepts and variables so that they profit most from the rich data that close-range laser scanning provides.

  9. Fluorescence enhancement through the formation of a single-layer two-dimensional supramolecular organic framework and its application in highly selective recognition of picric acid.

    PubMed

    Zhang, Ying; Zhan, Tian-Guang; Zhou, Tian-You; Qi, Qiao-Yan; Xu, Xiao-Na; Zhao, Xin

    2016-06-18

    A two-dimensional (2D) supramolecular organic framework (SOF) has been constructed through the co-assembly of a triphenylamine-based building block and cucurbit[8]uril (CB[8]). Fluorescence turn-on of the non-emissive building block was observed upon the formation of the 2D SOF, which displayed highly selective and sensitive recognition of picric acid over a variety of nitroaromatics.

  10. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems

    PubMed Central

    de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.

    2014-01-01

Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution for the variable selection problem. Additionally, the results also demonstrated that the FA-MLR performed in a GPU can be five times faster than its sequential implementation. PMID:25493625

  11. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems.

    PubMed

    de Paula, Lauro C M; Soares, Anderson S; de Lima, Telma W; Delbem, Alexandre C B; Coelho, Clarimar J; Filho, Arlindo R G

    2014-01-01

Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution for the variable selection problem. Additionally, the results also demonstrated that the FA-MLR performed in a GPU can be five times faster than its sequential implementation.
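
    A compact CPU-only sketch of a firefly-style metaheuristic for variable selection in multiple linear regression is given below; the binary encoding via a 0.5 threshold, the parameter values and the validation-RMSE fitness are illustrative choices, and the GPU parallelization discussed in the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 120, 40
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:5] = 2.0
y = X @ beta_true + rng.standard_normal(n)
X_tr, y_tr, X_val, y_val = X[:80], y[:80], X[80:], y[80:]

def rmse_of_subset(mask):
    """Fitness: validation RMSE of an MLR model using only the selected variables."""
    if mask.sum() == 0:
        return np.inf
    coef, *_ = np.linalg.lstsq(X_tr[:, mask], y_tr, rcond=None)
    resid = y_val - X_val[:, mask] @ coef
    return np.sqrt(np.mean(resid ** 2))

n_fireflies, n_iter, beta0, gamma, alpha = 20, 50, 1.0, 1.0, 0.1
pos = rng.uniform(size=(n_fireflies, p))                     # continuous positions in [0, 1]
fit = np.array([rmse_of_subset(pos[i] > 0.5) for i in range(n_fireflies)])

for _ in range(n_iter):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if fit[j] < fit[i]:                              # firefly j is "brighter" (lower RMSE)
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                attract = beta0 * np.exp(-gamma * r2)
                pos[i] += attract * (pos[j] - pos[i]) + alpha * (rng.uniform(size=p) - 0.5)
                pos[i] = np.clip(pos[i], 0.0, 1.0)
                fit[i] = rmse_of_subset(pos[i] > 0.5)

best = pos[np.argmin(fit)] > 0.5
print("selected variables:", np.flatnonzero(best), "validation RMSE:", fit.min())
```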

  12. Geometric mean for subspace selection.

    PubMed

    Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J

    2009-02-01

Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes, which are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and its several representative extensions.
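
    The quantities compared by these criteria can be stated concretely. The sketch below computes the arithmetic and geometric means of pairwise KL divergences between class-conditional Gaussians with a shared covariance, in which case each KL divergence reduces to half a squared Mahalanobis distance; the class means are synthetic.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_classes, dim = 4, 6
means = rng.standard_normal((n_classes, dim)) * 2.0
cov = np.eye(dim)                         # shared covariance, as in the FLDA setting
cov_inv = np.linalg.inv(cov)

def kl_gauss_shared_cov(mu_a, mu_b):
    """KL(N(mu_a, cov) || N(mu_b, cov)) = 0.5 * (mu_a - mu_b)^T cov^{-1} (mu_a - mu_b)."""
    d = mu_a - mu_b
    return 0.5 * d @ cov_inv @ d

kls = np.array([kl_gauss_shared_cov(means[i], means[j])
                for i, j in combinations(range(n_classes), 2)])

arithmetic_mean = kls.mean()                       # what the FLDA reduction step effectively maximizes
geometric_mean = np.exp(np.log(kls).mean())        # criterion 1 above
print(f"arithmetic mean KL = {arithmetic_mean:.3f}, geometric mean KL = {geometric_mean:.3f}")
```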

  13. Selection Practices of Group Leaders: A National Survey.

    ERIC Educational Resources Information Center

    Riva, Maria T.; Lippert, Laurel; Tackett, M. Jan

    2000-01-01

    Study surveys the selection practices of group leaders. Explores methods of selection, variables used to make selection decisions, and the types of selection errors that leaders have experienced. Results suggest that group leaders use clinical judgment to make selection decisions and endorse using some specific variables in selection. (Contains 22…

  14. Dimensional stability of flakeboards as affected by board specific gravity and flake alignment

    Treesearch

    Robert L. Geimer

    1982-01-01

    The objective was to determine the relationship between the variables specific gravity (SG) and flake alignment and the dimensional stability properties of flakeboard. Boards manufactured without a density gradient were exposed to various levels of relative humidity and a vacuum-pressure soak (VPS) treatment. Changes in moisture content (MC), thickness swelling, and...

  15. Stabilizing l1-norm prediction models by supervised feature grouping.

    PubMed

    Kamkar, Iman; Gupta, Sunil Kumar; Phung, Dinh; Venkatesh, Svetha

    2016-02-01

Emerging Electronic Medical Records (EMRs) have reformed modern healthcare. These records have great potential to be used for building clinical prediction models. However, a problem in using them is their high dimensionality. Since a lot of information may not be relevant for prediction, the underlying complexity of the prediction models may not be high. A popular way to deal with this problem is to employ feature selection. Lasso and l1-norm based feature selection methods have shown promising results. But, in the presence of correlated features, these methods select features that change considerably with small changes in data. This prevents clinicians from obtaining a stable feature set, which is crucial for clinical decision making. Grouping correlated variables together can improve the stability of feature selection; however, such grouping is usually not known and needs to be estimated for optimal performance. Addressing this problem, we propose a new model that can simultaneously learn the grouping of correlated features and perform stable feature selection. We formulate the model as a constrained optimization problem and provide an efficient solution with guaranteed convergence. Our experiments with both synthetic and real-world datasets show that the proposed model is significantly more stable than Lasso and many existing state-of-the-art shrinkage and classification methods. We further show that in terms of prediction performance, the proposed method consistently outperforms Lasso and other baselines. Our model can be used for selecting stable risk factors for a variety of healthcare problems, so it can assist clinicians toward accurate decision making. Copyright © 2015 Elsevier Inc. All rights reserved.
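
    The instability being addressed can be quantified directly: refit an l1-penalized model on resampled data and measure how much the selected set changes. The sketch below does this for a plain Lasso with a Jaccard-style stability score (synthetic data and penalty; the proposed grouping model itself is not implemented here).

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.standard_normal(n)      # two strongly correlated features
y = X[:, 0] + 0.5 * X[:, 5] + rng.standard_normal(n)

def selected_set(Xb, yb, alpha=0.1):
    coef = Lasso(alpha=alpha, max_iter=10000).fit(Xb, yb).coef_
    return frozenset(np.flatnonzero(np.abs(coef) > 1e-8))

sets = []
for _ in range(30):                                     # bootstrap resamples
    idx = rng.integers(0, n, n)
    sets.append(selected_set(X[idx], y[idx]))

jaccard = [len(a & b) / max(len(a | b), 1) for a, b in combinations(sets, 2)]
print(f"mean pairwise Jaccard stability of Lasso selection: {np.mean(jaccard):.2f}")
```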

  16. Mean Comparison: Manifest Variable versus Latent Variable

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2006-01-01

    An extension of multiple correspondence analysis is proposed that takes into account cluster-level heterogeneity in respondents' preferences/choices. The method involves combining multiple correspondence analysis and k-means in a unified framework. The former is used for uncovering a low-dimensional space of multivariate categorical variables…

  17. Unbiased split variable selection for random survival forests using maximally selected rank statistics.

    PubMed

    Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas

    2017-04-15

The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories, and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
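
    To make the split criterion concrete, the sketch below computes a maximally selected rank statistic for a single candidate split variable: a standardized rank-sum statistic is evaluated at every cutpoint and its maximum, together with the corresponding cutpoint, is returned. This is a generic two-sample version for illustration, not the survival-specific implementation used in the paper.

```python
import numpy as np
from scipy.stats import rankdata

def maximally_selected_rank_stat(x, y):
    """Max over cutpoints of the standardized rank-sum statistic of y for the split x <= cut."""
    n = len(x)
    a = rankdata(y)                       # scores: ranks of the response
    a_bar, a_var = a.mean(), a.var(ddof=0)
    best_stat, best_cut = -np.inf, None
    for cut in np.unique(x)[:-1]:         # candidate cutpoints (all but the largest value)
        left = x <= cut
        n1 = left.sum()
        s = a[left].sum()
        mean_s = n1 * a_bar
        var_s = n1 * (n - n1) / (n - 1) * a_var   # hypergeometric variance of the rank sum
        stat = abs(s - mean_s) / np.sqrt(var_s)
        if stat > best_stat:
            best_stat, best_cut = stat, cut
    return best_stat, best_cut

rng = np.random.default_rng(0)
x = rng.uniform(size=200)
y = (x > 0.6).astype(float) + rng.standard_normal(200)   # step effect at x = 0.6
print(maximally_selected_rank_stat(x, y))
```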

  18. A real negative selection algorithm with evolutionary preference for anomaly detection

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Chen, Wen; Li, Tao

    2017-04-01

Traditional real negative selection algorithms (RNSAs) adopt the estimated coverage (c0) as the algorithm termination threshold and generate detectors randomly. With increasing dimensions, the data samples could reside in a low-dimensional subspace, so that the traditional detectors cannot effectively distinguish these samples. Furthermore, in high-dimensional feature space, c0 cannot exactly reflect the detector set's coverage rate of the nonself space, and it can cause the algorithm to terminate unexpectedly when the number of detectors is insufficient. These shortcomings make traditional RNSAs perform poorly in high-dimensional feature space. Based upon "evolutionary preference" theory in immunology, this paper presents a real negative selection algorithm with evolutionary preference (RNSAP). RNSAP utilizes the "unknown nonself space", "low-dimensional target subspace" and "known nonself feature" as the evolutionary preference to guide the generation of detectors, thus ensuring that the detectors can cover the nonself space more effectively. In addition, RNSAP uses redundancy to replace c0 as the termination threshold; in this way, RNSAP can generate adequate detectors at a proper convergence rate. The theoretical analysis and experimental results demonstrate that, compared to the classical RNSA (V-detector), RNSAP can achieve a higher detection rate with fewer detectors and less computing cost.

  19. Quantitative structure-retention relationship studies for taxanes including epimers and isomeric metabolites in ultra fast liquid chromatography.

    PubMed

    Dong, Pei-Pei; Ge, Guang-Bo; Zhang, Yan-Yan; Ai, Chun-Zhi; Li, Guo-Hui; Zhu, Liang-Liang; Luan, Hong-Wei; Liu, Xing-Bao; Yang, Ling

    2009-10-16

Seven pairs of epimers and one pair of isomeric metabolites of taxanes, each pair of which have similar structures but different retention behaviors, together with an additional 13 taxanes with different substitutions, were chosen to investigate the quantitative structure-retention relationship (QSRR) of taxanes in ultra fast liquid chromatography (UFLC). A Monte Carlo variable selection (MCVS) method was adopted to choose descriptors. The four selected descriptors were used to build QSRR models with multi-linear regression (MLR) and artificial neural network (ANN) modeling techniques. Both the linear and nonlinear models show good predictive ability, of which the ANN model was better, with the determination coefficient R(2) for the training, validation and test sets being 0.9892, 0.9747 and 0.9840, respectively. The results of 100 runs of leave-12-out cross-validation showed the robustness of this model. All the isomers can be correctly differentiated by this model. According to the selected descriptors, three-dimensional structural information was critical for recognition of epimers. Hydrophobic interaction was the uppermost factor for retention in UFLC. Molecular polarizability and polarity were also closely correlated with retention behavior. This QSRR model will be useful for separation and identification of taxanes including epimers and metabolites from botanical or biological samples.

  20. Integrative analysis of transcriptomic and metabolomic data via sparse canonical correlation analysis with incorporation of biological information.

    PubMed

    Safo, Sandra E; Li, Shuzhao; Long, Qi

    2018-03-01

    Integrative analysis of high dimensional omics data is becoming increasingly popular. At the same time, incorporating known functional relationships among variables in analysis of omics data has been shown to help elucidate underlying mechanisms for complex diseases. In this article, our goal is to assess association between transcriptomic and metabolomic data from a Predictive Health Institute (PHI) study that includes healthy adults at a high risk of developing cardiovascular diseases. Adopting a strategy that is both data-driven and knowledge-based, we develop statistical methods for sparse canonical correlation analysis (CCA) with incorporation of known biological information. Our proposed methods use prior network structural information among genes and among metabolites to guide selection of relevant genes and metabolites in sparse CCA, providing insight on the molecular underpinning of cardiovascular disease. Our simulations demonstrate that the structured sparse CCA methods outperform several existing sparse CCA methods in selecting relevant genes and metabolites when structural information is informative and are robust to mis-specified structural information. Our analysis of the PHI study reveals that a number of gene and metabolic pathways including some known to be associated with cardiovascular diseases are enriched in the set of genes and metabolites selected by our proposed approach. © 2017, The International Biometric Society.

  1. AtlasCBS: a web server to map and explore chemico-biological space

    NASA Astrophysics Data System (ADS)

    Cortés-Cabrera, Álvaro; Morreale, Antonio; Gago, Federico; Abad-Zapatero, Celerino

    2012-09-01

    New approaches are needed that can help decrease the unsustainable failure in small-molecule drug discovery. Ligand Efficiency Indices (LEI) are making a great impact on early-stage compound selection and prioritization. Given a target-ligand database with chemical structures and associated biological affinities/activities for a target, the AtlasCBS server generates two-dimensional, dynamical representations of its contents in terms of LEI. These variables allow an effective decoupling of the chemical (angular) and biological (radial) components. BindingDB, PDBBind and ChEMBL databases are currently implemented. Proprietary datasets can also be uploaded and compared. The utility of this atlas-like representation in the future of drug design is highlighted with some examples. The web server can be accessed at http://ub.cbm.uam.es/atlascbs and https://www.ebi.ac.uk/chembl/atlascbs.

  2. Adaptive control for a class of nonlinear complex dynamical systems with uncertain complex parameters and perturbations

    PubMed Central

    Liu, Jian; Liu, Kexin; Liu, Shutang

    2017-01-01

In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we have developed a unified framework for designing the adaptive complex scalar controller to ensure that this type of CVCS is asymptotically stable and for selecting complex update laws to estimate unknown complex parameters. In particular, combining Lyapunov functions dependent on complex-valued vectors and the back-stepping technique, sufficient criteria for the stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results. PMID:28467431

  3. Compact silicon diffractive sensor: design, fabrication, and prototype.

    PubMed

    Maikisch, Jonathan S; Gaylord, Thomas K

    2012-07-01

An in-plane constant-efficiency variable-diffraction-angle grating and an in-plane high-angular-selectivity grating are combined to enable a new compact silicon diffractive sensor. This sensor is fabricated in silicon-on-insulator and uses telecommunications wavelengths. A single sensor element has a micron-scale device size and uses intensity-based (as opposed to spectral-based) detection for increased integrability. In-plane diffraction gratings provide an intrinsic splitting mechanism to enable a two-dimensional sensor array. Detection of the relative values of diffracted and transmitted intensities is independent of attenuation and is thus robust. The sensor prototype measures refractive index changes of 10^-4. Simulations indicate that this sensor configuration may be capable of measuring refractive index changes three or four orders of magnitude smaller. The characteristics of this sensor type make it promising for lab-on-a-chip applications.

  4. Adaptive control for a class of nonlinear complex dynamical systems with uncertain complex parameters and perturbations.

    PubMed

    Liu, Jian; Liu, Kexin; Liu, Shutang

    2017-01-01

In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we have developed a unified framework for designing the adaptive complex scalar controller to ensure that this type of CVCS is asymptotically stable and for selecting complex update laws to estimate unknown complex parameters. In particular, combining Lyapunov functions dependent on complex-valued vectors and the back-stepping technique, sufficient criteria for the stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results.

  5. Motions, efforts and actuations in constrained dynamic systems: a multi-link open-chain example

    NASA Astrophysics Data System (ADS)

    Duke Perreira, N.

    1999-08-01

The effort-motion method, which describes the dynamics of open- and closed-chain topologies of rigid bodies interconnected with revolute and prismatic pairs, is interpreted geometrically. Systems are identified for which the simultaneous control of forces and velocities is desirable, and a representative open-chain system is selected for use in the ensuing analysis. Gauge invariant transformations are used to recast the commonly used kinetic and kinematic equations into a dimensional gauge invariant form. Constraint elimination techniques based on singular value decompositions then recast the invariant equations into orthogonal and reciprocal sets of motion and effort equations written in state variable form. The ideal actuation, which simultaneously achieves the obtainable portions of the desired constraining efforts and motions, is found. The performance of using the actuation closest to the ideal actuation is then evaluated.

  6. AtlasCBS: a web server to map and explore chemico-biological space.

    PubMed

    Cortés-Cabrera, Alvaro; Morreale, Antonio; Gago, Federico; Abad-Zapatero, Celerino

    2012-09-01

    New approaches are needed that can help decrease the unsustainable failure in small-molecule drug discovery. Ligand Efficiency Indices (LEI) are making a great impact on early-stage compound selection and prioritization. Given a target-ligand database with chemical structures and associated biological affinities/activities for a target, the AtlasCBS server generates two-dimensional, dynamical representations of its contents in terms of LEI. These variables allow an effective decoupling of the chemical (angular) and biological (radial) components. BindingDB, PDBBind and ChEMBL databases are currently implemented. Proprietary datasets can also be uploaded and compared. The utility of this atlas-like representation in the future of drug design is highlighted with some examples. The web server can be accessed at http://ub.cbm.uam.es/atlascbs and https://www.ebi.ac.uk/chembl/atlascbs.

  7. A non-linear optimization programming model for air quality planning including co-benefits for GHG emissions.

    PubMed

    Turrini, Enrico; Carnevale, Claudio; Finzi, Giovanna; Volta, Marialuisa

    2018-04-15

This paper introduces the MAQ (Multi-dimensional Air Quality) model, aimed at defining cost-effective air quality plans at different scales (urban to national) and assessing the co-benefits for GHG emissions. The model implements and solves a non-linear multi-objective, multi-pollutant decision problem where the decision variables are the application levels of emission abatement measures allowing the reduction of energy consumption, end-of-pipe technologies and fuel switch options. The objectives of the decision problem are the minimization of tropospheric secondary pollution exposure and of internal costs. The model assesses CO2-equivalent emissions in order to support decision makers in the selection of win-win policies. The methodology is tested on the Lombardy region, a heavily polluted area in northern Italy. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Large Eddy Simulation of Spatially Developing Turbulent Reacting Shear Layers with the One-Dimensional Turbulence Model

    NASA Astrophysics Data System (ADS)

    Hoffie, Andreas Frank

Large eddy simulation (LES) combined with the one-dimensional turbulence (ODT) model is used to simulate spatially developing turbulent reacting shear layers with high heat release and high Reynolds numbers. The LES-ODT results are compared to results from direct numerical simulations (DNS), for model development and validation purposes. The LES-ODT approach is based on LES solutions for momentum and pressure on a coarse grid and solutions for momentum and reactive scalars on a fine, one-dimensional, but three-dimensionally coupled ODT subgrid, which is embedded into the LES computational domain. Although one-dimensional, all three velocity components are transported along the ODT domain. The low-dimensional spatial and temporal resolution of the subgrid scales describes a new modeling paradigm, referred to as autonomous microstructure evolution (AME) models, which resolve the multiscale nature of turbulence down to the Kolmogorov scales. While this new concept aims to mimic the turbulent cascade and to reduce the number of input parameters, AME also enables regime-independent combustion modeling, capable of simulating multiphysics problems simultaneously. The LES as well as the one-dimensional transport equations are solved using an incompressible, low Mach number approximation; however, the effects of heat release are accounted for through variable density computed by the ideal gas equation of state, based on temperature variations. The computations are carried out on a three-dimensional structured mesh, which is stretched in the transverse direction. While the LES momentum equation is integrated with a third-order Runge-Kutta time-integration, the time integration at the ODT level is accomplished with an explicit forward-Euler method. Spatial finite-difference schemes of third (LES) and first (ODT) order are utilized and a fully consistent fractional-step method at the LES level is used. Turbulence closure at the LES level is achieved by utilizing the Smagorinsky model. The chemical reaction is simulated with a global single-step, second-order equilibrium reaction with an Arrhenius reaction rate. The two benchmark cases of constant density reacting and variable density non-reacting shear layers used to determine ODT parameters yield perfect agreement with regard to first- and second-order flow statistics as well as shear layer growth rate. The variable density non-reacting shear layer also serves as a testing case for the LES-ODT model to simulate passive scalar mixing. The variable density, reacting shear layer cases only agree reasonably well and indicate that more work is necessary to improve variable density coupling of ODT and LES. The disagreement is attributed to the fact that the ODT filtered density is kept constant across the Runge-Kutta steps. Furthermore, a more in-depth knowledge of large scale and subgrid turbulent kinetic energy (TKE) spectra at several downstream locations as well as TKE budgets need to be studied to obtain a better understanding about the model as well as about the flow under investigation. The local Reynolds number based on the one-percent thickness at the exit is Re_δ ≈ 5300 for the constant density reacting and the variable density non-reacting cases. For the variable density reacting shear layer, the Reynolds number based on the 1% thickness is Re_δ ≈ 2370. The variable density reacting shear layers show suppressed growth rates due to density variations caused by heat release. This has also been reported in the literature.
A Lewis number parameter study is performed to extract non-unity Lewis number effects. An increase in the Lewis number leads to a further suppression of the growth rate, but to an increased spread of second-order flow statistics. A major focus and challenge of this work is to improve and advance the three-dimensional coupling of the one-dimensional ODT domains while keeping the solution correct. This entails major restructuring of the model. The turbulent reacting shear layer poses a physical challenge to the model because it is a statistically stationary, non-decaying, inhomogeneous and anisotropic turbulent flow. This challenge also requires additions to the eddy sampling procedure. Besides physical advancements, the LES-ODT code is also improved regarding its ability to use general cuboid geometries, an array structure that allows boundary conditions to be applied based on ghost cells, and non-uniform structured meshes. The use of transverse grid-stretching requires the implementation of the ODT triplet map on a stretched grid. Further, advances in the subroutine structure and the handling of global variables that enable serial code speed-up and parallelization with OpenMP are undertaken. Porting the code to a higher-level, object-oriented, finite-volume-based CFD platform such as OpenFOAM, which allows more advanced array and parallelization features with graphics processing units (GPUs) as well as parallelization with the message passing interface (MPI) to simulate complex geometries, is recommended for future work.

  9. 75 FR 77885 - Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-14

    ... of federally-funded research and development. Foreign patent applications are filed on selected... applications. Software System for Quantitative Assessment of Vasculature in Three Dimensional Images... three dimensional vascular networks from medical and basic research images. Deregulation of angiogenesis...

  10. Carbon dioxide separation with a two-dimensional polymer membrane.

    PubMed

    Schrier, Joshua

    2012-07-25

Carbon dioxide gas separation is important for many environmental and energy applications. Molecular dynamics simulations are used to characterize a two-dimensional hydrocarbon polymer, PG-ES1, that uses a combination of surface adsorption and narrow pores to separate carbon dioxide from nitrogen, oxygen, and methane gases. The CO2 permeance is 3 × 10^5 gas permeation units (GPU). The CO2/N2 selectivity is 60, and the CO2/CH4 selectivity exceeds 500. The combination of high CO2 permeance and selectivity surpasses all known materials, enabling low-cost postcombustion CO2 capture, utilization of landfill gas, and horticulture applications.

  11. Microstructure from ferroelastic transitions using strain pseudospin clock models in two and three dimensions: A local mean-field analysis

    NASA Astrophysics Data System (ADS)

    Vasseur, Romain; Lookman, Turab; Shenoy, Subodh R.

    2010-09-01

We show how microstructure can arise in first-order ferroelastic structural transitions, in two and three spatial dimensions, through a local mean-field approximation of their pseudospin Hamiltonians, which include anisotropic elastic interactions. Such transitions have symmetry-selected physical strains as their N_OP-component order parameters, with Landau free energies that have a single zero-strain "austenite" minimum at high temperatures, and spontaneous-strain "martensite" minima of N_V structural variants at low temperatures. The total free energy also has gradient terms, and power-law anisotropic effective interactions, induced by "no-dislocation" St Venant compatibility constraints. In a reduced description, the strains at the Landau minima induce temperature-dependent, clocklike Z_{N_V+1} Hamiltonians, with N_OP-component strain-pseudospin vectors S⃗ pointing to N_V+1 discrete values (including zero). We study elastic texturing in five such first-order structural transitions through a local mean-field approximation of their pseudospin Hamiltonians, which include the power-law interactions. As a prototype, we consider the two-variant square/rectangle transition, with a one-component pseudospin taking N_V+1=3 values of S=0,±1, as in a generalized Blume-Capel model. We then consider transitions with two-component (N_OP=2) pseudospins: the equilateral to centered rectangle (N_V=3); the square to oblique polygon (N_V=4); the triangle to oblique (N_V=6) transitions; and finally the three-dimensional (3D) cubic to tetragonal transition (N_V=3). The local mean-field solutions in 2D and 3D yield oriented domain-wall patterns as from continuous-variable strain dynamics, showing that the discrete-variable models capture the essential ferroelastic texturings. Other related Hamiltonians illustrate that structural transitions in materials science can be the source of interesting spin models in statistical mechanics.

  12. Mid-latitude ionospheric irregularity spectral density as determined by ground-based GPS receiver networks

    DOE PAGES

    Lay, Erin H.; Parker, Peter A.; Light, Max; ...

    2018-05-22

In this paper, we present a new technique to experimentally measure the spatial spectrum of ionospheric disturbances in the spatial scale regime of 40 – 200 km. This technique produces a 2-dimensional (2-D) spectrum for each time snapshot over two dense GPS receiver networks (GEONET in Japan and PBO in the Western U.S.). Because this technique creates the spectrum from an instantaneous time snapshot, no assumptions are needed about the speed of ionospheric irregularities. We examine spectra from three days: one with an intense geomagnetic storm, one with significant lightning activity, and one quiet day. Radial slices along the 2-D spectra provide 1-dimensional spectra that can be fit to a power law to quantify the steepness of the fall-off in the spatial scale sizes. Continuous data of this type in a stationary location allows monitoring the variability in the 2-D spectrum over the course of a day and comparing between days, as shown here, or even over a year or many years. We find that the spectra are highly variable over the course of a day and between the two selected regions of Japan and the Western U.S. When strong travelling ionospheric disturbances (TIDs) are present, the 2-D spectra provide information about the direction of propagation of the TIDs. We compare the TID propagation direction with horizontal wind directions from the Horizontal Wind Model. Finally, TID direction is correlated with the horizontal wind direction on all days, strongly indicating that the primary source of the TIDs measured by this technique is tropospheric.
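
    A schematic of the processing chain described above: take one time snapshot of a detrended TEC map on a regular grid, form its 2-D power spectrum, extract a radial slice, and fit a power law over the 40-200 km scale range. The grid spacing and the synthetic snapshot are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, dx_km = 128, 10.0                        # 128 x 128 grid, 10 km spacing (assumed)
tec = rng.standard_normal((nx, nx))          # stand-in for a detrended TEC snapshot

# 2-D power spectrum of the snapshot
spec2d = np.abs(np.fft.fftshift(np.fft.fft2(tec))) ** 2
k = np.fft.fftshift(np.fft.fftfreq(nx, d=dx_km))         # spatial frequency in cycles per km

# radial slice along the +kx axis (other azimuths give direction information)
centre = nx // 2
kx = k[centre + 1:]
slice_power = spec2d[centre, centre + 1:]

# restrict to the 40-200 km scale range and fit a power law P(k) ~ k^alpha
mask = (kx >= 1 / 200.0) & (kx <= 1 / 40.0)
alpha, logA = np.polyfit(np.log(kx[mask]), np.log(slice_power[mask]), 1)
print(f"spectral slope over 40-200 km scales: {alpha:.2f}")
```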

  13. Alterations of papilla dimensions after orthodontic closure of the maxillary midline diastema: a retrospective longitudinal study

    PubMed Central

    2016-01-01

Purpose: The aim of this study was to evaluate alterations of papilla dimensions after orthodontic closure of the diastema between maxillary central incisors. Methods: Sixty patients who had a visible diastema between maxillary central incisors that had been closed by orthodontic approximation were selected for this study. Various papilla dimensions were assessed on clinical photographs and study models before the orthodontic treatment and at the follow-up examination after closure of the diastema. Influences of the variables assessed before orthodontic treatment on the alterations of papilla height (PH) and papilla base thickness (PBT) were evaluated by univariate regression analysis. To analyze potential influences of the 3-dimensional papilla dimensions before orthodontic treatment on the alterations of PH and PBT, a multiple regression model was formulated including the 3-dimensional papilla dimensions as predictor variables. Results: On average, PH decreased by 0.80 mm and PBT increased after orthodontic closure of the diastema (P<0.01). Univariate regression analysis revealed that the PH (P=0.002) and PBT (P=0.047) before orthodontic treatment influenced the alteration of PH. With respect to the alteration of PBT, the diastema width (P=0.045) and PBT (P=0.000) were found to be influential factors. PBT before the orthodontic treatment significantly influenced the alteration of PBT in the multiple regression model. Conclusions: PH decreased but PBT increased after orthodontic closure of the diastema. The papilla dimensions before orthodontic treatment influenced the alterations of PH and PBT after closure of the diastema. The PBT increased more when the diastema width before the orthodontic treatment was larger. PMID:27382507

  14. Emirates Mars Ultraviolet Spectrometer (EMUS) Overview from the Emirates Mars Mission

    NASA Astrophysics Data System (ADS)

    Lootah, F. H.; Almatroushi, H. R.; AlMheiri, S.; Holsclaw, G.; Deighan, J.; Chaffin, M.; Reed, H.; Lillis, R. J.; Fillingim, M. O.; England, S.

    2017-12-01

    The Emirates Mars Ultraviolet Spectrometer (EMUS) instrument is one of three science instruments on board the "Hope Probe" of the Emirates Mars Mission (EMM). EMM is a United Arab Emirates' (UAE) mission to Mars, launching in 2020, to explore the global dynamics of the Martian atmosphere, while sampling on both diurnal and seasonal timescales. The EMUS instrument is a far-ultraviolet imaging spectrograph that measures emissions in the spectral range 100-170 nm. Using a combination of its one-dimensional imaging and spacecraft motion, it will build up two-dimensional far-ultraviolet images of the Martian disk and near-space environment at several important wavelengths: the Lyman beta atomic hydrogen emission (102.6 nm), the Lyman alpha atomic hydrogen emission (121.6 nm), two atomic oxygen emissions (130.4 nm and 135.6 nm), and the carbon monoxide fourth positive group band emission (140 nm-170 nm). Radiances at these wavelengths will be used to derive the column abundance of atomic oxygen, and carbon monoxide in the Martian thermosphere, and the density of atomic oxygen and atomic hydrogen in the Martian exosphere both with spatial and sub-seasonal variability. The EMUS instrument consists of a single telescope mirror feeding a Rowland circle imaging spectrograph with selectable spectral resolution (1.3 nm, 1.8 nm, or 5 nm), and a photon-counting and locating detector (provided by the Space Sciences Laboratory at the University of California, Berkeley). The EMUS spatial resolution of less than 300 km on the disk is sufficient to characterize spatial variability in the Martian thermosphere (100-200 km altitude) and exosphere (>200 km altitude). The instrument is jointly developed by the Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado Boulder and Mohammed Bin Rashid Space Centre (MBRSC) in Dubai, UAE.

  15. Mid-latitude ionospheric irregularity spectral density as determined by ground-based GPS receiver networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lay, Erin H.; Parker, Peter A.; Light, Max

In this paper, we present a new technique to experimentally measure the spatial spectrum of ionospheric disturbances in the spatial scale regime of 40 – 200 km. This technique produces a 2-dimensional (2-D) spectrum for each time snapshot over two dense GPS receiver networks (GEONET in Japan and PBO in the Western U.S.). Because this technique creates the spectrum from an instantaneous time snapshot, no assumptions are needed about the speed of ionospheric irregularities. We examine spectra from three days: one with an intense geomagnetic storm, one with significant lightning activity, and one quiet day. Radial slices along the 2-D spectra provide 1-dimensional spectra that can be fit to a power law to quantify the steepness of the fall-off in the spatial scale sizes. Continuous data of this type in a stationary location allows monitoring the variability in the 2-D spectrum over the course of a day and comparing between days, as shown here, or even over a year or many years. We find that the spectra are highly variable over the course of a day and between the two selected regions of Japan and the Western U.S. When strong travelling ionospheric disturbances (TIDs) are present, the 2-D spectra provide information about the direction of propagation of the TIDs. We compare the TID propagation direction with horizontal wind directions from the Horizontal Wind Model. Finally, TID direction is correlated with the horizontal wind direction on all days, strongly indicating that the primary source of the TIDs measured by this technique is tropospheric.

  16. Mixed ligand two dimensional Cd(ii)/Ni(ii) metal organic frameworks containing dicarboxylate and tripodal N-donor ligands: Cd(ii) MOF is an efficient luminescent sensor for detection of picric acid in aqueous media.

    PubMed

    Rachuri, Yadagiri; Parmar, Bhavesh; Bisht, Kamal Kumar; Suresh, Eringathodi

    2016-05-04

    Two dimensional metal organic frameworks (MOFs) [Cd(5-BrIP)(TIB)]n () and [Ni2(5-BrIP)2(TIB)2]n (), involving the aromatic polycarboxylate ligand 5-bromoisophthalic acid (H2BrIP), the flexible tripodal ligand 1,3,5-tris(imidazol-1-ylmethyl)benzene (TIB) and Cd(ii)/Ni(ii) metal nodes, have been synthesized by different methods. These compounds were characterized by various analytical methods, and variable temperature X-ray diffraction data showed thermal stability of both MOFs up to 350 °C. Phase purity and water stability of the MOFs were established by powder X-ray diffraction, and the structural diversity of the compounds was investigated by single-crystal X-ray diffraction. Both MOFs are mixed ligand 2D nets, and the topology of the network can be described as a binodal 3,5-c connected net with 3,5L2 topology having the point symbol {4(2)·6(7)·8}{4(2)·6}. Sensing of picric acid [2,4,6-trinitrophenol, TNP] by luminescence quenching, among a wide range of nitro analytes in the aqueous phase, by the Cd(ii) luminescent MOF (LMOF) was investigated. Structural studies on 1 : 1 co-crystals () of TIB and TNP were also carried out. The selective and sensitive fluorescence quenching response of towards electron-deficient TNP over other nitro analytes in the aqueous phase was demonstrated by fluorescence quenching titration. The concomitant occurrence of electron transfer/energy transfer processes and electrostatic interaction favours the selective sensing of TNP. A paper strip coated with the Cd(ii) LMOF () demonstrated a fast and selective response to TNP, with complete quenching of the blue fluorescence upon excitation at 365 nm in its presence.
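    The abstract does not give the quantitative treatment of the quenching titration, but titrations of this kind are commonly analysed with the Stern-Volmer relation shown below, where I_0 and I are the emission intensities before and after adding the quencher Q (here TNP) and K_SV is the quenching constant; this is offered only as standard background, not as the paper's reported analysis.

```latex
% Standard Stern-Volmer relation (illustrative background, not taken from the paper)
\[
  \frac{I_0}{I} \;=\; 1 + K_{\mathrm{SV}}\,[\mathrm{Q}]
\]
```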

  17. CORRELATION PURSUIT: FORWARD STEPWISE VARIABLE SELECTION FOR INDEX MODELS

    PubMed Central

    Zhong, Wenxuan; Zhang, Tingting; Zhu, Yu; Liu, Jun S.

    2012-01-01

    In this article, a stepwise procedure, correlation pursuit (COP), is developed for variable selection under the sufficient dimension reduction framework, in which the response variable Y is influenced by the predictors X1, X2, …, Xp through an unknown function of a few linear combinations of them. Unlike linear stepwise regression, COP does not impose a special form of relationship (such as linear) between the response variable and the predictor variables. The COP procedure selects variables that attain the maximum correlation between the transformed response and the linear combination of the variables. Various asymptotic properties of the COP procedure are established; in particular, its variable selection performance under a diverging number of predictors and sample size is investigated. The excellent empirical performance of the COP procedure in comparison with existing methods is demonstrated by both extensive simulation studies and a real example in functional genomics. PMID:23243388
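    The full COP algorithm is not reproduced in the abstract; the sketch below, under simplifying assumptions, illustrates the general idea of a forward stepwise search that at each step adds the predictor giving the largest gain in squared correlation between a coarsened (sliced) version of the response and the best linear combination of the selected predictors. The slicing scheme, stopping rule and threshold are illustrative stand-ins for the procedure's actual criteria.

```python
import numpy as np

def cop_like_forward_selection(X, y, n_slices=5, gain_threshold=0.01, max_vars=10):
    """Toy forward stepwise selection in the spirit of correlation pursuit."""
    n, p = X.shape
    # Crude surrogate for the unknown transform of the response: replace each
    # observation by the mean of its ordered slice.
    y_t = np.empty(n)
    for idx in np.array_split(np.argsort(y), n_slices):
        y_t[idx] = y[idx].mean()

    def profile_corr(cols):
        beta, *_ = np.linalg.lstsq(X[:, cols], y_t, rcond=None)
        fitted = X[:, cols] @ beta
        return np.corrcoef(fitted, y_t)[0, 1] ** 2

    selected, current = [], 0.0
    while len(selected) < max_vars:
        scores = {j: profile_corr(selected + [j])
                  for j in range(p) if j not in selected}
        best = max(scores, key=scores.get)
        if scores[best] - current < gain_threshold:   # stop when the gain is negligible
            break
        selected.append(best)
        current = scores[best]
    return selected
```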

  18. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other, and a procedure is developed to fulfil the two selection tasks sequentially. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables is identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the model input selection results, calibration data selection is studied. Uncertainty in model performance due to calibration data selection is investigated with a random selection method. A cluster-based approach is applied to enhance model calibration practice, following the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content in the calibration data is important in addition to its size.
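    As an illustration of the two-stage procedure described above, the following sketch (under assumed data structures, not the study's code) first selects input variables by cross-validated forward selection and then picks representative calibration events with a clustering step; the regression model, cluster count and stopping rule are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def select_inputs(X, y, max_inputs=5):
    """Forward selection of model inputs scored by cross-validated R^2,
    so the model is neither overfitted nor underfitted."""
    selected, best = [], -np.inf
    for _ in range(max_inputs):
        scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], y,
                                     cv=5, scoring="r2").mean()
                  for j in range(X.shape[1]) if j not in selected}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:          # no cross-validation improvement -> stop
            break
        selected.append(j_best)
        best = scores[j_best]
    return selected

def select_calibration_events(X_inputs, n_events=20, seed=0):
    """Cluster the candidate events and keep the one closest to each centre,
    so the calibration set covers the range of observed conditions."""
    km = KMeans(n_clusters=n_events, n_init=10, random_state=seed).fit(X_inputs)
    picks = {int(np.argmin(np.linalg.norm(X_inputs - c, axis=1)))
             for c in km.cluster_centers_}
    return sorted(picks)
```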

  19. [Rapid prototyping: a very promising method].

    PubMed

    Haverman, T M; Karagozoglu, K H; Prins, H-J; Schulten, E A J M; Forouzanfar, T

    2013-03-01

    Rapid prototyping is a method which makes it possible to produce a three-dimensional model based on two-dimensional imaging. Various rapid prototyping methods are available for modelling, such as stereolithography, selective laser sintering, direct laser metal sintering, two-photon polymerization, laminated object manufacturing, three-dimensional printing, three-dimensional plotting, polyjet inkjet technology, fused deposition modelling, vacuum casting and milling. The various methods currently being used in the biomedical sector differ in production, materials and properties of the three-dimensional model which is produced. Rapid prototyping is mainly used for preoperative planning, simulation, education, and research into and development of bioengineering possibilities.

  20. Probabilistic and spatially variable niches inferred from demography

    Treesearch

    Jeffrey M. Diez; Itamar Giladi; Robert Warren; H. Ronald Pulliam

    2014-01-01

    Summary 1. Mismatches between species distributions and habitat suitability are predicted by niche theory and have important implications for forecasting how species may respond to environmental changes. Quantifying these mismatches is challenging, however, due to the high dimensionality of species niches and the large spatial and temporal variability in population...
