Effective dimension reduction for sparse functional data
YAO, F.; LEI, E.; WU, Y.
2015-01-01
Summary We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space, via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method. PMID:26566293
Cluster Correspondence Analysis.
van de Velden, M; D'Enza, A Iodice; Palumbo, F
2017-03-01
A method is proposed that combines dimension reduction and cluster analysis for categorical data by simultaneously assigning individuals to clusters and optimal scaling values to categories in such a way that a single between variance maximization objective is achieved. In a unified framework, a brief review of alternative methods is provided and we show that the proposed method is equivalent to GROUPALS applied to categorical data. Performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with the so-called tandem approach, a sequential analysis of dimension reduction followed by cluster analysis. The tandem approach is conjectured to perform worse when variables are added that are unrelated to the cluster structure. Our simulation study confirms this conjecture. Moreover, the results of the simulation study indicate that the proposed method also consistently outperforms alternative joint dimension reduction and clustering methods.
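As a rough illustration of the tandem approach critiqued above, the sketch below runs dimension reduction first and clustering second. This is a minimal stand-in: PCA on indicator-coded categories approximates the optimal-scaling step, and the toy data and parameter choices are assumptions for illustration.

```python
# Tandem approach: reduce dimension first, then cluster the scores.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# toy categorical data: 200 individuals, 5 variables with 3 levels each
X = pd.DataFrame(rng.integers(0, 3, size=(200, 5)).astype(str))
Z = pd.get_dummies(X).to_numpy(dtype=float)        # indicator (dummy) coding

scores = PCA(n_components=2).fit_transform(Z)      # step 1: dimension reduction
labels = KMeans(n_clusters=3, n_init=10).fit_predict(scores)  # step 2: clustering
```

A joint method such as cluster correspondence analysis instead alternates the scaling and clustering steps so that a single between-variance objective is maximized, which is what protects it from the masking variables that degrade the tandem shortcut.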
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing a multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of the high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the Fast Fourier Transform (FFT) technique is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
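A minimal sketch of the classical SRM that the unified formulation generalizes is given below; the rational spectrum and all parameters are assumptions for illustration. The dimension-reduction variants replace the N independent random phases with deterministic "random functions" of only two or three elementary random variables, which is not reproduced here.

```python
# Classical spectral representation of a scalar stationary Gaussian process.
import numpy as np

def srm_sample(t, S, w_max=4 * np.pi, N=512, rng=None):
    """S: two-sided power spectral density, evaluated on w >= 0."""
    rng = rng or np.random.default_rng()
    dw = w_max / N
    w = (np.arange(N) + 0.5) * dw                   # frequency grid
    amp = np.sqrt(2.0 * S(w) * dw)                  # component amplitudes
    phi = rng.uniform(0.0, 2.0 * np.pi, N)          # N i.i.d. random phases
    return np.sqrt(2.0) * np.sum(amp * np.cos(np.outer(t, w) + phi), axis=1)

t = np.linspace(0.0, 10.0, 1000)
x = srm_sample(t, lambda w: 1.0 / (1.0 + w ** 2))   # toy rational spectrum
```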
Dimension reduction method for SPH equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, Alexandre M.; Scheibe, Timothy D.
2011-08-26
A Smoothed Particle Hydrodynamics (SPH) model of a complex multiscale process often results in a system of ODEs with an enormous number of unknowns. Furthermore, time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude. A direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for the evolution of average variables (e.g., average velocity, concentration and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of effective equations. The computational closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation. An SPH model is used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the dimension reduction model. The method significantly accelerates SPH simulations, while providing an accurate approximation of the solution and an accurate prediction of the average behavior of the system.
An adaptive band selection method for dimension reduction of hyper-spectral remote sensing image
NASA Astrophysics Data System (ADS)
Yu, Zhijie; Yu, Hui; Wang, Chen-sheng
2014-11-01
Hyper-spectral remote sensing data are acquired by imaging the same area at multiple wavelengths, and a dataset normally consists of hundreds of band-images. Hyper-spectral images provide not only spatial information but also high-resolution spectral information, and they have been widely used in environment monitoring, mineral investigation and military reconnaissance. However, because of the correspondingly large data volume, it is very difficult to transmit and store hyper-spectral images, and dimension reduction techniques are needed to resolve this problem. Because of the high correlation and high redundancy among hyper-spectral bands, applying dimension reduction to compress the data volume is feasible. This paper proposes a novel band selection-based dimension reduction method which adaptively selects the bands that contain more information and detail. The proposed method is based on principal component analysis (PCA); it computes an index for every band, and the resulting indexes are ranked in order of magnitude from large to small. Based on a threshold, the system can then adaptively and reasonably select bands. The proposed method overcomes the shortcomings of transform-based dimension reduction methods and prevents the original spectral information from being lost. The performance of the proposed method has been validated by several experiments. The experimental results show that the proposed algorithm can reduce the dimension of a hyper-spectral image with little information loss by adaptively selecting band images.
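The sketch below captures the general shape of such a PCA-driven band index: score each band by its weighted loading on the leading components, rank, and keep the top fraction. The index definition and threshold are assumptions, not the paper's exact formulas.

```python
# Band selection: PCA-based index per band, ranked and thresholded.
import numpy as np

def select_bands(cube, n_pc=3, keep_fraction=0.2):
    # cube: (n_pixels, n_bands) matrix of spectra
    X = cube - cube.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    weights = s[:n_pc] ** 2 / np.sum(s ** 2)     # explained-variance weights
    index = np.abs(Vt[:n_pc]).T @ weights        # one score per band
    order = np.argsort(index)[::-1]              # rank from large to small
    n_keep = max(1, int(keep_fraction * cube.shape[1]))
    return np.sort(order[:n_keep])               # indices of retained bands

bands = select_bands(np.random.rand(5000, 200))  # toy 200-band cube
```

Because the output is a subset of the original bands rather than a linear mixture of them, the retained data keep their physical spectral meaning, which is the advantage claimed over transform-based reduction.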
A greedy algorithm for species selection in dimension reduction of combustion chemistry
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.
2010-09-01
Computational calculations of combustion problems involving large numbers of species and reactions with a detailed description of the chemistry can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species; and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.
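The greedy strategy itself is generic and easy to state in code. In the sketch below, reduction_error is a hypothetical stand-in for the RCCE dimension-reduction error evaluated on PaSR test cases; everything else is the plain greedy loop.

```python
# Greedy forward selection: at each step, add the candidate species that
# yields the smallest dimension-reduction error when constrained.
def greedy_select(candidates, n_constraints, reduction_error):
    chosen = []
    for _ in range(n_constraints):
        best = min((c for c in candidates if c not in chosen),
                   key=lambda c: reduction_error(chosen + [c]))
        chosen.append(best)
    return chosen

# e.g. greedy_select(species, 10, rcce_error) costs on the order of
# n_constraints * len(species) error evaluations, versus the
# combinatorially many needed to find the truly optimal set.
```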
Prediction With Dimension Reduction of Multiple Molecular Data Sources for Patient Survival.
Kaplan, Adam; Lock, Eric F
2017-01-01
Predictive modeling from high-dimensional genomic data is often preceded by a dimension reduction step, such as principal component analysis (PCA). However, the application of PCA is not straightforward for multisource data, wherein multiple sources of 'omics data measure different but related biological components. In this article, we use recent advances in the dimension reduction of multisource data for predictive modeling. In particular, we apply exploratory results from Joint and Individual Variation Explained (JIVE), an extension of PCA for multisource data, for prediction of differing response types. We conduct illustrative simulations to illustrate the practical advantages and interpretability of our approach. As an application example, we consider predicting survival for patients with glioblastoma multiforme from 3 data sources measuring messenger RNA expression, microRNA expression, and DNA methylation. We also introduce a method to estimate JIVE scores for new samples that were not used in the initial dimension reduction and study its theoretical properties; this method is implemented in the R package R.JIVE on CRAN, in the function jive.predict.
An estimating equation approach to dimension reduction for longitudinal data
Xu, Kelin; Guo, Wensheng; Xiong, Momiao; Zhu, Liping; Jin, Li
2016-01-01
Sufficient dimension reduction has been extensively explored in the context of independent and identically distributed data. In this article we generalize sufficient dimension reduction to longitudinal data and propose an estimating equation approach to estimating the central mean subspace. The proposed method accounts for the covariance structure within each subject and improves estimation efficiency when the covariance structure is correctly specified. Even if the covariance structure is misspecified, our estimator remains consistent. In addition, our method relaxes distributional assumptions on the covariates and is doubly robust. To determine the structural dimension of the central mean subspace, we propose a Bayesian-type information criterion. We show that the estimated structural dimension is consistent and that the estimated basis directions are root-$n$ consistent, asymptotically normal and locally efficient. Simulations and an analysis of the Framingham Heart Study data confirm the effectiveness of our approach. PMID:27017956
NASA Astrophysics Data System (ADS)
Aytaç Korkmaz, Sevcan; Binol, Hamidullah
2018-03-01
Stomach cancer remains a cause of patient deaths, and early diagnosis is crucial for reducing the mortality rate of cancer patients. Therefore, computer-aided methods for early detection are developed in this article. Stomach cancer images were obtained from the Fırat University Medical Faculty Pathology Department. The Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG) features of these images are calculated. Sammon mapping, Stochastic Neighbor Embedding (SNE), Isomap, classical multidimensional scaling (MDS), Local Linear Embedding (LLE), Linear Discriminant Analysis (LDA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Laplacian Eigenmaps are then used for dimension reduction of the features: the high-dimensional features are reduced to lower dimensions using these methods. Artificial neural network (ANN) and Random Forest (RF) classifiers were used to classify the stomach cancer images with these new, lower-dimensional feature sets. New medical systems were developed to measure the effect of dimensionality by obtaining features at different dimensions with the dimension reduction methods. When all the developed methods are compared, the best accuracy results are obtained with the LBP_MDS_ANN and LBP_LLE_ANN combinations.
Active Subspace Methods for Data-Intensive Inverse Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qiqi
2017-04-27
The project has developed theory and computational tools to exploit active subspaces to reduce the dimension in statistical calibration problems. This dimension reduction enables MCMC methods to calibrate otherwise intractable models. The same theoretical and computational tools can also reduce the measurement dimension for calibration problems that use large stores of data.
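A minimal sketch of the standard active-subspace construction is shown below: estimate C = E[grad f grad f^T] by Monte Carlo and keep the leading eigenvectors. The gradient function and the dimension k are assumptions; the project's actual tooling is not reproduced here.

```python
# Active subspace from sampled gradients of a scalar model f(x).
import numpy as np

def active_subspace(grad_f, sample_inputs, k=2):
    G = np.array([grad_f(x) for x in sample_inputs])  # (M, d) gradients
    C = G.T @ G / len(G)                              # Monte Carlo E[grad grad^T]
    eigval, eigvec = np.linalg.eigh(C)
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]    # descending order
    return eigval, eigvec[:, :k]                      # active-subspace basis W

# MCMC for calibration can then explore y = W.T @ x in k dimensions
# rather than the full d-dimensional parameter space.
```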
New Trends in Television Consumption.
ERIC Educational Resources Information Center
Richeri, Giuseppe
A phenomenon which tends to transform the function and methods of traditional television consumption is the gradual reduction of its "mass" dimensions, which tend to disappear for an increasing share of the audience. This reduction of the mass dimension ranges from fragmentation of the audience to its segmentation, and, in the most…
On the connection between multigrid and cyclic reduction
NASA Technical Reports Server (NTRS)
Merriam, M. L.
1984-01-01
A technique is shown whereby it is possible to relate a particular multigrid process to cyclic reduction using purely mathematical arguments. This technique suggests methods for solving Poisson's equation in one, two, or three dimensions with Dirichlet or Neumann boundary conditions. In one dimension the method is exact and, in fact, reduces to cyclic reduction. This provides a valuable reference point for understanding multigrid techniques. The particular multigrid process analyzed is referred to here as Approximate Cyclic Reduction (ACR) and is one of a class known in the literature as multigrid reduction methods. It involves one approximation with a known error term. It is possible to relate the error term in this approximation to certain eigenvector components of the error, which are sharply reduced in amplitude by classical relaxation techniques. The approximation can thus be made a very good one.
Zhao, Mingbo; Zhang, Zhao; Chow, Tommy W S; Li, Bing
2014-07-01
Dealing with high-dimensional data has always been a major problem in pattern recognition and machine learning research, and Linear Discriminant Analysis (LDA) is one of the most popular methods for dimension reduction. However, it only uses labeled samples while neglecting unlabeled samples, which are abundant and can be easily obtained in the real world. In this paper, we propose a new dimension reduction method, called "SL-LDA", which uses unlabeled samples to enhance the performance of LDA. The new method first propagates label information from the labeled set to the unlabeled set via a label propagation process, where the predicted labels of unlabeled samples, called "soft labels", are obtained. It then incorporates the soft labels into the construction of the scatter matrices to find a transformation matrix for dimension reduction. In this way, the proposed method preserves more discriminative information, which is preferable when solving classification problems. We further propose an efficient approach for solving SL-LDA under a least squares framework, and a flexible variant of SL-LDA (FSL-LDA) to better cope with datasets sampled from a nonlinear manifold. Extensive simulations are carried out on several datasets, and the results show the effectiveness of the proposed method.
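A rough stand-in for this two-stage recipe can be assembled from scikit-learn, as sketched below: label propagation supplies labels for the unlabeled samples, and LDA is then fit on the completed label set. Note the simplification: SL-LDA weights the scatter matrices by the soft labels themselves, whereas this sketch hard-thresholds them through LDA's standard interface.

```python
# Semi-supervised LDA via label propagation (SL-LDA-flavored sketch).
import numpy as np
from sklearn.semi_supervised import LabelSpreading
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def sl_lda_like(X, y, n_components=None):
    # y: class labels, with -1 marking unlabeled samples
    prop = LabelSpreading(kernel='knn', n_neighbors=7).fit(X, y)
    y_full = prop.transduction_           # propagated labels for every sample
    lda = LinearDiscriminantAnalysis(n_components=n_components)
    return lda.fit_transform(X, y_full)   # reduced-dimension representation
```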
Dimension reduction techniques for the integrative analysis of multi-omics data
Zeleznik, Oana A.; Thallinger, Gerhard G.; Kuster, Bernhard; Gholami, Amin M.
2016-01-01
State-of-the-art next-generation sequencing, transcriptomics, proteomics and other high-throughput ‘omics' technologies enable the efficient generation of large experimental data sets. These data may yield unprecedented knowledge about molecular pathways in cells and their role in disease. Dimension reduction approaches have been widely used in exploratory analysis of single omics data sets. This review will focus on dimension reduction approaches for simultaneous exploratory analyses of multiple data sets. These methods extract the linear relationships that best explain the correlated structure across data sets, the variability both within and between variables (or observations) and may highlight data issues such as batch effects or outliers. We explore dimension reduction techniques as one of the emerging approaches for data integration, and how these can be applied to increase our understanding of biological systems in normal physiological function and disease. PMID:26969681
Tensor sufficient dimension reduction
Zhong, Wenxuan; Xing, Xin; Suslick, Kenneth
2015-01-01
A tensor is a multiway array. With the rapid development of science and technology over the past decades, large numbers of tensor observations are now routinely collected, processed, and stored in scientific research and commercial activities. Colorimetric sensor array (CSA) data are one such example. Driven by the need to address the data analysis challenges that arise in CSA data, we propose a tensor dimension reduction model, a model assuming nonlinear dependence between a response and a projection of all the tensor predictors. The tensor dimension reduction models are estimated in a sequential iterative fashion. The proposed method is applied to CSA data collected for 150 pathogenic bacteria from 10 bacterial species plus 14 bacteria from one control species. Empirical performance demonstrates that our proposed method can greatly improve the sensitivity and specificity of the CSA technique. PMID:26594304
Ly, Cheng
2013-10-01
The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods. As a more pragmatic tool, it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.
McHugh, Kieran; Naranjo, Arlene; Van Ryn, Collin; Kirby, Chaim; Brock, Penelope; Lyons, Karen A.; States, Lisa J.; Rojas, Yesenia; Miller, Alexandra; Volchenboum, Sam L.; Simon, Thorsten; Krug, Barbara; Sarnacki, Sabine; Valteau-Couanet, Dominique; von Schweinitz, Dietrich; Kammer, Birgit; Granata, Claudio; Pio, Luca; Park, Julie R.; Nuchtern, Jed
2016-01-01
Purpose The International Neuroblastoma Response Criteria (INRC) require serial measurements of primary tumors in three dimensions, whereas the Response Evaluation Criteria in Solid Tumors (RECIST) require measurement in one dimension. This study was conducted to identify the preferred method of primary tumor response assessment for use in revised INRC. Patients and Methods Patients younger than 20 years with high-risk neuroblastoma were eligible if they were diagnosed between 2000 and 2012 and if three primary tumor measurements (antero-posterior, width, cranio-caudal) were recorded at least twice before resection. Responses were defined as ≥ 30% reduction in longest dimension as per RECIST, ≥ 50% reduction in volume as per INRC, or ≥ 65% reduction in volume. Results Three-year event-free survival for all patients (N = 229) was 44% and overall survival was 58%. The sensitivity of both volume response measures (ability to detect responses in patients who survived) exceeded the sensitivity of the single dimension measure, but the specificity of all response measures (ability to identify lack of response in patients who later died) was low. In multivariable analyses, none of the response measures studied was predictive of outcome, and none was predictive of the extent of resection. Conclusion None of the methods of primary tumor response assessment was predictive of outcome. Measurement of three dimensions followed by calculation of resultant volume is more complex than measurement of a single dimension. Primary tumor response in children with high-risk neuroblastoma should therefore be evaluated in accordance with RECIST criteria, using the single longest dimension. PMID:26755515
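The three response definitions compared in the study reduce to simple arithmetic on the serial measurements, as in the sketch below. Using the raw product of the three diameters as a volume proxy is an assumption here; any ellipsoid constant such as pi/6 cancels out of the percent change.

```python
# Classify primary tumor response under the three competing criteria.
def tumor_response(dims_before, dims_after):
    # dims: (antero-posterior, width, cranio-caudal), consistent units
    longest_change = 1 - max(dims_after) / max(dims_before)
    vol = lambda d: d[0] * d[1] * d[2]              # proportional to volume
    volume_change = 1 - vol(dims_after) / vol(dims_before)
    return {
        'RECIST_response': longest_change >= 0.30,  # >=30% in longest dimension
        'INRC_response': volume_change >= 0.50,     # >=50% volume reduction
        'volume65_response': volume_change >= 0.65,
    }

# Example of discordance: a 25% shrink in the longest dimension (no RECIST
# response) but a 66% volume reduction (response under both volume rules).
print(tumor_response((60, 50, 40), (45, 30, 30)))
```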
Nagarajan, Mahesh B.; Huber, Markus B.; Schlossbauer, Thomas; Leinsinger, Gerda; Krol, Andrzej; Wismüller, Axel
2014-01-01
Objective While dimension reduction has been previously explored in computer aided diagnosis (CADx) as an alternative to feature selection, previous implementations of its integration into CADx do not ensure strict separation between training and test data required for the machine learning task. This compromises the integrity of the independent test set, which serves as the basis for evaluating classifier performance. Methods and Materials We propose, implement and evaluate an improved CADx methodology where strict separation is maintained. This is achieved by subjecting the training data alone to dimension reduction; the test data is subsequently processed with out-of-sample extension methods. Our approach is demonstrated in the research context of classifying small diagnostically challenging lesions annotated on dynamic breast magnetic resonance imaging (MRI) studies. The lesions were dynamically characterized through topological feature vectors derived from Minkowski functionals. These feature vectors were then subject to dimension reduction with different linear and non-linear algorithms applied in conjunction with out-of-sample extension techniques. This was followed by classification through supervised learning with support vector regression. Area under the receiver-operating characteristic curve (AUC) was evaluated as the metric of classifier performance. Results Of the feature vectors investigated, the best performance was observed with Minkowski functional ’perimeter’ while comparable performance was observed with ’area’. Of the dimension reduction algorithms tested with ’perimeter’, the best performance was observed with Sammon’s mapping (0.84 ± 0.10) while comparable performance was achieved with exploratory observation machine (0.82 ± 0.09) and principal component analysis (0.80 ± 0.10). Conclusions The results reported in this study with the proposed CADx methodology present a significant improvement over previous results reported with such small lesions on dynamic breast MRI. In particular, non-linear algorithms for dimension reduction exhibited better classification performance than linear approaches, when integrated into our CADx methodology. We also note that while dimension reduction techniques may not necessarily provide an improvement in classification performance over feature selection, they do allow for a higher degree of feature compaction. PMID:24355697
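In scikit-learn terms, the strict-separation discipline amounts to fitting the reduction inside a pipeline on training data only and letting transform play the role of the out-of-sample extension, as sketched below with PCA and support vector regression standing in for the paper's feature pipeline. All data and settings are placeholders.

```python
# Dimension reduction confined to training data; test data only transformed.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = np.random.rand(120, 50), np.random.randint(0, 2, 120)  # toy data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), PCA(n_components=5), SVR())
model.fit(X_tr, y_tr)            # reduction learned from training folds only
auc = roc_auc_score(y_te, model.predict(X_te))   # untainted test estimate
```

For nonlinear reductions such as Sammon's mapping, which lack a native transform, the out-of-sample extension step the authors describe has to be supplied explicitly.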
Wavelet packets for multi- and hyper-spectral imagery
NASA Astrophysics Data System (ADS)
Benedetto, J. J.; Czaja, W.; Ehler, M.; Flake, C.; Hirn, M.
2010-01-01
State of the art dimension reduction and classification schemes in multi- and hyper-spectral imaging rely primarily on the information contained in the spectral component. To better capture the joint spatial and spectral data distribution we combine the Wavelet Packet Transform with the linear dimension reduction method of Principal Component Analysis. Each spectral band is decomposed by means of the Wavelet Packet Transform and we consider a joint entropy across all the spectral bands as a tool to exploit the spatial information. Dimension reduction is then applied to the Wavelet Packets coefficients. We present examples of this technique for hyper-spectral satellite imaging. We also investigate the role of various shrinkage techniques to model non-linearity in our approach.
NASA Astrophysics Data System (ADS)
Khoudeir, A.; Montemayor, R.; Urrutia, Luis F.
2008-09-01
Using the parent Lagrangian method together with a dimensional reduction from $D$ to $(D-1)$ dimensions, we construct dual theories for massive spin-two fields in arbitrary dimensions in terms of a mixed-symmetry tensor $T_{A[A_1 A_2 \ldots A_{D-2}]}$. Our starting point is the well-studied massless parent action in dimension $D$. The resulting massive Stueckelberg-like parent actions in $(D-1)$ dimensions inherit all the gauge symmetries of the original massless action and can be gauge fixed in two alternative ways, yielding the possibility of having a parent action with either a symmetric or a nonsymmetric Fierz-Pauli field $e_{AB}$. Even though the dual sector in terms of the standard spin-two field includes only the symmetric part $e_{(AB)}$ in both cases, these two possibilities yield different results in terms of the alternative dual field $T_{A[A_1 A_2 \ldots A_{D-2}]}$. In particular, the nonsymmetric case reproduces the Freund-Curtright action as the dual to the massive spin-two field action in four dimensions.
Yang, Jie; McArdle, Conor; Daniels, Stephen
2014-01-01
A new data dimension-reduction method, called Internal Information Redundancy Reduction (IIRR), is proposed for application to Optical Emission Spectroscopy (OES) datasets obtained from industrial plasma processes. For example in a semiconductor manufacturing environment, real-time spectral emission data is potentially very useful for inferring information about critical process parameters such as wafer etch rates, however, the relationship between the spectral sensor data gathered over the duration of an etching process step and the target process output parameters is complex. OES sensor data has high dimensionality (fine wavelength resolution is required in spectral emission measurements in order to capture data on all chemical species involved in plasma reactions) and full spectrum samples are taken at frequent time points, so that dynamic process changes can be captured. To maximise the utility of the gathered dataset, it is essential that information redundancy is minimised, but with the important requirement that the resulting reduced dataset remains in a form that is amenable to direct interpretation of the physical process. To meet this requirement and to achieve a high reduction in dimension with little information loss, the IIRR method proposed in this paper operates directly in the original variable space, identifying peak wavelength emissions and the correlative relationships between them. A new statistic, Mean Determination Ratio (MDR), is proposed to quantify the information loss after dimension reduction and the effectiveness of IIRR is demonstrated using an actual semiconductor manufacturing dataset. As an example of the application of IIRR in process monitoring/control, we also show how etch rates can be accurately predicted from IIRR dimension-reduced spectral data. PMID:24451453
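A simplified reading of the IIRR idea, operating directly on wavelength variables, might look like the sketch below: pick peak emission lines, then greedily discard peaks whose time series are nearly collinear with one already kept. The 0.95 correlation cut-off and the peak-picking details are assumptions, not the published algorithm.

```python
# Redundancy reduction among peak wavelengths of an OES dataset.
import numpy as np
from scipy.signal import find_peaks

def reduce_oes(spectra, corr_max=0.95):
    # spectra: (n_times, n_wavelengths) samples over an etch step
    mean_spec = spectra.mean(axis=0)
    peaks, _ = find_peaks(mean_spec)                 # candidate emission lines
    order = np.argsort(mean_spec[peaks])[::-1]       # strongest lines first
    kept = []
    for p in peaks[order]:
        if all(abs(np.corrcoef(spectra[:, p], spectra[:, q])[0, 1]) < corr_max
               for q in kept):
            kept.append(int(p))
    return kept    # wavelength indices, still physically interpretable
```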
NASA Astrophysics Data System (ADS)
Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.
2015-08-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
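A sketch of the screening step, assuming the SALib package, is shown below; the model function, sample size, and the 0.01 cut-off on the total-order index are placeholders. Variables that fail the cut are fixed at nominal values, so the evolutionary search runs in a much smaller decision space.

```python
# Sobol-based screening of decision variables (SALib assumed available).
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {'num_vars': 4,
           'names': ['x1', 'x2', 'x3', 'x4'],
           'bounds': [[0.0, 1.0]] * 4}

X = saltelli.sample(problem, 1024)             # Sobol sampling design
Y = np.array([reservoir_model(x) for x in X])  # simulation runs (assumed fn)
Si = sobol.analyze(problem, Y)
active = [n for n, st in zip(problem['names'], Si['ST']) if st > 0.01]
# optimize only over `active`; insensitive variables stay at nominal values
```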
The staircase method: integrals for periodic reductions of integrable lattice equations
NASA Astrophysics Data System (ADS)
van der Kamp, Peter H.; Quispel, G. R. W.
2010-11-01
We show, in full generality, that the staircase method (Papageorgiou et al 1990 Phys. Lett. A 147 106-14, Quispel et al 1991 Physica A 173 243-66) provides integrals for mappings, and correspondences, obtained as traveling wave reductions of (systems of) integrable partial difference equations. We apply the staircase method to a variety of equations, including the Korteweg-de Vries equation, the five-point Bruschi-Calogero-Droghei equation, the quotient-difference (QD) algorithm and the Boussinesq system. We show that, in all these cases, if the staircase method provides $r$ integrals for an $n$-dimensional mapping, with $2r < n$, then one can introduce $q \le 2r$ variables, which reduce the dimension of the mapping from $n$ to $q$. These dimension-reducing variables are obtained as joint invariants of $k$-symmetries of the mappings. Our results support the idea that the staircase method often provides sufficiently many integrals for the periodic reductions of integrable lattice equations to be completely integrable. We also study reductions on quad-graphs other than the regular $\mathbb{Z}^2$ lattice, and we prove linear growth of the multi-valuedness of iterates of high-dimensional correspondences obtained as reductions of the QD algorithm.
Reduction of Large Dynamical Systems by Minimization of Evolution Rate
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.
1999-01-01
Reduction of a large system of equations to a lower-dimensional system of similar dynamics is investigated. For dynamical systems with disparate timescales, a criterion for determining redundant dimensions and a general reduction method based on the minimization of evolution rate are proposed.
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
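The model-based view is concrete: with an exponential nonlinearity, fitting the LNP filter is concave maximum likelihood, as in the toy sketch below. Real MID estimates the nonlinearity non-parametrically rather than fixing it to exp(), so this is only the simplest instance of the equivalence.

```python
# Maximum-likelihood fit of a single LNP filter with exp() nonlinearity.
import numpy as np

def fit_lnp_filter(X, spikes, dt=0.01, lr=1e-3, n_iter=2000):
    # X: (T, d) stimulus matrix; spikes: (T,) spike counts per bin
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        rate = np.exp(X @ w)                     # conditional intensity
        grad = X.T @ spikes - dt * (X.T @ rate)  # gradient of Poisson log-lik
        w += lr * grad                           # simple gradient ascent
    return w

# The normalized Poisson log-likelihood at the optimum is what the paper
# identifies with the empirical single-spike information.
```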
Spatiotemporal Interpolation for Environmental Modelling
Susanto, Ferry; de Souza, Paulo; He, Jing
2016-01-01
A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently from the spatial dimensions, is proposed in this paper. We reviewed and compared three widely-used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of Tasmania’s South Esk Hydrology model developed by CSIRO. Root mean squared error statistical methods were performed for performance evaluations. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497
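The reduction recipe itself is compact: interpolate each station's series to the target time, then weight the resulting station values by inverse distance. The sketch below is a generic illustration, not CSIRO's South Esk implementation; linear temporal interpolation and the power-2 weighting are assumed defaults.

```python
# Reduction-based spatiotemporal interpolation: time first, then IDW in space.
import numpy as np

def sti_reduction(stations, times, values, xy_target, t_target, power=2.0):
    # stations: (n, 2) coordinates; values: (n, len(times)) observations
    at_t = np.array([np.interp(t_target, times, v) for v in values])
    d = np.linalg.norm(stations - xy_target, axis=1)
    if np.any(d == 0):                    # target coincides with a station
        return at_t[np.argmin(d)]
    w = d ** -power                       # inverse distance weights
    return np.sum(w * at_t) / np.sum(w)

est = sti_reduction(np.random.rand(10, 2), np.arange(24.0),
                    np.random.rand(10, 24), np.array([0.5, 0.5]), t_target=7.3)
```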
Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Youngsoo; Carlberg, Kevin Thomas
Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.
Paschoal, Sérgio Márcio Pacheco; Filho, Wilson Jacob; Litvoc, Júlio
2008-01-01
OBJECTIVE To describe item reduction and its distribution into dimensions in the construction process of a quality of life evaluation instrument for the elderly. METHODS The sample was chosen by convenience through quotas, with selection of elderly subjects from four programs to achieve heterogeneity in the "health status", "functional capacity", "gender", and "age" variables. The Clinical Impact Method was used, consisting of the spontaneous and elicited selection by the respondents of items relevant to the construct Quality of Life in Old Age from a previously elaborated item pool. The respondents rated each item's importance using a 5-point Likert scale. The product of the proportion of elderly subjects selecting the item as relevant (frequency) and the mean importance score they attributed to it (importance) represented the overall impact of that item on their quality of life (impact). The items were ordered according to their impact scores, and the 46 top-scoring items were grouped into dimensions by three experts. A review of the negative items was performed. RESULTS One hundred ninety-three people (122 women and 71 men) were interviewed. The experts distributed the 46 items into eight dimensions. Closely related items were grouped, and dimensions not reaching the minimum expected number of items received additional items, resulting in eight dimensions and 43 items. DISCUSSION The sample was heterogeneous, as expected. The dimensions and items demonstrated the multidimensionality of the construct. The Clinical Impact Method was appropriate for constructing the instrument, which was named the Elderly Quality of Life Index (EQoLI). Its accuracy will be examined in future work. PMID:18438571
A decentralized linear quadratic control design method for flexible structures
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Craig, Roy R., Jr.
1990-01-01
A decentralized suboptimal linear quadratic control design procedure which combines substructural synthesis, model reduction, decentralized control design, subcontroller synthesis, and controller reduction is proposed for the design of reduced-order controllers for flexible structures. The procedure starts with a definition of the continuum structure to be controlled. An evaluation model of finite dimension is obtained by the finite element method. Then, the finite element model is decomposed into several substructures by using a natural decomposition called substructuring decomposition. Each substructure, at this point, still has too large a dimension and must be reduced to a size that is Riccati-solvable. Model reduction of each substructure can be performed by using any existing model reduction method, e.g., modal truncation, balanced reduction, Krylov model reduction, or mixed-mode method. Then, based on the reduced substructure model, a subcontroller is designed by an LQ optimal control method for each substructure independently. After all subcontrollers are designed, a controller synthesis method called substructural controller synthesis is employed to synthesize all subcontrollers into a global controller. The assembling scheme used is the same as that employed for the structure matrices. Finally, a controller reduction scheme, called the equivalent impulse response energy controller (EIREC) reduction algorithm, is used to reduce the global controller to a reasonable size for implementation. The EIREC reduced controller preserves the impulse response energy of the full-order controller and has the property of matching low-frequency moments and low-frequency power moments. An advantage of the substructural controller synthesis method is that it relieves the computational burden associated with dimensionality. Besides that, the SCS design scheme is also a highly adaptable controller synthesis method for structures with varying configuration, or varying mass and stiffness properties.
Identification of seedling cabbages and weeds using hyperspectral imaging
USDA-ARS?s Scientific Manuscript database
Target detection is one of the research focuses of precision chemical application. This study developed a method to identify seedling cabbages and weeds using hyperspectral imaging. In processing the image data with ENVI software, after dimension reduction, noise reduction, de-correlation for h...
Comparisons of non-Gaussian statistical models in DNA methylation analysis.
Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun
2014-06-16
As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance.
Visualizing phylogenetic tree landscapes.
Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A
2017-02-02
Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods for displaying at once, in two or three dimensions, the relationships among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in two and three dimensions the relationships among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in two and three dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperformed the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D projections significantly increase the fit between the tree-to-tree distances and can facilitate the interpretation of the relationships among phylogenetic trees. We demonstrate that the choice of dimensionality reduction method can significantly influence the spatial relationships among a large set of competing phylogenetic trees. We highlight the importance of selecting a dimensionality reduction method for visualizing large multi-locus phylogenetic landscapes and demonstrate that 3D projections of mitochondrial tree landscapes better capture the relationships among the trees being compared.
QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.
Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method-named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)-for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interests. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
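At its core, maximizing a Rayleigh quotient v'Av / v'Bv is a generalized eigenproblem, which the dense, unregularized caricature below makes explicit on quadratic features. Everything that makes QUADRO itself work at scale (sparsity, robust elliptical-model estimates, the linearized augmented Lagrangian solver) is deliberately omitted.

```python
# Rayleigh quotient maximization over linear + quadratic features.
import numpy as np
from scipy.linalg import eigh

def quadratic_rayleigh_direction(X, y):
    iu = np.triu_indices(X.shape[1])
    F = np.hstack([X, np.array([np.outer(x, x)[iu] for x in X])])  # lifted
    mu0, mu1 = F[y == 0].mean(axis=0), F[y == 1].mean(axis=0)
    A = np.outer(mu1 - mu0, mu1 - mu0)          # between-class scatter
    B = np.cov(F, rowvar=False) + 1e-6 * np.eye(F.shape[1])  # regularized
    vals, vecs = eigh(A, B)                     # generalized eigenproblem
    return vecs[:, -1]                          # maximizer of v'Av / v'Bv
```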
Data Reduction Algorithm Using Nonnegative Matrix Factorization with Nonlinear Constraints
NASA Astrophysics Data System (ADS)
Sembiring, Pasukat
2017-12-01
Processing of data with very large dimensions has been a hot topic in recent decades. Various techniques have been proposed in order to extract the desired information or structure. Non-Negative Matrix Factorization (NMF), based on non-negative data, has become one of the popular methods for shrinking dimensions. The main strength of this method is its non-negativity: an object is modeled as a combination of basic non-negative parts, which provides a physical interpretation of the object's construction. NMF is a dimension reduction method that has been used widely for numerous applications, including computer vision, text mining, pattern recognition, and bioinformatics. The mathematical formulation of NMF is not a convex optimization problem, and various types of algorithms have been proposed to solve it. The Alternating Nonnegative Least Squares (ANLS) framework is a block coordinate descent approach that has been proven theoretically reliable and empirically efficient. This paper proposes a new algorithm to solve the NMF problem based on the ANLS framework. The algorithm inherits the convergence property of the ANLS framework for nonlinearly constrained NMF formulations.
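A compact instance of the ANLS framework the paper builds on can be written with SciPy's nonnegative least-squares solver, as below; the per-column NNLS loop is the textbook version, not the paper's new algorithm.

```python
# NMF by alternating nonnegative least squares (ANLS), V ~ W @ H.
import numpy as np
from scipy.optimize import nnls

def nmf_anls(V, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, k)), rng.random((k, n))
    for _ in range(n_iter):
        for j in range(n):               # H-step: min ||W h_j - v_j||, h_j >= 0
            H[:, j], _ = nnls(W, V[:, j])
        for i in range(m):               # W-step: min ||H^T w_i - v_i||, w_i >= 0
            W[i, :], _ = nnls(H.T, V[i, :])
    return W, H

W, H = nmf_anls(np.random.rand(60, 40), k=5)
```

Each subproblem is convex, which is what gives the block coordinate framework its convergence guarantees even though the joint problem is non-convex.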
The attractor dimension of solar decimetric radio pulsations
NASA Technical Reports Server (NTRS)
Kurths, J.; Benz, A. O.; Aschwanden, M. J.
1991-01-01
The temporal characteristics of decimetric pulsations and related radio emissions during solar flares are analyzed using statistical methods recently developed for nonlinear dynamic systems. The results of the analysis are consistent with earlier reports on low-dimensional attractors of such events and yield a quantitative description of their temporal characteristics and hidden order. The estimated dimensions of typical decimetric pulsations are generally in the range of 3.0 ± 0.5. Quasi-periodic oscillations and sudden reductions may have dimensions as low as 2. Pulsations of decimetric type IV continua typically have a dimension of about 4.
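Dimension estimates of this kind typically come from the Grassberger-Procaccia correlation integral; a bare-bones sketch is below. The embedding parameters and the automatic choice of scaling region are assumptions, and in practice both require care, especially for noisy radio time series.

```python
# Correlation-dimension estimate of a scalar time series x.
import numpy as np

def correlation_dimension(x, m=5, tau=1, n_r=20):
    # delay embedding into m dimensions with lag tau
    emb = np.array([x[i:len(x) - (m - 1) * tau + i:tau] for i in range(m)]).T
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    d = d[np.triu_indices(len(emb), k=1)]               # pairwise distances
    r = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), n_r)
    C = np.array([np.mean(d < ri) for ri in r])         # correlation integral
    mid = slice(n_r // 4, 3 * n_r // 4)                 # crude scaling region
    slope, _ = np.polyfit(np.log(r[mid]), np.log(C[mid] + 1e-12), 1)
    return slope   # estimate of the attractor's correlation dimension
```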
A dimension reduction method for flood compensation operation of multi-reservoir system
NASA Astrophysics Data System (ADS)
Jia, B.; Wu, S.; Fan, Z.
2017-12-01
Multi-reservoir cooperative compensation operations for coping with uncontrolled floods play a vital role in real-time flood mitigation. This paper proposes a reservoir flood compensation operation index (ResFCOI), formed from elements of flood control storage, flood inflow volume, flood transmission time and the cooperative operations period. It then establishes a flood cooperative compensation operations model of a multi-reservoir system, in which the ResFCOI determines the computational order of the reservoirs, and finally a differential evolution algorithm computes the single-reservoir flood compensation optimization for each reservoir in turn, so that a dimension reduction method is formed to reduce computational complexity. The Shiguan River Basin, with two large reservoirs and an extensive uncontrolled flood area, is used as a case study. Results show that (a) the reservoirs' flood discharges and the uncontrolled flood are superimposed at Jiangjiaji Station, while keeping the resulting flood peak flow as small as possible; (b) cooperative compensation operations slightly increase the usage of flood storage capacity in the reservoirs, compared to rule-based operations; and (c) computing a cooperative compensation operations scheme takes 50 seconds on average. The dimension reduction method for guiding flood compensation operations of a multi-reservoir system lets each reservoir adjust its flood discharge strategy dynamically according to the magnitude and pattern of the uncontrolled flood, so as to mitigate downstream flood disasters.
Universal and integrable nonlinear evolution systems of equations in 2+1 dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maccari, A.
1997-08-01
Integrable systems of nonlinear partial differential equations (PDEs) are obtained from integrable equations in 2+1 dimensions by means of a reduction method of broad applicability based on Fourier expansion and spatio-temporal rescalings, which is asymptotically exact in the limit of weak nonlinearity. The integrability by the spectral transform is explicitly demonstrated, because the corresponding Lax pairs have been derived by applying the same reduction method to the Lax pair of the initial equation. These systems of nonlinear PDEs are likely to be of applicative relevance and have a "universal" character, inasmuch as they may be derived from a very large class of nonlinear evolution equations with a linear dispersive part.
An error bound for a discrete reduced order model of a linear multivariable system
NASA Technical Reports Server (NTRS)
Al-Saggaf, Ubaid M.; Franklin, Gene F.
1987-01-01
The design of feasible controllers for high dimension multivariable systems can be greatly aided by a method of model reduction. In order for the design based on the order reduction to include a guarantee of stability, it is sufficient to have a bound on the model error. Previous work has provided such a bound for continuous-time systems for algorithms based on balancing. In this note an L-infinity bound is derived for model error for a method of order reduction of discrete linear multivariable systems based on balancing.
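The classic balancing result referenced here bounds the model error by twice the sum of the discarded Hankel singular values; the sketch below computes that quantity for a discrete-time system to convey the flavor (the note's own L-infinity bound differs in its details).

```python
# Hankel singular values and the twice-the-tail error bound.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def hankel_bound(A, B, C, r):
    Wc = solve_discrete_lyapunov(A, B @ B.T)      # controllability gramian
    Wo = solve_discrete_lyapunov(A.T, C.T @ C)    # observability gramian
    hsv = np.sort(np.sqrt(np.abs(np.linalg.eigvals(Wc @ Wo))))[::-1]
    return 2.0 * hsv[r:].sum()    # bound on the norm of G - G_r

A = np.diag([0.9, 0.5, 0.1])
B, C = np.ones((3, 1)), np.ones((1, 3))
print(hankel_bound(A, B, C, r=2))   # error budget for a stability guarantee
```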
Properties of dimension witnesses and their semidefinite programming relaxations
NASA Astrophysics Data System (ADS)
Mironowicz, Piotr; Li, Hong-Wei; Pawłowski, Marcin
2014-08-01
In this paper we develop a method for investigating semi-device-independent randomness expansion protocols that was introduced in Li et al. [H.-W. Li, P. Mironowicz, M. Pawłowski, Z.-Q. Yin, Y.-C. Wu, S. Wang, W. Chen, H.-G. Hu, G.-C. Guo, and Z.-F. Han, Phys. Rev. A 87, 020302(R) (2013), 10.1103/PhysRevA.87.020302]. This method allows us to lower bound, with semi-definite programming, the randomness obtained from random number generators based on dimension witnesses. We also investigate the robustness of some randomness expanders using this method. We show the role of an assumption about the trace of the measurement operators and a way to avoid it. The method is also generalized to systems of arbitrary dimension and for a more general form of dimension witnesses than in our previous paper. Finally, we introduce a procedure of dimension witness reduction, which can be used to obtain from an existing witness a new one with a higher amount of certifiable randomness. The presented methods find an application for experiments [J. Ahrens, P. Badziag, M. Pawlowski, M. Zukowski, and M. Bourennane, Phys. Rev. Lett. 112, 140401 (2014), 10.1103/PhysRevLett.112.140401].
Data-Driven Model Reduction and Transfer Operator Approximation
NASA Astrophysics Data System (ADS)
Klus, Stefan; Nüske, Feliks; Koltai, Péter; Wu, Hao; Kevrekidis, Ioannis; Schütte, Christof; Noé, Frank
2018-06-01
In this review paper, we will present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities such as time-lagged independent component analysis, dynamic mode decomposition, and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
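Of the methods surveyed, dynamic mode decomposition is the most compact to write down: a least-squares linear model between snapshot pairs, computed in a truncated SVD basis. The sketch below is the standard exact-DMD recipe on toy snapshot matrices.

```python
# Exact dynamic mode decomposition from snapshot pairs y_k = F(x_k).
import numpy as np

def dmd(X, Y, rank):
    # X, Y: (d, m) snapshot matrices
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = (U.conj().T @ Y @ Vh.conj().T) / s      # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)               # transfer-op. spectrum
    modes = (Y @ Vh.conj().T / s) @ W                 # exact DMD modes
    return eigvals, modes
```

The same snapshot data fed into time-lagged independent component analysis yields the analogous approximation for reversible dynamics, which is the family resemblance the review formalizes.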
NASA Technical Reports Server (NTRS)
Ratcliffe, James G.
2010-01-01
This paper details part of an effort focused on the development of a standardized facesheet/core peel debonding test procedure. The purpose of the test is to characterize facesheet/core peel in sandwich structure, accomplished through the measurement of the critical strain energy release rate associated with the debonding process. The specific test method selected for the standardized test procedure utilizes a single cantilever beam (SCB) specimen configuration. The objective of the current work is to develop a method for establishing SCB specimen dimensions. This is achieved by imposing specific limitations on specimen dimensions, with the objectives of promoting a linear elastic specimen response, and simplifying the data reduction method required for computing the critical strain energy release rate associated with debonding. The sizing method is also designed to be suitable for incorporation into a standardized test protocol. Preliminary application of the resulting sizing method yields practical specimen dimensions.
Partial Least Squares for Discrimination in fMRI Data
Andersen, Anders H.; Rayens, William S.; Liu, Yushu; Smith, Charles D.
2011-01-01
Multivariate methods for discrimination were used in the comparison of brain activation patterns between groups of cognitively normal women who are at either high or low Alzheimer's disease risk based on family history and apolipoprotein-E4 status. Linear discriminant analysis (LDA) was preceded by dimension reduction using either principal component analysis (PCA), partial least squares (PLS), or a new oriented partial least squares (OrPLS) method. The aim was to identify a spatial pattern of functionally connected brain regions that was differentially expressed by the risk groups and yielded optimal classification accuracy. Multivariate dimension reduction is required prior to LDA when the data contain more feature variables than there are observations on individual subjects. Whereas PCA has been commonly used to identify covariance patterns in neuroimaging data, this approach only identifies gross variability and cannot distinguish among-group from within-group variability. PLS and OrPLS provide a more focused dimension reduction by incorporating information on class structure and therefore lead to more parsimonious models for discrimination. Performance was evaluated in terms of the cross-validated misclassification rates. The results support the potential of using fMRI as an imaging biomarker or diagnostic tool to discriminate individuals with disease or at high risk. PMID:22227352
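A minimal sketch of the tandem reduce-then-classify pipeline described above, assuming scikit-learn; PLSRegression stands in for the class-informed reduction, and the authors' OrPLS method is not reproduced:

```python
# Dimension reduction (PCA or PLS) followed by LDA, as in the tandem
# approach above. This is an illustration, not the authors' code.
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def reduce_then_lda(X, y, n_components=10, method="pls"):
    """X: (subjects, voxels); y: group labels. Returns a fitted LDA."""
    if method == "pca":
        # Unsupervised: captures gross variability only.
        Z = PCA(n_components=n_components).fit_transform(X)
    else:
        # Supervised: PLS scores use the class labels, giving the more
        # focused reduction the abstract argues for.
        Z = PLSRegression(n_components=n_components).fit(X, y).transform(X)
    return LinearDiscriminantAnalysis().fit(Z, y)
```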
NASA Astrophysics Data System (ADS)
Song, Bowen; Zhang, Guopeng; Wang, Huafeng; Zhu, Wei; Liang, Zhengrong
2013-02-01
Various types of features, e.g., geometric features, texture features, projection features, etc., have been introduced for polyp detection and differentiation tasks via computer aided detection and diagnosis (CAD) for computed tomography colonography (CTC). Although these features together cover more information of the data, some of them are statistically highly related to others, which makes the feature set redundant and burdens the computation task of CAD. In this paper, we propose a new dimension reduction method which combines hierarchical clustering and principal component analysis (PCA) for the false positive (FP) reduction task. First, we group all the features based on their similarity using hierarchical clustering, and then PCA is employed within each group. Different numbers of principal components are selected from each group to form the final feature set. A support vector machine is used to perform the classification. The results show that when three principal components were chosen from each group we achieved an area under the receiver operating characteristic curve of 0.905, which is as high as that of the original dataset. Meanwhile, the computation time is reduced by 70% and the feature set size is reduced by 77%. It can be concluded that the proposed method captures the most important information of the feature set and the classification accuracy is not affected after the dimension reduction. The result is promising, and further investigation, such as automatic threshold setting, is worthwhile and in progress.
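The grouping-then-PCA idea lends itself to a short sketch. The following is our illustration under assumed choices (correlation-based feature distance, average linkage, a fixed number of groups), not the authors' code:

```python
# Group correlated features by hierarchical clustering, then apply PCA
# within each group and concatenate the per-group components.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

def cluster_pca_reduce(X, n_groups=5, n_components=3):
    """X: (n_samples, n_features). Returns the reduced feature matrix."""
    # Distance between features: 1 - |correlation|, so the highly related
    # (redundant) features noted in the abstract land in the same group.
    corr = np.corrcoef(X, rowvar=False)
    dist = 1.0 - np.abs(corr)
    Z = linkage(dist[np.triu_indices_from(dist, k=1)], method='average')
    labels = fcluster(Z, t=n_groups, criterion='maxclust')

    blocks = []
    for g in np.unique(labels):
        Xg = X[:, labels == g]
        k = min(n_components, Xg.shape[1])
        blocks.append(PCA(n_components=k).fit_transform(Xg))
    return np.hstack(blocks)
```

The reduced matrix would then feed a support vector machine, as in the abstract.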
Reducing particle dimensions of chunkwood.
Robert C. Radcliffe
1990-01-01
Presents and compares the chunkwood sizes obtainable with the USDA Forest Service prototype wood chunker using four different blade configurations, and the results of further chunkwood reduction with three methods totally separate from the chunking process.
Model-based reinforcement learning with dimension reduction.
Tangkaratt, Voot; Morimoto, Jun; Sugiyama, Masashi
2016-12-01
The goal of reinforcement learning is to learn an optimal policy which controls an agent to acquire the maximum cumulative reward. The model-based reinforcement learning approach learns a transition model of the environment from data, and then derives the optimal policy using the transition model. However, learning an accurate transition model in high-dimensional environments requires a large amount of data which is difficult to obtain. To overcome this difficulty, in this paper, we propose to combine model-based reinforcement learning with the recently developed least-squares conditional entropy (LSCE) method, which simultaneously performs transition model estimation and dimension reduction. We also further extend the proposed method to imitation learning scenarios. The experimental results show that policy search combined with LSCE performs well for high-dimensional control tasks including real humanoid robot control. Copyright © 2016 Elsevier Ltd. All rights reserved.
Williams, Monnica T.; Farris, Samantha G.; Turkheimer, Eric N.; Franklin, Martin E.; Simpson, H. Blair; Liebowitz, Michael; Foa, Edna B.
2014-01-01
Objective Obsessive-compulsive disorder (OCD) is a severe condition with varied symptom presentations. The behavioral treatment with the most empirical support is exposure and ritual prevention (EX/RP). This study examined the impact of symptom dimensions on EX/RP outcomes in OCD patients. Method The Yale-Brown Obsessive-Compulsive Scale (Y-BOCS) was used to determine primary symptoms for each participant. An exploratory factor analysis (EFA) of 238 patients identified five dimensions: contamination/cleaning, doubts about harm/checking, hoarding, symmetry/ordering, and unacceptable/taboo thoughts (including religious/moral and somatic obsessions among others). A linear regression was conducted on those who had received EX/RP (n = 87) to examine whether scores on the five symptom dimensions predicted post-treatment Y-BOCS scores, accounting for pre-treatment Y-BOCS scores. Results The average reduction in Y-BOCS score was 43.0%; however, the regression indicated that the unacceptable/taboo thoughts (β = .27, p = .02) and hoarding (β = .23, p = .04) dimensions were associated with significantly poorer EX/RP treatment outcomes. Specifically, patients endorsing religious/moral obsessions, somatic concerns, and hoarding obsessions showed significantly smaller reductions in Y-BOCS severity scores. Conclusions EX/RP was effective for all symptom dimensions; however, it was less effective for unacceptable/taboo thoughts and hoarding than for other dimensions. Clinical implications and directions for research are discussed. PMID:24983796
Spectral Data Reduction via Wavelet Decomposition
NASA Technical Reports Server (NTRS)
Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)
2002-01-01
The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the actual identification of each ground surface pixel by its corresponding reflecting spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Therefore, conventional classification methods require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition could be useful, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms that preserves high- and low-frequency features during the signal decomposition, thereby preserving peaks and valleys found in typical spectra. When compared with the most widespread dimension reduction technique, Principal Component Analysis (PCA), at the same compression rate, Wavelet Reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classification such as a maximum likelihood method.
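A sketch of the wavelet reduction idea, assuming the PyWavelets library; the wavelet ('db4') and decomposition level are illustrative choices, not the paper's settings:

```python
# Each pixel's spectrum is decomposed and only the low-frequency
# approximation coefficients are kept as the reduced representation,
# so peaks and valleys of the spectral signature survive.
import numpy as np
import pywt  # PyWavelets

def wavelet_reduce(spectra, wavelet='db4', level=3):
    """spectra: (n_pixels, n_bands). Returns (n_pixels, n_reduced)."""
    reduced = []
    for s in spectra:
        coeffs = pywt.wavedec(s, wavelet, level=level)
        reduced.append(coeffs[0])  # approximation coefficients only
    return np.asarray(reduced)

# e.g. 224 bands shrink to roughly 224 / 2**3 (plus filter overhead)
X = np.random.rand(100, 224)
print(wavelet_reduce(X).shape)
```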
Dimension reduction of frequency-based direct Granger causality measures on short time series.
Siggiridou, Elsa; Kimiskidis, Vasilios K; Kugiumtzis, Dimitris
2017-09-01
The mainstream in the estimation of effective brain connectivity relies on Granger causality measures in the frequency domain. If the measure is meant to capture direct causal effects accounting for the presence of other observed variables, as in multi-channel electroencephalograms (EEG), typically the fit of a vector autoregressive (VAR) model on the multivariate time series is required. For short time series of many variables, the estimation of VAR may not be stable, requiring dimension reduction that results in restricted or sparse VAR models. The restricted VAR obtained by the modified backward-in-time selection method (mBTS) is adapted to the generalized partial directed coherence (GPDC), termed restricted GPDC (RGPDC). Dimension reduction on other frequency-based measures, such as the direct directed transfer function (dDTF), is straightforward. First, a simulation study using linear stochastic multivariate systems is conducted and RGPDC compares favorably to GPDC on short time series in terms of sensitivity and specificity. Then the two measures are tested for their ability to detect changes in brain connectivity during an epileptiform discharge (ED) from multi-channel scalp EEG. It is shown that RGPDC identifies the connectivity structure of the simulated systems, as well as changes in brain connectivity, better than GPDC, and is less dependent on the free parameter of VAR order. The proposed dimension reduction in frequency measures based on VAR constitutes an appropriate strategy for estimating brain networks reliably within short time windows. Copyright © 2017 Elsevier B.V. All rights reserved.
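For illustration, the unrestricted GPDC can be computed from fitted VAR coefficients (for example, the coefs array of a statsmodels VAR fit) as in the following hedged numpy sketch; the mBTS restriction step of RGPDC is not reproduced here:

```python
# GPDC from VAR coefficients: Abar(f) = I - sum_k A_k exp(-i 2 pi f k),
# GPDC[i, j](f) = (|Abar_ij|/sigma_i) / sqrt(sum_m |Abar_mj|^2 / sigma_m^2).
import numpy as np

def gpdc(A, sigma2, freqs):
    """A: (p, K, K) lag coefficient matrices; sigma2: (K,) residual
    variances; freqs: normalized frequencies in [0, 0.5].
    Returns (len(freqs), K, K); entry [f, i, j] is GPDC from j to i."""
    p, K, _ = A.shape
    out = np.empty((len(freqs), K, K))
    for fi, f in enumerate(freqs):
        Abar = np.eye(K, dtype=complex)
        for k in range(p):
            Abar -= A[k] * np.exp(-2j * np.pi * f * (k + 1))
        num = np.abs(Abar) / np.sqrt(sigma2)[:, None]  # rows scaled by 1/sigma_i
        den = np.sqrt((num ** 2).sum(axis=0, keepdims=True))  # column norms
        out[fi] = num / den
    return out
```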
Contributing Factors to Driver's Over-trust in a Driving Support System for Workload Reduction
NASA Astrophysics Data System (ADS)
Itoh, Makoto
Avoiding over-trust in machines is a vital issue in order to establish intelligent driver support systems. It is necessary to distinguish systems for workload reduction from systems for accident prevention/mitigation. This study focuses on over-trust in an Adaptive Cruise Control (ACC) system as a typical driving support system for workload reduction. By conducting an experiment, we obtained a case in which a driver trusted the ACC system too much. Concretely speaking, the driver simply watched as the ACC system crashed into a stopped car, even though the ACC system was designed to ignore such stopped cars. This paper investigates possible contributing factors to the driver's over-trust in the ACC system. The results suggest that emerging trust in the dimension of performance may cause over-trust in the dimension of method or purpose.
NASA Astrophysics Data System (ADS)
Ibrahim, R. S.; El-Kalaawy, O. H.
2006-10-01
The relativistic nonlinear self-consistent equations for a collisionless cold plasma with stationary ions [R. S. Ibrahim, IMA J. Appl. Math. 68, 523 (2003)] are extended to 3 and 3+1 dimensions. The resulting system of equations is reduced to the sine-Poisson equation. The truncated Painlevé expansion and reduction of the partial differential equation to a quadrature problem (RQ method) are described and applied to obtain the traveling wave solutions of the sine-Poisson equation for stationary and nonstationary equations in 3 and 3+1 dimensions describing the charge-density equilibrium configuration model.
Methods of Sparse Modeling and Dimensionality Reduction to Deal with Big Data
2015-04-01
supervised learning. Our framework consists of two separate phases: (a) first find an initial space in an unsupervised manner; then (b) utilize label information. Contributions include 1) a model that can learn thousands of topics from a large set of documents and infer the topic mixture of each document, and 2) a supervised dimension reduction method.
The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason L. Wright; Milos Manic
2010-05-01
This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
A Review on Dimension Reduction
Ma, Yanyuan; Zhu, Liping
2013-01-01
Summary Summarizing the effect of many covariates through a few linear combinations is an effective way of reducing covariate dimension and is the backbone of (sufficient) dimension reduction. Because the replacement of high-dimensional covariates by low-dimensional linear combinations is performed with a minimum assumption on the specific regression form, it enjoys attractive advantages as well as encounters unique challenges in comparison with the variable selection approach. We review the current literature of dimension reduction with an emphasis on the two most popular models, where the dimension reduction affects the conditional distribution and the conditional mean, respectively. We discuss various estimation and inference procedures in different levels of detail, with the intention of focusing on their underlying ideas instead of technicalities. We also discuss some unsolved problems in this area for potential future research. PMID:23794782
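As a concrete instance of the moment-based estimators this literature reviews, here is a minimal sliced inverse regression (SIR) sketch in numpy (our illustration, with an assumed slicing scheme):

```python
# SIR: whiten X, slice on y, and recover the dimension reduction
# directions from the principal axes of the slice means.
import numpy as np

def sir(X, y, n_slices=10, n_directions=2):
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    # Whitening transform W = cov^(-1/2) via eigendecomposition.
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ W
    # Slice the ordered responses and collect slice means of Z.
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original scale.
    vals, vecs = np.linalg.eigh(M)
    B = W @ vecs[:, ::-1][:, :n_directions]
    return B  # columns span the estimated dimension reduction space
```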
QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION
Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy
2016-01-01
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method—named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)—for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models, which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of the Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results. PMID:26778864
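The kernel object here is a Rayleigh quotient R(w) = (w'Aw)/(w'Bw). In the plain linear, non-sparse case its maximizer is a generalized eigenvector, as the toy sketch below shows; QUADRO's sparse quadratic and robust machinery is well beyond this illustration:

```python
# Maximizing R(w) = (w'Aw)/(w'Bw) reduces to the generalized symmetric
# eigenproblem A w = lambda B w when no sparsity is imposed.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A @ A.T               # "signal" matrix
B = rng.standard_normal((5, 5)); B = B @ B.T + 5 * np.eye(5)  # "variance" matrix

vals, vecs = eigh(A, B)   # generalized eigenvalues, ascending
w = vecs[:, -1]           # top generalized eigenvector
rayleigh = (w @ A @ w) / (w @ B @ w)
assert np.isclose(rayleigh, vals[-1])  # maximal Rayleigh quotient
```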
NASA Astrophysics Data System (ADS)
Zhang, Peng; Peng, Jing; Sims, S. Richard F.
2005-05-01
In ATR applications, each feature is a convolution of an image with a filter. It is important to use the most discriminant features to produce compact representations. We propose two novel subspace methods for dimension reduction to address limitations associated with the Fukunaga-Koontz Transform (FKT). The first method, Scatter-FKT, assumes that the target class is relatively homogeneous, while clutter can be anything other than the target and can appear anywhere. Thus, instead of estimating a clutter covariance matrix, Scatter-FKT computes a clutter scatter matrix that measures the spread of clutter from the target mean. We choose dimensions along which the difference in variation between target and clutter is most pronounced. When the target follows a Gaussian distribution, Scatter-FKT can be viewed as a generalization of FKT. The second method, Optimal Bayesian Subspace (OBS), is derived from the optimal Bayesian classifier. It selects dimensions such that the minimum Bayes error rate can be achieved. When both target and clutter follow Gaussian distributions, OBS computes optimal subspace representations. We compare our methods against FKT using character image data as well as IR data.
Domain decomposition for a mixed finite element method in three dimensions
Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.
2003-01-01
We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.
Sheets, H David; Covino, Kristen M; Panasiewicz, Joanna M; Morris, Sara R
2006-01-01
Background Geometric morphometric methods of capturing information about curves or outlines of organismal structures may be used in conjunction with canonical variates analysis (CVA) to assign specimens to groups or populations based on their shapes. This methodological paper examines approaches to optimizing the classification of specimens based on their outlines. This study examines the performance of four approaches to the mathematical representation of outlines and two different approaches to curve measurement as applied to a collection of feather outlines. A new approach to the dimension reduction necessary to carry out a CVA on this type of outline data with modest sample sizes is also presented, and its performance is compared to two other approaches to dimension reduction. Results Two semi-landmark-based methods, bending energy alignment and perpendicular projection, are shown to produce roughly equal rates of classification, as do elliptical Fourier methods and the extended eigenshape method of outline measurement. Rates of classification were not highly dependent on the number of points used to represent a curve or the manner in which those points were acquired. The new approach to dimensionality reduction, which utilizes a variable number of principal component (PC) axes, produced higher cross-validation assignment rates than either the standard approach of using a fixed number of PC axes or a partial least squares method. Conclusion Classification of specimens based on feather shape was not highly dependent on the details of the method used to capture shape information. The choice of dimensionality reduction approach was more of a factor, and the cross-validation rate of assignment may be optimized using the variable number of PC axes method presented herein. PMID:16978414
Neuroanatomical profiles of alexithymia dimensions and subtypes.
Goerlich-Dobre, Katharina Sophia; Votinov, Mikhail; Habel, Ute; Pripfl, Juergen; Lamm, Claus
2015-10-01
Alexithymia, a major risk factor for a range of psychiatric and neurological disorders, has been recognized to comprise two dimensions, a cognitive dimension (difficulties identifying, analyzing, and verbalizing feelings) and an affective one (difficulties emotionalizing and fantasizing). Based on these dimensions, the existence of four distinct alexithymia subtypes has been proposed, but never empirically tested. In this study, 125 participants were assigned to four groups corresponding to the proposed alexithymia subtypes: Type I (impairment on both dimensions), Type II (impairment on the cognitive, but not the affective dimension), Type III (impairment on the affective, but not the cognitive dimension), and Lexithymics (no impairment on either dimension). By means of voxel-based morphometry, associations of the alexithymia dimensions and subtypes with gray and white matter volumes were analyzed. Type I and Type II alexithymia were characterized by gray matter volume reductions in the left amygdala and the thalamus. The cognitive dimension was further linked to volume reductions in the right amygdala, left posterior insula, precuneus, caudate, hippocampus, and parahippocampus. Type III alexithymia was marked by volume reduction in the MCC only, and the affective dimension was further characterized by larger sgACC volume. Moreover, individuals with the intermediate alexithymia Types II and III showed gray matter volume reductions in distinct regions, and had larger corpus callosum volumes compared to Lexithymics. These results substantiate the notion of a differential impact of the cognitive and affective alexithymia dimensions on brain morphology and provide evidence for separable neuroanatomical representations of the different alexithymia subtypes. © 2015 Wiley Periodicals, Inc.
Zhang, Yu; Wu, Jianxin; Cai, Jianfei
2016-05-01
In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise. Throwing them away using feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
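A hedged sketch of selection-plus-1-bit-quantization: the importance score below (absolute weight of a linear classifier) is one plausible supervised criterion and is our assumption, not necessarily the paper's sorting algorithm:

```python
# Keep the top-k most important FV/VLAD dimensions, then binarize by sign.
import numpy as np
from sklearn.svm import LinearSVC

def select_and_binarize(X_train, y_train, X, keep=4096):
    """X_train/X: (n, d) FV or VLAD vectors; returns 1-bit codes."""
    clf = LinearSVC(C=1.0).fit(X_train, y_train)
    importance = np.abs(clf.coef_).max(axis=0)   # per-dimension score
    top = np.argsort(importance)[::-1][:keep]    # most useful dimensions
    return (X[:, top] > 0).astype(np.uint8)      # 1-bit quantization by sign
```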
Dimension Reduction of Hyperspectral Data on Beowulf Clusters
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek
2000-01-01
Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have come into operation. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. A spectral transformation widely used in remote sensing for dimension reduction is the Principal Components Analysis (PCA). In light of the growing number of spectral channels of modern instruments, the paper reports on the development of a parallel PCA and its implementation on two Beowulf cluster configurations, one with a fast Ethernet switch and the other with a Myrinet interconnect.
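The data-parallel structure of such a PCA can be sketched as follows, assuming mpi4py as a stand-in for the paper's message-passing layer (the original implementation details are not reproduced):

```python
# Each node holds a block of pixels; the global band covariance is
# accumulated with reductions, then eigendecomposed on every node.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
X_local = np.random.rand(10000, 224)   # this node's pixels x bands (toy)

# Global pixel count and band sums -> global mean.
n_local = np.array([X_local.shape[0]], dtype='d')
s_local = X_local.sum(axis=0)
n = np.zeros(1); s = np.zeros_like(s_local)
comm.Allreduce(n_local, n, op=MPI.SUM)
comm.Allreduce(s_local, s, op=MPI.SUM)
mean = s / n[0]

# Global band-by-band covariance via a second reduction.
C_local = (X_local - mean).T @ (X_local - mean)
C = np.zeros_like(C_local)
comm.Allreduce(C_local, C, op=MPI.SUM)
evals, evecs = np.linalg.eigh(C / (n[0] - 1))
components = evecs[:, ::-1][:, :10]    # top 10 principal components
```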
Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification
NASA Astrophysics Data System (ADS)
Sharif, I.; Khare, S.
2014-11-01
With the number of channels in the hundreds instead of the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data poses a challenge to current techniques for analyzing the data. Conventional classification methods may not be useful without a dimension reduction pre-processing step, so dimension reduction has become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in achieving image classification. Spectral data reduction using wavelet decomposition could be useful because it preserves the distinction among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction separates classes better and yields better or comparable classification accuracy. In the context of the dimensionality reduction algorithm, it is found that the classification performance of Daubechies wavelets is better than that of the Haar wavelet, while Daubechies takes more time than Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan
2015-01-01
Gene expression data typically are large, complex, and highly noisy. Their dimension is high, with several thousand genes (i.e., features) but only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings in the case of data sets involving undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a new approach using the maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further introduce and develop the Akaike information criterion (AIC), the consistent Akaike information criterion (CAIC), and the information-theoretic measure of complexity (ICOMP) criterion of Bozdogan. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate, to perform PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions. PMID:25838836
A Modified Sparse Representation Method for Facial Expression Recognition.
Wang, Wei; Xu, LiHong
2016-01-01
In this paper, we investigate a facial expression recognition method based on the modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like features plus LPP to extract features and reduce dimension. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of adopting the dictionary directly from samples, and add block dictionary training to the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimension, feature extraction and dimension reduction methods, and noise, on a self-built database and on the Japanese JAFFE and CMU CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition performance and time efficiency. Simulation experiments show that the coefficients of the MSRR method contain classifying information, which is capable of improving the computing speed and achieving a satisfying recognition result.
Sufficient Dimension Reduction for Longitudinally Measured Predictors
Pfeiffer, Ruth M.; Forzani, Liliana; Bura, Efstathia
2013-01-01
We propose a method to combine several predictors (markers) that are measured repeatedly over time into a composite marker score, without assuming a model and only requiring a mild condition on the predictor distribution. Assuming that the first and second moments of the predictors can be decomposed into a time and a marker component via a Kronecker product structure that accommodates the longitudinal nature of the predictors, we develop first-moment sufficient dimension reduction techniques to replace the original markers with linear transformations that contain sufficient information for the regression of the predictors on the outcome. These linear combinations can then be combined into a score that has better predictive performance than a score built under a general model that ignores the longitudinal structure of the data. Our methods can be applied to either continuous or categorical outcome measures. In simulations we focus on binary outcomes and show that our method outperforms existing alternatives using the AUC, the area under the receiver-operator characteristic (ROC) curve, as a summary measure of the discriminatory ability of a single continuous diagnostic marker for binary disease outcomes. PMID:22161635
A regularized approach for geodesic-based semisupervised multimanifold learning.
Fan, Mingyu; Zhang, Xiaoqin; Lin, Zhouchen; Zhang, Zhongfei; Bao, Hujun
2014-05-01
Geodesic distance, as an essential measurement of data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds, and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel, which maps data from a linearly inseparable space to a linearly separable distance space. In doing so, a new semisupervised manifold learning algorithm, namely the regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of the original data points with feature vectors built from geodesic distances, and a new semisupervised dimension reduction method for these feature vectors. Experiments on the MNIST and USPS handwritten digit data sets, the MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.
Cohen, Trevor; Schvaneveldt, Roger; Widdows, Dominic
2010-04-01
The discovery of implicit connections between terms that do not occur together in any scientific document underlies the model of literature-based knowledge discovery first proposed by Swanson. Corpus-derived statistical models of semantic distance such as Latent Semantic Analysis (LSA) have been evaluated previously as methods for the discovery of such implicit connections. However, LSA in particular is dependent on a computationally demanding method of dimension reduction as a means to obtain meaningful indirect inference, limiting its ability to scale to large text corpora. In this paper, we evaluate the ability of Random Indexing (RI), a scalable distributional model of word associations, to draw meaningful implicit relationships between terms in general and biomedical language. Proponents of this method have achieved comparable performance to LSA on several cognitive tasks while using a simpler and less computationally demanding method of dimension reduction than LSA employs. In this paper, we demonstrate that the original implementation of RI is ineffective at inferring meaningful indirect connections, and evaluate Reflective Random Indexing (RRI), an iterative variant of the method that is better able to perform indirect inference. RRI is shown to lead to more clearly related indirect connections and to outperform existing RI implementations in the prediction of future direct co-occurrence in the MEDLINE corpus. 2009 Elsevier Inc. All rights reserved.
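A minimal random indexing sketch (term-by-document variant, our illustration); the reflective re-training step of RRI is omitted:

```python
# Random indexing: each document gets a sparse ternary index vector;
# a term's semantic vector is the superposition of the index vectors
# of the documents it occurs in. No SVD-style factorization is needed.
import numpy as np

def index_vector(dim=1000, seeds=10, rng=None):
    """Sparse ternary vector: a few +1/-1 entries, zeros elsewhere."""
    rng = rng or np.random.default_rng()
    v = np.zeros(dim)
    pos = rng.choice(dim, size=seeds, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=seeds)
    return v

def train(docs, dim=1000):
    rng = np.random.default_rng(0)
    doc_index = [index_vector(dim, rng=rng) for _ in docs]
    term_vectors = {}
    for d, doc in enumerate(docs):
        for term in doc.split():
            term_vectors.setdefault(term, np.zeros(dim))
            term_vectors[term] += doc_index[d]  # superpose context labels
    return term_vectors

tv = train(["heart attack aspirin", "aspirin headache relief"])
# Cosine similarity between term vectors then approximates shared context.
```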
Nagarajan, Mahesh B; Coan, Paola; Huber, Markus B; Diemoz, Paul C; Wismüller, Axel
2015-01-01
Phase contrast X-ray computed tomography (PCI-CT) has been demonstrated as a novel imaging technique that can visualize human cartilage with high spatial resolution and soft tissue contrast. Different textural approaches have been previously investigated for characterizing chondrocyte organization on PCI-CT to enable classification of healthy and osteoarthritic cartilage. However, the large size of feature sets extracted in such studies motivates an investigation into algorithmic feature reduction for computing efficient feature representations without compromising their discriminatory power. For this purpose, geometrical feature sets derived from the scaling index method (SIM) were extracted from 1392 volumes of interest (VOI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. The extracted feature sets were subject to linear and non-linear dimension reduction techniques as well as feature selection based on evaluation of mutual information criteria. The reduced feature set was subsequently used in a machine learning task with support vector regression to classify VOIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results show that the classification performance achieved by 9-D SIM-derived geometric feature sets (AUC: 0.96 ± 0.02) can be maintained with 2-D representations computed from both dimension reduction and feature selection (AUC values as high as 0.97 ± 0.02). Thus, such feature reduction techniques can offer a high degree of compaction to large feature sets extracted from PCI-CT images while maintaining their ability to characterize the underlying chondrocyte patterns.
Pressure garment design tool to monitor exerted pressures.
Macintyre, Lisa; Ferguson, Rhona
2013-09-01
Pressure garments are used in the treatment of hypertrophic scarring following serious burns. The use of pressure garments is believed to hasten the maturation process, reduce pruritus associated with immature hypertrophic scars and prevent the formation of contractures over flexor joints. Pressure garments are normally made to measure for individual patients from elastic fabrics and are worn continuously for up to 2 years or until scar maturation. There are 2 methods of constructing pressure garments. The most common method, called the Reduction Factor method, involves reducing the patient's circumferential measurements by a certain percentage. The second method uses the Laplace Law to calculate the dimensions of pressure garments based on the circumferential measurements of the patient and the tension profile of the fabric. The Laplace Law method is complicated to utilise manually and no design tool is currently available to aid this process. This paper presents the development and suggested use of 2 new pressure garment design tools that will aid pressure garment design using the Reduction Factor and Laplace Law methods. Both tools calculate the pressure garment dimensions and the mean pressure that will be exerted around the body at each measurement point. Monitoring the pressures exerted by pressure garments and noting the clinical outcome would enable clinicians to build an understanding of the implications of particular pressures on scar outcome, maturation times and patient compliance rates. Once the optimum pressure for particular treatments is known, the Laplace Law method described in this paper can be used to deliver those average pressures to all patients. This paper also presents the results of a small scale audit of measurements taken for the fabrication of pressure garments in two UK hospitals. This audit highlights the wide range of pressures that are exerted using the Reduction Factor method and that manual pattern 'smoothing' can dramatically change the actual Reduction Factors used. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
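The Laplace Law calculation at the heart of the second method can be sketched as below; the linear fabric tension model and its constant are assumptions for illustration, whereas a real design tool would use the measured tension-extension profile of the fabric:

```python
# Laplace law for a cylindrical limb: pressure = fabric tension / radius.
import math

def exerted_pressure_mmHg(limb_circ_cm, garment_circ_cm, k_N_per_m=800.0):
    """k_N_per_m: assumed fabric tension per unit strain (linear model)."""
    strain = (limb_circ_cm - garment_circ_cm) / garment_circ_cm
    tension = k_N_per_m * strain                      # N per metre of seam
    radius = (limb_circ_cm / 100.0) / (2 * math.pi)   # limb treated as circular
    pressure_pa = tension / radius                    # Laplace law
    return pressure_pa / 133.322                      # Pa -> mmHg

# A 10% Reduction Factor on a 30 cm limb circumference:
print(round(exerted_pressure_mmHg(30.0, 27.0), 1), "mmHg")  # ~14.0
```

Inverting this relation (solve for the garment circumference that delivers a target pressure) is what the Laplace Law design method does.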
a Hyperspectral Image Classification Method Using Isomap and Rvm
NASA Astrophysics Data System (ADS)
Chang, H.; Wang, T.; Fang, H.; Su, Y.
2018-04-01
Classification is one of the most significant applications of hyperspectral image processing and even remote sensing. Though various algorithms have been proposed to implement and improve this application, there are still drawbacks in traditional classification methods. Thus further investigations on some aspects, such as dimension reduction, data mining, and rational use of spatial information, should be developed. In this paper, we used a widely utilized global manifold learning approach, isometric feature mapping (ISOMAP), to address the intrinsic nonlinearities of hyperspectral images for dimension reduction. Considering the impropriety of Euclidean distance for spectral measurement, we applied the spectral angle (SA) as a substitute when constructing the neighbourhood graph. Then, relevance vector machines (RVM) were introduced to implement classification instead of support vector machines (SVM), for simplicity, generalization and sparsity. Therefore, a probability result could be obtained rather than a less convincing binary result. Moreover, to take into account the spatial information of the hyperspectral image, we employ a spatial vector formed by the ratios of different classes around each pixel. Finally, we combined the probability results and spatial factors with a criterion to decide the final classification result. To verify the proposed method, we implemented multiple experiments with standard hyperspectral images and compared against some other methods. The results and different evaluation indexes illustrate the effectiveness of our method.
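A sketch of the pipeline under stated assumptions: a scikit-learn version whose Isomap accepts a callable metric (0.22+), and logistic regression standing in for the RVM probability output (scikit-learn ships no RVM):

```python
# ISOMAP embedding with spectral angle as the dissimilarity, followed
# by a probabilistic classifier on the embedded coordinates.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.linear_model import LogisticRegression

def spectral_angle(a, b):
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

X = np.random.rand(200, 103)                   # pixels x bands (toy stand-in)
y = (X[:, :50].mean(axis=1) > 0.5).astype(int) # toy labels

emb = Isomap(n_neighbors=10, n_components=8, metric=spectral_angle)
Z = emb.fit_transform(X)                       # nonlinear dimension reduction
proba = LogisticRegression(max_iter=1000).fit(Z, y).predict_proba(Z)
```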
Hierarchical optimization for neutron scattering problems
Bao, Feng; Archibald, Rick; Bansal, Dipanshu; ...
2016-03-14
In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.
Levels of reduction in van Manen's phenomenological hermeneutic method: an empirical example.
Heinonen, Kristiina
2015-05-01
To describe reduction as a method using van Manen's phenomenological hermeneutic research approach. Reduction involves several levels that can be distinguished for their methodological usefulness. Researchers can use reduction in different ways and dimensions for their methodological needs. A study of Finnish multiple-birth families in which open interviews (n=38) were conducted with public health nurses, family care workers and parents of twins. A systematic literature and knowledge review showed there were no articles on multiple-birth families that used van Manen's method. Discussion: The phenomena of the 'lifeworlds' of multiple-birth families consist of three core essential themes as told by parents: 'a state of constant vigilance', 'ensuring that they can continue to cope' and 'opportunities to share with other people'. Reduction provides the opportunity to carry out in-depth phenomenological hermeneutic research and understand people's lives. It helps to keep research stages separate but also enables a consolidated view. Social care and healthcare professionals have to hear parents' voices better to comprehensively understand their situation; they need further tools and training to be able to empower parents of twins. This paper adds an empirical example to the discussion of phenomenology, hermeneutic study and reduction as a method. It opens up reduction for researchers to exploit.
Methodological and hermeneutic reduction - a study of Finnish multiple-birth families.
Heinonen, Kristiina
2015-07-01
To describe reduction as a method in methodological and hermeneutic reduction and the hermeneutic circle using van Manen's principles, with the empirical example of the lifeworlds of multiple-birth families in Finland. Reduction involves several levels that can be distinguished for their methodological usefulness. Researchers can use reduction in different ways and dimensions for their methodological needs. Open interviews with public health nurses, family care workers and parents of twins. The systematic literature and knowledge review shows there were no articles on multiple-birth families that used van Manen's method. This paper presents reduction as a method that uses the hermeneutic circle. The lifeworlds of multiple-birth families consist of three core themes: 'A state of constant vigilance'; 'Ensuring that they can continue to cope'; and 'Opportunities to share with other people'. Reduction allows us to perform deep phenomenological-hermeneutic research and understand people's lifeworlds. It helps to keep research stages separate but also enables a consolidated view. Social care and healthcare professionals have to hear parents' voices better to comprehensively understand their situation; they also need further tools and training to be able to empower parents of twins. The many variations in adapting reduction mean its use can be very complex and confusing. This paper adds to the discussion of phenomenology, hermeneutic study and reduction.
NASA Astrophysics Data System (ADS)
Kim, Jeonglae; Pope, Stephen B.
2014-05-01
A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.
Nagarajan, Mahesh B.; Coan, Paola; Huber, Markus B.; Diemoz, Paul C.; Wismüller, Axel
2015-01-01
Phase contrast X-ray computed tomography (PCI-CT) has been demonstrated as a novel imaging technique that can visualize human cartilage with high spatial resolution and soft tissue contrast. Different textural approaches have been previously investigated for characterizing chondrocyte organization on PCI-CT to enable classification of healthy and osteoarthritic cartilage. However, the large size of feature sets extracted in such studies motivates an investigation into algorithmic feature reduction for computing efficient feature representations without compromising their discriminatory power. For this purpose, geometrical feature sets derived from the scaling index method (SIM) were extracted from 1392 volumes of interest (VOI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. The extracted feature sets were subject to linear and non-linear dimension reduction techniques as well as feature selection based on evaluation of mutual information criteria. The reduced feature set was subsequently used in a machine learning task with support vector regression to classify VOIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results show that the classification performance achieved by 9-D SIM-derived geometric feature sets (AUC: 0.96 ± 0.02) can be maintained with 2-D representations computed from both dimension reduction and feature selection (AUC values as high as 0.97 ± 0.02). Thus, such feature reduction techniques can offer a high degree of compaction to large feature sets extracted from PCI-CT images while maintaining their ability to characterize the underlying chondrocyte patterns. PMID:25710875
Chaos and Robustness in a Single Family of Genetic Oscillatory Networks
Fu, Daniel; Tan, Patrick; Kuznetsov, Alexey; Molkov, Yaroslav I.
2014-01-01
Genetic oscillatory networks can be mathematically modeled with delay differential equations (DDEs). Interpreting genetic networks with DDEs gives a more intuitive understanding from a biological standpoint. However, it presents a problem mathematically, for DDEs are by construction infinite-dimensional and thus cannot be analyzed using methods common to systems of ordinary differential equations (ODEs). In our study, we address this problem by developing a method for reducing infinite-dimensional DDEs to two- and three-dimensional systems of ODEs. We find that the three-dimensional reductions provide qualitative improvements over the two-dimensional reductions. We find that the reducibility of a DDE corresponds to its robustness. For non-robust DDEs that exhibit high-dimensional dynamics, we calculate analytic dimension lines to predict the dependence of the DDEs' correlation dimension on parameters. From these lines, we deduce that the correlation dimension of non-robust DDEs grows linearly with the delay. On the other hand, for robust DDEs, we find that the period of oscillation grows linearly with delay. We find that DDEs with exclusively negative feedback are robust, whereas DDEs with feedback that changes its sign are not robust. We find that non-saturable degradation damps oscillations and narrows the range of parameter values for which oscillations exist. Finally, we deduce that natural genetic oscillators with highly-regular periods likely have solely negative feedback. PMID:24667178
Unsteady Flow Simulation: A Numerical Challenge
2003-03-01
...drive to convergence the numerical unsteady term. The time marching procedure is based on the approximate implicit Newton method for systems of nonlinear equations ... computed through analytical derivatives of S. The linear system stemming from equation (3) is solved at each integration step by the same iterative method ... significant reduction of memory usage, thanks to the reduced dimensions of the linear system matrix during the implicit marching of the solution.
Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection
Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad
2014-01-01
Feature rankings are often used for supervised dimension reduction especially when discriminating power of each feature is of interest, dimensionality of dataset is extremely high, or computational power is limited to perform more complicated methods. In practice, it is recommended to start dimension reduction via simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such feature ranking method. We study whether the classifiers influence the SVC rankings or the discriminative power of features themselves has a dominant impact on the final rankings. We show the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study if heterogeneous classifiers ensemble approaches provide more unbiased rankings and if they improve final classification performance. Furthermore, we calculate an empirical prediction performance loss for using the same classifier in SVC feature ranking and final classification from the optimal choices. PMID:25177107
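A minimal SVC-ranking sketch, assuming scikit-learn; the choice of base classifier (here a shallow decision tree) is exactly the free parameter whose influence the paper studies:

```python
# Single Variable Classifier ranking: score each feature by the
# cross-validated accuracy of a classifier trained on it alone.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def svc_ranking(X, y, clf=None, cv=5):
    """X: (n_samples, n_features). Returns feature indices, best first."""
    clf = clf or DecisionTreeClassifier(max_depth=3)
    scores = [cross_val_score(clf, X[:, [j]], y, cv=cv).mean()
              for j in range(X.shape[1])]
    return np.argsort(scores)[::-1]
```

Per the abstract, the classifier used here need not be the one used for final classification.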
Davalos, Angel D; Luben, Thomas J; Herring, Amy H; Sacks, Jason D
2017-02-01
Air pollution epidemiology traditionally focuses on the relationship between individual air pollutants and health outcomes (e.g., mortality). To account for potential copollutant confounding, individual pollutant associations are often estimated by adjusting or controlling for other pollutants in the mixture. Recently, the need to characterize the relationship between health outcomes and the larger multipollutant mixture has been emphasized, in an attempt to better protect public health and inform more sustainable air quality management decisions. New and innovative statistical methods for examining multipollutant exposures were identified through a broad literature search, with a specific focus on statistical approaches currently used in epidemiologic studies of short-term exposures to criteria air pollutants (i.e., particulate matter, carbon monoxide, sulfur dioxide, nitrogen dioxide, and ozone). Five broad classes of statistical approaches were identified for examining associations between short-term multipollutant exposures and health outcomes: additive main effects, effect measure modification, unsupervised dimension reduction, supervised dimension reduction, and nonparametric methods. These approaches are characterized, including their advantages and limitations in different epidemiologic scenarios. By highlighting the characteristics of various studies in which multipollutant statistical methods have been used, this review provides epidemiologists and biostatisticians with a resource to aid in the selection of the most suitable statistical method for examining multipollutant exposures. Published by Elsevier Inc.
The Allusion of the Gene: Misunderstandings of the Concepts Heredity and Gene
ERIC Educational Resources Information Center
Falk, Raphael
2014-01-01
Life sciences became Biology, a formal scientific discipline, at the turn of the nineteenth century, when it adopted the methods of reductive physics and chemistry. Mendel's hypothesis of inheritance of discrete factors further introduced a quantitative reductionist dimension into biology. In 1910 Johannsen differentiated between the…
Restoration of dimensional reduction in the random-field Ising model at five dimensions
NASA Astrophysics Data System (ADS)
Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas
2017-04-01
The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D - 2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D = 5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D < 6 to their values in the pure Ising model at D - 2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.
2016-04-01
Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the ability of OCT to detect such changes using computer texture analysis based on Haralick texture features, fractal dimension, and the complex directional field method applied to different tissues. These features were used to identify spatial characteristics that differentiate healthy tissue from various skin cancers in cross-sectional OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing; here, an interval type-II fuzzy anisotropic diffusion algorithm was used for speckle noise reduction in the OCT images. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field, calculated by the local gradient methodology, to increase the quality of the diagnostic method. The complex directional field (like the "classical" directional field) describes an image as a set of directions. Because malignant tissue grows anisotropically, principal grooves may be observed on dermoscopic images, implying the possible existence of principal directions on OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy tissue. The problem of distinguishing melanoma from nevi is addressed in this work owing to the large quantity of experimental data (143 OCT images of tumors including basal cell carcinoma (BCC), malignant melanoma (MM), and nevi). We obtained a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select regions of interest automatically.
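To make the box-counting step concrete, here is a minimal Python sketch of estimating a fractal dimension from a binary tissue mask; the function name, box sizes, and NumPy implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a 2-D binary mask.

    For each box size s, count the s-by-s boxes containing at least one
    foreground pixel, then fit log N(s) against log(1/s); the slope
    estimates the dimension. Assumes a nonempty mask.
    """
    counts = []
    for s in box_sizes:
        # Trim so the image tiles evenly into s-by-s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```

Roughly, a smooth outline yields a slope near 1 (a filled region near 2), while rougher, anisotropically grown boundaries push the estimate upward.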
Radar cross-section reduction based on an iterative fast Fourier transform optimized metasurface
NASA Astrophysics Data System (ADS)
Song, Yi-Chuan; Ding, Jun; Guo, Chen-Jiang; Ren, Yu-Hui; Zhang, Jia-Kai
2016-07-01
A novel polarization-insensitive metasurface with over 25 dB monostatic radar cross-section (RCS) reduction is introduced. The proposed metasurface comprises carefully arranged unit cells with spatially varied dimensions, which enables approximately uniform diffusion of incoming electromagnetic (EM) energy and reduces the threat from bistatic radar systems. An iterative fast Fourier transform (FFT) method from conventional antenna array pattern synthesis is innovatively applied to find the best arrangement of unit cell geometry parameters. Finally, a metasurface sample is fabricated and tested to validate the RCS reduction behavior predicted by the full-wave simulation software Ansys HFSS™, and excellent agreement is observed.
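The iterative FFT synthesis loop can be sketched as an alternating projection between far-field and element constraints. The following is a generic illustration of that technique (unit-amplitude elements, a flat sidelobe mask), not the paper's implementation or its unit-cell mapping:

```python
import numpy as np

def iterative_fft_synthesis(n_elements=64, mask_db=-25.0, n_iter=200, seed=0):
    """Alternating-projection sketch: find unit-amplitude element phases
    whose array factor (computed via FFT) stays below a relative mask."""
    rng = np.random.default_rng(seed)
    excitation = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_elements))
    n_fft = 8 * n_elements                       # oversampled far-field grid
    for _ in range(n_iter):
        pattern = np.fft.fft(excitation, n_fft)  # array factor samples
        mag = np.abs(pattern)
        limit = mag.max() * 10.0 ** (mask_db / 20.0)
        # Far-field projection: clip magnitudes that exceed the mask.
        over = mag > limit
        pattern[over] *= limit / mag[over]
        # Element projection: back-transform, keep phases only.
        elem = np.fft.ifft(pattern)[:n_elements]
        excitation = np.exp(1j * np.angle(elem))
    return excitation
```

Each pass clips pattern peaks above the mask and then restores the unit-amplitude element constraint, so only the phases evolve toward a diffuse pattern.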
Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.
Ly, Cheng
2015-12-01
Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity and intrinsic heterogeneity has yet to be considered systematically despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneities affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction method are further developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.
Sample Dimensionality Effects on d' and Proportion of Correct Responses in Discrimination Testing.
Bloom, David J; Lee, Soo-Yeun
2016-09-01
Products in the food and beverage industry have varying levels of dimensionality ranging from pure water to multicomponent food products, which can modify sensory perception and possibly influence discrimination testing results. The objectives of the study were to determine the impact of (1) sample dimensionality and (2) complex formulation changes on the d' and proportion of correct response of the 3-AFC and triangle methods. Two experiments were conducted using 47 prescreened subjects who performed either triangle or 3-AFC test procedures. In Experiment I, subjects performed 3-AFC and triangle tests using model solutions with different levels of dimensionality. Samples increased in dimensionality from 1-dimensional sucrose in water solution to 3-dimensional sucrose, citric acid, and flavor in water solution. In Experiment II, subjects performed 3-AFC and triangle tests using 3-dimensional solutions. Sample pairs differed in all 3 dimensions simultaneously to represent complex formulation changes. Two forms of complexity were compared: dilution, where all dimensions decreased in the same ratio, and compensation, where a dimension was increased to compensate for a reduction in another. The proportion of correct responses decreased for both methods when the dimensionality was increased from 1- to 2-dimensional samples. No reduction in correct responses was observed from 2- to 3-dimensional samples. No significant differences in d' were demonstrated between the 2 methods when samples with complex formulation changes were tested. Results reveal an impact on proportion of correct responses due to sample dimensionality and should be explored further using a wide range of sample formulations. © 2016 Institute of Food Technologists®
On the dimension of complex responses in nonlinear structural vibrations
NASA Astrophysics Data System (ADS)
Wiebe, R.; Spottswood, S. M.
2016-07-01
The ability to accurately model engineering systems under extreme dynamic loads would prove a major breakthrough in many aspects of aerospace, mechanical, and civil engineering. Extreme loads frequently induce both nonlinearities and coupling which increase the complexity of the response and the computational cost of finite element models. Dimension reduction has recently gained traction and promises the ability to distill dynamic responses down to a minimal dimension without sacrificing accuracy. In this context, the dimensionality of a response is related to the number of modes needed in a reduced order model to accurately simulate the response. Thus, an important step is characterizing the dimensionality of complex nonlinear responses of structures. In this work, the dimensionality of the nonlinear response of a post-buckled beam is investigated. Significant detail is dedicated to carefully introducing the experiment, the verification of a finite element model, and the dimensionality estimation algorithm as it is hoped that this system may help serve as a benchmark test case. It is shown that with minor modifications, the method of false nearest neighbors can quantitatively distinguish between the response dimension of various snap-through, non-snap-through, random, and deterministic loads. The state-space dimension of the nonlinear system in question increased from 2 to 10 as the system response moved from simple, low-level harmonic to chaotic snap-through. Beyond the problem studied herein, the techniques developed will serve as a prescriptive guide in developing fast and accurate dimensionally reduced models of nonlinear systems, and eventually as a tool for adaptive dimension-reduction in numerical modeling. The results are especially relevant in the aerospace industry for the design of thin structures such as beams, panels, and shells, which are all capable of spatio-temporally complex dynamic responses that are difficult and computationally expensive to model.
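The false-nearest-neighbors test can be sketched compactly: embed the measured series with increasing delay dimension and flag neighbors that separate strongly when one more coordinate is appended. A minimal version, assuming SciPy and an illustrative threshold rtol:

```python
import numpy as np
from scipy.spatial import cKDTree

def fnn_fraction(x, dim, tau=1, rtol=15.0):
    """Fraction of false nearest neighbors for a delay embedding of
    dimension `dim` of the 1-D numpy array `x`: pairs that are close in
    R^dim but separate strongly in the (dim+1)-th delay coordinate."""
    n = len(x) - dim * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    nxt = x[dim * tau : dim * tau + n]      # the extra coordinate
    tree = cKDTree(emb)
    dist, idx = tree.query(emb, k=2)        # k=2: nearest neighbor != self
    d, j = dist[:, 1], idx[:, 1]
    d = np.where(d == 0, 1e-12, d)          # guard against duplicate points
    ratio = np.abs(nxt - nxt[j]) / d
    return np.mean(ratio > rtol)

# Sweep dim = 1, 2, ... and pick the first dimension where the
# false-neighbor fraction drops near zero.
```

Sweeping dim upward and choosing the first dimension where the false-neighbor fraction collapses gives the kind of response-dimension estimate discussed in the abstract.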
Neural Network Machine Learning and Dimension Reduction for Data Visualization
NASA Technical Reports Server (NTRS)
Liles, Charles A.
2014-01-01
Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Determining which input parameters have the greatest impact on the prediction of the model is often difficult, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
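As a concrete illustration of the two-dimensional mapping idea, here is a short scikit-learn sketch; the dataset, PCA pre-reduction, and t-SNE choice are assumptions for illustration, as the project itself does not specify these tools:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)           # 64-dimensional inputs
# PCA first: cheap linear reduction that strips noise dimensions.
X_pca = PCA(n_components=30).fit_transform(X)
# t-SNE then maps to 2-D for plotting; coloring by the output class
# shows which regions of input space drive each outcome.
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X_pca)
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y, s=8, cmap="tab10")
plt.colorbar(label="class")
plt.show()
```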
A New Method of Facial Expression Recognition Based on SPE Plus SVM
NASA Astrophysics Data System (ADS)
Ying, Zilu; Huang, Mingwei; Wang, Zhen; Wang, Zhewei
A novel method of facial expression recognition (FER) is presented, which uses stochastic proximity embedding (SPE) for data dimension reduction and a support vector machine (SVM) for expression classification. The proposed algorithm is applied to the Japanese Female Facial Expression (JAFFE) database for FER, and better performance is obtained compared with traditional algorithms such as PCA and LDA. The results further prove the effectiveness of the proposed algorithm.
NeatMap--non-clustering heat map alternatives in R.
Rajaram, Satwik; Oono, Yoshi
2010-01-22
The clustered heat map is the most popular means of visualizing genomic data. It compactly displays a large amount of data in an intuitive format that facilitates the detection of hidden structures and relations in the data. However, it is hampered by its use of cluster analysis which does not always respect the intrinsic relations in the data, often requiring non-standardized reordering of rows/columns to be performed post-clustering. This sometimes leads to uninformative and/or misleading conclusions. Often it is more informative to use dimension-reduction algorithms (such as Principal Component Analysis and Multi-Dimensional Scaling) which respect the topology inherent in the data. Yet, despite their proven utility in the analysis of biological data, they are not as widely used. This is at least partially due to the lack of user-friendly visualization methods with the visceral impact of the heat map. NeatMap is an R package designed to meet this need. NeatMap offers a variety of novel plots (in 2 and 3 dimensions) to be used in conjunction with these dimension-reduction techniques. Like the heat map, but unlike traditional displays of such results, it allows the entire dataset to be displayed while visualizing relations between elements. It also allows superimposition of cluster analysis results for mutual validation. NeatMap is shown to be more informative than the traditional heat map with the help of two well-known microarray datasets. NeatMap thus preserves many of the strengths of the clustered heat map while addressing some of its deficiencies. It is hoped that NeatMap will spur the adoption of non-clustering dimension-reduction algorithms.
Generation Algorithm of Discrete Line in Multi-Dimensional Grids
NASA Astrophysics Data System (ADS)
Du, L.; Ben, J.; Li, Y.; Wang, R.
2017-09-01
Discrete Global Grid Systems (DGGS) are a kind of digital multi-resolution earth reference model; structurally, they are conducive to the integration and mining of geospatial big data. Vector data are one of the important types of spatial data; only by discretization can they be applied in a grid system for processing and analysis. Based on some constraint conditions, this paper puts forward a strict definition of discrete lines and builds a mathematical model of them by a base-vector combination method. Transforming the mesh discrete line issue in n-dimensional grids into the issue of an optimal deviated path in n−1 dimensions using a hyperplane thereby realizes a dimension reduction in the expression of mesh discrete lines. On this basis, we designed a simple and efficient algorithm for dimension reduction and generation of the discrete lines. The experimental results show that our algorithm can be applied not only in the two-dimensional rectangular grid, but also in the two-dimensional hexagonal grid and the three-dimensional cubic grid. Meanwhile, when applied in the two-dimensional rectangular grid, it produces a discrete line that is more similar to the line in Euclidean space.
Kaluza-Klein cosmology from five-dimensional Lovelock-Cartan theory
NASA Astrophysics Data System (ADS)
Castillo-Felisola, Oscar; Corral, Cristóbal; del Pino, Simón; Ramírez, Francisca
2016-12-01
We study the Kaluza-Klein dimensional reduction of the Lovelock-Cartan theory in five-dimensional spacetime, with a compact dimension of S1 topology. We find cosmological solutions of the Friedmann-Robertson-Walker class in the reduced spacetime. The torsion and the fields arising from the dimensional reduction induce a nonvanishing energy-momentum tensor in four dimensions. We find solutions describing expanding, contracting, and bouncing universes. The model shows a dynamical compactification of the extra dimension in some regions of the parameter space.
Reliability optimization design of the gear modification coefficient based on the meshing stiffness
NASA Astrophysics Data System (ADS)
Wang, Qianqian; Wang, Hui
2018-04-01
Since the time-varying meshing stiffness of a gear system is the key factor affecting gear vibration, it is important to design the meshing stiffness to reduce vibration. Based on the effect of the gear modification coefficient on the meshing stiffness, and considering random parameters, the reliability optimization design of the gear modification is researched. The dimension reduction and point estimation method is used to estimate the moments of the limit state function, and the reliability is obtained by the fourth-moment method. Comparison of the dynamic amplitude results before and after optimization indicates that the research is useful for the reduction of vibration and noise and the improvement of reliability.
Integrand-level reduction of loop amplitudes by computational algebraic geometry methods
NASA Astrophysics Data System (ADS)
Zhang, Yang
2012-09-01
We present an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry. This algorithm uses (1) the Gröbner basis method to determine the basis for integrand-level reduction, and (2) the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts. The resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cuts, via polynomial fitting techniques. The basis determination part of the algorithm has been implemented in the Mathematica package BasisDet. The primary decomposition part can be readily carried out by algebraic geometry software, with the output of the package BasisDet. The algorithm works in both D = 4 and D = 4 − 2ε dimensions, and we present some two- and three-loop examples of applications of this algorithm.
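The Gröbner basis step can be illustrated with a toy ideal in SymPy; this is generic computer algebra for orientation, not the BasisDet package:

```python
from sympy import groebner, symbols

x, y = symbols("x y")
# Toy ideal with two generators; lex order triangularizes the system.
G = groebner([x**2 + y**2 - 1, x*y - 1], x, y, order="lex")
print(list(G))                        # the basis polynomials
# Dividing a polynomial by the basis gives quotients and a unique
# remainder, the normal form used to test ideal membership.
quotients, remainder = G.reduce(x**3 + y)
print(remainder)
```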
On iterative processes in the Krylov-Sonneveld subspaces
NASA Astrophysics Data System (ADS)
Ilin, Valery P.
2016-10-01
The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key ideas of the IDR algorithms consist in the construction of embedded Sonneveld subspaces, which have decreasing dimensions and use orthogonalization to some fixed subspace. Other independent approaches for the study and optimization of the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures with various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR methods in Sonneveld subspaces present an original interpretation of the modified algorithms in Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.
VAMPnets for deep learning of molecular kinetics.
Mardt, Andreas; Pasquali, Luca; Wu, Hao; Noé, Frank
2018-01-02
There is an increasing demand for computing the relevant structures, equilibria, and long-timescale kinetics of biomolecular processes, such as protein-drug binding, from high-throughput molecular dynamics simulations. Current methods employ transformation of simulated coordinates into structural features, dimension reduction, clustering the dimension-reduced data, and estimation of a Markov state model or related model of the interconversion rates between molecular structures. This handcrafted approach demands a substantial amount of modeling expertise, as poor decisions at any step will lead to large modeling errors. Here we employ the variational approach for Markov processes (VAMP) to develop a deep learning framework for molecular kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire mapping from molecular coordinates to Markov states, thus combining the whole data processing pipeline in a single end-to-end framework. Our method performs equally or better than state-of-the-art Markov modeling methods and provides easily interpretable few-state kinetic models.
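For contrast with the end-to-end VAMPnet, the final step of the handcrafted pipeline, estimating a Markov state model from cluster-assigned trajectory frames, reduces to a row-normalized transition count matrix. A minimal sketch with an illustrative toy trajectory:

```python
import numpy as np

def transition_matrix(dtraj, n_states, lag=1):
    """Row-stochastic transition matrix estimated from a discrete
    trajectory `dtraj` (cluster indices) at a given lag time."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0                 # avoid division by zero
    return counts / rows

dtraj = np.array([0, 0, 1, 2, 2, 1, 0, 0, 1, 2])   # toy cluster labels
print(transition_matrix(dtraj, n_states=3, lag=1))
```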
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng
An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction which combined feature selection with feature extraction was proposed. The feature selection step used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method, and the dimension of the feature space was reduced to 12. Classification of Chinese liquors was performed using a back-propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than that of LDA and BP-ANN. Finally, the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, with classification rates of 98.75% and 100%, respectively.
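A rough sketch of the kernel entropy component analysis step, which ranks kernel eigen-directions by their Rényi-entropy contribution λ_i (1ᵀe_i)² rather than by eigenvalue alone; the RBF kernel and its parameters here are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.metrics.pairwise import rbf_kernel

def keca(X, n_components=12, gamma=0.1):
    """Kernel entropy component analysis (sketch): keep the kernel
    eigen-directions with the largest entropy contribution
    lambda_i * (1^T e_i)^2, then project as in kernel PCA."""
    K = rbf_kernel(X, gamma=gamma)
    lam, E = eigh(K)                          # ascending eigenvalues
    lam, E = lam[::-1], E[:, ::-1]            # sort descending
    contrib = lam * E.sum(axis=0) ** 2        # entropy contribution per axis
    keep = np.argsort(contrib)[::-1][:n_components]
    # Projections of the training points onto the selected axes.
    return E[:, keep] * np.sqrt(np.clip(lam[keep], 0.0, None))
```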
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
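The multilevel variance reduction idea can be sketched generically: telescope the expectation across a model hierarchy so most samples are drawn on cheap levels and only a few on expensive corrections. The toy hierarchy below merely stands in for the reduced basis and HDG levels:

```python
import numpy as np

def mlmc_estimate(models, n_samples, seed=0):
    """Telescoping estimator E[Q_L] ~ E[Q_0] + sum_l E[Q_l - Q_(l-1)].

    models[l] maps a random input to the level-l output Q_l; coarse
    levels get many samples, fine corrections get few."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for level, n in enumerate(n_samples):
        xs = rng.standard_normal(n)           # inputs shared across levels
        fine = np.array([models[level](x) for x in xs])
        if level == 0:
            total += fine.mean()
        else:
            coarse = np.array([models[level - 1](x) for x in xs])
            total += (fine - coarse).mean()   # small-variance correction
    return total

# Toy hierarchy: increasingly accurate approximations of E[sin(X)] = 0.
models = [lambda x: x, lambda x: x - x**3 / 6, lambda x: np.sin(x)]
print(mlmc_estimate(models, n_samples=[20000, 2000, 200]))
```

Because consecutive levels are evaluated on the same random inputs, the corrections have small variance and need far fewer samples than the coarse level.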
A Graph-Embedding Approach to Hierarchical Visual Word Mergence.
Wang, Lei; Liu, Lingqiao; Zhou, Luping
2017-02-01
Appropriately merging visual words is an effective dimension reduction method for the bag-of-visual-words model in image classification. The approach of hierarchically merging visual words has been extensively employed, because it gives a fully determined merging hierarchy. Existing supervised hierarchical merging methods take different approaches and realize the merging process with various formulations. In this paper, we propose a unified hierarchical merging approach built upon the graph-embedding framework. Our approach is able to merge visual words for any scenario where a preferred structure and an undesired structure are defined, and can therefore effectively attend to all kinds of requirements for the word-merging process. In terms of computational efficiency, we show that our algorithm can seamlessly integrate a fast search strategy developed in our previous work and thus maintain the state-of-the-art merging speed. To the best of our knowledge, the proposed approach is the first one that addresses hierarchical visual word merging in such a flexible and unified manner. As demonstrated, it can maintain excellent image classification performance even after a significant dimension reduction, and it outperforms all existing comparable visual word-merging methods. In a broad sense, our work provides an open platform for applying, evaluating, and developing new criteria for hierarchical word-merging tasks.
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
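A minimal numpy reconstruction of the two-stage idea as the abstract describes it (QR decomposition of the class-centroid matrix, then classical LDA in the small reduced space); details such as regularization in the paper's actual algorithm may differ:

```python
import numpy as np

def lda_qr(X, y):
    """Stage 1: orthonormal basis (QR) of the d x k class-centroid matrix.
    Stage 2: classical LDA inside that k-dimensional subspace, which
    sidesteps the singular scatter matrices of high-dimensional data.
    Assumes at least two samples per class."""
    classes = np.unique(y)
    centroids = np.column_stack([X[y == c].mean(axis=0) for c in classes])
    Q, _ = np.linalg.qr(centroids)            # d x k orthonormal basis
    Z = X @ Q                                 # project data: n x k
    mean = Z.mean(axis=0)
    Sw = sum(np.cov(Z[y == c].T) * (np.sum(y == c) - 1) for c in classes)
    Sb = sum(np.sum(y == c) * np.outer(Z[y == c].mean(0) - mean,
                                       Z[y == c].mean(0) - mean)
             for c in classes)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1][:len(classes) - 1]
    return Q @ evecs[:, order].real           # d x (k-1) transformation
```

The expensive decomposition is applied only to the small centroid matrix, which is what makes the scheme scale to image and text data.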
2010-01-01
Background: The purpose of this study was to reduce the number of items, create a scoring method and assess the psychometric properties of the Freedom from Glasses Value Scale (FGVS), which measures benefits of freedom from glasses perceived by cataract and presbyopic patients after multifocal intraocular lens (IOL) surgery. Methods: The 21-item FGVS, developed simultaneously in French and Spanish, was administered by phone during an observational study to 152 French and 152 Spanish patients who had undergone cataract or presbyopia surgery at least 1 year before the study. Reduction of items and creation of the scoring method employed statistical methods (principal component analysis, multitrait analysis) and content analysis. Psychometric properties (validation of the structure, internal consistency reliability, and known-group validity) of the resulting version were assessed in the pooled population and per country. Results: One item was deleted and 3 were kept but not aggregated in a dimension. The other 17 items were grouped into 2 dimensions ('global evaluation', 9 items; 'advantages', 8 items) and divided into 5 sub-dimensions, with higher scores indicating higher benefit of surgery. The structure was validated (good item convergent and discriminant validity). Internal consistency reliability was good for all dimensions and sub-dimensions (Cronbach's alphas above 0.70). The FGVS was able to discriminate between patients wearing glasses or not after surgery (higher scores for patients not wearing glasses). FGVS scores were significantly higher in Spain than France; however, the measure had similar psychometric performances in both countries. Conclusions: The FGVS is a valid and reliable instrument measuring benefits of freedom from glasses perceived by cataract and presbyopic patients after multifocal IOL surgery. PMID:20497555
Multiresolution generalized N dimension PCA for ultrasound image denoising
2014-01-01
Background: Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving image visual quality are vital to obtaining better diagnosis. Method: In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA, as a multilinear subspace learning method, is used for denoising. Each level is combined to achieve the final denoised image based on Laplacian pyramids. Results: The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion: Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving the structure. Our method is also robust for images with a much higher level of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
Riblets for aircraft skin-friction reduction
NASA Technical Reports Server (NTRS)
Walsh, Michael J.
1986-01-01
Energy conservation and aerodynamic efficiency are the driving forces behind research into methods to reduce turbulent skin friction drag on aircraft fuselages. Fuselage skin friction reductions as small as 10 percent provide the potential for a 250 million dollar per year fuel savings for the commercial airline fleet. One passive drag reduction concept which is relatively simple to implement and retrofit is that of longitudinally grooved surfaces aligned with the stream velocity. These grooves (riblets) have heights and spacings on the order of the turbulent wall streak and burst dimensions. The riblet performance (8 percent net drag reduction thus far), sensitivity to operational/application considerations such as yaw and Reynolds number variation, an alternative fabrication technique, results of extensive parametric experiments for geometrical optimization, and flight test applications are summarized.
A mixed finite difference/Galerkin method for three-dimensional Rayleigh-Benard convection
NASA Technical Reports Server (NTRS)
Buell, Jeffrey C.
1988-01-01
A fast and accurate numerical method, for nonlinear conservation equation systems whose solutions are periodic in two of the three spatial dimensions, is presently implemented for the case of Rayleigh-Benard convection between two rigid parallel plates in the parameter region where steady, three-dimensional convection is known to be stable. High-order streamfunctions enable the reduction of the system of five partial differential equations to a system of only three. Numerical experiments are presented which verify both the expected convergence rates and the absolute accuracy of the method.
EPR oximetry in three spatial dimensions using sparse spin distribution
NASA Astrophysics Data System (ADS)
Som, Subhojit; Potter, Lee C.; Ahmad, Rizwan; Vikram, Deepti S.; Kuppusamy, Periannan
2008-08-01
A method is presented to use continuous wave electron paramagnetic resonance imaging for rapid measurement of oxygen partial pressure in three spatial dimensions. A particulate paramagnetic probe is employed to create a sparse distribution of spins in a volume of interest. Information encoding location and spectral linewidth is collected by varying the spatial orientation and strength of an applied magnetic gradient field. Data processing exploits the spatial sparseness of spins to detect voxels with nonzero spin and to estimate the spectral linewidth for those voxels. The parsimonious representation of spin locations and linewidths permits an order of magnitude reduction in data acquisition time, compared to four-dimensional tomographic reconstruction using traditional spectral-spatial imaging. The proposed oximetry method is experimentally demonstrated for a lithium octa-n-butoxy naphthalocyanine (LiNc-BuO) probe using an L-band EPR spectrometer.
T-duality constraints on higher derivatives revisited
NASA Astrophysics Data System (ADS)
Hohm, Olaf; Zwiebach, Barton
2016-04-01
We ask to what extent the higher-derivative corrections of string theory are constrained by T-duality. The seminal early work by Meissner tests T-duality by reduction to one dimension using a distinguished choice of field variables in which the bosonic string action takes a Gauss-Bonnet-type form. By analyzing all field redefinitions that may or may not be duality covariant and may or may not be gauge covariant, we extend the procedure to test T-duality starting from an action expressed in arbitrary field variables. We illustrate the method by showing that it determines uniquely the first-order α' corrections of the bosonic string, up to terms that vanish in one dimension. We also use the method to glean information about the O(α'^2) corrections in the double field theory with Green-Schwarz deformation.
NASA Technical Reports Server (NTRS)
Handschuh, Katherine M.; Miller, Sandi G.; Sinnott, Matthew J.; Kohlman, Lee W.; Roberts, Gary D.; Pereira, J. Michael; Ruggeri, Charles R.
2014-01-01
Application of polymer matrix composite materials for jet engine fan blades is becoming attractive as an alternative to metallic blades, particularly for large engines where significant weight savings are realized on moving to a composite structure. However, the weight benefit of the composite is offset by a reduction of aerodynamic efficiency resulting from a necessary increase in blade thickness relative to titanium blades. Blade dimensions are largely driven by resistance to damage on bird strike. Further development of the composite material is necessary to allow composite blade designs to approximate the dimensions of a metallic fan blade. The reduction in thickness over state-of-the-art composite blades is expected to translate into structural weight reduction, improved aerodynamic efficiency, and therefore reduced fuel consumption. This paper presents test article design, subcomponent blade leading edge fabrication, test method development, and initial results from ballistic impact of a gelatin projectile on the leading edge of composite fan blades. The simplified test article geometry was developed to realistically simulate a blade leading edge while decreasing fabrication complexity. Impact data is presented on baseline composite blades and toughened blades, where a considerable improvement in impact resistance was recorded.
NASA Technical Reports Server (NTRS)
Miller, Sandi G.; Handschuh, Katherine; Sinnott, Matthew J.; Kohlman, Lee W.; Roberts, Gary D.; Martin, Richard E.; Ruggeri, Charles R.; Pereira, J. Michael
2015-01-01
Application of polymer matrix composite materials for jet engine fan blades is becoming attractive as an alternative to metallic blades, particularly for large engines where significant weight savings are realized on moving to a composite structure. However, the weight benefit of the composite is offset by a reduction of aerodynamic efficiency resulting from a necessary increase in blade thickness relative to titanium blades. Blade dimensions are largely driven by resistance to damage on bird strike. Further development of the composite material is necessary to allow composite blade designs to approximate the dimensions of a metallic fan blade. The reduction in thickness over state-of-the-art composite blades is expected to translate into structural weight reduction, improved aerodynamic efficiency, and therefore reduced fuel consumption. This paper presents test article design, subcomponent blade leading edge fabrication, test method development, and initial results from ballistic impact of a gelatin projectile on the leading edge of composite fan blades. The simplified test article geometry was developed to realistically simulate a blade leading edge while decreasing fabrication complexity. Impact data is presented on baseline composite blades and toughened blades, where a considerable improvement in impact resistance was recorded.
NASA Astrophysics Data System (ADS)
Anh-Nga, Nguyen T.; Tuan-Anh, Nguyen; Thanh-Quoc, Nguyen; Ha, Do Tuong
2018-04-01
Copper nanoparticles, due to their special properties, small dimensions, and low-cost preparation, have many potential applications such as in optics, electronics, catalysis, sensors, and antibacterial agents. In this study, copper nanoparticles were synthesized by a chemical reduction method under different conditions in order to investigate the optimum conditions giving the smallest particle diameters. The synthesis used copper(II) acetate salt as the precursor, ascorbic acid as the reducing agent, and glycerin and polyvinylpyrrolidone (PVP) as protector and stabilizer. Ultrasonic assistance was considered a significant factor affecting the size of the synthesized particles. The results showed that copper nanoparticles were successfully synthesized with diameters as small as 20-40 nm, under ultrasonic conditions of 48 kHz frequency, 20 minutes of treatment time, and 65-70 °C. The synthesized copper nanoparticles were characterized by optical absorption spectra, scanning electron microscopy (SEM), and Fourier transform infrared spectrometry.
ERIC Educational Resources Information Center
Klasen, Stephan
2005-01-01
The aim of this Working Paper is to broaden the debate on "pro-poor growth". An exclusive focus on the income dimension of poverty has neglected the non-income dimensions. After an examination of prominent views on the linkages between economic growth, inequality, and poverty reduction this paper discusses the proper definition and…
The Kadomtsev–Petviashvili equation as a source of integrable model equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maccari, A.
1996-12-01
A new integrable and nonlinear partial differential equation (PDE) in 2+1 dimensions is obtained, by an asymptotically exact reduction method based on Fourier expansion and spatiotemporal rescaling, from the Kadomtsev–Petviashvili equation. The integrability property is explicitly demonstrated by exhibiting the corresponding Lax pair, which is obtained by applying the reduction technique to the Lax pair of the Kadomtsev–Petviashvili equation. This model equation is likely to be of applicative relevance, because it may be considered a consistent approximation of a large class of nonlinear evolution PDEs. © 1996 American Institute of Physics.
Functional traits, convergent evolution, and periodic tables of niches.
Winemiller, Kirk O; Fitzgerald, Daniel B; Bower, Luke M; Pianka, Eric R
2015-08-01
Ecology is often said to lack general theories sufficiently predictive for applications. Here, we examine the concept of a periodic table of niches and feasibility of niche classification schemes from functional trait and performance data. Niche differences and their influence on ecological patterns and processes could be revealed effectively by first performing data reduction/ordination analyses separately on matrices of trait and performance data compiled according to logical associations with five basic niche 'dimensions', or aspects: habitat, life history, trophic, defence and metabolic. Resultant patterns then are integrated to produce interpretable niche gradients, ordinations and classifications. Degree of scheme periodicity would depend on degrees of niche conservatism and convergence causing species clustering across multiple niche dimensions. We analysed a sample data set containing trait and performance data to contrast two approaches for producing niche schemes: species ordination within niche gradient space, and niche categorisation according to trait-value thresholds. Creation of niche schemes useful for advancing ecological knowledge and its applications will depend on research that produces functional trait and performance datasets directly related to niche dimensions along with criteria for data standardisation and quality. As larger databases are compiled, opportunities will emerge to explore new methods for data reduction, ordination and classification. © 2015 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
Reducing Water/Hull Drag By Injecting Air Into Grooves
NASA Technical Reports Server (NTRS)
Reed, Jason C.; Bushnell, Dennis M.; Weinstein, Leonard M.
1991-01-01
The proposed technique for reduction of friction drag on a hydrodynamic body involves the use of grooves and combinations of surfactants to control the motion of a layer on the surface of the body. The surface contains many rows of side-by-side, evenly spaced longitudinal grooves. The dimensions of the grooves and the sharpness of the tips in a specific case depend on the flow conditions about the vessel. The technique requires much less air than does the microbubble-injection method.
Variable cross-section windings for efficiency improvement of electric machines
NASA Astrophysics Data System (ADS)
Grachev, P. Yu; Bazarov, A. A.; Tabachinskiy, A. S.
2018-02-01
Implementation of energy-saving technologies in industry is impossible without improving the efficiency of electric machines. The article considers ways of improving the efficiency and reducing the mass and dimensions of electric machines with electronic control. Features of a compact winding design for stators and armatures are described. The influence of the compact winding on thermal and electrical processes is given. The finite element method was used in computer simulation.
Factor analytic reduction of the carotid-cardiac baroreflex parameters
NASA Technical Reports Server (NTRS)
Ludwig, David A.
1989-01-01
An accepted method for measuring the responsiveness of the carotid-cardiac baroreflex to arterial pressure changes is to artificially stimulate the baroreceptors in the neck. This is accomplished by using a pressurized neck cuff which constricts and distends the carotid artery and subsequently stimulates the baroreceptors. Nine physiological responses to this type of stimulation are quantified and used as indicators of the baroreflex. Thirty male humans between the ages of 27 and 46 underwent the carotid-cardiac baroreflex test. The data for the nine response parameters were analyzed by principal component factor analysis. The results of this analysis indicated that 93 percent of the total variance across all nine parameters could be explained in four dimensions. Examination of the factor loadings following an orthogonal rotation of the principal components indicated four well-defined dimensions. The first two dimensions reflected location points for R-R interval and carotid distending pressure, respectively. The third dimension was composed of measures reflecting the gain of the reflex. The fourth dimension was the ratio of the resting R-R interval to the R-R interval during simulated hypertension. The data suggest that the analysis of all nine baroreflex parameters is redundant.
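The analysis described, factor extraction followed by an orthogonal rotation, can be sketched with scikit-learn (version 0.24 or later for the varimax rotation); note that this uses maximum-likelihood factor analysis rather than the principal-component extraction of the original study, and the data below are placeholders:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# X: rows = subjects, columns = the nine response parameters
# (placeholder random data standing in for the measured values).
X = np.random.default_rng(0).normal(size=(30, 9))
Z = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=4, rotation="varimax").fit(Z)
loadings = fa.components_.T        # 9 parameters x 4 factors
# A large |loading| marks which parameters define each dimension.
print(np.round(loadings, 2))
```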
Random phase detection in multidimensional NMR.
Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C
2011-10-04
Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.
NASA Astrophysics Data System (ADS)
Yulianur, Alfiansyah; Fauzi, Amir; Humaira, Zaitun
2018-05-01
The ongoing changes of land use and the diminishing of open fields are projected to accelerate runoff rates, which decreases the opportunity for rainwater to infiltrate. This increases surface runoff into the channels, which may eventually overflow and inundate the surrounding area. Efforts are required to increase the infiltration of rainfall, and biopores could be one of the most effective methods to implement. The objective of this study is to evaluate the effect of biopores on the reduction of runoff discharge into the drainage channel and to determine whether that reduction could effectively lessen the channels' dimensions. The study was conducted in the southern part of Kopelma Darussalam, the Sektor Timur area, where several spots are submerged by inundation floods during the rainy season. A modified rational formula is used to calculate the surface runoff discharge on land without biopores, while the runoff discharge on land with biopores is calculated using a water balance formula. The number of biopores planned for the Sektor Timur area is 3350, each ∅10 cm in diameter and 80 cm deep. The results indicate that these biopores can reduce the runoff discharge by 27% on average, and that this reduction leads to a decrease of the drainage channel dimensions by 26.9% on average.
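The baseline discharge computation belongs to the rational method family; here is a minimal SI-units sketch of the standard formula (the study's specific modification and coefficients are not reproduced, and the example values are illustrative):

```python
def rational_runoff(c, intensity_mm_per_h, area_km2):
    """Peak discharge Q (m^3/s) by the rational method:
    Q = 0.278 * C * i * A, with i in mm/h and A in km^2."""
    return 0.278 * c * intensity_mm_per_h * area_km2

# Example: C = 0.7 (built-up area), i = 60 mm/h, A = 0.5 km^2.
print(rational_runoff(0.7, 60.0, 0.5))   # ~5.8 m^3/s
```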
Köppl, Tobias; Santin, Gabriele; Haasdonk, Bernard; Helmig, Rainer
2018-05-06
In this work, we consider two kinds of model reduction techniques to simulate blood flow through the largest systemic arteries, where a stenosis is located in a peripheral artery, i.e. in an artery located far away from the heart. For our simulations we place the stenosis in one of the tibial arteries belonging to the right lower leg (the right posterior tibial artery). The model reduction techniques that are used are, on the one hand, dimensionally reduced models (1-D and 0-D models, the so-called mixed-dimension model) and, on the other hand, surrogate models produced by kernel methods. Both methods are combined in such a way that the mixed-dimension models yield training data for the surrogate model, where the surrogate model is parametrised by the degree of narrowing of the peripheral stenosis. By means of a well-trained surrogate model, we show that simulation data can be reproduced with satisfactory accuracy and that parameter optimisation or state estimation problems can be solved in a very efficient way. Furthermore, it is demonstrated that a surrogate model enables us to present, after a very short simulation time, the impact of a varying degree of stenosis on blood flow, obtaining a speedup of several orders of magnitude over the full model. This article is protected by copyright. All rights reserved.
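The surrogate construction can be sketched with a generic kernel regressor trained on mixed-dimension model outputs; scikit-learn's KernelRidge stands in for the authors' kernel method, and the training data below are placeholders:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Training data that would come from the expensive mixed-dimension
# model: stenosis degree in [0, 1] -> a flow quantity of interest
# (the exponential below is a stand-in for simulator output).
degree = np.linspace(0.0, 0.9, 10).reshape(-1, 1)
flow = np.exp(-3.0 * degree).ravel()

surrogate = KernelRidge(kernel="rbf", alpha=1e-3, gamma=10.0).fit(degree, flow)
# Near-instant evaluation for any new stenosis degree.
print(surrogate.predict([[0.55]]))
```

Once trained, evaluating the surrogate costs microseconds, which is what enables the parameter optimisation and state estimation uses mentioned above.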
Multiview Locally Linear Embedding for Effective Medical Image Retrieval
Shen, Hualei; Tao, Dacheng; Ma, Dianfu
2013-01-01
Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval systems, among which visual features such as SIFT, LBP, and intensity histograms play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and thus traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or Laplacian Eigenmaps (LE) can be employed to reduce the "curse of dimensionality". Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from the different features. To justify the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods. PMID:24349277
Assessment of Schrodinger Eigenmaps for target detection
NASA Astrophysics Data System (ADS)
Dorado Munoz, Leidy P.; Messinger, David W.; Czaja, Wojtek
2014-06-01
Non-linear dimensionality reduction methods have been widely applied to hyperspectral imagery, both because the information can be represented in a lower dimension without loss and because the non-linear methods preserve the local geometry of the data while the dimension is reduced. One of these methods is Laplacian Eigenmaps (LE), which assumes that the data lie on a low-dimensional manifold embedded in a high-dimensional space. LE builds a nearest-neighbor graph, computes its Laplacian, and performs the eigendecomposition of the Laplacian. These eigenfunctions constitute a basis for the lower-dimensional space in which the geometry of the manifold is preserved. In addition to the reduction problem, LE has been widely used in tasks such as segmentation, clustering, and classification. In this regard, a new Schrodinger Eigenmaps (SE) method was developed and presented as a semi-supervised classification scheme in order to improve classification performance and take advantage of labeled data. SE is an algorithm built upon LE, where the former Laplacian operator is replaced by the Schrodinger operator. The Schrodinger operator includes a potential term V that, taking advantage of additional information such as labeled data, allows clustering of similar points. In this paper, we explore the idea of using SE in target detection. We present a framework where the potential term V is defined as a barrier potential: a diagonal matrix encoding the spatial position of the target. The detection performance is evaluated by using different targets and different hyperspectral scenes.
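The three LE steps listed above (nearest-neighbor graph, graph Laplacian, eigendecomposition) in a minimal SciPy/scikit-learn sketch; the neighborhood size and other parameters are illustrative:

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmaps(X, n_components=2, k=10):
    """kNN graph -> graph Laplacian -> smallest nontrivial eigenvectors."""
    W = kneighbors_graph(X, k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T)                  # symmetrize the adjacency
    L = laplacian(W, normed=True)
    # Smallest eigenpairs; which="SM" is simple but can be slow on large
    # graphs (shift-invert is the usual speedup). Skip the trivial
    # constant eigenvector at eigenvalue 0.
    vals, vecs = eigsh(L, k=n_components + 1, which="SM")
    return vecs[:, 1 : n_components + 1]
```

The Schrodinger variant would add the potential matrix V to L before the eigendecomposition, which is what pulls labeled or target-marked points together in the embedding.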
Berdeaux, Gilles; Meunier, Juliette; Arnould, Benoit; Viala-Danten, Muriel
2010-05-24
The purpose of this study was to reduce the number of items, create a scoring method and assess the psychometric properties of the Freedom from Glasses Value Scale (FGVS), which measures benefits of freedom from glasses perceived by cataract and presbyopic patients after multifocal intraocular lens (IOL) surgery. The 21-item FGVS, developed simultaneously in French and Spanish, was administered by phone during an observational study to 152 French and 152 Spanish patients who had undergone cataract or presbyopia surgery at least 1 year before the study. Reduction of items and creation of the scoring method employed statistical methods (principal component analysis, multitrait analysis) and content analysis. Psychometric properties (validation of the structure, internal consistency reliability, and known-group validity) of the resulting version were assessed in the pooled population and per country. One item was deleted and 3 were kept but not aggregated in a dimension. The other 17 items were grouped into 2 dimensions ('global evaluation', 9 items; 'advantages', 8 items) and divided into 5 sub-dimensions, with higher scores indicating higher benefit of surgery. The structure was validated (good item convergent and discriminant validity). Internal consistency reliability was good for all dimensions and sub-dimensions (Cronbach's alphas above 0.70). The FGVS was able to discriminate between patients wearing glasses or not after surgery (higher scores for patients not wearing glasses). FGVS scores were significantly higher in Spain than France; however, the measure had similar psychometric performances in both countries. The FGVS is a valid and reliable instrument measuring benefits of freedom from glasses perceived by cataract and presbyopic patients after multifocal IOL surgery.
A sparse grid based method for generative dimensionality reduction of high-dimensional data
NASA Astrophysics Data System (ADS)
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
Emotion computing using Word Mover's Distance features based on Ren_CECps.
Ren, Fuji; Liu, Ning
2018-01-01
In this paper, we propose an emotion-separated method (SeTF·IDF) to assign the emotion labels of sentences different values, which gives a better visual effect than the values represented by TF·IDF in the visualization of the multi-label Chinese emotional corpus Ren_CECps. Inspired by the substantial improvement in the visualization map produced by the changed distances among sentences, we are, to our knowledge, the first group to utilize the Word Mover's Distance (WMD) algorithm as a feature representation in Chinese text emotion classification. Our experiments show that in both the 80% training / 20% testing and 50% training / 50% testing experiments on Ren_CECps, WMD features obtain the best F1 scores and show a greater increase compared with feature vectors of the same dimension obtained by the dimension-reduced TF·IDF method. Comparison experiments on an English corpus also show the efficiency of WMD features in the cross-language field.
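Word Mover's Distance itself is available off the shelf; below is a hedged sketch using gensim's wmdistance on pretrained vectors (the vector set and sentences are illustrative, and depending on the gensim version WMD requires the pyemd or POT package):

```python
import gensim.downloader as api

# Pretrained vectors (an illustrative choice; any KeyedVectors work).
# Note: api.load downloads the model on first use.
wv = api.load("glove-wiki-gigaword-50")

doc1 = "the movie made me very happy".split()
doc2 = "this film filled me with joy".split()
doc3 = "stock prices fell sharply today".split()

# Word Mover's Distance: minimum cumulative travel cost between the
# two documents' embedded word clouds (lower = more similar).
print(wv.wmdistance(doc1, doc2))   # small distance: related emotion
print(wv.wmdistance(doc1, doc3))   # larger distance: unrelated topic
```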
Limited Rank Matrix Learning, discriminative dimension reduction and visualization.
Bunte, Kerstin; Schneider, Petra; Hammer, Barbara; Schleif, Frank-Michael; Villmann, Thomas; Biehl, Michael
2012-02-01
We present an extension of the recently introduced Generalized Matrix Learning Vector Quantization algorithm. In the original scheme, adaptive square matrices of relevance factors parameterize a discriminative distance measure. We extend the scheme to matrices of limited rank corresponding to low-dimensional representations of the data. This allows us to incorporate prior knowledge of the intrinsic dimension and to reduce the number of adaptive parameters efficiently. In particular, for very high-dimensional data, the limitation of the rank can reduce computation time and memory requirements significantly. Furthermore, two- or three-dimensional representations constitute an efficient visualization method for labeled data sets. The identification of a suitable projection is not treated as a pre-processing step but as an integral part of the supervised training. Several real-world data sets serve as an illustration and demonstrate the usefulness of the suggested method. Copyright © 2011 Elsevier Ltd. All rights reserved.
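A minimal sketch of the limited-rank distance at the heart of the scheme (names and shapes are illustrative): with Omega of shape k x d and k << d, the learned metric has rank at most k, and Omega itself provides the k-dimensional discriminative projection used for visualization when k = 2 or 3.

```python
import numpy as np

def gmlvq_distance(x, w, Omega):
    """Discriminative distance d(x, w) = (x - w)^T Omega^T Omega (x - w).

    In limited-rank matrix learning, Omega (k x d) is adapted during
    supervised training together with the prototype vectors w."""
    diff = Omega @ (x - w)
    return diff @ diff
```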
Simplex-stochastic collocation method with improved scalability
NASA Astrophysics Data System (ADS)
Edeling, W. N.; Dwight, R. P.; Cinnella, P.
2016-04-01
The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify the bottlenecks and to improve upon this poor scalability. To do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method into the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.
Boosted Kaluza-Klein magnetic monopole
NASA Astrophysics Data System (ADS)
Hashemi, S. Sedigheh; Riazi, Nematollah
2018-06-01
We consider a Kaluza-Klein vacuum solution which is closely related to the Gross-Perry-Sorkin (GPS) magnetic monopole. The solution can be obtained from the Euclidean Taub-NUT solution with an extra compact fifth spatial dimension within the formalism of Kaluza-Klein reduction. We study its physical properties as appearing in (3 + 1) spacetime dimensions, which turns out to be a static magnetic monopole. We then boost the GPS magnetic monopole along the extra dimension, and perform the Kaluza-Klein reduction. The resulting four-dimensional spacetime is a rotating stationary system, with both electric and magnetic fields. In fact, after the boost the magnetic monopole turns into a string connected to a dyon.
NASA Astrophysics Data System (ADS)
Esmaeilzad, Armin; Khanlari, Karen
2018-07-01
As the number of degrees of freedom (DOFs) in structural dynamic problems becomes larger, the analysis complexity and CPU usage of computers increase drastically. The condensation (or reduction) method is an efficient technique to reduce the size of the full model, or the dimension of the structural matrices, by eliminating unimportant DOFs. After the first presentation of the condensation method by Guyan in 1965 for undamped structures, which ignores the dynamic effect of the mass term, various dynamic condensation methods were presented to overcome this issue. Moreover, researchers have tried to extend the dynamic condensation method to non-classically damped structures, whose dynamic reduction is far more complicated than that of undamped systems. The non-iterative method proposed in this paper, 'Maclaurin Expansion of the frequency response function in the Laplace Domain' (MELD), is applied to the dynamic reduction of non-classically damped structures. The approach is implemented in four numerical examples of 2D bending-shear-axial frames with various numbers of stories and spans, and also a floating raft isolation system. The natural frequencies and dynamic responses of the models are compared before and after the dynamic reduction, and the results are shown to converge with acceptable accuracy in both cases. In addition, the proposed method is shown to be more accurate than several existing condensation methods.
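For context, the classic 1965 Guyan reduction that this line of work extends fits in a few lines of numpy; this is a sketch of static condensation only, which ignores the dynamic mass coupling, not of the MELD method proposed in the paper.

```python
import numpy as np

def guyan_condensation(K, M, master):
    """Condense stiffness K and mass M onto the retained 'master' DOFs.

    Slave DOFs are eliminated through the static constraint
    x_s = -K_ss^{-1} K_sm x_m; the same transformation is applied to M."""
    n = K.shape[0]
    slave = np.setdiff1d(np.arange(n), master)
    T = np.zeros((n, len(master)))
    T[master, np.arange(len(master))] = 1.0
    T[np.ix_(slave, np.arange(len(master)))] = -np.linalg.solve(
        K[np.ix_(slave, slave)], K[np.ix_(slave, master)])
    return T.T @ K @ T, T.T @ M @ T

K = np.array([[2.0, -1, 0], [-1, 2, -1], [0, -1, 1]])
Kr, Mr = guyan_condensation(K, np.eye(3), master=np.array([0]))
```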
Environmental Barriers and Social Participation in Individuals With Spinal Cord Injury
Tsai, I-Hsuan; Graves, Daniel E.; Chan, Wenyaw; Darkoh, Charles; Lee, Meei-Shyuan; Pompeii, Lisa A.
2018-01-01
Objective: The study aimed to examine the relationship between environmental barriers and social participation among individuals with spinal cord injury (SCI). Method: Individuals admitted to regional centers of the Model Spinal Cord Injury System in the United States due to traumatic SCI were interviewed and included in the National Spinal Cord Injury Database. This cross-sectional study applied a secondary analysis with a mixed effect model to the data from 3,162 individuals interviewed from 2000 through 2005. Five dimensions of environmental barriers were estimated using the Craig Hospital Inventory of Environmental Factors—Short Form (CHIEF-SF). Social participation was measured with the Craig Handicap Assessment and Reporting Technique—Short Form (CHART-SF) and employment status. Results: Subscales of environmental barriers were negatively associated with the social participation measures. Each 1-point increase in CHIEF-SF total score (indicating greater environmental barriers) was associated with a 0.82-point reduction in CHART-SF total score (95% CI: −1.07, −0.57) (decreased social participation) and a 4% reduction in the odds of being employed. Among the 5 CHIEF-SF dimensions, assistance barriers exhibited the strongest negative association with the CHART-SF social participation score, while the work/school dimension demonstrated the weakest association. Conclusions: Environmental barriers are negatively associated with social participation in the SCI population. Working toward eliminating environmental barriers, especially assistance/service barriers, may help enhance social participation for people with SCI. PMID:28045281
Gutknecht, Mandy; Danner, Marion; Schaarschmidt, Marthe-Lisa; Gross, Christian; Augustin, Matthias
2018-02-15
To define treatment benefit, the Patient Benefit Index contains a weighting of patient-relevant treatment goals using the Patient Needs Questionnaire, which includes a 5-point Likert scale ranging from 0 ("not important at all") to 4 ("very important"). These treatment goals have been assigned to five health dimensions. The importance of each dimension can be derived by averaging the importance ratings on the Likert scales of associated treatment goals. As the use of a Likert scale does not allow for a relative assessment of importance, the objective of this study was to estimate relative importance weights for health dimensions and associated treatment goals in patients with psoriasis by using the analytic hierarchy process, and to compare these weights with the weights resulting from the Patient Needs Questionnaire. Furthermore, patients' judgments on the difficulty of the methods were investigated. Dimensions of the Patient Benefit Index and their treatment goals were mapped into a hierarchy of criteria and sub-criteria to develop the analytic hierarchy process questionnaire. Adult patients with psoriasis starting a new anti-psoriatic therapy in the outpatient clinic of the Institute for Health Services Research in Dermatology and Nursing at the University Medical Center Hamburg (Germany) were recruited and completed both methods (analytic hierarchy process, Patient Needs Questionnaire). Ratings of treatment goals on the Likert scales (Patient Needs Questionnaire) were summarized within each dimension to assess the importance of the respective health dimension/criterion. Following the analytic hierarchy process approach, consistency in judgments was assessed using a standardized measurement (consistency ratio). At the analytic hierarchy process level of criteria, 78 of 140 patients achieved the accepted consistency. Using the analytic hierarchy process, the dimension "improvement of physical functioning" was most important, followed by "improvement of social functioning". In the Patient Needs Questionnaire results, these dimensions were ranked in second and fifth position, whereas "strengthening of confidence in the therapy and in a possible healing" was ranked most important, which was least important in the analytic hierarchy process ranking. In both methods, "improvement of psychological well-being" and "reduction of impairments due to therapy" were equally ranked in positions three and four. At the level of sub-criteria, in contrast, the analytic hierarchy process and the Patient Needs Questionnaire produced largely similar rankings of treatment goals. From the patients' point of view, the Likert scales (Patient Needs Questionnaire) were easier to complete than the analytic hierarchy process pairwise comparisons. Patients with psoriasis assign different importance to health dimensions and associated treatment goals. In choosing a method to assess the importance of health dimensions and/or treatment goals, it needs to be considered that the resulting importance weights may differ depending on the method used. However, in this study, the observed discrepancies in importance weights of the health dimensions were most likely caused by the different methodological approaches: assessing the importance of health dimensions through treatment goals on the one hand (Patient Needs Questionnaire) or assessing health dimensions directly on the other (analytic hierarchy process).
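The analytic hierarchy process step follows the standard principal-eigenvector recipe, sketched below in numpy; the pairwise judgments are invented for illustration and the random-index values are Saaty's.

```python
import numpy as np

def ahp_weights(P):
    """Priority weights and consistency ratio from pairwise comparisons P.

    P[i, j] is the judged importance of criterion i over j on Saaty's
    1-9 scale, with P[j, i] = 1 / P[i, j]."""
    n = P.shape[0]
    eigvals, eigvecs = np.linalg.eig(P)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                            # principal eigenvector -> weights
    ci = (eigvals[k].real - n) / (n - 1)    # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random index
    return w, ci / ri                       # CR <= 0.1 is commonly accepted

# Five health dimensions compared pairwise (illustrative judgments only).
P = np.array([[1, 3, 5, 3, 7],
              [1/3, 1, 3, 1, 5],
              [1/5, 1/3, 1, 1/3, 3],
              [1/3, 1, 3, 1, 5],
              [1/7, 1/5, 1/3, 1/5, 1]])
weights, cr = ahp_weights(P)
```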
Low-dimensional, morphologically accurate models of subthreshold membrane potential
Kellems, Anthony R.; Roos, Derrick; Xiao, Nan; Cox, Steven J.
2009-01-01
The accurate simulation of a neuron’s ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations. For, in order to understand how a cell distinguishes between input patterns we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 μm. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (ℋ2 approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate and fire model. PMID:19172386
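The time-domain step can be illustrated with the standard square-root balanced truncation algorithm; the sketch below assumes a stable, minimal state-space realization (A, B, C) of the quasi-active model and is generic, not the authors' code.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, k):
    """Square-root balanced truncation of a stable LTI system to order k."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                      # s: Hankel singular values
    S = np.diag(s[:k] ** -0.5)
    Tl = S @ U[:, :k].T @ Lo.T                     # left projection
    Tr = Lc @ Vt[:k].T @ S                         # right projection
    return Tl @ A @ Tr, Tl @ B, C @ Tr, s

A = np.diag([-1.0, -2.0, -3.0, -4.0])
Ar, Br, Cr, hsv = balanced_truncation(A, np.ones((4, 1)), np.ones((1, 4)), k=2)
```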
NASA Astrophysics Data System (ADS)
Sugawara, Sumio; Sasaki, Yoshifumi; Kudo, Subaru
2018-07-01
The frequency-change-type two-axis acceleration sensor uses a cross-type vibrator consisting of four bending vibrators. When coupling vibration exists between these four bending vibrators, the resonance frequency of each vibrator cannot be adjusted independently. In this study, methods of reducing the coupling vibration were investigated by finite-element analysis. A method of adjusting the length of the short arm of each vibrator was proposed for reducing the vibration. When piezoelectric ceramics were bonded to the single-sided surface of the vibrator, the method was not sufficient. Thus, the ceramics with the same dimensions were bonded to double-sided surfaces. As a result, a marked reduction was obtained in this case. Also, the linearity of the sensor characteristics was significantly improved in a small acceleration range. Accordingly, it was clarified that considering the symmetry along the thickness direction of the vibrator is very important.
NASA Astrophysics Data System (ADS)
Grundland, A. M.; Lalague, L.
1996-04-01
This paper presents a new method of constructing certain classes of solutions of a system of partial differential equations (PDEs) describing the non-stationary and isentropic flow of an ideal compressible fluid. A generalization of the symmetry reduction method to the case of partially-invariant solutions (PISs) is formulated. We present a new algorithm for constructing PISs and discuss in detail the necessary conditions for the existence of non-reducible PISs. All these solutions have the same defect structure and are computed from four-dimensional symmetric subalgebras. These theoretical considerations are illustrated by several examples. Finally, some new classes of invariant solutions obtained by the symmetry reduction method are included. These solutions represent central, conical, rational, spherical, cylindrical and non-scattering double waves.
Global quasi-linearization (GQL) versus QSSA for a hydrogen-air auto-ignition problem.
Yu, Chunkan; Bykov, Viatcheslav; Maas, Ulrich
2018-04-25
A recently developed automatic reduction method for systems of chemical kinetics, the so-called Global Quasi-Linearization (GQL) method, has been implemented to study and reduce the dimension of a homogeneous combustion system. The results of applying the GQL and the Quasi-Steady State Assumption (QSSA) are compared. A number of drawbacks of the QSSA are discussed, e.g. the selection criteria for QSS species and their sensitivity to system parameters and initial conditions. To overcome these drawbacks, the GQL approach has been developed as a robust, automatic and scaling-invariant method for a global analysis of the system timescale hierarchy and subsequent model reduction. In this work the auto-ignition problem of the hydrogen-air system is considered over a wide range of system parameters and initial conditions. The potential of the suggested approach to overcome most of the drawbacks of the standard approaches is illustrated.
A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4
NASA Technical Reports Server (NTRS)
Park, Young-Keun; Fahrenthold, Eric P.
2004-01-01
An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.
Surface-structured diffuser by iterative down-size molding with glass sintering technology.
Lee, Xuan-Hao; Tsai, Jung-Lin; Ma, Shih-Hsin; Sun, Ching-Cherng
2012-03-12
In this paper, a down-size sintering scheme for making high-performance diffusers with micro structures to perform beam shaping is presented and demonstrated. Using the down-size sintering method, a surface-structured film is designed and fabricated to verify the feasibility of the sintering technology, achieving a dimension reduction of up to 1/8. In addition, a special impressing technology has been applied to fabricate diffuser films with various materials, with transmission efficiency as high as 85% and above. Introduced into possible lighting applications, the diffusers have shown high performance in glare reduction, beam shaping and energy saving.
NASA Astrophysics Data System (ADS)
Campoamor-Stursberg, R.
2018-03-01
A procedure for the construction of nonlinear realizations of Lie algebras in the context of Vessiot-Guldberg-Lie algebras of first-order systems of ordinary differential equations (ODEs) is proposed. The method is based on the reduction of invariants and projection of lowest-dimensional (irreducible) representations of Lie algebras. Applications to the description of parameterized first-order systems of ODEs related by contraction of Lie algebras are given. In particular, the kinematical Lie algebras in (2 + 1)- and (3 + 1)-dimensions are realized simultaneously as Vessiot-Guldberg-Lie algebras of parameterized nonlinear systems in R3 and R4, respectively.
Damage characterization in dimension limestone cladding using noncollinear ultrasonic wave mixing
NASA Astrophysics Data System (ADS)
McGovern, Megan; Reis, Henrique
2016-01-01
A method capable of characterizing artificial weathering damage in dimension stone cladding using access to one side only is presented. Dolomitic limestone test samples with increasing levels of damage were created artificially by exposing undamaged samples to increasing temperature levels of 100°C, 200°C, 300°C, 400°C, 500°C, 600°C, and 700°C for a 90 min period of time. Using access to one side only, these test samples were nondestructively evaluated using a nonlinear approach based upon noncollinear wave mixing, which involves mixing two critically refracted dilatational ultrasonic waves. Criteria were used to assure that the detected scattered wave originated via wave interaction in the limestone and not from nonlinearities in the testing equipment. Bending tests were used to evaluate the flexure strength of beam samples extracted from the artificially weathered samples. It was observed that the percentage of strength reduction is linearly correlated (R2=0.98) with the temperature to which the specimens were exposed; it was noted that samples exposed to 400°C and 600°C had a strength reduction of 60% and 90%, respectively. It was also observed that results from the noncollinear wave mixing approach correlated well (R2=0.98) with the destructively obtained percentage of strength reduction.
Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics
NASA Astrophysics Data System (ADS)
Wehmeyer, Christoph; Noé, Frank
2018-06-01
Inspired by the success of deep learning techniques in the physical and chemical sciences, we apply a modification of an autoencoder type deep neural network to the task of dimension reduction of molecular dynamics data. We can show that our time-lagged autoencoder reliably finds low-dimensional embeddings for high-dimensional feature spaces which capture the slow dynamics of the underlying stochastic processes—beyond the capabilities of linear dimension reduction techniques.
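A minimal PyTorch sketch of the idea (layer sizes, activation, lag handling and training loop are illustrative choices, not the authors' architecture): the encoder compresses x(t) and the decoder is trained to reconstruct the lagged frame x(t + tau), so the latent space must retain the slow degrees of freedom.

```python
import torch
import torch.nn as nn

class TimeLaggedAE(nn.Module):
    def __init__(self, n_features, n_latent):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ELU(),
                                     nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ELU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, X, tau, epochs=50, lr=1e-3):
    # X: (T, n_features) trajectory; training pairs are (x_t, x_{t+tau}).
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x_now, x_lag = X[:-tau], X[tau:]
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x_now), x_lag)
        loss.backward()
        opt.step()
    return model

model = train(TimeLaggedAE(30, 2), torch.randn(5000, 30), tau=10)
```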
Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap
NASA Astrophysics Data System (ADS)
Spiwok, Vojtěch; Králová, Blanka
2011-12-01
Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and corresponding transition structures inaccessible by an unbiased simulation. This scheme allows one to use essentially any parameter of the system as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general-purpose mapping for dimensionality reduction, beyond the context of molecular modeling.
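The embedding step maps directly onto scikit-learn's Isomap; the sketch below uses random stand-in coordinates and an assumed neighborhood size. sklearn's Isomap also exposes transform() for new samples, the role played by the out-of-sample mapping discussed above.

```python
import numpy as np
from sklearn.manifold import Isomap

# Stand-in for the 72D Cartesian coordinates of the generated conformations.
coords = np.random.rand(1000, 72)

iso = Isomap(n_neighbors=12, n_components=3)
embedding = iso.fit_transform(coords)          # 3D collective variables
new_cv = iso.transform(np.random.rand(5, 72))  # out-of-sample mapping
```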
Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high-performance results include AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC0.632+ = 0.87 with interval [0.817;0.906] for four LSW selected features, compared to a 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space.
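Both nonlinear DR methods are available in scikit-learn, as sketched below with random stand-in features and 2D mappings for brevity (the study also used 3D and 4D mappings); the reduced features would then be passed to a classifier such as LDA.

```python
import numpy as np
from sklearn.manifold import TSNE, SpectralEmbedding

features = np.random.rand(1126, 81)  # stand-in for the 81D U.S. feature space

# t-SNE mapping.
tsne_map = TSNE(n_components=2, perplexity=30).fit_transform(features)

# Laplacian eigenmaps, implemented in scikit-learn as SpectralEmbedding.
le_map = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(features)
```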
Single-shot speckle reduction in numerical reconstruction of digitally recorded holograms.
Hincapie, Diego; Herrera-Ramírez, Jorge; Garcia-Sucerquia, Jorge
2015-04-15
A single-shot method to reduce the speckle noise in the numerical reconstructions of electronically recorded holograms is presented. A recorded hologram with the dimensions N×M is split into S=T×T sub-holograms. The uncorrelated superposition of the individually reconstructed sub-holograms leads to an image with the speckle noise reduced proportionally to the 1/S law. The experimental results are presented to support the proposed methodology.
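A numpy sketch of the split-and-average scheme; the reconstruction kernel is a placeholder (a real pipeline would use, e.g., a Fresnel or angular-spectrum propagator), and the uncorrelated intensity superposition reduces speckle following the 1/S law stated above.

```python
import numpy as np

def split_and_average(hologram, T, reconstruct):
    """Split an N x M hologram into S = T x T sub-holograms, reconstruct
    each one individually, and superpose the intensities incoherently."""
    N, M = hologram.shape
    subs = [hologram[i * N // T:(i + 1) * N // T, j * M // T:(j + 1) * M // T]
            for i in range(T) for j in range(T)]
    intensities = [np.abs(reconstruct(s)) ** 2 for s in subs]
    return np.mean(intensities, axis=0)

# Bare FFT used only as a stand-in reconstruction to keep the sketch runnable.
result = split_and_average(np.random.rand(1024, 1024), T=4, reconstruct=np.fft.fft2)
```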
Dimension Reduction With Extreme Learning Machine.
Kasun, Liyanaarachchi Lekamalage Chamara; Yang, Yan; Huang, Guang-Bin; Zhang, Zhengyou
2016-08-01
Data may often contain noise or irrelevant information, which negatively affects the generalization capability of machine learning algorithms. The objective of dimension reduction algorithms, such as principal component analysis (PCA), non-negative matrix factorization (NMF), random projection (RP), and auto-encoder (AE), is to reduce the noise or irrelevant information in the data. The features of PCA (eigenvectors) and linear AE are not able to represent data as parts (e.g. the nose in a face image). On the other hand, NMF and non-linear AE suffer from slow learning speed, and RP only represents a subspace of the original data. This paper introduces a dimension reduction framework which to some extent represents data as parts, has fast learning speed, and learns the between-class scatter subspace. To this end, this paper investigates a linear and non-linear dimension reduction framework referred to as extreme learning machine AE (ELM-AE) and sparse ELM-AE (SELM-AE). In contrast to tied-weight AE, the hidden neurons in ELM-AE and SELM-AE need not be tuned, and their parameters (e.g., input weights in additive neurons) are initialized using orthogonal and sparse random weights, respectively. Experimental results on the USPS handwritten digit recognition data set, CIFAR-10 object recognition, and the NORB object recognition data set show the efficacy of linear and non-linear ELM-AE and SELM-AE in terms of discriminative capability, sparsity, training time, and normalized mean square error.
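A compact numpy sketch of the linear ELM-AE variant (sizes and the regularization constant are illustrative): the orthogonal random hidden weights are never tuned, only the output weights beta are solved for by regularized least squares, and beta supplies the low-dimensional embedding.

```python
import numpy as np

def elm_ae(X, n_hidden, C=1e3, seed=0):
    """ELM autoencoder sketch: returns an (n_samples, n_hidden) embedding."""
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((X.shape[1], n_hidden)))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                      # fixed random hidden layer
    # beta reconstructs X from H: min ||H beta - X||^2 + ||beta||^2 / C
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
    return X @ beta.T                           # embedding via learned beta

Z = elm_ae(np.random.rand(200, 100), n_hidden=10)   # 100D -> 10D
```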
In-silico experiments of zebrafish behaviour: modeling swimming in three dimensions
NASA Astrophysics Data System (ADS)
Mwaffo, Violet; Butail, Sachit; Porfiri, Maurizio
2017-01-01
Zebrafish is fast becoming a species of choice in biomedical research for the investigation of functional and dysfunctional processes coupled with their genetic and pharmacological modulation. As with mammals, experimentation with zebrafish constitutes a complicated ethical issue that calls for the exploration of alternative testing methods to reduce the number of subjects, refine experimental designs, and replace live animals. Inspired by the demonstrated advantages of computational studies in other life science domains, we establish an authentic data-driven modelling framework to simulate zebrafish swimming in three dimensions. The model encapsulates burst-and-coast swimming style, speed modulation, and wall interaction, laying the foundations for in-silico experiments of zebrafish behaviour. Through computational studies, we demonstrate the ability of the model to replicate common ethological observables such as speed and spatial preference, and anticipate experimental observations on the correlation between tank dimensions on zebrafish behaviour. Reaching to other experimental paradigms, our framework is expected to contribute to a reduction in animal use and suffering.
Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.
2017-09-17
In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
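A numpy sketch of method M1's core computation (matrix sizes are illustrative); method M2 would simply append the time-derivative snapshots as extra columns before the SVD.

```python
import numpy as np

def pod_basis(snapshots, k):
    """POD basis from a snapshot matrix whose columns are solution states."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :k], s      # basis Phi and singular values (s[k] is the
                            # first neglected one appearing in the bounds)

Phi, s = pod_basis(np.random.rand(500, 40), k=10)
x_reduced = Phi.T @ np.random.rand(500)  # coordinates of a state in the basis
```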
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J; Followill, D; Howell, R
2015-06-15
Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and three metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.
NASA Astrophysics Data System (ADS)
Lyashenko, Ya. A.; Popov, V. L.
2018-01-01
A dynamic model of the nanostructuring burnishing of the surface of metallic details, taking plastic deformations into consideration, is suggested. To describe the plasticity, the framework of the method of dimension reduction, supplemented with a plasticity criterion, is used. The model considers the action of the normal burnishing force and the tangential friction force. The effects of the coefficient of friction and the periodic oscillation of the burnishing force on the burnishing kinetics are investigated.
2012-05-22
The reduced space is constructed using the Rate-Controlled Constrained-Equilibrium (RCCE) method, and tabulation of the reduced space is performed using the In Situ Adaptive Tabulation (ISAT) algorithm. In addition, we use x2f mpi, a Fortran library for parallel vector-valued function evaluation (used with ISAT in this context), to efficiently redistribute the chemistry workload among the processors.
Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie
2018-05-01
The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, to further test for association with one or more covariates of interest. This paper revisits one such approach, previously known as principal component of heritability and renamed here as principal component of explained variance (PCEV). As its name suggests, the PCEV seeks a linear combination of outcomes in an optimal manner, by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method: it is conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of the PCEV using an extensive set of simulations. Furthermore, the use of the PCEV approach is illustrated using three examples taken from the fields of epigenetics and brain imaging.
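At its core, PCEV solves a generalized eigenproblem between the model and residual covariances of the outcomes; below is a minimal numpy/scipy sketch (without the exact or asymptotic tests, and assuming more samples than outcomes so the residual covariance is invertible).

```python
import numpy as np
from scipy.linalg import eigh, lstsq

def pcev(Y, X):
    """Linear combination w of outcomes Y maximizing the proportion of
    variance explained by covariates X: solve V_model w = lambda V_res w."""
    Y = Y - Y.mean(axis=0)
    X1 = np.column_stack([np.ones(len(X)), X])
    B, *_ = lstsq(X1, Y)
    resid = Y - X1 @ B
    Vr = resid.T @ resid            # residual (co)variance
    Vm = Y.T @ Y - Vr               # variance explained by the model
    vals, vecs = eigh(Vm, Vr)       # generalized symmetric eigenproblem
    w = vecs[:, -1]                 # leading component
    return Y @ w, w

scores, w = pcev(np.random.rand(200, 10), np.random.rand(200, 2))
```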
Model and controller reduction of large-scale structures based on projection methods
NASA Astrophysics Data System (ADS)
Gildin, Eduardo
The design of low-order controllers for high-order plants is a challenging problem, theoretically as well as from a computational point of view. Frequently, robust controller design techniques result in high-order controllers. It is then interesting to achieve reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures, based on models obtained by finite element techniques, have large state-space dimensions. In this case, problems related to storage, accuracy and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance for the practical applicability of advanced controller design methods to high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent on applying control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues remain open problems for real-time implementation. This dissertation addresses the development of methodologies for controller reduction, using projection methods, for real-time implementation in the seismic protection of civil structures. Three classes of schemes are analyzed for model and controller reduction: modal truncation, singular value decomposition methods and Krylov-based methods. A family of benchmark problems for structural control is used as a framework for a comparative study of model and controller reduction techniques. It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that is, of the reduced-order controller implemented with the full-order plant. A controller reduction approach is proposed that guarantees closed-loop stability. It is based on the concept of dissipativity (or positivity) of linear dynamical systems. Utilizing passivity-preserving model reduction together with dissipative-LQG controllers, effective low-order optimal controllers are obtained.
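The central pitfall described here, that a controller reduced in open loop need not stabilize the full-order plant, is cheap to check numerically; the sketch below forms the negative-feedback closed-loop matrix for a plant (A, B, C) and a possibly reduced controller (Ak, Bk, Ck) and tests its eigenvalues.

```python
import numpy as np

def closed_loop_stable(A, B, C, Ak, Bk, Ck):
    """True if the plant in negative feedback with the controller is stable.

    Plant: x' = A x + B u, y = C x; controller: xk' = Ak xk + Bk y,
    u = -Ck xk.  Stability requires all closed-loop eigenvalues in the
    open left half-plane."""
    Acl = np.block([[A, -B @ Ck],
                    [Bk @ C, Ak]])
    return np.max(np.linalg.eigvals(Acl).real) < 0
```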
Efficient Implementations of the Quadrature-Free Discontinuous Galerkin Method
NASA Technical Reports Server (NTRS)
Lockard, David P.; Atkins, Harold L.
1999-01-01
The efficiency of the quadrature-free form of the discontinuous Galerkin method in two dimensions, and briefly in three dimensions, is examined. Most of the work for constant-coefficient, linear problems involves the volume and edge integrations, and the transformation of information from the volume to the edges. These operations can be viewed as matrix-vector multiplications. Many of the matrices are sparse as a result of symmetry, and blocking and specialized multiplication routines are used to account for the sparsity. By optimizing these operations, a 35% reduction in total CPU time is achieved. For nonlinear problems, the calculation of the flux becomes dominant because of the cost associated with polynomial products and inversion. This component of the work can be reduced by up to 75% when the products are approximated by truncating terms. Because the cost is high for nonlinear problems on general elements, it is suggested that simplified physics and the most efficient element types be used over most of the domain.
Application of empirical and dynamical closure methods to simple climate models
NASA Astrophysics Data System (ADS)
Padilla, Lauren Elizabeth
This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide-ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature makes TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in closure models were the dominant sources of error. Using a reduced-order modeling approach to maximize energy content allowed reduction of the closure model dimension by up to 75% without loss of accuracy. The MMF and a comparable alternative model performed equally well compared to direct numerical simulation.
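The UKF machinery behind the TCS estimates rests on the unscented transform; here is a numpy sketch of Merwe-style sigma-point generation (the scaling parameters are common defaults, not necessarily those of the dissertation). The 2n + 1 points are propagated through the nonlinear model instead of linearizing it.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Sigma points and weights capturing the mean and covariance of the
    augmented state/parameter density."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)
    pts = np.vstack([mean, mean + L.T, mean - L.T])  # rows: 2n + 1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))   # mean weights
    wc = wm.copy()                                   # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
    return pts, wm, wc

pts, wm, wc = sigma_points(np.zeros(3), np.eye(3))
```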
Sensitivity Analysis for Probabilistic Neural Network Structure Reduction.
Kowalski, Piotr A; Kusy, Maciej
2018-05-01
In this paper, we propose the use of local sensitivity analysis (LSA) for the structure simplification of the probabilistic neural network (PNN). Three algorithms are introduced. The first algorithm applies LSA to the PNN input layer reduction by selecting significant features of input patterns. The second algorithm utilizes LSA to remove redundant pattern neurons of the network. The third algorithm combines the two and shows how they can work together. A PNN with a product kernel estimator is used, where each multiplicand computes a one-dimensional Cauchy function. Therefore, the smoothing parameter is calculated separately for each dimension by means of the plug-in method. The classification qualities of the reduced and full structure PNN are compared. Furthermore, we evaluate the performance of PNN, for which global sensitivity analysis (GSA) and the common reduction methods are applied, both in the input layer and the pattern layer. The models are tested on the classification problems of eight repository data sets. A 10-fold cross validation procedure is used to determine the prediction ability of the networks. Based on the obtained results, it is shown that the LSA can be used as an alternative PNN reduction approach.
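A sketch of the PNN described here, with the product Cauchy kernel and per-dimension smoothing parameters (passed in directly rather than plug-in estimated as in the paper):

```python
import numpy as np

def pnn_classify(x, X_train, y_train, h):
    """Classify x with a PNN whose pattern neurons use a product of
    one-dimensional Cauchy densities with per-dimension smoothing h[d]."""
    k = np.prod(1.0 / (np.pi * h * (1.0 + ((x - X_train) / h) ** 2)), axis=1)
    classes = np.unique(y_train)
    scores = [k[y_train == c].mean() for c in classes]  # summation layer
    return classes[np.argmax(scores)]

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 3))
y = (X[:, 0] > 0).astype(int)
print(pnn_classify(np.array([0.5, 0.0, 0.0]), X, y, h=np.full(3, 0.5)))
```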
Moschonas, Galatios; Geornaras, Ifigenia; Stopforth, Jarret D; Woerner, Dale R; Belk, Keith E; Smith, Gary C; Sofos, John N
2015-12-01
Not-ready-to-eat breaded chicken products formulated with antimicrobial ingredients were tested for the effect of sample dimensions, surface browning method and final internal sample temperature on inoculated Salmonella populations. Fresh chicken breast meat portions (5 × 5 × 5 cm), inoculated with Salmonella (7-strain mixture; 5 log CFU/g), were mixed with (5% v/w total moisture enhancement) (i) distilled water (control), (ii) caprylic acid (CAA; 0.0625%) and carvacrol (CAR; 0.075%), (iii) CAA (0.25%) and ε-polylysine (POL; 0.5%), (iv) CAR (0.15%) and POL (0.5%), or (v) CAA (0.0625%), CAR (0.075%) and POL (0.5%). Sodium chloride (1.2%) and sodium tripolyphosphate (0.3%) were added to all treatments. The mixtures were then ground and formed into 9 × 5 × 3 cm (150 g) or 9 × 2.5 × 2 cm (50 g) portions. The products were breaded, browned in (i) an oven (208 °C, 15 min) or (ii) deep fryer (190 °C, 15 s), packaged, and stored at -20 °C (8 d). Overall, maximum internal temperatures of 62.4 ± 4.0 °C (9 × 2.5 × 2 cm) and 46.0 ± 3.0 °C (9 × 5 × 3 cm) were reached in oven-browned samples, and 35.0 ± 1.1 °C (9 × 2.5 × 2 cm) and 31.7 ± 2.6 °C (9 × 5 × 3 cm) in fryer-browned samples. Irrespective of formulation treatment, total (after frozen storage) reductions of Salmonella were greater (P < 0.05) for 9 × 2.5 × 2 cm oven-browned samples (3.8 to at least 4.6 log CFU/g) than for 9 × 5 × 3 cm oven-browned samples (0.7 to 2.5 log CFU/g). Product dimensions did not (P ≥ 0.05) affect Salmonella reductions (0.6 to 2.8 log CFU/g) in fryer-browned samples. All antimicrobial treatments reduced Salmonella to undetectable levels (<0.3 log CFU/g) in oven-browned 9 × 2.5 × 2 cm samples. Overall, the data may be useful for the selection of antimicrobials, product dimensions, and surface browning methods for reducing Salmonella contamination. © 2015 Institute of Food Technologists®
A BiCGStab2 variant of the IDR(s) method for solving linear equations
NASA Astrophysics Data System (ADS)
Abe, Kuniyoshi; Sleijpen, Gerard L. G.
2012-09-01
The hybrid Bi-Conjugate Gradient (Bi-CG) methods, such as the BiCG STABilized (BiCGSTAB), BiCGstab(l), BiCGStab2 and BiCG×MR2 methods, are well-known solvers for linear systems with a nonsymmetric matrix. The Induced Dimension Reduction (IDR)(s) method has recently been proposed, and it has been reported that IDR(s) is often more effective than the hybrid BiCG methods. A variant of IDR(s) combining the stabilization polynomial of BiCGstab(l) has been designed to improve the convergence of the original IDR(s) method. We therefore propose IDR(s) combined with the stabilization polynomial of BiCGStab2. Numerical experiments show that our proposed variant of IDR(s) is more effective than the original IDR(s) and BiCGStab2 methods.
DiBona, G F; Jones, S Y; Sawin, L L
2000-09-01
Nonlinear dynamic analysis was used to examine the chaotic behavior of renal sympathetic nerve activity in conscious rats subjected to either complete baroreceptor denervation (sinoaortic and cardiac baroreceptor denervation) or induction of congestive heart failure (CHF). The peak interval sequence of synchronized renal sympathetic nerve discharge was extracted and used for analysis. In control rats, this yielded a system whose correlation dimension converged to a low value over the embedding dimension range of 10-15 and whose greatest Lyapunov exponent was positive. Complete baroreceptor denervation was associated with a decrease in the correlation dimension of the system (before 2.65 +/- 0.27, after 1.64 +/- 0.17; P < 0.01) and a reduction in chaotic behavior (greatest Lyapunov exponent: 0.201 +/- 0.008 bits/data point before, 0.177 +/- 0.004 bits/data point after, P < 0.02). CHF, a state characterized by impaired sinoaortic and cardiac baroreceptor regulation of renal sympathetic nerve activity, was associated with a similar decrease in the correlation dimension (control 3.41 +/- 0.23, CHF 2.62 +/- 0.26; P < 0.01) and a reduction in chaotic behavior (greatest Lyapunov exponent: 0.205 +/- 0.048 bits/data point control, 0.136 +/- 0.033 bits/data point CHF, P < 0.02). These results indicate that removal of sinoaortic and cardiac baroreceptor regulation of renal sympathetic nerve activity, occurring either physiologically or pathophysiologically, is associated with a decrease in the correlation dimensions of the system and a reduction in chaotic behavior.
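The reported correlation dimensions are Grassberger-Procaccia-type estimates; a numpy sketch follows (embedding parameters and radii are illustrative, and the O(n^2) pairwise distances limit this to short series such as the peak-interval sequences used here).

```python
import numpy as np

def correlation_dimension(x, m, lag, radii):
    """Delay-embed x in dimension m, compute the correlation sum C(r), and
    estimate the dimension as the slope of log C(r) versus log r."""
    n = len(x) - (m - 1) * lag
    emb = np.column_stack([x[i * lag:i * lag + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    dists = d[np.triu_indices(n, k=1)]
    C = np.array([(dists < r).mean() for r in radii])
    return np.polyfit(np.log(radii), np.log(C), 1)[0]

x = np.sin(np.linspace(0, 100, 1000)) + 0.05 * np.random.rand(1000)
print(correlation_dimension(x, m=5, lag=3, radii=np.logspace(-1, 0, 8)))
```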
Williams, Monnica T; Farris, Samantha G; Turkheimer, Eric N; Franklin, Martin E; Simpson, H Blair; Liebowitz, Michael; Foa, Edna B
2014-08-01
Obsessive-compulsive disorder (OCD) is a severe condition with varied symptom presentations. The behavioral treatment with the most empirical support is exposure and ritual prevention (EX/RP). This study examined the impact of symptom dimensions on EX/RP outcomes in OCD patients. The Yale-Brown Obsessive-Compulsive Scale (Y-BOCS) was used to determine primary symptoms for each participant. An exploratory factor analysis (EFA) of 238 patients identified five dimensions: contamination/cleaning, doubts about harm/checking, hoarding, symmetry/ordering, and unacceptable/taboo thoughts (including religious/moral and somatic obsessions among others). A linear regression was conducted on those who had received EX/RP (n=87) to examine whether scores on the five symptom dimensions predicted post-treatment Y-BOCS scores, accounting for pre-treatment Y-BOCS scores. The average reduction in Y-BOCS score was 43.0%; however, the regression indicated that the unacceptable/taboo thoughts (β=.27, p=.02) and hoarding dimensions (β=.23, p=.04) were associated with significantly poorer EX/RP treatment outcomes. Specifically, patients endorsing religious/moral obsessions, somatic concerns, and hoarding obsessions showed significantly smaller reductions in Y-BOCS severity scores. EX/RP was effective for all symptom dimensions; however, it was less effective for unacceptable/taboo thoughts and hoarding than for other dimensions. Clinical implications and directions for research are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
Optimization of Selected Remote Sensing Algorithms for Embedded NVIDIA Kepler GPU Architecture
NASA Technical Reports Server (NTRS)
Riha, Lubomir; Le Moigne, Jacqueline; El-Ghazawi, Tarek
2015-01-01
This paper evaluates the potential of the embedded Graphics Processing Unit in NVIDIA's Tegra K1 for onboard processing. The performance is compared to a general-purpose multi-core CPU and a full-fledged GPU accelerator. This study uses two algorithms: Wavelet Spectral Dimension Reduction of Hyperspectral Imagery and the Automated Cloud-Cover Assessment (ACCA) Algorithm. The Tegra K1 achieved 51% for the ACCA algorithm and 20% for the dimension reduction algorithm, as compared to the performance of the high-end 8-core server Intel Xeon CPU, which has 13.5 times higher power consumption.
Wu, Dingming; Wang, Dongfang; Zhang, Michael Q; Gu, Jin
2015-12-01
One major goal of large-scale cancer omics studies is to identify molecular subtypes for more accurate cancer diagnoses and treatments. To deal with high-dimensional cancer multi-omics data, a promising strategy is to find an effective low-dimensional subspace of the original data and then cluster cancer samples in the reduced subspace. However, due to data-type diversity and big data volume, few methods can integratively and efficiently find the principal low-dimensional manifold of high-dimensional cancer multi-omics data. In this study, we proposed a novel low-rank-approximation-based integrative probabilistic model to quickly find the shared principal subspace across multiple data types: the convexity of the low-rank regularized likelihood function of the probabilistic model ensures efficient and stable model fitting. Candidate molecular subtypes can be identified by unsupervised clustering of hundreds of cancer samples in the reduced low-dimensional subspace. On testing datasets, our method LRAcluster (low-rank approximation based multi-omics data clustering) runs much faster with better clustering performance than the existing method. We then applied LRAcluster to large-scale cancer multi-omics data from TCGA. The pan-cancer analysis results show that cancers of different tissue origins are generally grouped as independent clusters, except for squamous-like carcinomas, while the single-cancer-type analyses suggest that the omics data have different subtyping abilities for different cancer types. LRAcluster is a very useful method for fast dimension reduction and unsupervised clustering of large-scale multi-omics data. LRAcluster is implemented in R and freely available via http://bioinfo.au.tsinghua.edu.cn/software/lracluster/ .
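The general strategy (find a shared low-dimensional subspace, then cluster in it) can be sketched as follows. This is a naive stand-in using truncated SVD on concatenated data types, not LRAcluster's low-rank probabilistic model; all matrices, dimensions, and cluster counts below are hypothetical.

import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X1 = rng.normal(size=(200, 5000))   # e.g. expression (samples x genes), placeholder
X2 = rng.normal(size=(200, 300))    # e.g. methylation features, placeholder

X = np.hstack([X1, X2])                              # naive integration by concatenation
Z = TruncatedSVD(n_components=10).fit_transform(X)   # shared low-dimensional subspace
labels = KMeans(n_clusters=4, n_init=10).fit_predict(Z)  # candidate subtypes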
Shark-skin surfaces for fluid-drag reduction in turbulent flow: a review.
Dean, Brian; Bhushan, Bharat
2010-10-28
The skin of fast-swimming sharks exhibits riblet structures aligned in the direction of flow that are known to reduce skin friction drag in the turbulent-flow regime. Structures have been fabricated for study and application that replicate and improve upon the natural shape of the shark-skin riblets, providing a maximum drag reduction of nearly 10 per cent. Mechanisms of fluid drag in turbulent flow and riblet-drag reduction theories from experiment and simulation are discussed. A review of riblet-performance studies is given, and optimal riblet geometries are defined. A survey of studies experimenting with riblet-topped shark-scale replicas is also given. A method for selecting optimal riblet dimensions based on fluid-flow characteristics is detailed, and current manufacturing techniques are outlined. Due to the presence of small amounts of mucus on the skin of a shark, it is expected that the localized application of hydrophobic materials will alter the flow field around the riblets in some way beneficial to the goals of increased drag reduction.
SparseCT: interrupted-beam acquisition and sparse reconstruction for radiation dose reduction
NASA Astrophysics Data System (ADS)
Koesters, Thomas; Knoll, Florian; Sodickson, Aaron; Sodickson, Daniel K.; Otazo, Ricardo
2017-03-01
State-of-the-art low-dose CT methods reduce the x-ray tube current and use iterative reconstruction methods to denoise the resulting images. However, due to compromises between denoising and image quality, only moderate dose reductions of up to 30-40% are accepted in clinical practice. An alternative approach is to reduce the number of x-ray projections and use compressed sensing to reconstruct the full-tube-current undersampled data. This idea was recognized in the early days of compressed sensing, and proposals for CT dose reduction appeared soon afterwards. However, no practical means of undersampling has yet been demonstrated in the challenging environment of a rapidly rotating CT gantry. In this work, we propose a moving multislit collimator as a practical incoherent undersampling scheme for compressed sensing CT and evaluate its application for radiation dose reduction. The proposed collimator is composed of narrow slits and moves linearly along the slice dimension (z) to interrupt the incident beam in different slices for each x-ray tube angle (θ). The reduced projection dataset is then reconstructed using a sparse approach, where 3D image gradients are employed to enforce sparsity. The effects of the collimator slits on the beam profile were measured and represented as a continuous slice profile. SparseCT was tested using retrospective undersampling and compared against commercial current-reduction techniques on phantoms and in vivo studies. Initial results suggest that SparseCT may enable higher performance than current-reduction techniques, particularly for high dose reduction factors.
Integrative sparse principal component analysis of gene expression data.
Liu, Mengque; Fan, Xinyan; Fang, Kuangnan; Zhang, Qingzhao; Ma, Shuangge
2017-12-01
In the analysis of gene expression data, dimension reduction techniques have been extensively adopted. The most popular one is perhaps PCA (principal component analysis). To generate more reliable and more interpretable results, the SPCA (sparse PCA) technique has been developed. With the "small sample size, high dimensionality" characteristic of gene expression data, the analysis results generated from a single dataset are often unsatisfactory. Under contexts other than dimension reduction, integrative analysis techniques, which jointly analyze the raw data of multiple independent datasets, have been developed and shown to outperform "classic" meta-analysis, other multidataset techniques, and single-dataset analysis. In this study, we conduct integrative analysis by developing the iSPCA (integrative SPCA) method. iSPCA achieves the selection and estimation of sparse loadings using a group penalty. To take advantage of the similarity across datasets and generate more accurate results, we further impose contrasted penalties. Different penalties are proposed to accommodate different data conditions. Extensive simulations show that iSPCA outperforms the alternatives under a wide spectrum of settings. The analysis of breast cancer and pancreatic cancer data further shows iSPCA's satisfactory performance. © 2017 WILEY PERIODICALS, INC.
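A minimal single-dataset illustration of the sparse-loading idea is sketched below with scikit-learn's SparsePCA. iSPCA's group and contrasted penalties for integrating multiple datasets have no off-the-shelf counterpart here; the data matrix and penalty weight are placeholders.

import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 1000))   # "small n, large p" expression matrix, placeholder

spca = SparsePCA(n_components=5, alpha=2.0, random_state=0)
scores = spca.fit_transform(X)                     # sample scores on sparse components
n_nonzero = np.count_nonzero(spca.components_)     # most loadings are driven to zero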
Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua
2014-01-01
Dimensionality reduction is an important step in ultrasound-image-based computer-aided diagnosis (CAD) for breast cancer. A recently proposed l2,1 regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance on noise-corrupted data. Therefore, it has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances. Semi-supervised learning is therefore well suited to clinical CAD. The iterated Laplacian regularization (Iter-LR) is a new regularization method, which has been proved to outperform the traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to improve the classification accuracy of texture-feature-based breast ultrasound CAD, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm, and then apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all the other algorithms.
Mesial temporal lobe epilepsy lateralization using SPHARM-based features of hippocampus and SVM
NASA Astrophysics Data System (ADS)
Esmaeilzadeh, Mohammad; Soltanian-Zadeh, Hamid; Jafari-Khouzani, Kourosh
2012-02-01
This paper improves the lateralization (identification of the epileptogenic hippocampus) accuracy in Mesial Temporal Lobe Epilepsy (mTLE). In patients with this kind of epilepsy, usually one of the brain's hippocampi is the focus of the epileptic seizures, and resection of the seizure focus is the ultimate treatment to control or reduce the seizures. Moreover, the epileptogenic hippocampus is prone to shrinkage and deformation; therefore, shape analysis of the hippocampus is advantageous in the preoperative assessment for lateralization. The method utilized for shape analysis is Spherical Harmonics (SPHARM). In this method, the shape of interest is decomposed using a set of basis functions, and the obtained coefficients of expansion are the features describing the shape. To perform shape comparison and analysis, some pre- and post-processing steps such as alignment of different subjects' hippocampi and reduction of the feature-space dimension are required. To this end, a first-order ellipsoid is used for alignment. For dimension reduction, we propose to keep only the SPHARM coefficients with maximum conformity to the hippocampus shape. Then, using these coefficients of normal and epileptic subjects along with 3D invariants, specific lateralization indices are proposed. Consequently, the 1536 SPHARM coefficients of each subject are summarized into 3 indices, where for each index a negative (positive) value shows that the left (right) hippocampus is deformed (diseased). Employing these indices, the best achieved lateralization accuracies for clustering and classification algorithms are 85% and 92%, respectively. This is a significant improvement compared to the conventional volumetric method.
A new method for mapping multidimensional data to lower dimensions
NASA Technical Reports Server (NTRS)
Gowda, K. C.
1983-01-01
A multispectral mapping method is proposed which is based on the new concept of BEND (Bidimensional Effective Normalised Difference). The method, which involves taking one sample point at a time and finding the interrelationships between its features, is found to be very economical in terms of storage and processing time. It has good dimensionality reduction and clustering properties, and is highly suitable for computer analysis of large amounts of data. The transformed values obtained by this procedure are suitable either for a planar 2-space mapping of geological sample points or for making grayscale and color images of geo-terrains. A few examples are given to demonstrate the efficacy of the proposed procedure.
Joint Research on Scatterometry and AFM Wafer Metrology
NASA Astrophysics Data System (ADS)
Bodermann, Bernd; Buhr, Egbert; Danzebrink, Hans-Ulrich; Bär, Markus; Scholze, Frank; Krumrey, Michael; Wurm, Matthias; Klapetek, Petr; Hansen, Poul-Erik; Korpelainen, Virpi; van Veghel, Marijn; Yacoot, Andrew; Siitonen, Samuli; El Gawhary, Omar; Burger, Sven; Saastamoinen, Toni
2011-11-01
Supported by the European Commission and EURAMET, a consortium of 10 participants from national metrology institutes, universities and companies has started a joint research project with the aim of overcoming current challenges in optical scatterometry for traceable linewidth metrology. Both experimental and modelling methods will be enhanced, and different methods will be compared with each other and with specially adapted atomic force microscopy (AFM) and scanning electron microscopy (SEM) measurement systems in measurement comparisons. Additionally, novel methods for sophisticated data analysis will be developed and investigated to reach significant reductions of the measurement uncertainties in critical dimension (CD) metrology. One final goal will be the realisation of a wafer-based reference standard material for the calibration of scatterometers.
The Lyapunov dimension and its estimation via the Leonov method
NASA Astrophysics Data System (ADS)
Kuznetsov, N. V.
2016-06-01
Along with the widely used numerical methods for estimating and computing the Lyapunov dimension, there is an effective analytical approach proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. The advantage of the method is that it allows one to estimate the Lyapunov dimension of invariant sets without localization of the set in the phase space and, in many cases, to effectively obtain an exact Lyapunov dimension formula. In this work the invariance of the Lyapunov dimension with respect to diffeomorphisms and its connection with the Leonov method are discussed. For discrete-time dynamical systems an analog of the Leonov method is suggested. In a simple but rigorous way, the connection is presented between the Leonov method and the key related works: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds of the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds of the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.
ODF Maxima Extraction in Spherical Harmonic Representation via Analytical Search Space Reduction
Aganj, Iman; Lenglet, Christophe; Sapiro, Guillermo
2015-01-01
By revealing complex fiber structure through the orientation distribution function (ODF), q-ball imaging has recently become a popular reconstruction technique in diffusion-weighted MRI. In this paper, we propose an analytical dimension reduction approach to ODF maxima extraction. We show that by expressing the ODF, or any antipodally symmetric spherical function, in the common fourth order real and symmetric spherical harmonic basis, the maxima of the two-dimensional ODF lie on an analytically derived one-dimensional space, from which we can detect the ODF maxima. This method reduces the computational complexity of the maxima detection, without compromising the accuracy. We demonstrate the performance of our technique on both artificial and human brain data. PMID:20879302
A trace ratio maximization approach to multiple kernel-based dimensionality reduction.
Jiang, Wenhao; Chung, Fu-lai
2014-01-01
Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions, and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
Network embedding-based representation learning for single cell RNA-seq data.
Li, Xiangyu; Chen, Weizheng; Chen, Yang; Zhang, Xuegong; Gu, Jin; Zhang, Michael Q
2017-11-02
Single cell RNA-seq (scRNA-seq) techniques can reveal valuable insights into cell-to-cell heterogeneity. Projection of high-dimensional data into a low-dimensional subspace is a powerful general strategy for mining such big data. However, scRNA-seq suffers from higher noise and lower coverage than traditional bulk RNA-seq, bringing new computational difficulties. One major challenge is how to deal with the frequent drop-out events. These events, usually caused by the stochastic burst effect in gene transcription and the technical failure of RNA transcript capture, often render traditional dimension reduction methods ineffective. To overcome this problem, we have developed a novel Single Cell Representation Learning (SCRL) method based on network embedding. This method can efficiently implement data-driven non-linear projection and incorporate prior biological knowledge (such as pathway information) to learn more meaningful low-dimensional representations for both cells and genes. Benchmark results show that SCRL outperforms other dimension reduction methods on several recent scRNA-seq datasets. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Genetic algorithm for the optimization of features and neural networks in ECG signals classification
NASA Astrophysics Data System (ADS)
Li, Hongqiang; Yuan, Danyang; Ma, Xiangdong; Cui, Dianyin; Cao, Lu
2017-01-01
Feature extraction and classification of electrocardiogram (ECG) signals are necessary for the automatic diagnosis of cardiac diseases. In this study, a novel method based on genetic algorithm-back propagation neural network (GA-BPNN) for classifying ECG signals with feature extraction using wavelet packet decomposition (WPD) is proposed. WPD combined with the statistical method is utilized to extract the effective features of ECG signals. The statistical features of the wavelet packet coefficients are calculated as the feature sets. GA is employed to decrease the dimensions of the feature sets and to optimize the weights and biases of the back propagation neural network (BPNN). Thereafter, the optimized BPNN classifier is applied to classify six types of ECG signals. In addition, an experimental platform is constructed for ECG signal acquisition to supply the ECG data for verifying the effectiveness of the proposed method. The GA-BPNN method with the MIT-BIH arrhythmia database achieved a dimension reduction of nearly 50% and produced good classification results with an accuracy of 97.78%. The experimental results based on the established acquisition platform indicated that the GA-BPNN method achieved a high classification accuracy of 99.33% and could be efficiently applied in the automatic identification of cardiac arrhythmias.
New Dimensions for the Multicultural Education Course
ERIC Educational Resources Information Center
Gay, Richard
2011-01-01
For the past sixteen years, the Five Dimensions of Multicultural Education, as proposed by James A. Banks (1995), have been accepted in many circles as the primary conceptual framework used in teaching multicultural education courses: content integration, the knowledge construction process, prejudice reduction, an equity pedagogy and an empowering…
[Montessori method applied to dementia - literature review].
Brandão, Daniela Filipa Soares; Martín, José Ignacio
2012-06-01
The Montessori method was initially applied to children, but it has now also been applied to people with dementia. The purpose of this study is to systematically review the research on the effectiveness of this method using Medical Literature Analysis and Retrieval System Online (Medline) with the keywords dementia and Montessori method. We selected 10 studies, in which there were significant improvements in participation and constructive engagement, and reductions in negative affect and passive engagement. Nevertheless, systematic reviews about this non-pharmacological intervention in dementia rate this method as weak in terms of effectiveness. This apparent discrepancy can be explained because the Montessori method may have, in fact, a small influence on dimensions such as behavioral problems, or because there is no research about this method with high levels of control, such as the presence of several control groups or a double-blind study.
Ogawa, S; Kondo, M; Ino, K; Ii, T; Imai, R; Furukawa, T A; Akechi, T
2017-12-01
To examine the relationship of fear of fear and broad dimensions of psychopathology in panic disorder with agoraphobia over the course of cognitive behavioural therapy in Japan. A total of 177 Japanese patients with panic disorder with agoraphobia were treated with group cognitive behavioural therapy between 2001 and 2015. We examined associations between the change scores on the Agoraphobic Cognitions Questionnaire or the Body Sensations Questionnaire and the changes in subscales of the Symptom Checklist-90 Revised during cognitive behavioural therapy, controlling for the change in panic disorder severity using multiple regression analysis. Reduction in Agoraphobic Cognitions Questionnaire score was related to a decrease in all Symptom Checklist-90 Revised (SCL-90-R) subscale scores. Reduction in Body Sensations Questionnaire score was associated with a decrease in anxiety. Reduction in Panic Disorder Severity Scale score was not related to any SCL-90-R subscale changes. Changes in fear of fear, especially maladaptive cognitions, may predict reductions in broad dimensions of psychopathology in patients with panic disorder with agoraphobia over the course of cognitive behavioural therapy. For the sake of improving a broader range of psychiatric symptoms in patients with panic disorder with agoraphobia, more attention to maladaptive cognition changes during cognitive behavioural therapy is warranted.
Liu, Yang; Chiaromonte, Francesca; Li, Bing
2017-06-01
In many scientific and engineering fields, advanced experimental and computing technologies are producing data that are not just high dimensional, but also internally structured. For instance, statistical units may have heterogeneous origins from distinct studies or subpopulations, and features may be naturally partitioned based on experimental platforms generating them, or on information available about their roles in a given phenomenon. In a regression analysis, exploiting this known structure in the predictor dimension reduction stage that precedes modeling can be an effective way to integrate diverse data. To pursue this, we propose a novel Sufficient Dimension Reduction (SDR) approach that we call structured Ordinary Least Squares (sOLS). This combines ideas from existing SDR literature to merge reductions performed within groups of samples and/or predictors. In particular, it leads to a version of OLS for grouped predictors that requires far less computation than recently proposed groupwise SDR procedures, and provides an informal yet effective variable selection tool in these settings. We demonstrate the performance of sOLS by simulation and present a first application to genomic data. The R package "sSDR," publicly available on CRAN, includes all procedures necessary to implement the sOLS approach. © 2016, The International Biometric Society.
Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data
NASA Astrophysics Data System (ADS)
Gavrilov, A.; Mukhin, D.; Loskutov, E.; Feigin, A.
2016-12-01
We present an approach to the empirical reconstruction of the evolution operator, in stochastic form, from space-distributed time series. The main problem in empirical modeling consists in choosing appropriate phase variables which can efficiently reduce the dimension of the model at minimal loss of information about the system's dynamics, which consequently leads to a more robust model and better quality of the reconstruction. For this purpose we incorporate two key steps in the model. The first step is a standard preliminary reduction of the observed time series dimension by decomposition via a certain empirical basis (e.g. an empirical orthogonal function basis or its nonlinear or spatio-temporal generalizations). The second step is the construction of an evolution operator from the principal components (PCs) - the time series obtained by the decomposition. In this step we introduce a new way of reducing the dimension of the embedding in which the evolution operator is constructed. It is based on choosing proper combinations of delayed PCs to take into account the most significant spatio-temporal couplings. The evolution operator is sought as a nonlinear random mapping parameterized using artificial neural networks (ANN). A Bayesian approach is used to learn the model and to find optimal hyperparameters: the number of PCs, the dimension of the embedding, and the degree of nonlinearity of the ANN. The results of applying the method to climate data (sea surface temperature, sea level pressure) and their comparison with the same method based on a non-reduced embedding are presented. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS).
NASA Astrophysics Data System (ADS)
Xiong, Charles Zhaoxi; Alexandradinata, A.
2018-03-01
It is demonstrated that fermionic/bosonic symmetry-protected topological (SPT) phases across different dimensions and symmetry classes can be organized using geometric constructions that increase dimensions and symmetry-reduction maps that change symmetry groups. Specifically, it is shown that the interacting classifications of SPT phases with and without glide symmetry fit into a short exact sequence, so that the classification with glide is constrained to be a direct sum of cyclic groups of order 2 or 4. Applied to fermionic SPT phases in the Wigner-Dyson class AII, this implies that the complete interacting classification in the presence of glide is Z4⊕Z2⊕Z2 in three dimensions. In particular, the hourglass-fermion phase recently realized in the band insulator KHgSb must be robust to interactions. Generalizations to spatiotemporal glide symmetries are discussed.
Huang, Tao; Li, Xiao-yu; Jin, Rui; Ku, Jing; Xu, Sen-miao; Xu, Meng-ling; Wu, Zhen-zhong; Kong, De-guo
2015-04-01
The present paper puts forward a non-destructive detection method that combines semi-transmission hyperspectral imaging technology with manifold learning dimension reduction algorithms and least squares support vector machine (LSSVM) to recognize internal and external defects in potatoes simultaneously. Three hundred fifteen potatoes were bought at a farmers' market as research objects, and a semi-transmission hyperspectral image acquisition system was constructed to acquire hyperspectral images of normal potatoes and of potatoes with external defects (bud and green rind) and an internal defect (hollow heart). To conform to actual production, the defect part was randomly oriented toward the front, side, or back of the acquisition probe when the hyperspectral images of externally defective potatoes were acquired. Average spectra (390-1,040 nm) were extracted from the regions of interest for spectral preprocessing. Three manifold learning algorithms were then used to reduce the dimension of the spectral data: supervised locally linear embedding (SLLE), locally linear embedding (LLE), and isometric mapping (ISOMAP). The low-dimensional data obtained by the manifold learning algorithms were used as model input, and Error Correcting Output Codes (ECOC) and LSSVM were combined to develop the multi-target classification model. By comparing and analyzing the results of the three models, we concluded that SLLE is the optimal manifold learning dimension reduction algorithm and that the SLLE-LSSVM model achieves the best recognition rate for internally and externally defective potatoes. For the test set, the individual recognition rates for normal, bud, green rind, and hollow heart potatoes reached 96.83%, 86.96%, 86.96%, and 95%, respectively, and the overall recognition rate was 93.02%. The results indicate that combining semi-transmission hyperspectral imaging technology with SLLE-LSSVM is a feasible qualitative analytical method that can simultaneously recognize internal and external potato defects and also provides a technical reference for rapid on-line non-destructive detection of internally and externally defective potatoes.
Fang, Chunying; Li, Haifeng; Ma, Lin; Zhang, Mancai
2017-01-01
Pathological speech usually refers to speech distortion resulting from illness or other biological insults. The assessment of pathological speech plays an important role in assisting experts, but automatic evaluation of speech intelligibility is difficult because such speech is usually nonstationary and mutational. In this paper, we develop feature extraction and reduction independently and describe a multigranularity combined feature scheme optimized by a hierarchical visual method. A novel method of generating a feature set based on the S-transform and chaotic analysis is proposed. The set comprises BAFS (430 basic acoustic features), local spectral characteristics MSCC (84 Mel S-transform cepstrum coefficients), and chaotic features (12). Finally, radar charts and the F-score are used to optimize the features by hierarchical visual fusion. The feature set could be optimized from 526 dimensions to 96 dimensions on the NKI-CCRT corpus and to 104 dimensions on the SVD corpus. The experimental results show that the new features with a support vector machine (SVM) have the best performance, with a recognition rate of 84.4% on the NKI-CCRT corpus and 78.7% on the SVD corpus. The proposed method is thus shown to be effective and reliable for pathological speech intelligibility evaluation.
Heat kernel and Weyl anomaly of Schrödinger invariant theory
NASA Astrophysics Data System (ADS)
Pal, Sridip; Grinstein, Benjamín
2017-12-01
We propose a method inspired by discrete light cone quantization to determine the heat kernel for a Schrödinger field theory (Galilean boost invariant with z = 2 anisotropic scaling symmetry) living in d + 1 dimensions, coupled to a curved Newton-Cartan background, starting from a heat kernel of a relativistic conformal field theory (z = 1) living in d + 2 dimensions. We use this method to show that the Schrödinger field theory of a complex scalar field cannot have any Weyl anomalies. To be precise, we show that the Weyl anomaly A^G_{d+1} for Schrödinger theory is related to the Weyl anomaly of a free relativistic scalar CFT A^R_{d+2} via A^G_{d+1} = 2πδ(m) A^R_{d+2}, where m is the charge of the scalar field under particle number symmetry. We provide further evidence of the vanishing anomaly by evaluating Feynman diagrams in all orders of perturbation theory. We present an explicit calculation of the anomaly using a regulated Schrödinger operator, without using the null cone reduction technique. We generalize our method to show that a similar result holds for theories with a single time derivative and with even z > 2.
Detection and tracking of gas plumes in LWIR hyperspectral video sequence data
NASA Astrophysics Data System (ADS)
Gerhart, Torin; Sunu, Justin; Lieu, Lauren; Merkurjev, Ekaterina; Chang, Jen-Mei; Gilles, Jérôme; Bertozzi, Andrea L.
2013-05-01
Automated detection of chemical plumes presents a segmentation challenge. The segmentation problem for gas plumes is difficult due to the diffusive nature of the cloud. The advantage of considering hyperspectral images in the gas plume detection problem over conventional RGB imagery is the presence of non-visual data, allowing for a richer representation of information. In this paper we present an effective method of visualizing hyperspectral video sequences containing chemical plumes and investigate the effectiveness of segmentation techniques on these post-processed videos. Our approach uses a combination of dimension reduction and histogram equalization to prepare the hyperspectral videos for segmentation. First, Principal Components Analysis (PCA) is used to reduce the dimension of the entire video sequence. This is done by projecting each pixel onto the first few principal components, resulting in a type of spectral filter. Next, a Midway method for histogram equalization is used. These methods redistribute the intensity values in order to reduce flicker between frames. This properly prepares these high-dimensional video sequences for more traditional segmentation techniques. We compare the ability of various clustering techniques to properly segment the chemical plume. These include K-means, spectral clustering, and the Ginzburg-Landau functional.
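The preprocessing pipeline described above can be sketched roughly as follows. The video array is synthetic, plain histogram equalization from scikit-image stands in for the Midway method, and the component count is an arbitrary choice.

import numpy as np
from sklearn.decomposition import PCA
from skimage import exposure

rng = np.random.default_rng(3)
video = rng.normal(size=(10, 64, 64, 120))     # (frames, rows, cols, spectral bands)

f, r, c, b = video.shape
pixels = video.reshape(-1, b)                  # every pixel spectrum across all frames
scores = PCA(n_components=3).fit_transform(pixels)
reduced = scores.reshape(f, r, c, 3)           # spectrally filtered video

# Equalize each component band; the paper's Midway method would instead
# match histograms across frames to suppress flicker.
equalized = np.stack(
    [exposure.equalize_hist(reduced[..., k]) for k in range(3)], axis=-1)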
Local Intrinsic Dimension Estimation by Generalized Linear Modeling.
Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru
2017-07-01
We propose a method for intrinsic dimension estimation. By fitting a regression model to the relationship between the distance from an inspection point, raised to a power, and the number of samples inside a ball with radius equal to that distance, we estimate the goodness of fit. Then, using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we experimentally show that the proposed method outperforms a conventional local dimension estimation method.
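For intuition, the sketch below implements a classical neighbor-distance-based local estimator (the Levina-Bickel maximum-likelihood estimator), which is related to, but not identical with, the regression-based method proposed in the paper; k and the test data are arbitrary choices.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_id_mle(X, k=20):
    """Levina-Bickel MLE of the local intrinsic dimension at every point of X."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)
    dist = dist[:, 1:]                              # drop the zero self-distance
    # inverse of the mean log ratio of the k-th to the j-th neighbor distance
    logs = np.log(dist[:, -1][:, None] / dist[:, :-1])
    return (k - 1) / logs.sum(axis=1)

X = np.random.default_rng(4).normal(size=(1000, 5))  # 5-dimensional Gaussian cloud
print(local_id_mle(X).mean())                        # should be close to 5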
Systems and methods for displaying data in split dimension levels
Stolte, Chris; Hanrahan, Patrick
2015-07-28
Systems and methods for displaying data in split dimension levels are disclosed. In some implementations, a method includes: at a computer, obtaining a dimensional hierarchy associated with a dataset, wherein the dimensional hierarchy includes at least one dimension and a sub-dimension of the at least one dimension; and populating information representing data included in the dataset into a visual table having a first axis and a second axis, wherein the first axis corresponds to the at least one dimension and the second axis corresponds to the sub-dimension of the at least one dimension.
Cai, Li
2015-06-01
Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
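The unidimensional recursion that the paper extends can be sketched in a few lines: at a fixed quadrature point, the summed-score likelihood is built item by item. The item probabilities below are hypothetical; a full implementation would repeat this at every quadrature point and weight by the prior.

import numpy as np

def summed_score_likelihood(p):
    """p[i] = P(item i correct | theta); returns L[s] = P(summed score = s | theta)."""
    L = np.array([1.0])                  # with zero items, score 0 has probability 1
    for p_i in p:
        new = np.zeros(len(L) + 1)
        new[:-1] += L * (1 - p_i)        # item answered incorrectly: score unchanged
        new[1:] += L * p_i               # item answered correctly: score increases by 1
        L = new
    return L

p = np.array([0.7, 0.5, 0.9, 0.4])       # four hypothetical dichotomous items
L = summed_score_likelihood(p)
print(L, L.sum())                        # distribution over scores 0..4, sums to 1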
Ogawa, Sei; Imai, Risa; Suzuki, Masako; Furukawa, Toshi A; Akechi, Tatsuo
2017-12-01
Social anxiety disorder (SAD) patients commonly have broad dimensions of psychopathology. This study investigated the relationship between a wide range of psychopathology and attention or cognitions during cognitive behavioral therapy (CBT) for SAD. We treated 96 SAD patients with group CBT. Using multiple regression analysis, we examined the associations between the changes in broad dimensions of psychopathology and the changes in self-focused attention or maladaptive cognitions over the course of CBT. The reduction in self-focused attention was related to decreases in somatization, obsessive-compulsive symptoms, interpersonal sensitivity, anxiety, phobic anxiety, and the global severity index. The reduction in maladaptive cognitions was associated with decreases in interpersonal sensitivity, depression, and the global severity index. The present study suggests that changes in self-focused attention and maladaptive cognitions may predict changes in broad dimensions of psychopathology in SAD patients over the course of CBT. For the purpose of improving a wide range of psychiatric symptoms in SAD patients undergoing CBT, it may be useful to decrease self-focused attention and maladaptive cognitions.
Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.
Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J
2018-02-15
Electrically active brain regions can be located by applying MUltiple SIgnal Classification (MUSIC) to magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents accurate estimation of the true number of brain-signal sources. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time-courses, and the initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hamprecht, Fred A.; Peter, Christine; Daura, Xavier; Thiel, Walter; van Gunsteren, Wilfred F.
2001-02-01
We propose an approach for summarizing the output of long simulations of complex systems, affording a rapid overview and interpretation. First, multidimensional scaling techniques are used in conjunction with dimension reduction methods to obtain a low-dimensional representation of the configuration space explored by the system. A nonparametric estimate of the density of states in this subspace is then obtained using kernel methods. The free energy surface is calculated from that density, and the configurations produced in the simulation are then clustered according to the topography of that surface, such that all configurations belonging to one local free energy minimum form one class. This topographical cluster analysis is performed using basin spanning trees, which we introduce as subgraphs of Delaunay triangulations. Free energy surfaces obtained in dimensions lower than four can be visualized directly using iso-contours and -surfaces. Basin spanning trees also afford a glimpse of higher-dimensional topographies. The procedure is illustrated using molecular dynamics simulations of the reversible folding of peptide analogues. Finally, we emphasize the intimate relation of density estimation techniques to modern enhanced sampling algorithms.
An accurate boundary element method for the exterior elastic scattering problem in two dimensions
NASA Astrophysics Data System (ADS)
Bao, Gang; Xu, Liwei; Yin, Tao
2017-11-01
This paper is concerned with a Galerkin boundary element method for solving the two-dimensional exterior elastic wave scattering problem. The original problem is first reduced to the so-called Burton-Miller [1] boundary integral formulation, and the essential mathematical features of its variational form are discussed. In the numerical implementation, a newly derived and analytically accurate regularization formula [2] is employed for the numerical evaluation of the hyper-singular boundary integral operator. A new computational approach based on series expansions of Hankel functions is employed for the computation of the weakly singular boundary integral operators during the reduction of the corresponding Galerkin equations to a discrete linear system. The effectiveness of the proposed numerical methods is demonstrated using several numerical examples.
Infrared face recognition based on LBP histogram and KW feature selection
NASA Astrophysics Data System (ADS)
Xie, Zhihua
2014-07-01
The conventional feature based on the local binary pattern (LBP) histogram still has room for performance improvement. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on LBP histogram representation. To extract robust local features from infrared face images, LBP is chosen to obtain the composition of micro-patterns in sub-blocks. Based on statistical test theory, the Kruskal-Wallis (KW) feature selection method is proposed to select the LBP patterns suitable for infrared face recognition. The experimental results show that combining LBP with KW feature selection improves the performance of infrared face recognition; the proposed method outperforms traditional methods based on the LBP histogram, the discrete cosine transform (DCT), or principal component analysis (PCA).
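A rough sketch of Kruskal-Wallis feature screening is shown below using scipy.stats.kruskal. The feature matrix, class labels, and the number of retained features are placeholders, and ranking by the H statistic stands in for whatever selection rule the paper actually uses.

import numpy as np
from scipy.stats import kruskal

def kw_select(X, y, n_keep=50):
    """Rank features by the Kruskal-Wallis H statistic across classes."""
    classes = np.unique(y)
    stats = np.array([kruskal(*[X[y == c, j] for c in classes]).statistic
                      for j in range(X.shape[1])])
    return np.argsort(stats)[::-1][:n_keep]   # indices of the most discriminative features

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 256))               # e.g. 256 LBP histogram bins, placeholder
y = rng.integers(0, 4, size=120)              # four hypothetical subjects
keep = kw_select(X, y, n_keep=50)             # reduced feature index set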
VizieR Online Data Catalog: Outliers and similarity in APOGEE (Reis+, 2018)
NASA Astrophysics Data System (ADS)
Reis, I.; Poznanski, D.; Baron, D.; Zasowski, G.; Shahaf, S.
2017-11-01
t-SNE is a dimensionality reduction algorithm that is particularly well suited for the visualization of high-dimensional datasets. We use t-SNE to visualize our distance matrix. A priori, these distances could define a space with almost as many dimensions as objects, i.e., tens of thousands of dimensions. Obviously, since many stars are quite similar, and their spectra are defined by a few physical parameters, the minimal spanning space might be smaller. By using t-SNE we can examine the structure of our sample projected into 2D. We use our distance matrix as input to the t-SNE algorithm and in return get a 2D map of the objects in our dataset. For each star in a sample of 183232 APOGEE stars, we list the APOGEE IDs of the 99 stars with the most similar spectra (according to the method described in the paper), ordered by similarity. (3 data files).
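The visualization step can be sketched with scikit-learn's t-SNE operating directly on a precomputed distance matrix. The spectra, distance metric, and perplexity below are placeholders rather than the catalog's actual pipeline.

import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 40))     # placeholder spectra (stars x wavelength bins)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distance matrix

xy = TSNE(n_components=2, metric="precomputed", init="random",
          perplexity=30).fit_transform(D)    # 2D map of the sample for plotting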
α′-corrected black holes in String Theory
NASA Astrophysics Data System (ADS)
Cano, Pablo A.; Meessen, Patrick; Ortín, Tomás; Ramírez, Pedro F.
2018-05-01
We consider the well-known solution of the Heterotic Superstring effective action to zeroth order in α′ that describes the intersection of a fundamental string with momentum and a solitonic 5-brane, and which gives a 3-charge, static, extremal, supersymmetric black hole in 5 dimensions upon dimensional reduction on T5. We compute explicitly the first-order in α′ corrections to this solution, including SU(2) Yang-Mills fields which can be used to cancel some of these corrections, and we study the main properties of this α′-corrected solution: supersymmetry, values of the near-horizon and asymptotic charges, behavior under α′-corrected T-duality, value of the entropy (using the Wald formula directly in 10 dimensions), existence of small black holes, etc. The value obtained for the entropy agrees, within the limits of the approximation, with that obtained by microscopic methods. The α′ corrections coming from Wald's formula prove crucial for this result.
Examining social, physical, and environmental dimensions of tornado vulnerability in Texas.
Siebeneck, Laura
2016-01-01
To develop a vulnerability model that captures the social, physical, and environmental dimensions of tornado vulnerability of Texas counties. Guided by previous research and methodologies proposed in the hazards and emergency management literature, a principal components analysis is used to create a tornado vulnerability index. Data were gathered from open-source information available through the US Census Bureau, American Community Surveys, and the Texas Natural Resources Information System. Texas counties. The results of the model yielded three indices that highlight the geographic variability of social vulnerability, built environment vulnerability, and tornado hazard throughout Texas. Further analyses suggest that the counties with the highest tornado vulnerability include those with high population densities and high tornado risk. This article demonstrates one method for assessing statewide tornado vulnerability and presents how the results of this type of analysis can be applied by emergency managers towards the reduction of tornado vulnerability in their communities.
NASA Astrophysics Data System (ADS)
Parsons, Todd L.; Rogers, Tim
2017-10-01
Systems composed of large numbers of interacting agents often admit an effective coarse-grained description in terms of a multidimensional stochastic dynamical system, driven by small-amplitude intrinsic noise. In applications to biological, ecological, chemical and social dynamics it is common for these models to possess quantities that are approximately conserved on short timescales, in which case system trajectories are observed to remain close to some lower-dimensional subspace. Here, we derive explicit and general formulae for a reduced-dimension description of such processes that is exact in the limit of small noise and well-separated slow and fast dynamics. The Michaelis-Menten law of enzyme-catalysed reactions and the link between the Lotka-Volterra and Wright-Fisher processes are explored as simple worked examples. Extensions of the method are presented for infinite dimensional systems and processes coupled to non-Gaussian noise sources.
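For reference, the quasi-steady-state reduction that yields the Michaelis-Menten law, in standard textbook notation (a sketch of the classical argument, not the paper's small-noise derivation):

% Classical quasi-steady-state approximation (QSSA) for enzyme kinetics
\[
E + S \underset{k_{-1}}{\overset{k_{1}}{\rightleftharpoons}} ES
\xrightarrow{\;k_{2}\;} E + P
\]
% Setting d[ES]/dt \approx 0 (the fast variable relaxes onto the slow manifold):
\[
[ES] = \frac{[E]_{0}\,[S]}{K_{M} + [S]}, \qquad
K_{M} = \frac{k_{-1} + k_{2}}{k_{1}},
\]
\[
v = k_{2}\,[ES] = \frac{V_{\max}\,[S]}{K_{M} + [S]}, \qquad
V_{\max} = k_{2}\,[E]_{0}.
\]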
NASA Astrophysics Data System (ADS)
Shahbeigi-Roodposhti, Peiman; Jordan, Eric; Shahbazmohamadi, Sina
2017-12-01
Three-dimensional behavior of NiCoCrAlY bond coat surface geometry change (known as rumpling) was characterized during 120 h of thermal cycling. The proposed scanning electron microscope (SEM)-based 3D imaging method allows for recording the change in both height and width at the same location during the heat treatment. Statistical analysis using both profile information [two dimensions (2D)] and surface information [three dimensions (3D)] demonstrated the typical nature of rumpling as an increase in height and a decrease in width. However, it also revealed an anomaly of height reduction between 40 and 80 cycles. This behavior was further investigated by analyzing the bearing area ratio curve of the surface and was attributed to the filling of voids and valleys by the growth of the thermally grown oxide.
Modeling Business Processes in Public Administration
NASA Astrophysics Data System (ADS)
Repa, Vaclav
During more than 10 years of its existence, business process modeling became a regular part of organization management practice. It is mostly regarded as a part of information system development or even as a way to implement some supporting technology (for instance a workflow system). Although I do not agree with such a reduction of the real meaning of a business process, it is necessary to admit that information technologies play an essential role in business processes (see [1] for more information). Consequently, an information system is inseparable from a business process itself, because it is a cornerstone of the general basic infrastructure of a business. This fact impacts all dimensions of business process management. One of these dimensions is methodology, which postulates that information systems development provide business process management with exact methods and tools for modeling business processes. The methodology underlying the approach presented in this paper also has its roots in information systems development methodology.
The Correlation Fractal Dimension of Complex Networks
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Liu, Zhenzhen; Wang, Mogei
2013-05-01
The fractality of complex networks is studied by estimating the correlation dimensions of the networks. Compared with previous algorithms for estimating the box dimension, our algorithm achieves a significant reduction in time complexity. For the four benchmark cases tested, namely the Escherichia coli (E. coli) metabolic network, the Homo sapiens protein interaction network (H. sapiens PIN), the Saccharomyces cerevisiae protein interaction network (S. cerevisiae PIN) and the World Wide Web (WWW), experiments are provided to demonstrate the validity of our algorithm.
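A generic correlation-dimension estimate (Grassberger-Procaccia style) can be sketched as follows: compute the correlation sum C(r), the fraction of pairs closer than r, and fit the slope of log C(r) against log r. The paper's network algorithm would use network (shortest-path) distances and is more efficient than this brute-force version; the point set and radii below are placeholders.

import numpy as np

def correlation_dimension(D, radii):
    """D: symmetric pairwise distance matrix; radii: increasing scales."""
    n = D.shape[0]
    iu = np.triu_indices(n, k=1)              # each pair counted once
    C = np.array([(D[iu] < r).mean() for r in radii])   # correlation sum C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)  # log-log scaling exponent
    return slope

X = np.random.default_rng(9).uniform(size=(2000, 2))    # points filling a 2D square
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(correlation_dimension(D, np.geomspace(0.02, 0.2, 10)))  # close to 2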
The maze and the minotaur: mental health in primary health care.
Hirdes, Alice; Scarparo, Helena Beatriz Kochenborger
2015-02-01
The article aims to discuss the integration of mental health into primary care through matrix support in mental health. We point out the main barriers to the use of this working method, as well as the factors facilitating matrix support for mental health in primary care. The former lie within the scope of epistemological specificities, professional issues, and management in its political and ideological dimensions. Among the latter, we highlight: the care for people with mental disorders in the territory; the reduction of stigma and discrimination; the development of new skills for professionals in primary care; the reduction of costs; the simultaneous treatment of physical and mental illness, which often overlap; and the possibility of incorporating mental health care in a perspective of extended clinical service using an inter/transdisciplinary approach.
Adaptive simplification of complex multiscale systems.
Chiavazzo, Eliodoro; Karlin, Ilya
2011-03-01
A fully adaptive methodology is developed for reducing the complexity of large dissipative systems. This represents a significant step toward extracting essential physical knowledge from complex systems, by addressing the challenging problem of the minimal number of variables needed to exactly capture the system dynamics. An accurate reduced description is achieved by constructing a hierarchy of slow invariant manifolds, with an embarrassingly simple implementation in any dimension. The method is validated with the autoignition of the hydrogen-air mixture, where a reduction to a cascade of slow invariant manifolds is observed.
Cong, Fengyu; Leppänen, Paavo H T; Astikainen, Piia; Hämäläinen, Jarmo; Hietanen, Jari K; Ristaniemi, Tapani
2011-09-30
The present study addresses the benefits of a linear optimal filter (OF) for independent component analysis (ICA) in extracting brain event-related potentials (ERPs). A filter such as a digital filter is usually considered a denoising tool. In fact, when filtering ERP recordings with an OF, the ERP's topography should not be changed by the filter, and the output should still be representable by the linear transformation. Moreover, an OF designed for a specific ERP source or component may remove noise, as well as reduce the overlap of sources and even reject some non-targeted sources in the ERP recordings. The OF can thus accomplish both denoising and dimension reduction (reducing the number of sources) simultaneously. We demonstrated these effects using two datasets, one containing visual and the other auditory ERPs. The results showed that the method including OF and ICA extracted much more reliable components than ICA alone without the OF did, and that the OF removed some non-targeted sources and made the underdetermined model of the EEG recordings approach the determined one. Thus, we suggest designing an OF based on the properties of an ERP to filter recordings before using ICA decomposition to extract the targeted ERP component. Copyright © 2011 Elsevier B.V. All rights reserved.
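The filter-then-decompose idea can be sketched as below. A generic zero-phase Butterworth band-pass stands in for the ERP-specific optimal filter designed in the paper, and the recording, sampling rate, band edges, and component count are all placeholders.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
fs = 250.0                                   # sampling rate in Hz, placeholder
eeg = rng.normal(size=(32, 5000))            # channels x samples, placeholder recording

# Zero-phase band-pass: a linear filter, so the spatial topography is preserved.
b, a = butter(4, [1.0, 20.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, eeg, axis=1)

ica = FastICA(n_components=10, random_state=0)
sources = ica.fit_transform(filtered.T)      # (samples, components) source estimates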
Covariance, correlation matrix, and the multiscale community structure of networks.
Shen, Hua-Wei; Cheng, Xue-Qi; Fang, Bin-Xing
2010-07-01
Empirical studies show that real-world networks often exhibit multiple scales of topological description. However, it is still an open problem how to identify the intrinsic multiple scales of networks. In this paper, we consider detecting the multiscale community structure of a network from the perspective of dimension reduction. According to this perspective, a covariance matrix of the network is defined to uncover the multiscale community structure through translation and rotation transformations. It is proved that the covariance matrix is the unbiased version of the well-known modularity matrix. We then point out that the translation and rotation transformations fail to deal with heterogeneous networks, which are very common in nature and society. To address this problem, a correlation matrix is proposed by introducing a rescaling transformation into the covariance matrix. Extensive tests on real-world and artificial networks demonstrate that the correlation matrix significantly outperforms the covariance matrix (and equivalently the modularity matrix) in identifying the multiscale community structure of networks. This work provides a novel perspective on the identification of community structure, and thus various dimension reduction methods might be used for this task. Through introducing the correlation matrix, we further conclude that the rescaling transformation is crucial for identifying the multiscale community structure of networks, alongside the translation and rotation transformations.
Ovtchinnikov, Evgueni E.; Xanthis, Leonidas S.
2000-01-01
We present a methodology for the efficient numerical solution of eigenvalue problems of full three-dimensional elasticity for thin elastic structures, such as shells, plates and rods of arbitrary geometry, discretized by the finite element method. Such problems are solved by iterative methods, which, however, are known to suffer from slow convergence or even convergence failure when the thickness is small. In this paper we show an effective way of resolving this difficulty by invoking a special preconditioning technique associated with the effective dimensional reduction algorithm (EDRA). As an example, we present an algorithm for computing the minimal eigenvalue of a thin elastic plate, and we show both theoretically and numerically that it is robust with respect to both the thickness and discretization parameters, i.e. the convergence does not deteriorate with diminishing thickness or mesh refinement. This robustness is a sine qua non for the efficient computation of large-scale eigenvalue problems for thin elastic structures. PMID:10655469
Correlation Dimension Estimates of Global and Local Temperature Data.
NASA Astrophysics Data System (ADS)
Wang, Qiang
1995-11-01
The author has attempted to detect the presence of low-dimensional deterministic chaos in temperature data by estimating the correlation dimension with the Hill estimate recently developed by Mikosch and Wang. There is no convincing evidence of low dimensionality with either the global dataset (Southern Hemisphere monthly average temperatures from 1858 to 1984) or the local temperature dataset (daily minimums at Auckland, New Zealand). Any apparent reduction in the dimension estimates appears to be due largely, if not entirely, to the effects of statistical bias; yet neither series is a purely random stochastic process. The dimension of the climatic attractor may be significantly larger than 10.
NASA Astrophysics Data System (ADS)
Sun, Xi-wan; Guo, Zhen-yun; Huang, Wei; Li, Shi-bin; Yan, Li
2017-02-01
Drag reduction and thermal protection systems for hypersonic re-entry vehicles have attracted increasing attention, and several novel concepts have been proposed by researchers. In the current study, the influences of performance parameters on the drag and heat reduction efficiency of the combinational novel cavity and opposing jet concept have been investigated numerically. The Reynolds-averaged Navier-Stokes (RANS) equations coupled with the SST k-ω turbulence model have been employed to calculate the surrounding flowfields, and the first-order spatially accurate upwind scheme appears to be more suitable for three-dimensional flowfields after grid-independence analysis. Different cases of performance parameters, namely jet operating conditions, freestream angle of attack and physical dimensions, were simulated after verification of the numerical method, and the effects on shock stand-off distance, drag force coefficient, and surface pressure and heat flux distributions have been analyzed. This is a basic study for drag reduction and thermal protection, in preparation for future multi-objective optimization of the combinational novel cavity and opposing jet concept in hypersonic flows.
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.
2016-10-01
In this paper, we examine the validity of OCT for identifying changes in skin using a texture analysis built from Haralick texture features, fractal dimension, the Markov random field method, and complex directional features from different tissues. The described features have been used to detect specific spatial characteristics which can differentiate healthy tissue from diverse skin cancers in cross-sectional OCT images (B- and/or C-scans). In this work, we used an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in the OCT images. The Haralick texture features contrast, correlation, energy, and homogeneity were calculated in various directions. A box-counting method was performed to evaluate the fractal dimension of the skin probes. Markov random fields were used to enhance the quality of the classification. Additionally, we used the complex directional field, calculated by the local gradient methodology, to increase the assessment quality of the diagnostic method. Our results demonstrate that these texture features may provide helpful information for discriminating tumors from healthy tissue. The experimental data set contains 488 OCT images of normal skin and of tumors: basal cell carcinoma (BCC), malignant melanoma (MM) and nevus. All images were acquired with our laboratory SD-OCT setup, based on a broadband light source delivering an output power of 20 mW at a central wavelength of 840 nm with a bandwidth of 25 nm. We obtained a sensitivity of about 97% and a specificity of about 73% for the task of discriminating between MM and nevus.
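The Haralick-style descriptors named above can be computed from gray-level co-occurrence matrices as sketched below with scikit-image (recent releases spell the functions graycomatrix/graycoprops; older ones use "grey"). The B-scan, quantization level, distances, and angles are placeholders.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(8)
bscan = rng.integers(0, 64, size=(256, 256), dtype=np.uint8)  # placeholder OCT B-scan

# Co-occurrence matrices over several offsets and directions.
glcm = graycomatrix(bscan, distances=[1, 2],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=64, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).ravel()
            for prop in ("contrast", "correlation", "energy", "homogeneity")}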
NASA Astrophysics Data System (ADS)
Amato, Umberto; Antoniadis, Anestis; De Feis, Italia; Masiello, Guido; Matricardi, Marco; Serio, Carmine
2009-03-01
Remote sensing of the atmosphere is changing rapidly thanks to the development of high-spectral-resolution infrared space-borne sensors. The aim is to provide increasingly accurate information on the lower atmosphere, as requested by the World Meteorological Organization (WMO), to improve the reliability and time span of weather forecasts as well as Earth monitoring. In this paper we show the results obtained on a set of Infrared Atmospheric Sounding Interferometer (IASI) observations using a new statistical strategy based on dimension reduction. Retrievals have been compared to time-space colocated ECMWF analyses for temperature, water vapor and ozone.
Fractal structures and fractal functions as disease indicators
Escos, J.M; Alados, C.L.; Emlen, J.M.
1995-01-01
Developmental instability is an early indicator of stress, and has been used to monitor the impacts of human disturbance on natural ecosystems. Here we investigate the use of different measures of developmental instability in two species, green pepper (Capsicum annuum), a plant, and Spanish ibex (Capra pyrenaica), an animal. For green peppers we compared the variance in the allometric relationship between control plants and a treatment group infected with the tomato spotted wilt virus. The results show that infected plants have a greater variance about the allometric regression line than the control plants. We also observed a reduction in the complexity of branch structure in green pepper under viral infection: the box-counting fractal dimension of branch architecture declined under infection stress. We also tested the reduction in complexity of behavioral patterns under stress in Spanish ibex. The fractal dimension of the head-lift frequency distribution measures predator detection efficiency; this dimension decreased under stressful conditions, such as advanced pregnancy and parasitic infection. Feeding distribution activities reflect food searching efficiency. Power spectral analysis proved to be the most powerful tool for characterizing fractal behavior, revealing a reduction in the complexity of the time distribution of activity under parasitic infection.
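The box-counting dimension used here for branch architecture is straightforward to compute: cover a binary image with boxes of decreasing side and regress log box counts on log inverse box size. A minimal numpy sketch, illustrative only and not the authors' code:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a 2-D binary mask."""
    counts = []
    for s in sizes:
        # pad so both image dimensions divide evenly by the box size
        H = -(-mask.shape[0] // s) * s
        W = -(-mask.shape[1] // s) * s
        padded = np.zeros((H, W), dtype=bool)
        padded[:mask.shape[0], :mask.shape[1]] = mask
        # count boxes of side s containing at least one foreground pixel
        boxes = padded.reshape(H // s, s, W // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # the dimension is the slope of log N(s) versus log(1/s)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# a filled square should give a dimension near 2
print(box_counting_dimension(np.ones((100, 100), dtype=bool)))
```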
Shape component analysis: structure-preserving dimension reduction on biological shape spaces.
Lee, Hao-Chih; Liao, Tao; Zhang, Yongjie Jessica; Yang, Ge
2016-03-01
Quantitative shape analysis is required by a wide range of biological studies across diverse scales, ranging from molecules to cells and organisms. In particular, high-throughput and systems-level studies of biological structures and functions have started to produce large volumes of complex high-dimensional shape data. Analysis and understanding of high-dimensional biological shape data require dimension-reduction techniques. We have developed a technique for non-linear dimension reduction of 2D and 3D biological shape representations on their Riemannian spaces. A key feature of this technique is that it preserves distances between different shapes in an embedded low-dimensional shape space. We demonstrate an application of this technique by combining it with non-linear mean-shift clustering on the Riemannian spaces for unsupervised clustering of shapes of cellular organelles and proteins. Source code and data for reproducing results of this article are freely available at https://github.com/ccdlcmu/shape_component_analysis_Matlab. The implementation was made in MATLAB and is supported on MS Windows, Linux and Mac OS. geyang@andrew.cmu.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aliev, Alikram N.; Cebeci, Hakan; Dereli, Tekin
We present an exact solution describing a stationary and axisymmetric object with electromagnetic and dilaton fields. The solution generalizes the usual Kerr-Taub-NUT (Newman-Unti-Tamburino) spacetime in general relativity and is obtained by boosting this spacetime in the fifth dimension and performing a Kaluza-Klein reduction to four dimensions. We also discuss the physical parameters of this solution and calculate its gyromagnetic ratio.
Old Tails and New Trails in High Dimensions
ERIC Educational Resources Information Center
Halevy, Avner
2013-01-01
We discuss the motivation for dimension reduction in the context of the modern data revolution and introduce a key result in this field, the Johnson-Lindenstrauss flattening lemma. Then we leap into high-dimensional space for a glimpse of the phenomenon called concentration of measure, and use it to sketch a proof of the lemma. We end by tying…
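As a quick empirical companion to the flattening lemma discussed above: scikit-learn ships both the dimension bound and Gaussian random projections, so the distance-preservation guarantee can be checked directly. A demonstration sketch, not part of the article:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.random_projection import (GaussianRandomProjection,
                                       johnson_lindenstrauss_min_dim)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10_000))        # 100 points in high dimension
# minimal target dimension for distortion eps, per the lemma
k = johnson_lindenstrauss_min_dim(n_samples=100, eps=0.2)
Y = GaussianRandomProjection(n_components=k, random_state=0).fit_transform(X)
ratios = pdist(Y) / pdist(X)              # per-pair distance distortion
print(k, ratios.min(), ratios.max())      # ratios should lie near 1 +/- eps
```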
Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division
NASA Astrophysics Data System (ADS)
Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano
2013-04-01
We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.
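The core operation, dividing a numerator polynomial modulo a Gröbner basis to obtain a unique remainder, can be reproduced in miniature with sympy. The polynomials below are toy stand-ins for cut equations and integrand numerators, not actual amplitude data:

```python
from sympy import symbols, groebner, reduced

x, y = symbols('x y')
# toy polynomials standing in for cut/denominator relations (illustrative only)
cuts = [x**2 + y**2 - 1, x*y - 1]
G = groebner(cuts, x, y, order='lex')
# divide a toy "numerator" modulo the Groebner basis: f = sum_i q_i g_i + r
f = x**3 + y**3 + x*y
quotients, remainder = reduced(f, G.exprs, x, y, order='lex')
print(remainder)   # the unique remainder: the analogue of the residue
```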
Some remarks on relativistic diffusion and the spectral dimension criterion
NASA Astrophysics Data System (ADS)
Muniz, C. R.; Cunha, M. S.; Filho, R. N. Costa; Bezerra, V. B.
2015-01-01
The spectral dimension d_s at high energies is calculated using the Relativistic Schrödinger Equation Analytically Continued (RSEAC) instead of the so-called telegraph equation (TE), in both the ultraviolet (UV) and infrared (IR) regimes. Regarding the TE, the recent literature reports difficulties related to its stochastic derivation and interpretation, advocating the use of the RSEAC to properly describe relativistic diffusion phenomena. Taking into account that Lorentz symmetry is broken in the UV regime at a Lifshitz point, we show that there exists a degeneracy at very high energies, meaning that both the RSEAC and the TE correctly describe diffusion processes at these energy scales, at least under the spectral dimension criterion. In fact, both equations yield the same result, namely d_s = 2, a dimensional reduction that is compatible with several theories of quantum gravity. This result holds even when one takes into account a cosmological model such as the de Sitter universe. On the other hand, in the IR regime, this degeneracy is lifted in favor of the approach via the TE, because only this equation provides the correct value of d_s, equal to the actual number of spacetime dimensions, i.e., d_s = 4, while the RSEAC yields d_s = 3, so that a diffusing particle described by this method experiences a three-dimensional spacetime.
Dimensions of integration in interdisciplinary explanations of the origin of evolutionary novelty.
Love, Alan C; Lugar, Gary L
2013-12-01
Many philosophers of biology have embraced a version of pluralism in response to the failure of theory reduction, but overlook how concepts, methods, and explanatory resources are in fact coordinated, such as in interdisciplinary research where the aim is to integrate different strands into an articulated whole. This is observable for the origin of evolutionary novelty, a complex problem that requires a synthesis of intellectual resources from different fields to arrive at robust answers to multiple allied questions. It is an apt locus for exploring new dimensions of explanatory integration because it necessitates coordination among historical and experimental disciplines (e.g., geology and molecular biology). These coordination issues are widespread for the origin of the novel morphologies observed in the Cambrian Explosion. Despite an explicit commitment to an integrated, interdisciplinary explanation, some potential disciplinary contributors are excluded. Notable among these exclusions is the physics of ontogeny. We argue that two different dimensions of integration, data and standards, have been insufficiently distinguished. This distinction accounts for why physics-based explanatory contributions to the origin of novelty have been resisted: they do not integrate certain types of data and differ in how they conceptualize the standard of uniformitarianism in historical, causal explanations. Our analysis of these different dimensions of integration contributes to the development of more adequate and integrated explanatory frameworks. Copyright © 2013 Elsevier Ltd. All rights reserved.
Modal reduction in single crystal sapphire optical fiber
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Yujie; Hill, Cary; Liu, Bo
2015-10-12
A new type of single crystal sapphire optical fiber (SCSF) design is proposed to reduce the number of guided modes via a highly dispersive cladding with a periodic array of high and low index regions in the azimuthal direction. The structure retains a “core” region of pure single crystal (SC) sapphire in the center of the fiber and a “cladding” region of alternating layers of air and SC sapphire in the azimuthal direction that is uniform in the radial direction. The modal characteristics and confinement losses of the fundamental mode were analyzed via the finite element method by varying the effective core diameter and the dimensions of the “windmill” shaped cladding. The simulation results showed that the number of guided modes was significantly reduced in the “windmill” fiber design as the radial dimension of the air and SC sapphire cladding regions increases with a corresponding decrease in the azimuthal dimension. It is anticipated that the “windmill” SCSF will readily improve the performance of current fiber optic sensors in harsh environments and potentially enable those that were limited by the extremely large modal volume of unclad SCSF.
[A research in speech endpoint detection based on boxes-coupling generalization dimension].
Wang, Zimei; Yang, Cuirong; Wu, Wei; Fan, Yingle
2008-06-01
In this paper, a new method for calculating the generalized dimension, based on a boxes-coupling principle, is proposed to overcome edge effects and to improve speech endpoint detection relative to the original generalized-dimension calculation. The new method was applied to speech endpoint detection as follows. Firstly, the length of the overlapping border was determined, and by calculating the generalized dimension while covering the speech signal with overlapped boxes, three-dimensional feature vectors comprising the box dimension, the information dimension and the correlation dimension were obtained. Secondly, in light of the relation between feature distance and similarity degree, feature extraction was conducted using a common distance. Lastly, a bi-threshold method was used to classify the speech signals. The experimental results indicated that, compared with the original generalized dimension (OGD) and the spectral entropy (SE) algorithm, the proposed method is more robust and effective for detecting speech signals containing different kinds of noise at different signal-to-noise ratios (SNR), especially at low SNR.
Behavior Based Social Dimensions Extraction for Multi-Label Classification
Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin
2016-01-01
Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849
NASA Astrophysics Data System (ADS)
Tadokoro, Masahide; Shinozuka, Shinichi; Ogata, Kunie; Morimoto, Tamotsu
2008-03-01
Semiconductor manufacturing technology has shifted towards finer design rules, and demands for critical dimension uniformity (CDU) of resist patterns have become greater than ever. One of the methods for improving Resist Pattern CDU is to control the temperature of the post-exposure bake (PEB). When ArF resist is used, there is a certain relationship between critical dimension (CD) and PEB temperature. By utilizing this relationship, Resist Pattern CDU can be improved through control of the within-wafer temperature distribution in the PEB process. We have already applied this method to Resist Pattern CDU improvement and have achieved good results. In this evaluation, we aim at: 1. Clarifying the relationship between the improvement in Resist Pattern CDU through PEB temperature control and the improvement in Etching Pattern CDU. 2. Verifying whether Resist Pattern CDU improvement through PEB temperature control has any effect on the reduction in wiring resistance variation. The evaluation procedure is: 1. Preparation of wafers with a base film of doped Poly-Si (D-Poly). 2. Creation of two sets of samples on the base, a set with good Resist Pattern CDU and a set with poor Resist Pattern CDU. 3. Etching of the two sets under the same conditions. 4. Measurement of CD and wiring resistance. We used Optical CD Measurement (OCD) for measurement of resist patterns and etching patterns because OCD is minimally affected by Line Edge Roughness (LER). As a result, we found that: 1. The improvement in Resist Pattern CDU leads to an improvement in Etching Pattern CDU. 2. The improvement in Resist Pattern CDU has an effect on the reduction in wiring resistance variation. There is a cause-and-effect relationship between wiring resistance variation and transistor characteristics. From this relationship, we expect that the improvement in Resist Pattern CDU through PEB temperature control can contribute to device performance improvement.
a New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution
NASA Astrophysics Data System (ADS)
Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin
Fractal theory has been widely applied to the petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions is always the focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore space fractal dimension and the tortuosity fractal dimension of porous media is derived based on a fractal capillary model assumption. The presented work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and are simultaneously compared with predictions from the analytical expression. In addition, the proposed fractal dimension method is tested on Micro-CT images of three sandstone cores, and the results are compared with fractal dimensions obtained by a box-counting algorithm. The test results also confirm a self-similar fractal range in sandstone when smaller pores are excluded.
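In the fractal capillary picture, the cumulative number of pores larger than r scales as N(>r) ∝ (r_max/r)^D, so D follows from a log-log regression on the pore size distribution. A minimal sketch of that regression under this assumed scaling (the paper's exact expressions may differ):

```python
import numpy as np

def pore_fractal_dimension(radii, counts):
    """Pore-space fractal dimension from a pore size distribution.

    Assumes the fractal capillary scaling N(>r) ~ (r_max / r)**D, so D is
    minus the slope of log N(>r) against log r.
    """
    order = np.argsort(radii)[::-1]                        # largest pores first
    r = np.asarray(radii, dtype=float)[order]
    N = np.cumsum(np.asarray(counts, dtype=float)[order])  # cumulative N(>r)
    slope, _ = np.polyfit(np.log(r), np.log(N), 1)
    return -slope

# synthetic distribution built with D = 1.6 should recover that value
r = np.logspace(0, 2, 20)                  # bin radii
N_target = (r.max() / r) ** 1.6            # ideal cumulative counts N(>r)
order = np.argsort(r)[::-1]
counts = np.empty_like(r)
counts[order] = np.diff(N_target[order], prepend=0.0)  # per-bin counts
print(pore_fractal_dimension(r, counts))   # ~1.6
```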
CDU improvement technology of etching pattern using photo lithography
NASA Astrophysics Data System (ADS)
Tadokoro, Masahide; Shinozuka, Shinichi; Jyousaka, Megumi; Ogata, Kunie; Morimoto, Tamotsu; Konishi, Yoshitaka
2008-03-01
Semiconductor manufacturing technology has shifted towards finer design rules, and demands for critical dimension uniformity (CDU) of resist patterns have become greater than ever. One of the methods for improving Resist Pattern CDU is to control the post-exposure bake (PEB) temperature. When ArF resist is used, there is a certain relationship between critical dimension (CD) and PEB temperature. By utilizing this relationship, Resist Pattern CDU can be improved through control of the within-wafer temperature distribution in the PEB process. Resist Pattern CDU improvement contributes to Etching Pattern CDU improvement to a certain degree. To further improve Etching Pattern CDU, etcher-specific CD variation needs to be controlled. In this evaluation we aimed at: 1. Verifying whether etcher-specific CD variation can be controlled, and consequently Etching Pattern CDU further improved, by controlling resist patterns through PEB control. 2. Verifying whether Etching Pattern CDU improvement through PEB control has any effect on the reduction in wiring resistance variation. The evaluation procedure is as follows. 1. Wafers with a base film of Doped Poly-Si (D-Poly) were prepared. 2. Resist patterns were created on them. 3. To determine etcher-specific characteristics, the first etching was performed, and after cleaning off the resist and BARC, the CD of the etched D-Poly was measured. 4. Using the obtained within-wafer CD distribution of the etching patterns, the within-wafer temperature distribution in the PEB process was modified. 5. Resist patterns were created again, followed by the second etching and cleaning, which was followed by CD measurement. We used Optical CD Measurement (OCD) for measurement of resist patterns and etching patterns, as OCD is minimally affected by Line Edge Roughness (LER). As a result: 1. We confirmed the effect of Resist Pattern CD control through PEB control on the reduction in etcher-specific CD variation and the improvement in Etching Pattern CDU. 2. The improvement in Etching Pattern CDU has an effect on the reduction in wiring resistance variation. The method for Etching Pattern CDU improvement through PEB control reduces within-wafer variation of the MOS transistor's gate length. Therefore, with this method, we can expect to observe uniform within-wafer MOS transistor characteristics.
Inflation from extra dimensions
NASA Astrophysics Data System (ADS)
Levin, Janna J.
1995-02-01
A gravity-driven inflation is shown to arise from a simple higher-dimensional universe. In vacuum, the shear of n > 1 contracting dimensions is able to inflate the remaining three spatial dimensions. Said another way, the expansion of the 3-volume is accelerated by the contraction of the n-volume. Upon dimensional reduction, the theory is equivalent to a four-dimensional cosmology with a dynamical Planck mass. A connection can therefore be made to recent examples of inflation powered by a dilaton kinetic energy. Unfortunately, the graceful exit problem encountered in dilaton cosmologies will haunt this cosmology as well.
2.5D multi-view gait recognition based on point cloud registration.
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-03-28
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principle component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
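The data-reduction step described, mapping 2.5D gait data onto 2D images and compressing with the discrete cosine transform followed by principal component analysis, can be sketched as follows. This is a simplified stand-in using a low-frequency DCT crop and SVD-based PCA rather than the paper's exact 2DPCA:

```python
import numpy as np
from scipy.fft import dctn

def reduce_gait_images(images, keep=16, n_components=8):
    """Toy pipeline: 2-D DCT, low-frequency crop, then PCA via SVD."""
    feats = []
    for img in images:                          # each img: a 2-D gait image
        coeff = dctn(img, norm='ortho')         # 2-D discrete cosine transform
        feats.append(coeff[:keep, :keep].ravel())   # keep low frequencies
    X = np.asarray(feats)
    Xc = X - X.mean(axis=0)                     # center before PCA
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # low-dimensional scores

gallery = np.random.default_rng(1).random((40, 64, 64))  # stand-in images
print(reduce_gait_images(gallery).shape)                   # (40, 8)
```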
Clustering of Variables for Mixed Data
NASA Astrophysics Data System (ADS)
Saracco, J.; Chavent, M.
2016-05-01
This chapter presents clustering of variables, whose aim is to lump together strongly related variables. The proposed approach works on a mixed data set, i.e. a data set containing both numerical and categorical variables. Two algorithms for clustering of variables are described: a hierarchical clustering and a k-means type clustering. A brief description of the PCAmix method (a principal component analysis for mixed data) is provided, since the calculation of the synthetic variables summarizing the obtained clusters of variables is based on this multivariate method. Finally, the R packages ClustOfVar and PCAmixdata are illustrated on real mixed data. The PCAmix and ClustOfVar approaches are first used for dimension reduction (step 1) before applying a standard clustering method in step 2 to obtain groups of individuals.
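For purely numeric variables, the flavor of hierarchical variable clustering can be imitated with a correlation-based dissimilarity. The cited R packages handle the mixed-data case properly, so the following Python sketch is only an analogy:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_numeric_variables(X, n_clusters=3):
    """Hierarchically group strongly related numeric variables (columns)."""
    corr = np.corrcoef(X, rowvar=False)
    dist = 1.0 - np.abs(corr)             # related variables -> small distance
    iu = np.triu_indices_from(dist, k=1)  # condensed form expected by linkage
    Z = linkage(dist[iu], method='average')
    return fcluster(Z, t=n_clusters, criterion='maxclust')

X = np.random.default_rng(2).random((100, 6))
X[:, 3] = X[:, 0] + 0.01 * X[:, 3]        # make variables 0 and 3 related
print(cluster_numeric_variables(X))       # 0 and 3 share a cluster label
```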
Fuss, Franz Konstantin
2013-01-01
Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals. PMID:24151522
Lennon, Olive C; Doody, Catherine; Ni Choisdealbh, Cliodhna; Blake, Catherine
2013-12-01
The aim of the study was to explore community-dwelling stroke patients' perceived barriers to healthy-lifestyle participation for secondary disease prevention, as well as their preferred means for risk-reduction information dissemination and motivators to participation in healthy-lifestyle interventions. Four focus groups (5-6 stroke survivors per group) were defined from community support groups. Key questions addressed barriers to healthy-lifestyle adoption, preferred methods for receiving information and factors that would engage participants in a risk-reduction programme. Groups were audiotaped, transcribed verbatim and analysed for thematic content using a framework approach. Twenty-two participants, 12 men, 10 women, mean age 71.4 (53-87) years, were included in the study. Three overarching themes emerged as barriers to healthy-lifestyle participation: physical, mental and environmental. Exercise participation difficulties spread across all three themes; healthy eating and smoking cessation concentrated in environmental and mental dimensions. Talks (discussions) were noted as participants' preferred method of information provision. Risk-reduction programmes considered attractive were stroke specific, convenient and delivered by healthcare professionals and involved both social and exercise components. Many stroke patients appear unable to adopt healthy-lifestyle changes through advice alone because of physical, mental and environmental barriers. Risk-reduction programmes including interactive education should be specifically tailored to address barriers currently experienced and extend beyond the stroke survivor to others in their environment who influence lifestyle choices.
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Pope, Stephen B.
2013-04-01
The Rate-Controlled Constrained-Equilibrium (RCCE) method is a thermodynamics-based dimension reduction method which enables representation of chemistry involving n_s species in terms of fewer n_r constraints. Here we focus on the application of the RCCE method to Lagrangian particle probability density function based computations. In these computations, at every reaction fractional step, given the initial particle composition (represented using RCCE), we need to compute the reaction mapping, i.e. the particle composition at the end of the time step. In this work we study three different implementations of RCCE for computing this reaction mapping, and compare their relative accuracy and efficiency. These implementations include: (1) RCCE/TIFS (Trajectory In Full Space): this involves solving a system of n_s rate-equations for all the species in the full composition space to obtain the reaction mapping. The other two implementations obtain the reaction mapping by solving a reduced system of n_r rate-equations obtained by projecting the n_s rate-equations for species evaluated in the full space onto the constrained subspace. These implementations include (2) RCCE: this is the classical implementation of RCCE which uses a direct projection of the rate-equations for species onto the constrained subspace; and (3) RCCE/RAMP (Reaction-mixing Attracting Manifold Projector): this is a new implementation introduced here which uses an alternative projector obtained using the RAMP approach. We test these three implementations of RCCE for methane/air premixed combustion in the partially-stirred reactor with chemistry represented using the n_s = 31 species GRI-Mech 1.2 mechanism with n_r = 13 to 19 constraints. We show that: (a) the classical RCCE implementation involves an inaccurate projector which yields large errors (over 50%) in the reaction mapping; (b) both RCCE/RAMP and RCCE/TIFS approaches yield significantly lower errors (less than 2%); and (c) overall the RCCE/TIFS approach is the most accurate, efficient (by orders of magnitude) and robust implementation.
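Structurally, the classical RCCE step projects the full-space species rates onto the constraint subspace, reducing n_s rate equations to n_r. The toy sketch below shows only that projection skeleton: a real RCCE solver recovers the full composition from the constraints by a constrained-equilibrium (Gibbs) calculation, for which the pseudo-inverse here is an admittedly crude placeholder, and the rate function is hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

def reduced_reaction_mapping(f, B, c0, dt):
    """Toy projection skeleton: evolve r = B^T c with projected rates.

    f : full-space rate function, f(c) -> dc/dt (hypothetical)
    B : (n_s x n_r) constraint matrix
    NOTE: real RCCE reconstructs c from r via constrained equilibrium;
    the pseudo-inverse below is only a placeholder for that step.
    """
    pinvBT = np.linalg.pinv(B.T)
    def rhs(t, r):
        c = pinvBT @ r                 # placeholder reconstruction of c
        return B.T @ f(c)              # project full rates onto constraints
    sol = solve_ivp(rhs, (0.0, dt), B.T @ c0, rtol=1e-8)
    return sol.y[:, -1]

# toy 3-species linear "chemistry" with total mass as the lone constraint
f = lambda c: np.array([-c[0], c[0] - c[1], c[1]])
B = np.array([[1.0], [1.0], [1.0]])
print(reduced_reaction_mapping(f, B, np.array([1.0, 0.0, 0.0]), 0.1))
```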
Error reduction program: A progress report
NASA Technical Reports Server (NTRS)
Syed, S. A.
1984-01-01
Five finite difference schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error reduction scheme into a 3D combustor performance code. Based on this evaluation, two finite volume method schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two-dimensional computer code, and their accuracy and stability were determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes depends on the angle that the streamlines make with the mesh, with QUDS being more accurate at smaller angles and BSUDS2 more accurate at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.
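The numerical diffusion the report measures is easy to exhibit in one dimension: a first-order upwind scheme progressively smears a sharp advected profile. A tiny stand-alone demonstration, unrelated to the report's combustor code:

```python
import numpy as np

# Advect a top-hat profile with the first-order upwind scheme and watch it
# smear: the classic numerical-diffusion effect being evaluated.
nx, c = 200, 0.5                        # grid points, Courant number
u = np.where((np.arange(nx) > 40) & (np.arange(nx) < 80), 1.0, 0.0)
for _ in range(100):
    u = u - c * (u - np.roll(u, 1))     # upwind difference, periodic domain
print("peak after advection:", u.max())  # < 1.0: amplitude lost to diffusion
```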
Critical behavior and dimension crossover of pion superfluidity
NASA Astrophysics Data System (ADS)
Wang, Ziyue; Zhuang, Pengfei
2016-09-01
We investigate the critical behavior of pion superfluidity in the framework of the functional renormalization group (FRG). By solving the flow equations in the SU(2) linear sigma model at finite temperature and isospin density, and making comparison with the fixed point analysis of a general O(N) system with continuous dimension, we find that the pion superfluidity is a second order phase transition subject to an O(2) universality class with a dimension crossover from d_c = 4 to d_c = 3. This phenomenon provides a concrete example of dimension reduction in thermal field theory. The large-N expansion gives a temperature independent critical exponent β and agrees with the FRG result only at zero temperature.
Development of a refractive error quality of life scale for Thai adults (the REQ-Thai).
Sukhawarn, Roongthip; Wiratchai, Nonglak; Tatsanavivat, Pyatat; Pitiyanuwat, Somwung; Kanato, Manop; Srivannaboon, Sabong; Guyatt, Gordon H
2011-08-01
To develop a scale for measuring refractive error quality of life (QOL) in Thai adults. The full survey comprised 424 respondents from 5 medical centers in Bangkok and from 3 medical centers in Chiangmai, Songkla and KhonKaen provinces. Participants were emmetropes and persons with refractive correction with visual acuity of 20/30 or better. An item reduction process was employed combining 3 methods: expert opinion, the impact method and item-total correlation. Classical reliability testing and validity testing, including convergent, discriminative and construct validity, were performed. The developed questionnaire comprised 87 items in 6 dimensions: 1) quality of vision, 2) visual function, 3) social function, 4) psychological function, 5) symptoms and 6) refractive correction problems. Items use a 5-level Likert scale. The Cronbach's alpha coefficients of its dimensions ranged from 0.756 to 0.979. All validity tests showed the instrument to be valid; construct validity was confirmed by confirmatory factor analysis. A short-version questionnaire comprising 48 items, with good reliability and validity, was also developed. This is the first validated instrument for measuring refractive error quality of life in Thai adults developed with strong research methodology and a large sample size.
Some Aspects of Essentially Nonoscillatory (ENO) Formulations for the Euler Equations, Part 3
NASA Technical Reports Server (NTRS)
Chakravarthy, Sukumar R.
1990-01-01
An essentially nonoscillatory (ENO) formulation is described for hyperbolic systems of conservation laws. ENO approaches are based on smart interpolation to avoid spurious numerical oscillations. ENO schemes are a superset of Total Variation Diminishing (TVD) schemes. In the recent past, TVD formulations were used to construct shock capturing finite difference methods. At extremum points of the solution, TVD schemes automatically reduce to being first-order accurate discretizations locally, while away from extrema they can be constructed to be of higher order accuracy. The new framework helps construct essentially non-oscillatory finite difference methods without recourse to local reductions of accuracy to first order. Thus arbitrarily high orders of accuracy can be obtained. The basic general ideas of the new approach can be specialized in several ways and one specific implementation is described based on: (1) the integral form of the conservation laws; (2) reconstruction based on the primitive functions; (3) extension to multiple dimensions in a tensor product fashion; and (4) Runge-Kutta time integration. The resulting method is fourth-order accurate in time and space and is applicable to uniform Cartesian grids. The construction of such schemes for scalar equations and systems in one and two space dimensions is described along with several examples which illustrate interesting aspects of the new approach.
Multilayer Extreme Learning Machine With Subnetwork Nodes for Representation Learning.
Yang, Yimin; Wu, Q M Jonathan
2016-11-01
The extreme learning machine (ELM), originally proposed for "generalized" single-hidden-layer feedforward neural networks, provides efficient unified learning solutions for clustering, regression, and classification. It achieves competitive accuracy with superb efficiency in many applications. However, the ELM with a subnetwork-nodes architecture has not attracted much research attention. Recently, many methods have been proposed for supervised/unsupervised dimension reduction or representation learning, but these methods normally work for only one type of problem. This paper studies the general architecture of the multilayer ELM (ML-ELM) with subnetwork nodes, showing that: 1) the proposed method provides a representation learning platform with unsupervised/supervised and compressed/sparse representation learning; and 2) experimental results on ten image datasets and 16 classification datasets show that, compared to other conventional feature learning methods, the proposed ML-ELM with subnetwork nodes performs competitively or much better.
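For context, the basic single-hidden-layer ELM on which the multilayer variant builds trains in closed form: the hidden layer is random and fixed, and only the output weights are solved by least squares. A minimal sketch (the paper's subnetwork-node architecture is considerably more elaborate):

```python
import numpy as np

def elm_train(X, Y, n_hidden=200, seed=0):
    """Minimal single-hidden-layer ELM: random fixed hidden layer,
    output weights solved in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.random.default_rng(1).random((500, 10))
y = np.sin(X.sum(axis=1))                         # toy regression target
W, b, beta = elm_train(X, y)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))   # training MSE
```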
Ahmed, Shakeel; Annu; Chaudhry, Saif Ali; Ikram, Saiqa
2017-01-01
Nanotechnology is emerging as an important area of research with tremendous applications in all fields of science, engineering, medicine, pharmacy, etc. It involves materials and their applications having at least one dimension in the range of 1-100 nm. Various techniques are used for the synthesis of nanoparticles (NPs), viz. laser ablation, chemical reduction, milling, sputtering, etc. Some of these conventional techniques, e.g. the chemical reduction method, use hazardous chemicals whose toxicity poses numerous health risks and raises serious environmental concerns, while other approaches are expensive and require high energy input. In contrast, biogenic synthesis of NPs is eco-friendly and free of chemical contaminants, which matters for biological applications where purity is a concern. In the biological method, different biological entities, such as extracts, enzymes or proteins of a natural product, are used to reduce the precursor and stabilise the resulting NPs. The nature of these biological entities also influences the structure, shape, size and morphology of the synthesized NPs. In this review, the biogenic synthesis of zinc oxide (ZnO) NPs, the synthesis procedures, the mechanism of formation and various applications are discussed. Various entities such as proteins, enzymes, phytochemicals, etc. available in the natural reductants are responsible for the synthesis of ZnO NPs. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Sawant, A; Ruan, D
Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with the PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared-error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In a one-dimensional feature subspace, our method achieved mean prediction accuracy of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from the PCA-based method. Paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such a low dimensional manifold. The fixed-point iterative approach turns out to work well practically for the pre-image recovery. Our approach is particularly suitable to facilitate managing respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
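The pipeline's skeleton, embed high-dimensional states with kernel PCA, predict in the low-dimensional feature space, then recover a pre-image, can be sketched with scikit-learn. Note that sklearn's inverse_transform learns a kernel-ridge inverse map rather than the fixed-point iteration the abstract describes, and the one-step shift below is only a placeholder for an actual temporal predictor:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

states = np.random.default_rng(0).random((200, 50))  # stand-in surface snapshots
kpca = KernelPCA(n_components=1, kernel='rbf', gamma=0.01,
                 fit_inverse_transform=True)
Z = kpca.fit_transform(states)                 # low-dimensional features
Z_pred = np.roll(Z, -1, axis=0)                # placeholder "prediction"
recovered = kpca.inverse_transform(Z_pred)     # pre-image in state space
print(recovered.shape)                         # (200, 50)
```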
Reduction theorems for optimal unambiguous state discrimination of density matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raynal, Philippe; Luetkenhaus, Norbert; Enk, Steven J. van
2003-08-01
We present reduction theorems for the problem of optimal unambiguous state discrimination of two general density matrices. We show that this problem can be reduced to that of two density matrices that have the same rank n and are described in a Hilbert space of dimension 2n. We also show how to use the reduction theorems to discriminate unambiguously between N mixed states (N ≥ 2).
NASA Astrophysics Data System (ADS)
Zuo, Quan; Zhao, Pingping; Luo, Wei; Cheng, Gongzhen
2016-07-01
Developing high-performance non-precious catalysts to replace platinum as oxygen reduction reaction (ORR) catalysts is still a big scientific and technological challenge. Herein, we report a simple method for the synthesis of a FeNC catalyst with a 3D hierarchically micro/meso/macro porous network and high surface area through a simple carbonization method by taking the advantages of a high specific surface area and diverse pore dimensions in 3D porous covalent-organic material. The resulting FeNC-900 electrocatalyst with improved reactant/electrolyte transport and sufficient active site exposure, exhibits outstanding ORR activity with a half-wave potential of 0.878 V, ca. 40 mV more positive than Pt/C for ORR in alkaline solution, and a half-wave potential of 0.72 V, which is comparable to that of Pt/C in acidic solution. In particular, the resulting FeNC-900 exhibits a much higher stability and methanol tolerance than those of Pt/C, which makes it among the best non-precious catalysts ever reported for ORR.
How many invariant polynomials are needed to decide local unitary equivalence of qubit states?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maciążek, Tomasz; Oszmaniec, Michał
2013-09-15
Given L-qubit states with fixed spectra of the reduced one-qubit density matrices, we find a formula for the minimal number of invariant polynomials needed to solve the local unitary (LU) equivalence problem, that is, the problem of deciding whether two states can be connected by local unitary operations. Interestingly, this number is not the same for every collection of spectra: some spectra require fewer polynomials to solve the LU equivalence problem than others. The result is obtained using geometric methods, i.e., by calculating the dimensions of reduced spaces stemming from the symplectic reduction procedure.
Losa, Gabriele A; Castelli, Christian
2005-11-01
An analytical strategy combining fractal geometry and grey-level co-occurrence matrix (GLCM) statistics was devised to investigate ultrastructural changes in oestrogen-insensitive SK-BR3 human breast cancer cells undergoing apoptosis in vitro. Apoptosis was induced by 1 microM calcimycin (A23187 Ca(2+) ionophore) and assessed by measuring conventional cellular parameters during the culture period. SK-BR3 cells entered the early stage of apoptosis within 24 h of treatment with calcimycin, which induced detectable changes in nuclear components, as documented by increased values of most GLCM parameters and by the general reduction of the fractal dimensions. In these affected cells, morphonuclear traits were accompanied by the reduction of distinct gangliosides and loss of unidentifiable glycolipid molecules at the cell surface. All these changes were shown to be involved in apoptosis before the detection of conventional markers, which were only measurable during the active phases of apoptotic cell death. In overtly apoptotic cells treated with 1 microM calcimycin for 72 h, most nuclear components underwent dramatic ultrastructural changes, including marginalisation and condensation of chromatin, as reflected in a significant reduction of their fractal dimensions. Hence, both fractal and GLCM analyses confirm that the morphological reorganisation of nuclei, attributable to a loss of structural complexity, occurs early in apoptosis.
Synthetic dimensions for cold atoms from shaking a harmonic trap
NASA Astrophysics Data System (ADS)
Price, Hannah M.; Ozawa, Tomoki; Goldman, Nathan
2017-02-01
We introduce a simple scheme to implement synthetic dimensions in ultracold atomic gases, which only requires two basic and ubiquitous ingredients: the harmonic trap, which confines the atoms, combined with a periodic shaking. In our approach, standard harmonic oscillator eigenstates are reinterpreted as lattice sites along a synthetic dimension, while the coupling between these lattice sites is controlled by the applied time modulation. The phase of this modulation enters as a complex hopping phase, leading straightforwardly to an artificial magnetic field upon adding a second dimension. We show that this artificial gauge field has important consequences, such as the counterintuitive reduction of average energy under resonant driving, or the realization of quantum Hall physics. Our approach offers significant advantages over previous implementations of synthetic dimensions, providing an intriguing route towards higher-dimensional topological physics and strongly-correlated states.
Agha, Salah R; Alnahhal, Mohammed J
2012-11-01
The current study investigates the possibility of obtaining the anthropometric dimensions, critical to school furniture design, without measuring all of them. The study first selects some anthropometric dimensions that are easy to measure. Two methods are then used to check if these easy-to-measure dimensions can predict the dimensions critical to the furniture design. These methods are multiple linear regression and neural networks. Each dimension that is deemed necessary to ergonomically design school furniture is expressed as a function of some other measured anthropometric dimensions. Results show that out of the five dimensions needed for chair design, four can be related to other dimensions that can be measured while children are standing. Therefore, the method suggested here would definitely save time and effort and avoid the difficulty of dealing with students while measuring these dimensions. In general, it was found that neural networks perform better than multiple linear regression in the current study. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
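The regression variant reduces to fitting each hard-to-measure dimension on the easy-to-measure ones. A minimal sketch with scikit-learn; the choice of popliteal height as target and the synthetic measurements are hypothetical illustrations, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical example: predict a seated dimension needed for chair design
# (popliteal height) from standing dimensions measured on each child.
rng = np.random.default_rng(0)
stature = rng.normal(1300, 80, 200)             # mm, standing measurement
knee_h = 0.28 * stature + rng.normal(0, 8, 200)  # mm, standing measurement
X = np.column_stack([stature, knee_h])
y = 0.24 * stature + rng.normal(0, 10, 200)     # synthetic popliteal height
model = LinearRegression().fit(X, y)
print(model.score(X, y))                        # R^2 of the predictive relation
```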
A Numerical Study on the Screening of Blast-Induced Waves for Reducing Ground Vibration
NASA Astrophysics Data System (ADS)
Park, Dohyun; Jeon, Byungkyu; Jeon, Seokwon
2009-06-01
Blasting is often a necessary part of mining and construction operations, and is the most cost-effective way to break rock, but blasting generates both noise and ground vibration. In urban areas, noise and vibration have an environmental impact and can cause structural damage to nearby structures. Various wave-screening methods have been used for many years to reduce blast-induced ground vibration, but these methods have not been studied quantitatively for their ground vibration reduction effect. The present study focused on quantitatively assessing, by numerical methods, the vibration-reduction effectiveness of line-drilling as a screening method. Two numerical methods were used to analyze the reduction effect on ground vibration: the “distinct element method” and the “non-linear hydrocode.” The distinct element method, via particle flow code in two dimensions (PFC 2D), was used for two-dimensional parametric analyses, and some of the two-dimensional cases were analyzed three-dimensionally using AUTODYN 3D, a non-linear hydrocode program. To analyze the screening effectiveness of line-drilling, parametric analyses were carried out under various conditions, with the spacing and diameter of the drill holes, the distance between the blasthole and the line-drilling, and the number of rows of drill holes, including their arrangement, used as parameters. The screening effectiveness was assessed via a comparison of the vibration amplitude between cases with and without screening. Also, the frequency distribution of ground motion in the two cases was investigated through the fast Fourier transform (FFT), and the differences examined. From our study, it was concluded that line-drilling as a screening method for blast-induced waves is considerably effective under certain design conditions. Design details for field application have also been proposed.
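The FFT comparison step can be illustrated with synthetic records: transform the ground-motion time histories with and without screening and compare amplitude spectra. The signals below are placeholders, not simulation output:

```python
import numpy as np

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
# synthetic damped vibration records standing in for the two cases
unscreened = np.exp(-5 * t) * np.sin(2 * np.pi * 80 * t)
screened = 0.4 * np.exp(-5 * t) * np.sin(2 * np.pi * 60 * t)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
for name, sig in [("unscreened", unscreened), ("screened", screened)]:
    amp = np.abs(np.fft.rfft(sig)) / t.size   # single-sided amplitude spectrum
    print(name, "dominant frequency:", freqs[np.argmax(amp)], "Hz")
```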
NASA Astrophysics Data System (ADS)
Adi-Kusumo, Fajar; Gunardi, Utami, Herni; Nurjani, Emilya; Sopaheluwakan, Ardhasena; Aluicius, Irwan Endrayanto; Christiawan, Titus
2016-02-01
We consider the Empirical Orthogonal Function (EOF) method to study the rainfall pattern in Daerah Istimewa Yogyakarta (DIY) Province, Indonesia. EOF is an important method for finding the dominant patterns in data via a dimension reduction technique: it makes it possible to reduce the huge dimension of observed data to a smaller one without losing the significant information that characterizes the whole data set. The method is also known as Principal Component Analysis (PCA), which is conducted to find the patterns in the data. DIY Province is one of the provinces in Indonesia with special characteristics related to rainfall patterns. The province has an active volcano, karst, highlands, and also some lower areas including beaches, and it is bounded by the Indonesian Ocean, which is one of the important factors governing the rainfall. We use at least ten years of monthly rainfall data from all stations in this area and study the rainfall characteristics across the four regencies of the province. The EOF analysis is conducted to determine the groups of stations with similar characteristics.
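An EOF analysis of a time-by-station rainfall matrix amounts to an SVD of the anomaly matrix; a compact sketch under that standard formulation, on synthetic stand-in data:

```python
import numpy as np

def eof_analysis(rainfall, n_modes=3):
    """EOF/PCA of a (time x station) rainfall matrix via SVD of anomalies."""
    anomalies = rainfall - rainfall.mean(axis=0)       # remove station means
    U, S, Vt = np.linalg.svd(anomalies, full_matrices=False)
    explained = S**2 / np.sum(S**2)                    # variance fractions
    eofs = Vt[:n_modes]                                # spatial patterns
    pcs = U[:, :n_modes] * S[:n_modes]                 # time coefficients
    return eofs, pcs, explained[:n_modes]

monthly = np.random.default_rng(3).gamma(2.0, 50.0, size=(120, 25))
eofs, pcs, var = eof_analysis(monthly)
print(var)     # fraction of rainfall variance captured by each mode
```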
Support vector machine and principal component analysis for microarray data classification
NASA Astrophysics Data System (ADS)
Astuti, Widi; Adiwijaya
2018-03-01
Cancer is a leading cause of death worldwide, although a significant proportion of cases can be cured if detected early. In recent decades, microarray technology has taken an important role in the diagnosis of cancer. By using data mining techniques, microarray data classification can improve the accuracy of cancer diagnosis compared to traditional techniques. The characteristic of microarray data is a small sample size but a huge dimension. Hence, there is a challenge for researchers to provide solutions for microarray data classification with high performance in both accuracy and running time. This research proposes the use of Principal Component Analysis (PCA) as a dimension reduction method along with a Support Vector Machine (SVM), optimized over kernel functions, as a classifier for microarray data classification. The proposed scheme was applied to seven data sets using 5-fold cross validation and then evaluated and analyzed in terms of both accuracy and running time. The results show that the scheme obtains 100% accuracy for the Ovarian and Lung Cancer data when the linear and cubic kernel functions are used. In terms of running time, PCA greatly reduces the running time for every data set.
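The evaluated scheme maps naturally onto a scikit-learn pipeline: PCA for dimension reduction feeding a kernel SVM, scored with 5-fold cross-validation. A sketch on synthetic stand-in data (the study's seven microarray datasets are not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# stand-in for a microarray matrix: few samples, thousands of features
X, y = make_classification(n_samples=100, n_features=2000,
                           n_informative=20, random_state=0)
# PCA reduces dimension before the SVM, as in the evaluated scheme
clf = make_pipeline(StandardScaler(), PCA(n_components=20),
                    SVC(kernel='linear'))
print(cross_val_score(clf, X, y, cv=5).mean())   # 5-fold CV accuracy
```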
Machine Learning: A Crucial Tool for Sensor Design
Zhao, Weixiang; Bhushan, Abhinav; Santamaria, Anthony D.; Simon, Melinda G.; Davis, Cristina E.
2009-01-01
Sensors have been widely used for disease diagnosis, environmental quality monitoring, food quality control, industrial process analysis and control, and other related fields. As a key tool for sensor data analysis, machine learning is becoming a core part of novel sensor design. Dividing a complete machine learning process into three steps: data pre-treatment, feature extraction and dimension reduction, and system modeling, this paper provides a review of the methods that are widely used for each step. For each method, the principles and the key issues that affect modeling results are discussed. After reviewing the potential problems in machine learning processes, this paper gives a summary of current algorithms in this field and provides some feasible directions for future studies. PMID:20191110
The moduli space of vacua of N=2 class S theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Dan; Yonekura, Kazuya
We develop a systematic method to describe the moduli space of vacua of four dimensional N=2 class S theories including Coulomb branch, Higgs branch and mixed branches. In particular, we determine the Higgs and mixed branch roots, and the dimensions of the Coulomb and Higgs components of mixed branches. They are derived by using generalized Hitchin's equations obtained from twisted compactification of 5d maximal Super-Yang-Mills, with local degrees of freedom at punctures given by (nilpotent) orbits. The crucial thing is the holomorphic factorization of the Seiberg-Witten curve and reduction of singularity at punctures. We illustrate our method by many examples including N=2 SQCD, T_N theory and Argyres-Douglas theories.
ERIC Educational Resources Information Center
Department for International Development, London (England).
The Department for International Development (DFID) is the British government department responsible for promoting development and the reduction of poverty in sites in developing and transition countries around the world. This paper focuses on the education dimension of poverty reduction, and specifically the attainment of the International…
An, Yan; Zou, Zhihong; Li, Ranran
2014-01-01
A large number of parameters are acquired during practical water quality monitoring. If all the parameters are used in water quality assessment, the computational complexity will definitely increase. In order to reduce the input space dimensions, a fuzzy rough set was introduced to perform attribute reduction. Then, an attribute recognition theoretical model and entropy method were combined to assess water quality in the Harbin reach of the Songhuajiang River in China. A dataset consisting of ten parameters was collected from January to October in 2012. Fuzzy rough set was applied to reduce the ten parameters to four parameters: BOD5, NH3-N, TP, and F. coli (Reduct A). Considering that DO is a usual parameter in water quality assessment, another reduct, including DO, BOD5, NH3-N, TP, TN, F, and F. coli (Reduct B), was obtained. The assessment results of Reduct B show a good consistency with those of Reduct A, and this means that DO is not always necessary to assess water quality. The results with attribute reduction are not exactly the same as those without attribute reduction, which can be attributed to the α value decided by subjective experience. The assessment results gained by the fuzzy rough set obviously reduce computational complexity, and are acceptable and reliable. The model proposed in this paper enhances the water quality assessment system. PMID:24675643
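The entropy method used alongside the attribute recognition model assigns objective weights to the retained parameters from the dispersion of the monitoring matrix. A minimal sketch of that calculation, assuming a positively normalized decision matrix; the example values are illustrative, not the Songhuajiang data:

```python
import numpy as np

def entropy_weights(M):
    """Entropy-method weights for the criteria (columns) of a decision
    matrix M; assumes M has been normalized to positive values."""
    P = M / M.sum(axis=0)                    # column-wise proportions
    k = 1.0 / np.log(M.shape[0])
    plogp = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -k * plogp.sum(axis=0)               # entropy per criterion
    d = 1.0 - e                               # degree of diversification
    return d / d.sum()                        # normalized weights

# rows: monitoring sections; columns: e.g. BOD5, NH3-N, TP, F. coli
M = np.array([[1.2, 0.8, 0.10, 300.0],
              [2.0, 1.5, 0.22, 800.0],
              [0.9, 0.4, 0.08, 150.0]])
print(entropy_weights(M))                     # objective parameter weights
```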
Thermodynamic properties of triangle-well fluids in two dimensions: MC and MD simulations.
Reyes, Yuri; Bárcenas, Mariana; Odriozola, Gerardo; Orea, Pedro
2016-11-07
With the aim of providing complementary data on the thermodynamic properties of the triangle-well potential, the vapor/liquid phase diagrams for this potential with different interaction ranges were calculated in two dimensions by Monte Carlo and molecular dynamics simulations; the vapor/liquid interfacial tension was also calculated. As reported for other interaction potentials, it was observed that the reduction of dimensionality makes the phase diagram shrink. Finally, with the aid of reported data for the same potential in three dimensions, it was observed that this potential does not follow the principle of corresponding states.
Shah, Sohil Atul
2017-01-01
Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank. PMID:28851838
Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat
2013-01-01
Systems with high-dimensional input spaces require long processing times and large memory usage. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the feature reduction software developed here, which uses a new, modified selection mechanism that adds solution candidates from the middle region. The hybrid system software is constructed to reduce the input attributes of systems with a large number of input variables. The software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. Locking onto local solutions, a common problem in genetic-algorithm-based soft computing methods, is also eliminated by the developed software. Faster and more effective results are obtained in the test procedures. The twelve input variables of the urological system were reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The obtained results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data. PMID:23573172
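A minimal sketch of the genetic-algorithm loop described above, with roulette-wheel selection over binary attribute masks. The fitness function and all parameters are illustrative stand-ins, and a one-point crossover is substituted for the paper's linear order crossover:

```python
import random
random.seed(0)

# A sketch of GA-based attribute reduction with roulette-wheel selection.
# Fitness, crossover style, and parameters are illustrative assumptions.
N_ATTRS, POP, GENS = 12, 20, 40

def fitness(mask):
    # Hypothetical score: reward an accuracy proxy, penalize mask size.
    informative = {0, 2, 5, 7, 9}            # assumed "useful" attributes
    hits = sum(1 for i in informative if mask[i])
    return hits - 0.2 * sum(mask)

def roulette(pop, scores):
    lo = min(scores)
    weights = [s - lo + 1e-9 for s in scores]   # shift so weights are positive
    return random.choices(pop, weights=weights, k=1)[0]

pop = [[random.randint(0, 1) for _ in range(N_ATTRS)] for _ in range(POP)]
for _ in range(GENS):
    scores = [fitness(m) for m in pop]
    nxt = []
    while len(nxt) < POP:
        a, b = roulette(pop, scores), roulette(pop, scores)
        cut = random.randrange(1, N_ATTRS)       # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:                # point mutation
            i = random.randrange(N_ATTRS)
            child[i] ^= 1
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print([i for i, bit in enumerate(best) if bit])  # the reduced attribute set
```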
2.5D Multi-View Gait Recognition Based on Point Cloud Registration
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-01-01
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space, enabling data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727
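The dimension-reduction stage (DCT followed by principal component analysis) can be sketched briefly. Random arrays stand in for the Color Gait Curvature Images, and ordinary PCA is substituted here for the paper's 2D principal component analysis:

```python
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA

# A minimal sketch: 2D DCT keeps a low-frequency coefficient block,
# then PCA reduces the flattened coefficients further.
rng = np.random.default_rng(0)
images = rng.random((100, 64, 64))               # 100 hypothetical gait images

def dct_features(img, keep=8):
    coeffs = dctn(img, norm="ortho")
    return coeffs[:keep, :keep].ravel()          # low-frequency block

X = np.array([dct_features(im) for im in images])    # 100 x 64
Z = PCA(n_components=10).fit_transform(X)            # 100 x 10
print(Z.shape)
```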
A basket two-part model to analyze medical expenditure on interdependent multiple sectors.
Sugawara, Shinya; Wu, Tianyi; Yamanishi, Kenji
2018-05-01
This study proposes a novel statistical methodology to analyze expenditure on multiple medical sectors using consumer data. Conventionally, medical expenditure has been analyzed by two-part models, which separately consider the purchase decision and the amount of expenditure. We extend the traditional two-part models by adding a basket-analysis step for dimension reduction. This new step enables us to analyze complicated interdependence between multiple sectors without an identification problem. As an empirical application of the proposed method, we analyze data on 13 medical sectors from the Medical Expenditure Panel Survey. In comparison with the results of previous studies that analyzed the multiple sectors independently, our method provides more detailed implications of the impacts of individual socioeconomic status on the composition of joint purchases from multiple medical sectors, and it achieves better prediction performance.
Locally linear embedding: dimension reduction of massive protostellar spectra
NASA Astrophysics Data System (ADS)
Ward, J. L.; Lumsden, S. L.
2016-09-01
We present the results of the application of locally linear embedding (LLE) to reduce the dimensionality of dereddened and continuum-subtracted near-infrared spectra, using a combination of models and real spectra of massive protostars selected from the Red MSX Source survey database. A brief comparison is also made with two other dimension reduction techniques, principal component analysis (PCA) and Isomap, using the same set of spectra, as well as with a more advanced form of LLE, Hessian locally linear embedding. We find that whilst LLE certainly has its limitations, it significantly outperforms both PCA and Isomap in classification of spectra based on the presence/absence of emission lines, and provides a valuable tool for classification and analysis of large spectral data sets.
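All four reducers compared above are available in scikit-learn, so the comparison is easy to reproduce in outline. A minimal sketch, with a synthetic swiss roll standing in for the dereddened, continuum-subtracted spectra:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding, Isomap
from sklearn.decomposition import PCA

# Stand-in data: a 3D swiss roll instead of high-dimensional spectra.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

for name, reducer in [
    ("LLE", LocallyLinearEmbedding(n_neighbors=12, n_components=2)),
    ("HLLE", LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                                    method="hessian")),
    ("PCA", PCA(n_components=2)),
    ("Isomap", Isomap(n_neighbors=12, n_components=2)),
]:
    Z = reducer.fit_transform(X)
    print(name, Z.shape)
```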
NASA Astrophysics Data System (ADS)
Akarsu, Özgür; Dereli, Tekin; Katırcı, Nihan; Sheftel, Mikhail B.
2015-05-01
In a recent study Akarsu and Dereli (Gen. Relativ. Gravit. 45:1211, 2013) discussed the dynamical reduction of a higher dimensional cosmological model which is augmented by a kinematical constraint characterized by a single real parameter, correlating and controlling the expansion of both the external (physical) and internal spaces. In that paper, explicit solutions were found only for the case of a three-dimensional internal space. Here we derive a general solution of the system using Lie group symmetry properties, in parametric form, for an arbitrary number of internal dimensions. We also investigate the dynamical reduction of the model as a function of cosmic time for various numbers of internal dimensions and generate parametric plots to discuss cosmologically relevant results.
Nanocoaxes for Optical and Electronic Devices
Rizal, Binod; Merlo, Juan M.; Burns, Michael J.; Chiles, Thomas C.; Naughton, Michael J.
2014-01-01
The evolution of micro/nanoelectronics technology, including the shrinking of devices and integrated circuit components, has included the miniaturization of linear and coaxial structures to micro/nanoscale dimensions. This reduction in the size of coaxial structures may offer advantages to existing technologies and benefit the exploration and development of new technologies. The reduction in the size of coaxial structures has been realized with various permutations between metals, semiconductors and dielectrics for the core, shield, and annulus. This review will focus on fabrication schemes of arrays of metal – nonmetal – metal nanocoax structures using non-template and template methods, followed by possible applications. The performance and scientific advantages associated with nanocoax-based optical devices including waveguides, negative refractive index materials, light emitting diodes, and photovoltaics are presented. In addition, benefits and challenges that accrue from the application of novel nanocoax structures in energy storage, electronic and sensing devices are summarized. PMID:25279400
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciapina, Eduardo G.; Lopes, Pietro P.; Subbaraman, Ram
2015-11-01
We use the rotating ring disk electrode (RRDE) method to study activity-selectivity relationships for the oxygen reduction reaction (ORR) on Pt(111) modified by various surface coverages of adsorbed CNad (ΘCNad). The results demonstrate that small variations in ΘCNad have a dramatic effect on the ORR activity and peroxide production, resulting in a “volcano-like” dependence with an optimal surface coverage of ΘCNad = 0.3 ML. These relationships can be simply explained by balancing electronic and ensemble effects of co-adsorbed CNad and adsorbed spectator species from the supporting electrolytes, without the need for intermediate adsorption energy arguments. Although this study has focused on the Pt(111)-CNad/H2SO4 interface, the results and insight gained here are invaluable for controlling another dimension in the properties of electrochemical interfaces.
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
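The core numerical step, counting the (locally) active dynamical modes from a sensitivity matrix via singular value decomposition, can be sketched briefly. A minimal illustration, with a randomly generated, rapidly decaying matrix standing in for the actual trajectory sensitivities and an assumed tolerance:

```python
import numpy as np

# A minimal sketch of mode counting: SVD of a sensitivity matrix, with the
# local model dimension taken as the number of singular values above an
# error tolerance. Matrix and tolerance are illustrative assumptions.
rng = np.random.default_rng(1)
n = 20
S = rng.standard_normal((n, n)) * (0.5 ** np.arange(n))  # decaying columns

_, sigma, _ = np.linalg.svd(S)
tol = 1e-6 * sigma[0]
active_modes = int(np.sum(sigma > tol))
print("locally active dynamical modes:", active_modes)
```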
Constructive methods of invariant manifolds for kinetic problems
NASA Astrophysics Data System (ADS)
Gorban, Alexander N.; Karlin, Iliya V.; Zinovyev, Andrei Yu.
2004-06-01
The concept of the slow invariant manifold is recognized as the central idea underpinning a transition from micro to macro and model reduction in kinetic theories. We present the constructive methods of invariant manifolds for model reduction in physical and chemical kinetics, developed during the last two decades. The physical problem of reduced description is studied in the most general form as a problem of constructing the slow invariant manifold. The invariance conditions are formulated as the differential equation for a manifold immersed in the phase space (the invariance equation). The equation of motion for immersed manifolds is obtained (the film extension of the dynamics). Invariant manifolds are fixed points for this equation, and slow invariant manifolds are Lyapunov stable fixed points; thus slowness is presented as stability. A collection of methods to derive analytically and to compute numerically the slow invariant manifolds is presented. Among them, iteration methods based on incomplete linearization, the relaxation method, and the method of invariant grids are developed. The systematic use of thermodynamic structures and of the quasi-chemical representation allows one to construct approximations which are in concordance with physical restrictions. The following examples of applications are presented: nonperturbative derivation of physically consistent hydrodynamics from the Boltzmann equation and from the reversible dynamics, for Knudsen numbers Kn ∼ 1; construction of the moment equations for nonequilibrium media and their dynamical correction (instead of extension of the list of variables) to gain more accuracy in description of highly nonequilibrium flows; determination of molecular dimensions (as diameters of equivalent hard spheres) from experimental viscosity data; model reduction in chemical kinetics; derivation and numerical implementation of constitutive equations for polymeric fluids; the limits of macroscopic description for polymer molecules; etc.
A Reduced Dimension Static, Linearized Kalman Filter and Smoother
NASA Technical Reports Server (NTRS)
Fukumori, I.
1995-01-01
An approximate Kalman filter and smoother, based on approximations of the state estimation error covariance matrix, is described. Approximations include a reduction of the effective state dimension, use of a static asymptotic error limit, and a time-invariant linearization of the dynamic model for error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. Examples of use come from TOPEX/POSEIDON.
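A minimal sketch of the reduction idea: approximate the full state x as x ≈ Ba for a small reduced state a, and run the filter on a (iterating the reduced covariance to convergence would give the static, asymptotic gain the abstract mentions). The dynamics, observation operator, and noise levels below are illustrative assumptions:

```python
import numpy as np

# Reduced-dimension Kalman filter sketch: project an n-dimensional state
# onto a k-dimensional basis B and filter in the reduced space.
rng = np.random.default_rng(0)
n, k, m = 200, 5, 20                               # full, reduced, obs dims

B = np.linalg.qr(rng.standard_normal((n, k)))[0]   # reduction basis (assumed)
A = 0.95 * np.eye(n)                               # full dynamics (assumed)
H = rng.standard_normal((m, n))                    # observation operator
R = 0.1 * np.eye(m)                                # obs error covariance

A_r = B.T @ A @ B                                  # reduced dynamics
H_r = H @ B                                        # reduced observations
P = np.eye(k)                                      # reduced error covariance
a = np.zeros(k)

for _ in range(50):
    a = A_r @ a                                    # forecast
    P = A_r @ P @ A_r.T + 0.01 * np.eye(k)         # assumed process noise
    K = P @ H_r.T @ np.linalg.inv(H_r @ P @ H_r.T + R)   # gain
    y = rng.standard_normal(m)                     # stand-in observations
    a = a + K @ (y - H_r @ a)                      # analysis update
    P = (np.eye(k) - K @ H_r) @ P

x_est = B @ a                                      # back to the full state
print(x_est.shape, np.trace(P))
```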
Hossen, Md Mir; Bendickson, Lee; Palo, Pierre E; Yao, Zhiqi; Nilsen-Hamilton, Marit; Hillier, Andrew C
2018-08-31
DNA origami can be used to create a variety of complex and geometrically unique nanostructures that can be further modified to produce building blocks for applications such as in optical metamaterials. We describe a method for creating metal-coated nanostructures using DNA origami templates and a photochemical metallization technique. Triangular DNA origami forms were fabricated and coated with a thin metal layer by photochemical silver reduction while in solution or supported on a surface. The DNA origami template serves as a localized photosensitizer to facilitate reduction of silver ions directly from solution onto the DNA surface. The metallizing process is shown to result in a conformal metal coating, which grows in height to a self-limiting value with increasing photoreduction steps. Although this coating process results in a slight decrease in the triangle dimensions, the overall template shape is retained. Notably, this coating method exhibits characteristics of self-limiting and defect-filling growth, which results in a metal nanostructure that maps the shape of the original DNA template with a continuous and uniform metal layer and stops growing once all available DNA sites are exhausted.
Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images
Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung
2013-01-01
Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human–computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by the user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing this noise. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as optimal ones. The dimension reduction of the features and the feature selection are performed using linear discriminant analysis. Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy in detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713
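The LDA-then-SVM pipeline described here is straightforward to assemble with scikit-learn. A minimal sketch, with random features and labels standing in for the real EEG/camera recordings (two classes give at most one discriminant axis):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Stand-in features: 30-dimensional combined EEG/camera vectors for
# "no movement" (class 0) and "head movement" (class 1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 30)),
               rng.normal(0.8, 1.0, (200, 30))])
y = np.array([0] * 200 + [1] * 200)

clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC())
print(cross_val_score(clf, X, y, cv=5).mean())
```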
Brain gray matter phenotypes across the psychosis dimension
Ivleva, Elena I.; Bidesi, Anup S.; Thomas, Binu P.; Meda, Shashwath A.; Francis, Alan; Moates, Amanda F.; Witte, Bradley; Keshavan, Matcheri S.; Tamminga, Carol A.
2013-01-01
This study sought to examine whole brain and regional gray matter (GM) phenotypes across the schizophrenia (SZ)–bipolar disorder psychosis dimension using voxel-based morphometry (VBM 8.0 with DARTEL segmentation/normalization) and semi-automated regional parcellation, FreeSurfer (FS 4.3.1/64 bit). 3T T1 MPRAGE images were acquired from 19 volunteers with schizophrenia (SZ), 16 with schizoaffective disorder (SAD), 17 with psychotic bipolar I disorder (BD-P) and 10 healthy controls (HC). Contrasted with HC, SZ showed extensive cortical GM reductions, most pronounced in fronto-temporal regions; SAD had GM reductions overlapping with SZ, albeit less extensive; and BD-P demonstrated no GM differences from HC. Within the psychosis dimension, BD-P showed larger volumes in fronto-temporal and other cortical/subcortical regions compared with SZ, whereas SAD showed intermediate GM volumes. The two volumetric methodologies, VBM and FS, revealed highly overlapping results for cortical GM, but partially divergent results for subcortical volumes (basal ganglia, amygdala). Overall, these findings suggest that individuals across the psychosis dimension show both overlapping and unique GM phenotypes: decreased GM, predominantly in fronto-temporal regions, is characteristic of SZ but not of psychotic BD-P, whereas SAD display GM deficits overlapping with SZ, albeit less extensive. PMID:23177922
Continuous spin representations from group contraction
NASA Astrophysics Data System (ADS)
Khan, Abu M.; Ramond, Pierre
2005-05-01
We consider how the continuous spin representation (CSR) of the Poincaré group in four dimensions can be generated by dimensional reduction. The analysis uses the front-form little group in five dimensions, which must yield the Euclidean group E(2), the little group of the CSR. We consider two cases: one is the single spin massless representation of the Poincaré group in five dimensions, the other is the infinite component Majorana equation, which describes an infinite tower of massive states in five dimensions. In the first case, the double singular limit j, R → ∞, with j/R fixed, where R is the Kaluza-Klein radius of the fifth dimension and j is the spin of the particle in five dimensions, yields the CSR in four dimensions. It amounts to the Inönü-Wigner contraction, with the inverse Kaluza-Klein radius as contraction parameter. In the second case, the CSR appears only by taking a triple singular limit, where an internal coordinate of the Majorana theory goes to infinity, while leaving its ratio to the Kaluza-Klein radius fixed.
PCA-HOG symmetrical feature based diseased cell detection
NASA Astrophysics Data System (ADS)
Wan, Min-jie
2016-04-01
A histogram of oriented gradients (HOG) feature is applied to the field of diseased cell detection, enabling diseased cells in high-resolution tissue images to be detected rapidly, accurately and efficiently. Firstly, motivated by the symmetry of cellular forms, a new HOG symmetrical feature based on the traditional HOG feature is proposed to suit the conditions of cell detection. Secondly, considering that the high dimension of the traditional HOG feature demands substantial memory and long runtimes in practical applications, a classical dimension reduction method called principal component analysis (PCA) is used to reduce the dimension of the high-dimensional HOG descriptor. As a result, computational speed is increased greatly, while detection accuracy is kept within a proper range. Thirdly, a support vector machine (SVM) classifier is trained with the PCA-HOG symmetrical features proposed above. Finally, practical tissue images are detected and analyzed by the SVM classifier. In order to verify the effectiveness of the new algorithm, it is applied to diseased cell detection on a sample of 200 H&E (hematoxylin & eosin) stained high-resolution histopathological images collected from 20 breast cancer patients. The experiment shows that the average processing rate can reach 25 frames per second and the detection accuracy can reach 92.1%.
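A minimal sketch of the HOG-PCA-SVM chain, with random patches standing in for cell images; the paper's symmetry-augmented HOG variant is not reproduced, only plain HOG followed by PCA and an SVM:

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Stand-in data: random 64x64 patches with hypothetical labels.
rng = np.random.default_rng(0)
patches = rng.random((120, 64, 64))
labels = rng.integers(0, 2, 120)            # hypothetical diseased/healthy

feats = np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for p in patches])
print("HOG dimension:", feats.shape[1])     # 1764 for these settings

Z = PCA(n_components=50).fit_transform(feats)   # drastic dimension cut
clf = SVC().fit(Z, labels)
print("train accuracy:", clf.score(Z, labels))
```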
Gagnon, B.; Murphy, J.; Eades, M.; Lemoignan, J.; Jelowicki, M.; Carney, S.; Amdouni, S.; Di Dio, P.; Chasen, M.; MacDonald, N.
2013-01-01
Background Cancer can affect many dimensions of a patient’s life, and in turn, it should be targeted using a multimodal approach. We tested the extent to which an interdisciplinary nutrition–rehabilitation program can improve the well-being of patients with advanced cancer. Methods Between January 10, 2007, and September 29, 2010, 188 patients with advanced cancer enrolled in the 10–12-week program. Body weight, physical function, symptom severity, fatigue dimensions, distress level, coping ability, and overall quality of life were assessed at the start and end of the program. Results Of the enrolled patients, 70% completed the program. Patients experienced strong improvements in the physical and activity dimensions of fatigue (effect sizes: 0.8–1.1). They also experienced moderate reductions in the severity of weakness, depression, nervousness, shortness of breath, and distress (effect sizes: 0.5–0.7), and moderate improvements in Six Minute Walk Test distance, maximal gait speed, coping ability, and quality of life (effect sizes: 0.5–0.7). Furthermore, 77% of patients either maintained or increased their body weight. Conclusions Interdisciplinary nutrition–rehabilitation can be advantageous for patients with advanced cancer and should be considered an integrated part of standard palliative care. PMID:24311946
NASA Astrophysics Data System (ADS)
Ababei, G.; Gaburici, M.; Budeanu, L.-C.; Grigoras, M.; Porcescu, M.; Lupu, N.; Chiriac, H.
2018-04-01
Co-Fe-B particles have high potential for applications in the microwave domain (electromagnetic shielding, toroidal transformers, etc.) due to their special soft magnetic properties, such as high saturation magnetization, low coercivity, large anisotropy and high magnetic permeability. However, their microwave applications are limited to about a few gigahertz by eddy current losses if the size of the particles is larger than a few hundred nanometers. Chemical synthesis makes it possible to obtain nanoparticles with diameters from a few nanometers to tens of nanometers by varying the parameters of the synthesis. One way to avoid agglomeration of the particles is the use of polyvinyl-pyrrolidone (PVP), which acts as a dispersant and as a size-controlling agent for the nanoparticles. The aim of this paper is to study the influence of the synthesis conditions on the magnetic properties and microstructure of Co-Fe-B nanoparticles prepared by the chemical reduction method, in order to obtain nanoparticles with magnetic properties suitable for high-frequency applications in the 0.1–12 GHz range. Co-Fe-B nanoparticles were prepared by chemical reduction of CoCl2·6H2O and FeSO4·7H2O salts in an aqueous solution of sodium borohydride (NaBH4) in the presence of polyvinyl-pyrrolidone (PVP). The experimental results indicate that the amount of PVP, the Fe/Co ratio and the temperature of the synthesis are important parameters which have to be controlled in order to obtain nanoparticles with the desired dimensions, nanostructure and soft magnetic properties for high-frequency applications.
Model Order Reduction Algorithm for Estimating the Absorption Spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Beeumen, Roel; Williams-Young, David B.; Kasper, Joseph M.
The ab initio description of the spectral interior of the absorption spectrum poses both a theoretical and computational challenge for modern electronic structure theory. Due to the often spectrally dense character of this domain in the quantum propagator’s eigenspectrum for medium-to-large sized systems, traditional approaches based on the partial diagonalization of the propagator often encounter oscillatory and stagnating convergence. Electronic structure methods which solve the molecular response problem through the solution of spectrally shifted linear systems, such as the complex polarization propagator, offer an alternative approach which is agnostic to the underlying spectral density or domain location. This generality comes at a seemingly high computational cost associated with solving a large linear system for each spectral shift in some discretization of the spectral domain of interest. In this work, we present a novel, adaptive solution to this high computational overhead based on model order reduction techniques via interpolation. Model order reduction reduces the computational complexity of mathematical models and is ubiquitous in the simulation of dynamical systems and control theory. The efficiency and effectiveness of the proposed algorithm in the ab initio prediction of X-ray absorption spectra is demonstrated using a test set of challenging water clusters which are spectrally dense in the neighborhood of the oxygen K-edge. On the basis of a single, user defined tolerance we automatically determine the order of the reduced models and approximate the absorption spectrum up to the given tolerance. We also illustrate that, for the systems studied, the automatically determined model order increases logarithmically with the problem dimension, compared to a linear increase of the number of eigenvalues within the energy window. Furthermore, we observed that the computational cost of the proposed algorithm only scales quadratically with respect to the problem dimension.
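The underlying idea, a spectrum from shifted linear solves plus a small interpolation-based reduced model, can be sketched on a toy problem. The random symmetric matrix, broadening, and shift grid below are all assumptions, and this sketch does not reproduce the paper's adaptive order selection; it compares a fixed reduced model against the full frequency sweep:

```python
import numpy as np

# Toy "propagator": dense symmetric matrix with eigenvalues in [1, 3].
rng = np.random.default_rng(0)
n = 400
A = np.diag(rng.uniform(1.0, 3.0, n)) + 0.01 * rng.standard_normal((n, n))
A = (A + A.T) / 2
b = rng.standard_normal(n)
gamma = 0.3                           # assumed Lorentzian broadening
omegas = np.linspace(1.0, 3.0, 200)

def absorption(omega, M, rhs):
    # One shifted linear solve per frequency point.
    x = np.linalg.solve((omega + 1j * gamma) * np.eye(M.shape[0]) - M, rhs)
    return -np.imag(rhs @ x)

# Reduced model: orthonormal basis from solves at a few interpolation shifts.
shifts = [1.0, 1.5, 2.0, 2.5, 3.0]
V = np.column_stack([np.linalg.solve((s + 1j * gamma) * np.eye(n) - A, b)
                     for s in shifts])
Q, _ = np.linalg.qr(np.hstack([V.real, V.imag]))
A_r, b_r = Q.T @ A @ Q, Q.T @ b       # 10x10 projected problem

full = np.array([absorption(w, A, b) for w in omegas])
red = np.array([absorption(w, A_r, b_r) for w in omegas])
print("max relative error:", np.max(np.abs(full - red)) / np.max(full))
```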
Veliz-Cuba, Alan; Aguilar, Boris; Hinkelmann, Franziska; Laubenbacher, Reinhard
2014-06-26
A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states. The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite number system with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models even those of moderate connectivity. The problem for large Boolean networks with high average connectivity remains an open problem. PMID:24965213
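For orientation, the brute-force baseline that the algorithm above is designed to avoid is easy to state: a steady state is a fixed point of the update map. A minimal sketch with a hypothetical three-node network:

```python
from itertools import product

# Exhaustive fixed-point enumeration for a tiny Boolean network.
# (Over the two-element number system, AND is multiplication and
# NOT x is 1 + x, which is how the polynomial formulation encodes rules.)
rules = [
    lambda x: x[1] and not x[2],   # next value of x0
    lambda x: x[0],                # next value of x1
    lambda x: x[0] or x[2],        # next value of x2
]

def step(x):
    return tuple(int(f(x)) for f in rules)

steady = [x for x in product((0, 1), repeat=3) if step(x) == x]
print(steady)   # all fixed points of this toy network
```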
Comparison between cylindrical and prismatic lithium-ion cell costs using a process based cost model
NASA Astrophysics Data System (ADS)
Ciez, Rebecca E.; Whitacre, J. F.
2017-02-01
The relative size and age of the US electric vehicle market means that a few vehicles are able to drive market-wide trends in the battery chemistries and cell formats on the road today. Three lithium-ion chemistries account for nearly all of the storage capacity, and half of the cells are cylindrical. However, no specific model exists to examine the costs of manufacturing these cylindrical cells. Here we present a process-based cost model tailored to the cylindrical lithium-ion cells currently used in the EV market. We examine the costs for varied cell dimensions, electrode thicknesses, chemistries, and production volumes. Although cost savings are possible from increasing cell dimensions and electrode thicknesses, economies of scale have already been reached, and future cost reductions from increased production volumes are minimal. Prismatic cells, which are able to further capitalize on the cost reduction from larger formats, can offer further reductions than those possible for cylindrical cells.
The human dimensions of energy use in buildings: A review
D'Oca, Simona; Hong, Tianzhen; Langevin, Jared
2017-08-19
The “human dimensions” of energy use in buildings refer to the energy-related behaviors of key stakeholders that affect energy use over the building life cycle. Stakeholders include building designers, operators, managers, engineers, occupants, industry, vendors, and policymakers, who directly or indirectly influence the acts of designing, constructing, living, operating, managing, and regulating the built environments, from individual building up to the urban scale. Among factors driving high-performance buildings, human dimensions play a role that is as significant as that of technological advances. However, this factor is not well understood, and, as a result, human dimensions are often ignored or simplified by stakeholders. This work presents a review of the literature on human dimensions of building energy use to assess the state-of-the-art in this topic area. The paper highlights research needs for fully integrating human dimensions into the building design and operation processes with the goal of reducing energy use in buildings while enhancing occupant comfort and productivity. This research focuses on identifying key needs for each stakeholder involved in a building's life cycle and takes an interdisciplinary focus that spans the fields of architecture and engineering design, sociology, data science, energy policy, codes, and standards to provide targeted insights. Greater understanding of the human dimensions of energy use has several potential benefits including reductions in operating cost for building owners; enhanced comfort conditions and productivity for building occupants; more effective building energy management and automation systems for building operators and energy managers; and the integration of more accurate control logic into the next generation of human-in-the-loop technologies. The review concludes by summarizing recommendations for policy makers and industry stakeholders for developing codes, standards, and technologies that can leverage the human dimensions of energy use to reliably predict and achieve energy use reductions in the residential and commercial buildings sectors.
García-Herraiz, Ariadna; Silvestre, Francisco Javier; Leiva-García, Rafael; Crespo-Abril, Fortunato; García-Antón, José
2017-05-01
The aim of this 3-month follow-up study was to quantify the reduction in the mesio-distal gap dimension (MDGD) that occurs after tooth extraction, through image analysis of three-dimensional images obtained with the confocal laser scanning microscopy (CLSM) technique. Impressions were obtained from 79 patients 1 month after tooth extraction and from 72 patients 3 months after extraction. Cast models were processed by CLSM, and MDGD changes between time points were measured. The mean mesio-distal gap reduction was 343.4 μm 1 month after tooth extraction and 672.3 μm 3 months after extraction. The daily mean gap reduction rate during the first term (between the baseline and 1-month post-extraction measurements) was 10.3 μm/day, and during the second term (between 1 and 3 months) it was 5.4 μm/day. The mesio-distal gap reduction is highest during the first month following the extraction and continues over time, but to a lesser extent. When inter-dental contacts are absent, the mesio-distal gap reduction is lower. When a molar tooth is extracted, or the tooth distal to the edentulous space does not occlude with an antagonist, the mesio-distal gap reduction is larger. Consideration of mesio-distal gap dimension changes can help improve dental treatment planning. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.
2012-01-01
The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high-dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimension with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed, which are more competitive in terms of both classification accuracy and computational cost/time than classical LDA. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. In a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, namely uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, with a linear discriminant classifier.
An efficient classification method based on principal component and sparse representation.
Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang
2016-01-01
As an important application of optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of the palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method has better robustness against position and illumination changes of palmprint images and achieves a higher palmprint recognition rate.
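The residual-comparison step of sparse-representation classification can be sketched with scikit-learn's plain orthogonal matching pursuit (the paper's subspace/grouping variant is not reproduced). Random class-clustered vectors stand in for the PCA-reduced palmprint features:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

# Sparse-representation classification sketch: training vectors form an
# overcomplete dictionary, a test sample is coded by OMP, and the class
# with the smallest reconstruction residual wins.
rng = np.random.default_rng(0)
d, per_class, classes = 30, 15, 4
centers = rng.normal(0, 1, (classes, d))
train = np.vstack([centers[c] + 0.1 * rng.standard_normal((per_class, d))
                   for c in range(classes)])
train_y = np.repeat(np.arange(classes), per_class)

D = (train / np.linalg.norm(train, axis=1, keepdims=True)).T  # d x 60 dict
test = centers[2] + 0.1 * rng.standard_normal(d)

coef = orthogonal_mp(D, test, n_nonzero_coefs=10)
residuals = [np.linalg.norm(test - D[:, train_y == c] @ coef[train_y == c])
             for c in range(classes)]
print("predicted class:", int(np.argmin(residuals)))  # expected: 2
```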
Effect of curvature on the backscattering from leaves
NASA Technical Reports Server (NTRS)
Sarabandi, K.; Senior, T. B. A.; Ulaby, F. T.
1988-01-01
Using a model previously developed for the backscattering cross section of a planar leaf at X-band frequencies and above, the effect of leaf curvature is examined. For normal incidence on a rectangular section of a leaf curved in one and two dimensions, an integral expression for the backscattered field is evaluated numerically and by a stationary phase approximation, leading to a simple analytical expression for the cross section reduction produced by the curvature. Numerical results based on the two methods are virtually identical, and in excellent agreement with measured data for rectangular sections of coleus leaves applied to the surfaces of styrofoam cylinders and spheres of different radii.
Fleury, Julie; Sedikides, Constantine
2007-08-01
Understanding the factors that motivate behavioral change is central to health promotion efforts. We used qualitative descriptive methods in an effort to understand the role of self-knowledge in the process of risk factor modification. The sample consisted of 17 men and 7 women with diagnosed coronary heart disease, who were attempting to initiate and sustain programs of cardiovascular risk modification. Participants described self-knowledge in terms of three contextually situated patterns: representational, evaluative, and behavioral action. Results reinforce the motivational role of the self and highlight the importance of understanding dimensions of self-knowledge relevant to cardiovascular risk reduction.
An information dimension of weighted complex networks
NASA Astrophysics Data System (ADS)
Wen, Tao; Jiang, Wen
2018-07-01
Fractality and self-similarity are important properties of complex networks, and the information dimension is a useful tool for revealing them. In this paper, an information dimension is proposed for weighted complex networks. Based on the box-covering algorithm for weighted complex networks (BCANw), the proposed method can deal with the weighted complex networks that appear frequently in the real world, and it captures the influence of the number of nodes in each box on the information dimension. To show the wide applicability of the information dimension, some applications are illustrated, indicating that the proposed method is effective and feasible.
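The information-dimension idea is easiest to see in its classical geometric form (the paper's BCANw box-covering works on weighted graphs; the grid-cell analogue below is only an illustration, and the point set is synthetic):

```python
import numpy as np

# Information dimension of a point set: slope of box entropy versus
# log(scale). Points lie on a line in the plane, so the estimate
# should come out near 1.
rng = np.random.default_rng(0)
t = rng.random(20000)
pts = np.column_stack([t, 0.5 * t]) + 1e-4 * rng.standard_normal((20000, 2))

scales = [2 ** k for k in range(2, 8)]
entropies = []
for s in scales:
    _, counts = np.unique((pts * s).astype(int), axis=0, return_counts=True)
    p = counts / counts.sum()
    entropies.append(-(p * np.log(p)).sum())

slope = np.polyfit(np.log(scales), entropies, 1)[0]
print("estimated information dimension:", round(slope, 2))
```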
Comparing 3D foot scanning with conventional measurement methods.
Lee, Yu-Chi; Lin, Gloria; Wang, Mao-Jiun J
2014-01-01
Foot dimension information on different user groups is important for footwear design and clinical applications. Foot dimension data collected using different measurement methods presents accuracy problems. This study compared the precision and accuracy of the 3D foot scanning method with conventional foot dimension measurement methods including the digital caliper, ink footprint and digital footprint. Six commonly used foot dimensions, i.e. foot length, ball of foot length, outside ball of foot length, foot breadth diagonal, foot breadth horizontal and heel breadth were measured from 130 males and females using four foot measurement methods. Two-way ANOVA was performed to evaluate the sex and method effect on the measured foot dimensions. In addition, the mean absolute difference values and intra-class correlation coefficients (ICCs) were used for precision and accuracy evaluation. The results were also compared with the ISO 20685 criteria. The participant's sex and the measurement method were found (p < 0.05) to exert significant effects on the measured six foot dimensions. The precision of the 3D scanning measurement method with mean absolute difference values between 0.73 to 1.50 mm showed the best performance among the four measurement methods. The 3D scanning measurements showed better measurement accuracy performance than the other methods (mean absolute difference was 0.6 to 4.3 mm), except for measuring outside ball of foot length and foot breadth horizontal. The ICCs for all six foot dimension measurements among the four measurement methods were within the 0.61 to 0.98 range. Overall, the 3D foot scanner is recommended for collecting foot anthropometric data because it has relatively higher precision, accuracy and robustness. This finding suggests that when comparing foot anthropometric data among different references, it is important to consider the differences caused by the different measurement methods.
Risović, Dubravko; Pavlović, Zivko
2013-01-01
Processing of gray scale images in order to determine the corresponding fractal dimension is very important due to the widespread use of imaging technologies and the application of fractal analysis in many areas of science, technology, and medicine. To this end, many methods for estimation of fractal dimension from gray scale images have been developed and are routinely used. Unfortunately, different methods (dimension estimators) often yield significantly different results, in a manner that makes interpretation difficult. Here, we report the results of a comparative assessment of the performance of several of the most frequently used algorithms/methods for estimation of fractal dimension. For that purpose, we used scanning electron microscope images of aluminum oxide surfaces with different fractal dimensions. The performance of the algorithms/methods was evaluated using the statistical Z-score approach. The differences between the performances of the six methods are discussed and further compared with results obtained by electrochemical impedance spectroscopy (EIS) on the same samples. The analysis shows that the performance of the investigated algorithms varies considerably and that systematically erroneous fractal dimensions can be estimated using certain methods. The differential cube counting, triangulation, and box counting algorithms showed satisfactory performance over the whole investigated range of fractal dimensions. The difference statistic proved less reliable, generating unsatisfactory results in 4% of estimations. The performances of the power spectrum, partitioning and EIS methods were unsatisfactory in 29%, 38%, and 75% of estimations, respectively. The results of this study should be useful and provide guidelines to researchers using or attempting fractal analysis of images obtained by scanning microscopy or atomic force microscopy. © Wiley Periodicals, Inc.
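Of the estimators compared above, box counting is the simplest to reproduce. A minimal sketch on a synthetic binary image (a Sierpinski-carpet pattern whose theoretical dimension is log 8 / log 3 ≈ 1.89), standing in for a thresholded microscope image:

```python
import numpy as np

def sierpinski(level):
    # Build a Sierpinski-carpet-like binary image recursively.
    img = np.ones((1, 1), dtype=bool)
    for _ in range(level):
        z = np.zeros_like(img)
        img = np.block([[img, img, img],
                        [img, z,   img],
                        [img, img, img]])
    return img

def box_count_dimension(img, sizes=(1, 3, 9, 27, 81)):
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s, img.shape[1] // s
        # Mark each s-by-s box that contains at least one foreground pixel.
        boxes = img[:h * s, :w * s].reshape(h, s, w, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return -slope

print(round(box_count_dimension(sierpinski(5)), 2))   # close to 1.89
```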
Catalan speakers' perception of word stress in unaccented contexts.
Ortega-Llebaria, Marta; del Mar Vanrell, Maria; Prieto, Pilar
2010-01-01
In unaccented contexts, formant frequency differences related to vowel reduction constitute a consistent cue to word stress in English, whereas in languages such as Spanish that have no systematic vowel reduction, stress perception is based on duration and intensity cues. This article examines the perception of word stress by speakers of Central Catalan, in which, due to its vowel reduction patterns, words either alternate stressed open vowels with unstressed mid-central vowels as in English or contain no vowel quality cues to stress, as in Spanish. Results show that Catalan listeners perceive stress based mainly on duration cues in both word types. Other cues pattern together with duration to make stress perception more robust. However, no single cue is absolutely necessary and trading effects compensate for a lack of differentiation in one dimension by changes in another dimension. In particular, speakers identify longer mid-central vowels as more stressed than shorter open vowels. These results and those obtained in other stress-accent languages provide cumulative evidence that word stress is perceived independently of pitch accents by relying on a set of cues with trading effects so that no single cue, including formant frequency differences related to vowel reduction, is absolutely necessary for stress perception.
Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan
2017-07-01
High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek a low-rank approximation matrix to the biomolecular data. To enhance robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the K-means clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets, including two benchmark data sets and three higher-dimensional data sets from The Cancer Genome Atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. In particular, the experiments show that the proposed method is more efficient for processing higher-dimensional data, with good robustness, stability, and superior time performance.
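The evaluation pipeline (low-rank approximation, then K-means on the reduced representation) can be sketched with a plain truncated SVD standing in for PSVD; the Lp-norm/Schatten p-norm optimization itself is not reproduced, and the "expression matrix" below is synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Synthetic data: three sample groups with shifted means across 2000 "genes".
rng = np.random.default_rng(0)
blocks = [rng.normal(mu, 1.0, (50, 2000)) for mu in (0.0, 1.0, 2.0)]
X = np.vstack(blocks)                     # 150 samples x 2000 genes
y = np.repeat([0, 1, 2], 50)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
rank = 3
X_low = U[:, :rank] * s[:rank]            # rank-3 sample representation

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_low)
print("ARI:", round(adjusted_rand_score(y, labels), 2))  # 1.0 means perfect
```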
Using Betweenness Centrality to Identify Manifold Shortcuts
Cukierski, William J.; Foran, David J.
2010-01-01
High-dimensional data presents a challenge to tasks of pattern recognition and machine learning. Dimensionality reduction (DR) methods remove the unwanted variance and make these tasks tractable. Several nonlinear DR methods, such as the well known ISOMAP algorithm, rely on a neighborhood graph to compute geodesic distances between data points. These graphs can contain unwanted edges which connect disparate regions of one or more manifolds. This topological sensitivity is well known [1], [2], [3], yet handling high-dimensional, noisy data in the absence of a priori manifold knowledge remains an open and difficult problem. This work introduces a divisive, edge-removal method based on graph betweenness centrality which can robustly identify manifold-shorting edges. The problem of graph construction in high dimension is discussed and the proposed algorithm is fit into the ISOMAP workflow. ROC analysis is performed and the performance is tested on synthetic and real datasets. PMID:20607142
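The mechanics of the edge-removal step are easy to sketch: build the neighborhood graph, score edges by betweenness centrality, and cut the highest-scoring edges, which tend to include shortcut edges when they are present. The data, threshold, and neighborhood size below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import kneighbors_graph

# Synthetic spiral standing in for a noisy manifold sample.
rng = np.random.default_rng(0)
t = 3 * np.pi * (1 + 2 * rng.random(300)) / 2
X = np.column_stack([t * np.cos(t), t * np.sin(t)])
X += 0.05 * rng.standard_normal(X.shape)

A = kneighbors_graph(X, n_neighbors=8, mode="distance")
G = nx.from_scipy_sparse_array(A)

bc = nx.edge_betweenness_centrality(G, weight="weight")
cutoff = np.quantile(list(bc.values()), 0.99)          # cut the top 1%
G.remove_edges_from([e for e, v in bc.items() if v > cutoff])
print(G.number_of_edges(), "edges kept;",
      nx.number_connected_components(G), "component(s)")
```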
Ching, Travers; Zhu, Xun; Garmire, Lana X
2018-04-01
Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox-proportional hazards regression (with LASSO, ridge, and minimax concave penalty), Random Forests Survival and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer nodes provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet.
Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach
NASA Astrophysics Data System (ADS)
Liu, Wenyang; Sawant, Amit; Ruan, Dan
2016-07-01
The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend such rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-square error. Our proposed method achieved consistently higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
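The reduce-predict-recover loop can be sketched with scikit-learn, whose ridge-regression pre-image substitutes for the paper's fixed-point iteration. The breathing-like data and the trivial linear extrapolation standing in for the real predictor are assumptions:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Synthetic "motion states": 500-dimensional snapshots driven by a
# breathing-like phase, plus a little noise.
rng = np.random.default_rng(0)
phase = np.linspace(0, 6 * np.pi, 300)
X = np.outer(np.sin(phase), rng.random(500))
X += 0.01 * rng.standard_normal(X.shape)

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.001,
                 fit_inverse_transform=True)
Z = kpca.fit_transform(X)                          # 300 x 3 manifold coords

z_pred = Z[-1] + (Z[-1] - Z[-2])                   # naive one-step prediction
x_pred = kpca.inverse_transform(z_pred[None, :])   # pre-image in state space
print(x_pred.shape)                                # (1, 500)
```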
Modal ring method for the scattering of sound
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1993-01-01
The modal element method for acoustic scattering can be simplified when the scattering body is rigid. In this simplified method, called the modal ring method, the scattering body is represented by a ring of triangular finite elements forming the outer surface. The acoustic pressure is calculated at the element nodes. The pressure in the infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The two solution forms are coupled by the continuity of pressure and velocity on the body surface. The modal ring method effectively reduces the two-dimensional scattering problem to a one-dimensional problem capable of handling very high frequency scattering. In contrast to the boundary element method or the method of moments, which perform a similar reduction in problem dimension, the modal ring method has the added advantage of having a highly banded solution matrix requiring considerably less computer storage. The method shows excellent agreement with analytic results for scattering from rigid circular cylinders over a wide frequency range (1 ≤ ka ≤ 100) in the near and far fields.
Religiosity and Sexual Risk Behaviors Among African American Cocaine Users in the Rural South
Montgomery, Brooke E.E.; Stewart, Katharine E.; Yeary, Karen H.K.; Cornell, Carol E.; Pulley, LeaVonne; Corwyn, Robert; Ounpraseuth, Songthip T.
2014-01-01
Purpose Racial and geographic disparities in human immunodeficiency virus (HIV) are dramatic and drug use is a significant contributor to HIV risk. Within the rural South, African Americans who use drugs are at extremely high risk. Due to the importance of religion within African American and rural Southern communities, it can be a key element of culturally-targeted health promotion with these populations. Studies have examined religion’s relationship with sexual risk in adolescent populations, but few have examined specific religious behaviors and sexual risk behaviors among drug-using African American adults. This study examined the relationship between well-defined dimensions of religion and specific sexual behaviors among African Americans who use cocaine living in the rural southern United States. Methods Baseline data from a sexual risk reduction intervention for African Americans who use cocaine living in rural Arkansas (N = 205) were used to conduct bivariate and multivariate analyses examining the association between multiple sexual risk behaviors and key dimensions of religion including religious preference, private and public religious participation, religious coping, and God-based, congregation-based, and church leader-based religious support. Findings After adjusting for individualized network estimator weights based on the recruitment strategy, different dimensions of religion had inverse relationships with sexual risk behavior, including church leadership support with number of unprotected vaginal/anal sexual encounters, and positive religious coping with number of sexual partners and with total number of vaginal/anal sexual encounters. Conclusion Results suggest that specific dimensions of religion may have protective effects on certain types of sexual behavior, which may have important research implications. PMID:24575972
Linear dimension reduction and Bayes classification
NASA Technical Reports Server (NTRS)
Decell, H. P., Jr.; Odell, P. L.; Coberly, W. A.
1978-01-01
An explicit expression for a compression matrix T of smallest possible left dimension K consistent with preserving the n variate normal Bayes assignment of X to a given one of a finite number of populations and the K variate Bayes assignment of TX to that population was developed. The Bayes population assignment of X and TX were shown to be equivalent for a compression matrix T explicitly calculated as a function of the means and covariances of the given populations.
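For intuition, the two-population case with a common covariance is the easiest place to see the result: there the Bayes rule depends on x only through the Fisher discriminant, so T can be compressed all the way to a single row (K = 1). Below is a hedged numerical check of that special case (all sizes and names are illustrative; the paper's explicit expression covers the general multi-population setting):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6                                  # original dimension
mu1, mu2 = rng.normal(size=n), rng.normal(size=n)
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)        # common covariance of both populations

# equal-covariance two-population case: a single-row T suffices (K = 1)
T = ((mu1 - mu2) @ np.linalg.inv(Sigma)).reshape(1, n)

def bayes_assign(x, means, cov):
    # equal priors and a shared covariance: pick the population with the
    # smaller Mahalanobis distance (equivalently, the larger density)
    d = [np.squeeze((x - m) @ np.linalg.inv(cov) @ (x - m)) for m in means]
    return int(np.argmin(d))

X = rng.multivariate_normal(mu1, Sigma, size=500)
full = [bayes_assign(x, [mu1, mu2], Sigma) for x in X]
comp = [bayes_assign(T @ x, [T @ mu1, T @ mu2], T @ Sigma @ T.T) for x in X]
print(np.mean(np.array(full) == np.array(comp)))   # 1.0: assignments agree
```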
The Relation between Perceived Social Support and Anxiety in Patients under Hemodialysis.
Davaridolatabadi, Elham; Abdeyazdan, Gholamhossein
2016-03-01
The increase in the number of patients under hemodialysis treatment is a universal problem. Given that few social-psychological studies have been conducted on patients under hemodialysis treatment, the current study investigated anxiety, perceived social support, and the relation between them among these patients. This cross-sectional study was conducted on 126 patients under hemodialysis treatment in Isfahan in 2012. After randomly selecting a hospital with a hemodialysis ward, purposive sampling was conducted. Data collection tools included a state-trait anxiety inventory and a perceived social support inventory. The data were analyzed using the Spearman correlation coefficient. Among the participants, 68.3% reported average perceived social support. In addition, perceived support on the tangible dimension was lower than on the other dimensions (mean 40.02). Trait and state anxiety levels were average in over half of the participants (65% and 67.5%, respectively). There was an inverse relationship between state and trait anxiety and total perceived social support and its emotional and informational dimensions (r = -0.340, r = -0.229). State and trait anxiety had the strongest relation with the emotional and informational dimensions of social support, respectively. Patients under hemodialysis treatment suffer from numerous psychological and social problems. Low awareness and emotional problems increase anxiety and reduce perceived social support, and reduced social support has a negative effect on treatment outcomes.
Martins, Jumara; Vaz, Ana Francisca; Grion, Regina Celia; Esteves, Sérgio Carlos Barros; Costa-Paiva, Lúcia; Baccaro, Luiz Francisco
2017-12-01
This study reports the incidence and factors associated with vaginal stenosis and changes in vaginal dimensions after pelvic radiotherapy for cervical cancer. A descriptive longitudinal study with 139 women with cervical cancer was conducted from January 2013 to November 2015. The outcome variables were vaginal stenosis assessed using the Common Terminology Criteria for Adverse Events (CTCAE v3.0) and changes in vaginal diameter and length after the end of radiotherapy. Independent variables were the characteristics of the neoplasm, clinical and sociodemographic data. Bivariate analysis was carried out using χ², Kruskal-Wallis, and Mann-Whitney tests. Multiple analysis was carried out using Poisson regression and a generalized linear model. Most women (50.4%) had stage IIIB tumors. According to the CTCAE v3.0 scale, 30.2% had no stenosis, 69.1% had grade 1 and 0.7% had grade 2 stenosis after radiotherapy. Regarding changes in vaginal measures, the mean variation in diameter was -0.6 (±1.7) mm and the mean variation in length was -0.6 (±1.3) cm. In the final statistical model, having tumoral invasion of the vaginal walls (coefficient +0.73, p < 0.01) and diabetes (coefficient +1.16, p < 0.01) were associated with lower vaginal stenosis and lower reduction of vaginal dimensions. Advanced clinical stage (coefficient +1.44, p = 0.02) and receiving brachytherapy/teletherapy (coefficient -1.17, p < 0.01) were associated with higher reduction of vaginal dimensions. Most women had mild vaginal stenosis with slight reductions in both diameter and length of the vaginal canal. Women with tumoral invasion of the vagina have an increase in vaginal length soon after radiotherapy due to a reduction in tumoral volume.
Effects of Mindfulness-Based Stress Reduction on employees’ mental health: A systematic review
Heerkens, Yvonne; Kuijer, Wietske; van der Heijden, Beatrice; Engels, Josephine
2018-01-01
Objectives The purpose of this exploratory study was to obtain greater insight into the effects of Mindfulness-Based Stress Reduction (MBSR) and Mindfulness-Based Cognitive Therapy (MBCT) on the mental health of employees. Methods Using PsycINFO, PubMed, and CINAHL, we performed a systematic review in October 2015 of studies investigating the effects of MBSR and MBCT on various aspects of employees’ mental health. Studies with a pre-post design (i.e. without a control group) were excluded. Results 24 articles were identified, describing 23 studies: 22 on the effects of MBSR and 1 on the effects of MBSR in combination with some aspects of MBCT. Since no study focused exclusively on MBCT, its effects are not described in this systematic review. Of the 23 studies, 2 were of high methodological quality, 15 were of medium quality and 6 were of low quality. A meta-analysis was not performed due to the emergent and relatively uncharted nature of the topic of investigation, the exploratory character of this study, and the diversity of outcomes in the studies reviewed. Based on our analysis, the strongest outcomes were reduced levels of emotional exhaustion (a dimension of burnout), stress, psychological distress, depression, anxiety, and occupational stress. Improvements were found in terms of mindfulness, personal accomplishment (a dimension of burnout), (occupational) self-compassion, quality of sleep, and relaxation. Conclusion The results of this systematic review suggest that MBSR may help to improve psychological functioning in employees. PMID:29364935
Barkan, Tessa; Gallegos, Autumn M.; Turiano, Nicholas A.; Duberstein, Paul R.; Moynihan, Jan A.
2016-01-01
Abstract Objectives: Mindfulness-based stress reduction (MBSR) is a promising intervention for older adults seeking to improve quality of life. More research is needed, however, to determine who is most willing to use the four techniques taught in the program (yoga, sitting meditation, informal meditation, and body scanning). This study evaluated the relationship between the Big Five personality dimensions (neuroticism, extraversion, openness to experience, conscientiousness, and agreeableness) and use of MBSR techniques both during the intervention and at a 6-month follow-up. The hypothesis was that those with higher levels of openness and agreeableness would be more likely to use the techniques. Methods: Participants were a community sample of 100 older adults who received an 8-week manualized MBSR intervention. Personality was assessed at baseline by using the 60-item NEO Five-Factor Inventory. Use of MBSR techniques was assessed through weekly practice logs during the intervention and a 6-month follow-up survey. Regression analyses were used to examine the association between each personality dimension and each indicator of MBSR use both during and after the intervention. Results: As hypothesized, openness and agreeableness predicted greater use of MBSR both during and after the intervention, while controlling for demographic differences in age, educational level, and sex. Openness was related to use of a variety of MBSR techniques during and after the intervention, while agreeableness was related to use of meditation techniques during the intervention. Mediation analysis suggested that personality explained postintervention MBSR use, both directly and by fostering initial uptake of MBSR during treatment. Conclusions: Personality dimensions accounted for individual differences in the use of MBSR techniques during and 6 months after the intervention. Future studies should consider how mental health practitioners would use these findings to target and tailor MBSR interventions to appeal to broader segments of the population. PMID:27031734
Nohara, Ryuki; Endo, Yui; Murai, Akihiko; Takemura, Hiroshi; Kouchi, Makiko; Tada, Mitsunori
2016-08-01
Individual human models are usually created by direct 3D scanning or by deforming a template model according to measured dimensions. In this paper, we propose a method to estimate all the necessary dimensions (the full set) for human model individualization from a small number of measured dimensions (a subset) and a human dimension database. For this purpose, we solved multiple regression equations on the dimension database, with the full-set dimensions as the objective variables and the subset dimensions as the explanatory variables. Thus, the full-set dimensions are obtained by simply multiplying the subset dimensions by the coefficient matrix of the regression equation. We verified the accuracy of our method by imputing hand, foot, and whole-body dimensions from their dimension databases. Leave-one-out cross validation was employed in this evaluation. The mean absolute errors (MAE) between the measured and the estimated dimensions were computed from 4 dimensions (hand length, breadth, middle finger breadth at proximal, and middle finger depth at proximal) in the hand, 3 dimensions (foot length, breadth, and lateral malleolus height) in the foot, and 1 dimension (height) and weight in the whole body. The average MAE of non-measured dimensions was 4.58% in the hand, 4.42% in the foot, and 3.54% in the whole body, while that of measured dimensions was 0.00%.
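A minimal sketch of this imputation scheme with synthetic data (the database, the choice of measured columns, and the latent-factor construction are all illustrative assumptions, not the authors' databases):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

# stand-in for an anthropometric database: rows = subjects, columns = body
# dimensions; the first k columns play the role of the measured subset
rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 3))                 # hidden body-size factors
D = latent @ rng.normal(size=(3, 20)) + 0.05 * rng.normal(size=(200, 20))
k = 4                                              # number of measured dims

errors = []
for train, test in LeaveOneOut().split(D):
    reg = LinearRegression().fit(D[train][:, :k], D[train][:, k:])
    est = reg.predict(D[test][:, :k])              # full set from the subset
    errors.append(np.abs(est - D[test][:, k:]).mean())
print("leave-one-out mean absolute error:", np.mean(errors))
```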
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
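The reduction step itself is simple to demonstrate in isolation. Below is a hedged toy example of sketching a linear least-squares calibration problem (the RGA embeds this idea inside PCGA iterations; the Gaussian sketch, problem sizes, and names here are assumptions of the sketch, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, d = 10_000, 50, 300        # observations, parameters, sketch size
H = rng.normal(size=(m, n))      # forward map (Jacobian-like)
x_true = rng.normal(size=n)
y = H @ x_true + 0.01 * rng.normal(size=m)

# Gaussian "sketching" matrix: compresses m observations to d rows while
# approximately preserving the information content of the least-squares fit
S = rng.normal(size=(d, m)) / np.sqrt(d)
x_sketch, *_ = np.linalg.lstsq(S @ H, S @ y, rcond=None)
x_full, *_ = np.linalg.lstsq(H, y, rcond=None)
print(np.linalg.norm(x_sketch - x_full))   # small: little information lost
```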
Inelastic behavior of cold-formed braced walls under monotonic and cyclic loading
NASA Astrophysics Data System (ADS)
Gerami, Mohsen; Lotfi, Mohsen; Nejat, Roya
2015-06-01
The ever-increasing need for housing has generated the search for new and innovative building methods to increase speed and efficiency and enhance quality. One method is the use of light thin steel profiles as load-bearing elements having different solutions for interior and exterior cladding. Due to the increase in CFS construction in low-rise residential structures in the modern construction industry, there is an increased demand for inelastic performance analysis of CFS walls. In this study, the nonlinear behavior of cold-formed steel frames with various bracing arrangements including cross, chevron and k-shape straps was evaluated under cyclic and monotonic loading using nonlinear finite element analysis methods. In total, 68 frames with different bracing arrangements and different ratios of dimensions were studied. Also, seismic parameters including resistance reduction factor, ductility and force reduction factor due to ductility were evaluated for all samples. On the other hand, the seismic response modification factor was calculated for these systems. It was concluded that the highest response modification factor would be obtained for walls with bilateral cross bracing systems, with a value of 3.14. In all samples, on increasing the distance of straps from each other, shear strength increased, and the shear strength of the wall with a bilateral bracing system was 60% greater than that with a lateral bracing system.
Davidson, Shaun M; Docherty, Paul D; Murray, Rua
2017-03-01
Parameter identification is an important and widely used process across the field of biomedical engineering. However, it is susceptible to a number of potential difficulties, such as parameter trade-off, causing premature convergence at non-optimal parameter values. The proposed Dimensional Reduction Method (DRM) addresses this issue by iteratively reducing the dimension of hyperplanes where trade off occurs, and running subsequent identification processes within these hyperplanes. The DRM was validated using clinical data to optimize 4 parameters of the widely used Bergman Minimal Model of glucose and insulin kinetics, as well as in-silico data to optimize 5 parameters of the Pulmonary Recruitment (PR) Model. Results were compared with the popular Levenberg-Marquardt (LMQ) Algorithm using a Monte-Carlo methodology, with both methods afforded equivalent computational resources. The DRM converged to a lower or equal residual value in all tests run using the Bergman Minimal Model and actual patient data. For the PR model, the DRM attained significantly lower overall median parameter error values and lower residuals in the vast majority of tests. This shows the DRM has potential to provide better resolution of optimum parameter values for the variety of biomedical models in which significant levels of parameter trade-off occur.
Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph
2013-11-07
Background The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective and those methods currently implemented are often permutation-based. Results One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to “filter” redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. Conclusion We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the summary-statistic based approach. We also implement the summary-statistic test using Z-statistics from an already-published GWAS of Chronic Obstructive Pulmonary Disorder (COPD) and correlation structure obtained from HapMap. We experiment with the modification of this test because the correlation structure is assumed imperfectly known. PMID:24199751
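A hedged sketch of the summary-statistic idea (this is the standard quadratic-form version of an LD-aware region test, not necessarily the authors' exact scaling; the ridge term echoes their protection against a misspecified correlation matrix):

```python
import numpy as np
from scipy.stats import chi2

def region_test(z, R, ridge=1e-3):
    """Region-level p-value from marker Z-statistics and their correlation.

    Under the null, z ~ N(0, R), so the quadratic form z' R^{-1} z is
    chi-square with p degrees of freedom regardless of the LD structure
    encoded in R; the small ridge guards against an ill-conditioned or
    misspecified R.
    """
    p = len(z)
    q = z @ np.linalg.solve(R + ridge * np.eye(p), z)
    return chi2.sf(q, df=p)

# toy region: 10 markers in AR(1)-style LD, one causal signal spread by LD
rng = np.random.default_rng(5)
idx = np.arange(10)
R = 0.7 ** np.abs(np.subtract.outer(idx, idx))
z = rng.multivariate_normal(np.zeros(10), R) + 2.5 * R[:, 4]
print(region_test(z, R))
```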
Relationship between mental workload and musculoskeletal disorders among Alzahra Hospital nurses
Habibi, Ehsanollah; Taheri, Mohamad Reza; Hasanzadeh, Akbar
2015-01-01
Background: Musculoskeletal disorders (MSDs) are a serious problem among nursing staff. Mental workload is the major cause of MSDs among nursing staff. The aim of this study was to investigate the mental workload dimensions and their association with MSDs among nurses of Alzahra Hospital, affiliated to Isfahan University of Medical Sciences. Materials and Methods: This descriptive cross-sectional study was conducted on 247 randomly selected nurses who worked in the Alzahra Hospital in Isfahan, Iran in the summer of 2013. The Persian version of the National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire (measuring mental workload) and the Cornell Musculoskeletal Discomfort Questionnaire (CMDQ) were used for data collection. Data were analyzed with Pearson and Spearman correlation tests in SPSS 20. Results: Pearson and Spearman correlation tests showed a significant association between the nurses’ MSDs and the workload dimensions of frustration, total workload, temporal demand, effort, and physical demand (r = 0.304, 0.277, 0.277, 0.216, and 0.211, respectively). However, there was no significant association between the nurses’ MSDs and the workload dimensions of performance and mental demand (P > 0.05). Conclusions: The nurses’ frustration had a direct correlation with MSDs. This shows that stress is an inseparable component of the hospital workplace. Thus, reduction of stress in the nursing workplace should be one of the main priorities of hospital managers. PMID:25709683
Scaling of energy absorbing composite plates
NASA Technical Reports Server (NTRS)
Jackson, Karen; Morton, John; Traffanstedt, Catherine; Boitnott, Richard
1992-01-01
The energy absorption response and crushing characteristics of geometrically scaled graphite-Kevlar epoxy composite plates were investigated. Three different trigger mechanisms including chamfer, notch, and steeple geometries were incorporated into the plate specimens to initiate crushing. Sustained crushing was achieved with a simple test fixture which provided lateral support to prevent global buckling. Values of specific sustained crushing stress (SSCS) were obtained which were comparable to values reported for tube specimens from previously published data. Two sizes of hybrid plates were fabricated; a baseline or model plate, and a full-scale plate with in-plane dimensions scaled by a factor of two. The thickness dimension of the full-scale plates was increased using two different techniques; the ply-level method in which each ply orientation in the baseline laminate stacking sequence is doubled, and the sublaminate technique in which the baseline laminate stacking sequence is repeated as a group. Results indicated that the SSCS is independent of trigger mechanism geometry. However, a reduction in the SSCS of 10-25 percent was observed for the full-scale plates as compared with the baseline specimens, indicating a scaling effect in the crushing response.
Scaling of energy absorbing composite plates
NASA Technical Reports Server (NTRS)
Jackson, Karen; Lavoie, J. Andre; Morton, John
1994-01-01
The energy absorption response and crushing characteristics of geometrically scaled graphite-Kevlar epoxy composite plates were investigated. Two different trigger mechanisms including notch and steeple geometries were incorporated into the plate specimens to initiate crushing. Sustained crushing was achieved with a new test fixture which provided lateral support to prevent global buckling. Values of specific sustained crushing stress (SSCS) were obtained which were lower than values reported for tube specimens from previously published data. Two sizes of hybrid plates were fabricated; a baseline or model plate, and a full-scale plate with in-plane dimensions scaled by a factor of two. The thickness dimension of the full-scale plates was increased using two different techniques: the ply-level method in which each ply orientation in the baseline laminate stacking sequence is doubled, and the sublaminate technique in which the baseline laminate stacking sequence is repeated as a group. Results indicated that the SSCS has a small dependence on trigger mechanism geometry. However, a reduction in the SSCS of 10-25% was observed for the full-scale plates as compared with the baseline specimens, indicating a scaling effect in the crushing response.
Taub-NUT Spacetime in the (A)dS/CFT and M-Theory
NASA Astrophysics Data System (ADS)
Clarkson, Richard
In the following thesis, I will conduct a thermodynamic analysis of the Taub-NUT spacetime in various dimensions, as well as show uses for Taub-NUT and other Hyper-Kahler spacetimes. Thermodynamic analysis (by which I mean the calculation of the entropy and other thermodynamic quantities, and the analysis of these quantities) has in the past been done by use of background subtraction. The recent derivation of the (A)dS/CFT correspondences from String theory has allowed for easier and quicker analysis. I will use Taub-NUT space as a template to test these correspondences against the standard thermodynamic calculations (via the Noether method), with (in the Taub-NUT-dS case especially) some very interesting results. There is also interest in obtaining metrics in eleven dimensions that can be reduced down to ten dimensional string theory metrics. Taub-NUT and other Hyper-Kahler metrics already possess the form to easily facilitate the Kaluza-Klein reduction, and embedding such metrics into eleven dimensional metrics containing M2 or M5 branes produces metrics with interesting Dp-brane results.
Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J.
2015-01-01
In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron. PMID:25849483
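A hedged sketch of the (2D)²PCA step on matrix-shaped sliding windows (sizes, the einsum formulation, and the downstream use are illustrative; the paper pairs this with an RBF neural network, which is omitted here):

```python
import numpy as np

def two_directional_2dpca(mats, r_rows, r_cols):
    """(2D)^2-PCA: project matrix-shaped samples from both sides.

    mats: array (N, h, w), e.g. sliding windows of w = 36 technical
    indicators over h trading days. Returns the projections and the
    reduced (N, r_rows, r_cols) features.
    """
    A = mats - mats.mean(axis=0)
    Gc = np.einsum('nhw,nhv->wv', A, A) / len(A)   # column scatter (w x w)
    Gr = np.einsum('nhw,nvw->hv', A, A) / len(A)   # row scatter (h x h)
    _, Uc = np.linalg.eigh(Gc)
    _, Ur = np.linalg.eigh(Gr)
    Uc = Uc[:, ::-1][:, :r_cols]    # top eigenvectors (eigh sorts ascending)
    Ur = Ur[:, ::-1][:, :r_rows]
    feats = np.einsum('hk,nhw,wl->nkl', Ur, mats, Uc)  # Ur' A Uc per sample
    return Ur, Uc, feats

rng = np.random.default_rng(6)
windows = rng.normal(size=(500, 20, 36))  # 500 windows, 20 days, 36 indicators
Ur, Uc, feats = two_directional_2dpca(windows, r_rows=5, r_cols=8)
print(feats.shape)                        # (500, 5, 8) -> flattened RBFNN input
```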
User interest modeling based on scenarios and browsed content
NASA Astrophysics Data System (ADS)
Zhao, Yang
2017-08-01
User interest modeling is the core of personalized service. Taking into account the impact of situational information on user preferences, this paper proposes a method of user interest modeling based on scenario information. A set of scenarios approximating the user's current scene is obtained by calculating situational similarity, and the "user - interest item - scenario" three-dimensional model is reduced using a situation pre-filtering method. From the content of the pages the user has browsed, the topics of interest and the keywords of each topic are extracted, and a hierarchical vector space model of user interest is built. The experimental results show that the prediction error of the user interest model based on scenario information is within 9%, indicating that the model is effective.
Discrete approach to stochastic parametrization and dimension reduction in nonlinear dynamics.
Chorin, Alexandre J; Lu, Fei
2015-08-11
Many physical systems are described by nonlinear differential equations that are too complicated to solve in full. A natural way to proceed is to divide the variables into those that are of direct interest and those that are not, formulate solvable approximate equations for the variables of greater interest, and use data and statistical methods to account for the impact of the other variables. In the present paper we consider time-dependent problems and introduce a fully discrete solution method, which simplifies both the analysis of the data and the numerical algorithms. The resulting time series are identified by a NARMAX (nonlinear autoregressive moving average with exogenous input) representation familiar from engineering practice. The connections with the Mori-Zwanzig formalism of statistical physics are discussed, as well as an application to the Lorenz 96 system.
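A hedged miniature of the identification step (a NARX-type truncation fitted by least squares; full NARMAX also estimates moving-average noise terms iteratively, and the toy dynamics below are an assumption of the sketch, not the Lorenz 96 reduction from the paper):

```python
import numpy as np

# fit x_{k+1} = f(x_k, x_{k-1}) + noise by least squares on polynomial lags
rng = np.random.default_rng(7)
x = np.zeros(2000)
for k in range(1, 1999):                   # toy "resolved variable" series
    x[k + 1] = 0.9 * x[k] - 0.4 * x[k - 1] + 0.1 * x[k] ** 2 \
               + 0.05 * rng.normal()

lags = np.column_stack([x[1:-1], x[:-2]])  # x_k, x_{k-1}
Phi = np.column_stack([lags, lags ** 2, lags[:, 0] * lags[:, 1]])
theta, *_ = np.linalg.lstsq(Phi, x[2:], rcond=None)
print(theta.round(3))   # approximately [0.9, -0.4, 0.1, 0, 0]
```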
NASA Astrophysics Data System (ADS)
Nishino, Hitoshi; Rajpoot, Subhash
2016-05-01
We present electric-magnetic (EM) duality formulations for non-Abelian gauge groups with $N=1$ supersymmetry in $D=3+3$ and $5+5$ space-time dimensions. We show that these systems generate self-dual $N=1$ supersymmetric Yang-Mills (SDSYM) theory in $D=2+2$. For an $N=2$ supersymmetric EM-dual system in $D=3+3$, we have the Yang-Mills multiplet $(A_\mu{}^I, \lambda_A{}^I)$ and a Hodge-dual multiplet $(B_{\mu\nu\rho}{}^I, \chi_A{}^I)$, with auxiliary tensors $C_{\mu\nu\rho\sigma}{}^I$ and $K_{\mu\nu}$. Here, $I$ is the adjoint index, while $A$ is for the doublet of $Sp(1)$. The EM-duality conditions are $F_{\mu\nu}{}^I = (1/4!)\,\epsilon_{\mu\nu\rho\sigma\tau\lambda} G^{\rho\sigma\tau\lambda\,I}$, with the superpartner duality condition $\lambda_A{}^I = -\chi_A{}^I$. Upon appropriate dimensional reduction, this system generates SDSYM in $D=2+2$. This system is further generalized to $D=5+5$ with the EM-duality condition $F_{\mu\nu}{}^I = (1/8!)\,\epsilon_{\mu\nu\rho_1\cdots\rho_8} G^{\rho_1\cdots\rho_8\,I}$ and its superpartner condition $\lambda^I = -\chi^I$. Upon appropriate dimensional reduction, this theory also generates SDSYM in $D=2+2$. As long as we maintain Lorentz covariance, $D=5+5$ seems to be the maximal space-time dimension that generates SDSYM in $D=2+2$; namely, the EM-dual system in $D=5+5$ serves as the master theory of all supersymmetric integrable models in dimensions $1 \le D \le 3$.
NASA Astrophysics Data System (ADS)
Faber, Cornelius; Pracht, Eberhard; Haase, Axel
2003-04-01
Intermolecular zero-quantum coherences are insensitive to magnetic field inhomogeneities. For this reason we have applied the HOMOGENIZED sequence [Vathyam et al., Science 272 (1996) 92] to phantoms containing metabolites at low concentrations, phantoms with air inclusions, an intact grape, and the head of a rat in vivo at 750 MHz. In the 1H-spectra, the water signal is efficiently suppressed and line broadening due to susceptibility gradients is effectively removed along the indirectly detected dimension. We have obtained a 1H-spectrum of a 2.5 mM solution of γ-aminobutyric acid in 12 min scan time. In the phantom with air inclusions a reduction of line widths from 0.48 ppm in the direct dimension to 0.07 ppm in the indirect dimension was observed, while in a deshimmed grape the reduction was from 1.4 to 0.07 ppm. In a spectrum of the grape we were able to resolve glucose resonances at 0.3 ppm from the water in 6 min scan time. J-coupling information was partly retained. In the in vivo spectra of the rat brain five major metabolites were observed.
Decoding-Accuracy-Based Sequential Dimensionality Reduction of Spatio-Temporal Neural Activities
NASA Astrophysics Data System (ADS)
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
Performance of a brain machine interface (BMI) critically depends on selection of input data because information embedded in the neural activities is highly redundant. In addition, properly selected input data with a reduced dimension leads to improvement of decoding generalization ability and decrease of computational efforts, both of which are significant advantages for the clinical applications. In the present paper, we propose an algorithm of sequential dimensionality reduction (SDR) that effectively extracts motor/sensory related spatio-temporal neural activities. The algorithm gradually reduces input data dimension by dropping neural data spatio-temporally so as not to undermine the decoding accuracy as far as possible. Support vector machine (SVM) was used as the decoder, and tone-induced neural activities in rat auditory cortices were decoded into the test tone frequencies. SDR reduced the input data dimension to a quarter and significantly improved the accuracy of decoding of novel data. Moreover, spatio-temporal neural activity patterns selected by SDR resulted in significantly higher accuracy than high spike rate patterns or conventionally used spatial patterns. These results suggest that the proposed algorithm can improve the generalization ability and decrease the computational effort of decoding.
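A hedged sketch of the greedy backward scheme (the exact dropping schedule, decoder settings, and data are assumptions of the sketch; the paper drops spatio-temporal bins of spike-rate data, emulated here with synthetic features):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def sequential_dimension_reduction(X, y, target_dim, cv=5):
    """Repeatedly drop the input feature whose removal hurts
    cross-validated decoding accuracy the least."""
    keep = list(range(X.shape[1]))
    while len(keep) > target_dim:
        trials = []
        for j in keep:
            cols = [f for f in keep if f != j]
            acc = cross_val_score(SVC(kernel='linear'), X[:, cols], y,
                                  cv=cv).mean()
            trials.append((acc, j))
        _, drop = max(trials)        # least-damaging feature to drop
        keep.remove(drop)
    return keep

# toy: 120 trials x 40 "electrode x time-bin" features, 3 tone classes
rng = np.random.default_rng(8)
y = rng.integers(0, 3, size=120)
X = rng.normal(size=(120, 40))
X[:, :5] += y[:, None]               # only 5 features carry tone information
print(sequential_dimension_reduction(X, y, target_dim=10))
```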
Dimensions of the scala tympani in the human and cat with reference to cochlear implants.
Hatsushika, S; Shepherd, R K; Tong, Y C; Clark, G M; Funasaka, S
1990-11-01
The width, height, and cross-sectional area of the scala tympani in both the human and cat were measured to provide dimensional information relevant to the design of scala tympani electrode arrays. Both the height and width of the human scala tympani decreased rapidly within the first 1.5 mm from the round window. Thereafter, they exhibit a gradual reduction in their dimension with increasing distance from the round window. The cross-sectional area of the human scala tympani reflects the changes observed in both the height and width. In contrast, the cat scala tympani exhibits a rapid decrease in its dimensions over the first 6 to 8 mm from the round window. However, beyond this point the cat scala tympani also exhibits a more gradual decrease in its dimensions. Finally, the width of the scala tympani, in both human and cat, is consistently greater than the height.
Generalization of the photo process window and its application to OPC test pattern design
NASA Astrophysics Data System (ADS)
Eisenmann, Hans; Peter, Kai; Strojwas, Andrzej J.
2003-07-01
From the early development phase up to the production phase, test patterns play a key role in microlithography. The requirement for test patterns is to represent the design well and to cover the space of all process conditions, e.g. to investigate the full process window and all other process parameters. This paper shows that current state-of-the-art test patterns do not address these requirements sufficiently and makes suggestions for a better selection of test patterns. We present a new methodology to analyze an existing layout (e.g. logic library, test patterns or full chip) for critical layout situations which does not need precise process data. We call this method "process space decomposition", because it is aimed at decomposing the process impact on a layout feature into a sum of single independent contributions, the dimensions of the process space. This is a generalization of the classical process window, which examines the defocus and exposure dependency of given test patterns, e.g. the CD value of dense and isolated lines. In our process space we additionally define the dimensions resist effects, etch effects, mask error and misalignment, which describe the deviation of the printed silicon pattern from its target. We further extend it by the pattern space using a product-based layout (library, full chip or synthetic test patterns). The criticality of patterns is defined by their deviation due to the aerial image, their sensitivity to the respective dimension or several combinations of these. By exploring the process space for a given design, the method allows one to find the most critical patterns independent of specific process parameters. The paper provides examples for different applications of the method: (1) selection of design-oriented test patterns for lithography development; (2) test pattern reduction in process characterization; (3) verification/optimization of printability and performance of post-processing procedures (like OPC); (4) creation of a sensitive process monitor.
Reducing queues: demand and capacity variations.
Eriksson, Henrik; Bergbrant, Ing-Marie; Berrum, Ingela; Mörck, Boel
2011-01-01
The aim of this paper is to investigate how waiting lists or queues could be reduced without adding more resources, and to describe what factors sustain reduced waiting times. Cases were selected according to successful and sustained queue reduction. The approach in this study is action research. Accessibility improved as out-patient waiting lists for two clinics were reduced. The main success factor was working towards matching demand and capacity. It has been possible to sustain the improvements. Results should be viewed cautiously. Transferring and generalizing outcomes from this study is for readers to consider. However, accessible healthcare may be possible by paying more attention to existing solutions. The study indicates that queue reduction activities should include acquiring knowledge about theories and methods to improve accessibility, finding ways to monitor varying demand and capacity, and improving patient processing by reducing variations. Accessibility is considered an important dimension when measuring service quality. However, there are few articles on how clinic staff sustain reduced waiting lists. This paper contributes accessible knowledge to the field.
Dynamic reduction of dimensions of a document vector in a document search and retrieval system
Jiao, Yu; Potok, Thomas E.
2011-05-03
The method and system of the invention involves processing each new document (20) coming into the system into a document vector (16), and creating a document vector with reduced dimensionality (17) for comparison with the data model (15) without recomputing the data model (15). These operations are carried out by a first computer (11) while a second computer (12) updates the data model (18), which can be comprised of an initial large group of documents (19) and is premised on computing an initial data model (13, 14, 15) to provide a reference point for determining document vectors from documents processed from the data stream (20).
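A hedged sketch of the idea in modern library terms (the patent's specific data-model mechanism is not reproduced here; TruncatedSVD over tf-idf vectors is a stand-in, and the corpus and dimensions are illustrative):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

# "data model" computed once from the initial document group; in the
# patent's division of labor, one computer only applies this model while
# a second refreshes it in the background
corpus = ["dimension reduction of document vectors",
          "streaming documents arrive continuously",
          "latent semantic structure of text"] * 30
vec = TfidfVectorizer().fit(corpus)
model = TruncatedSVD(n_components=10).fit(vec.transform(corpus))

# a new document from the stream: reduce it against the existing model
# (no recomputation), then compare with previously reduced documents
new_doc = vec.transform(["reduction of streaming document dimensions"])
reduced = model.transform(new_doc)         # fold-in: just a projection
print(reduced.shape)                       # (1, 10)
```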
Identification of research hypotheses and new knowledge from scientific literature.
Shardlow, Matthew; Batista-Navarro, Riza; Thompson, Paul; Nawaz, Raheel; McNaught, John; Ananiadou, Sophia
2018-06-25
Text mining (TM) methods have been used extensively to extract relations and events from the literature. In addition, TM techniques have been used to extract various types or dimensions of interpretative information, known as Meta-Knowledge (MK), from the context of relations and events, e.g. negation, speculation, certainty and knowledge type. However, most existing methods have focussed on the extraction of individual dimensions of MK, without investigating how they can be combined to obtain even richer contextual information. In this paper, we describe a novel, supervised method to extract new MK dimensions that encode Research Hypotheses (an author's intended knowledge gain) and New Knowledge (an author's findings). The method incorporates various features, including a combination of simple MK dimensions. We identify previously explored dimensions and then use a random forest to combine these with linguistic features into a classification model. To facilitate evaluation of the model, we have enriched two existing corpora annotated with relations and events, i.e., a subset of the GENIA-MK corpus and the EU-ADR corpus, by adding attributes to encode whether each relation or event corresponds to Research Hypothesis or New Knowledge. In the GENIA-MK corpus, these new attributes complement simpler MK dimensions that had previously been annotated. We show that our approach is able to assign different types of MK dimensions to relations and events with a high degree of accuracy. Firstly, our method is able to improve upon the previously reported state of the art performance for an existing dimension, i.e., Knowledge Type. Secondly, we also demonstrate high F1-score in predicting the new dimensions of Research Hypothesis (GENIA: 0.914, EU-ADR 0.802) and New Knowledge (GENIA: 0.829, EU-ADR 0.836). We have presented a novel approach for predicting New Knowledge and Research Hypothesis, which combines simple MK dimensions to achieve high F1-scores. The extraction of such information is valuable for a number of practical TM applications.
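A hedged sketch of the classification setup (features and labels below are synthetic placeholders, not the GENIA-MK or EU-ADR annotations; only the shape of the idea, simple MK dimensions plus linguistic features feeding a random forest, follows the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n = 400
mk = rng.integers(0, 3, size=(n, 4))       # simple MK dimensions (e.g.
ling = rng.normal(size=(n, 20))            # certainty) + linguistic features
X = np.hstack([mk, ling])
y = (mk[:, 0] == 2) & (rng.random(n) < 0.9)  # label tied to one MK dimension

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, y, scoring='f1', cv=5).mean())
```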
A new method for calculating differential distributions directly in Mellin space
NASA Astrophysics Data System (ADS)
Mitov, Alexander
2006-12-01
We present a new method for the calculation of differential distributions directly in Mellin space without recourse to the usual momentum-fraction (or z-) space. The method is completely general and can be applied to any process. It is based on solving the integration-by-parts identities when one of the powers of the propagators is an abstract number. The method retains the full dependence on the Mellin variable and can be implemented in any program for solving the IBP identities based on algebraic elimination, like Laporta. General features of the method are: (1) faster reduction, (2) smaller number of master integrals compared to the usual z-space approach and (3) the master integrals satisfy difference instead of differential equations. This approach generalizes previous results related to fully inclusive observables like the recently calculated three-loop space-like anomalous dimensions and coefficient functions in inclusive DIS to more general processes requiring separate treatment of the various physical cuts. Many possible applications of this method exist, the most notable being the direct evaluation of the three-loop time-like splitting functions in QCD.
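For reference, the transform pair underlying the approach (standard definitions, not a formula specific to this paper's derivation): the Mellin moment of a momentum-fraction distribution and its inverse,

```latex
f(N) = \int_0^1 \mathrm{d}z\, z^{N-1} f(z), \qquad
f(z) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \mathrm{d}N\, z^{-N} f(N),
```

with the contour taken to the right of all singularities of f(N); the method above works with the N-space object symbolically throughout, never forming the z-space distribution.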
Tripathi, Vandana; Stanton, Cynthia; Strobino, Donna; Bartlett, Linda
2015-01-01
Background High quality care is crucial in ensuring that women and newborns receive interventions that may prevent and treat birth-related complications. As facility deliveries increase in developing countries, there are concerns about service quality. Observation is the gold standard for clinical quality assessment, but existing observation-based measures of obstetric quality of care are lengthy and difficult to administer. There is a lack of consensus on quality indicators for routine intrapartum and immediate postpartum care, including essential newborn care. This study identified key dimensions of the quality of the process of intrapartum and immediate postpartum care (QoPIIPC) in facility deliveries and developed a quality assessment measure representing these dimensions. Methods and Findings Global maternal and neonatal care experts identified key dimensions of QoPIIPC through a modified Delphi process. Experts also rated indicators of these dimensions from a comprehensive delivery observation checklist used in quality surveys in sub-Saharan African countries. Potential QoPIIPC indices were developed from combinations of highly-rated indicators. Face, content, and criterion validation of these indices was conducted using data from observations of 1,145 deliveries in Kenya, Madagascar, and Tanzania (including Zanzibar). A best-performing index was selected, composed of 20 indicators of intrapartum/immediate postpartum care, including essential newborn care. This index represented most dimensions of QoPIIPC and effectively discriminated between poorly and well-performed deliveries. Conclusions As facility deliveries increase and the global community pays greater attention to the role of care quality in achieving further maternal and newborn mortality reduction, the QoPIIPC index may be a valuable measure. This index complements and addresses gaps in currently used quality assessment tools. Further evaluation of index usability and reliability is needed. The availability of a streamlined, comprehensive, and validated index may enable ongoing and efficient observation-based assessment of care quality during labor and delivery in sub-Saharan Africa, facilitating targeted quality improvement. PMID:26107655
Bittman, Barry B; Snyder, Cherie; Bruhn, Karl T; Liebfreid, Fran; Stevens, Christine K; Westengard, James; Umbach, Paul O
2004-01-01
The challenges of providing exemplary undergraduate nursing education should not be underestimated in an era when burnout and negative mood states predictably lead to alarming rates of academic as well as career attrition. While the multi-dimensional nature of this complex issue has been extensively elucidated, few rational strategies exist to reverse a disheartening trend recognizable early in the educational process that subsequently threatens to undermine the future viability of quality healthcare. This controlled prospective crossover study examined the impact of a 6-session Recreational Music-making (RMM) protocol on burnout and mood dimensions as well as Total Mood Disturbance (TMD) in first year associate level nursing students. A total of 75 first year associate degree nursing students from Allegany College of Maryland (ACM) participated in a 6-session RMM protocol focusing on group support and stress reduction utilizing a specific group drumming protocol. Burnout and mood dimensions were assessed with the Maslach Burnout Inventory and the Profile of Mood States, respectively. Statistically significant reductions of multiple burnout and mood dimensions as well as TMD scores were noted. Potential annual cost savings for the typical associate degree nursing program ($16,800) and acute care hospital ($322,000) were projected by an independent economic analysis firm. A cost-effective 6-session RMM protocol reduces burnout and mood dimensions as well as TMD in associate degree nursing students.
Comprehensive Fractal Description of Porosity of Coal of Different Ranks
Ren, Jiangang; Zhang, Guocheng; Song, Zhimin; Liu, Gaofeng; Li, Bing
2014-01-01
We selected, as the objects of our research, lignite from the Beizao Mine, gas coal from the Caiyuan Mine, coking coal from the Xiqu Mine, and anthracite from the Guhanshan Mine. We used the mercury intrusion method and the low-temperature liquid nitrogen adsorption method to analyze the structure and shape of the coal pores and calculated the fractal dimensions of different aperture segments in the coal. The experimental results show that the fractal dimension of the aperture segment of lignite, gas coal, and coking coal with an aperture of greater than or equal to 10 nm, as well as the fractal dimension of the aperture segment of anthracite with an aperture of greater than or equal to 100 nm, can be calculated using the mercury intrusion method; the fractal dimension of the coal pore, with an aperture range between 2.03 nm and 361.14 nm, can be calculated using the liquid nitrogen adsorption method, of which the fractal dimensions bounded by apertures of 10 nm and 100 nm are different. Based on these findings, we defined and calculated the comprehensive fractal dimensions of the coal pores and achieved the unity of fractal dimensions for full apertures of coal pores, thereby facilitating overall characterization of the heterogeneity of the coal pore structure. PMID:24955407
Fractal analysis as a potential tool for surface morphology of thin films
NASA Astrophysics Data System (ADS)
Soumya, S.; Swapna, M. S.; Raj, Vimal; Mahadevan Pillai, V. P.; Sankararaman, S.
2017-12-01
Fractal geometry developed by Mandelbrot has emerged as a potential tool for analyzing complex systems in the diversified fields of science, social science, and technology. Self-similar objects having the same details in different scales are referred to as fractals and are analyzed using the mathematics of non-Euclidean geometry. The present work is an attempt to use the fractal dimension for surface characterization by Atomic Force Microscopy (AFM). Taking AFM images of zinc sulphide (ZnS) thin films prepared by the pulsed laser deposition (PLD) technique at different annealing temperatures, the effect of annealing temperature and surface roughness on fractal dimension is studied. The annealing temperature and surface roughness show a strong correlation with fractal dimension. From the set of regression equations, the surface roughness at a given annealing temperature can be calculated from the fractal dimension. The AFM images are processed using Photoshop and the fractal dimension is calculated by the box-counting method. The fractal dimension decreases from 1.986 to 1.633 while the surface roughness increases from 1.110 to 3.427, as the annealing temperature changes from 30 °C to 600 °C. The images are also analyzed by the power spectrum method to find the fractal dimension. The study reveals that the box-counting method gives better results compared to the power spectrum method.
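A hedged sketch of the box-counting estimate on a grayscale image (the binarization threshold and dyadic box schedule are illustrative choices; the paper preprocesses its AFM images in Photoshop before counting):

```python
import numpy as np

def box_counting_dimension(img, threshold=0.5):
    """Box-counting fractal dimension of a 2D image with values in [0, 1]:
    count occupied boxes at dyadic scales, then fit log N(s) vs log(1/s)."""
    binary = img > threshold
    n = 2 ** int(np.log2(min(binary.shape)))   # crop to a power-of-two square
    binary = binary[:n, :n]
    sizes, counts = [], []
    s = n
    while s >= 2:
        view = binary.reshape(n // s, s, n // s, s)
        occupied = view.any(axis=(1, 3)).sum()  # boxes containing structure
        sizes.append(s)
        counts.append(max(int(occupied), 1))
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(counts), 1)
    return slope

rng = np.random.default_rng(10)
img = rng.random((256, 256))                   # space-filling noise
print(box_counting_dimension(img))             # close to 2 for such noise
```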
Fractal Analysis of Rock Joint Profiles
NASA Astrophysics Data System (ADS)
Audy, Ondřej; Ficker, Tomáš
2017-10-01
Surface reliefs of rock joints are analyzed in geotechnics when shear strength of rocky slopes is estimated. The rock joint profiles actually are self-affine fractal curves and computations of their fractal dimensions require special methods. Many papers devoted to the fractal properties of these profiles were published in the past but only a few of those papers employed a convenient computational method that would have guaranteed a sound value of that dimension. As a consequence, anomalously low dimensions were presented. This contribution deals with two computational modifications that lead to sound fractal dimensions of the self-affine rock joint profiles. These are the modified box-counting method and the modified yard-stick method sometimes called the compass method. Both these methods are frequently applied to self-similar fractal curves but the self-affine profile curves due to their self-affine nature require modified computational procedures implemented in computer programs.
Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method
Huh, Kyung-Hoe; Baik, Jee-Seon; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo
2011-01-01
Purpose This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Materials and Methods Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. Results The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. Conclusion The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm. PMID:21977478
Automatic Black-Box Model Order Reduction using Radial Basis Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stephanson, M B; Lee, J F; White, D A
Finite element methods have long made use of model order reduction (MOR), particularly in the context of fast frequency sweeps. In this paper, we discuss a black-box MOR technique, applicable to many solution methods and not restricted only to spectral responses. We also discuss automated methods for generating a reduced order model that meets a given error tolerance. Numerical examples demonstrate the effectiveness and wide applicability of the method. With the advent of improved computing hardware and numerous fast solution techniques, the field of computational electromagnetics has progressed rapidly in terms of the size and complexity of problems that can be solved. Numerous applications, however, require the solution of a problem for many different configurations, including optimization, parameter exploration, and uncertainty quantification, where the parameters that may be changed include frequency, material properties, geometric dimensions, etc. In such cases, thousands of solutions may be needed, so solve times of even a few minutes can be burdensome. Model order reduction (MOR) may alleviate this difficulty by creating a small model that can be evaluated quickly. Many MOR techniques have been applied to electromagnetic problems over the past few decades, particularly in the context of fast frequency sweeps. Recent works have extended these methods to allow more than one parameter and to allow the parameters to represent material and geometric properties. There are still limitations with these methods, however. First, they almost always assume that the finite element method is used to solve the problem, so that the system matrix is a known function of the parameters. Second, although some authors have presented adaptive methods (e.g., [2]), the order of the model is often determined before the MOR process begins, with little insight about what order is actually needed to reach the desired accuracy. Finally, it is not clear how to efficiently extend most methods to the multiparameter case. This paper addresses the above shortcomings by developing a method that uses a black-box approach to the solution method, is adaptive, and is easily extensible to many parameters.
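The non-adaptive core of such a scheme is easy to demonstrate: sample the full solver at a few parameter points and fit a radial-basis-function surrogate that is cheap to sweep. A hedged toy follows (the "solver" is a synthetic stand-in for a finite-element code, and the paper's adaptive sampling, which adds points until an error estimate meets the tolerance, is omitted):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_solver(params):
    # stand-in for a full-wave solve over (frequency, material) parameters
    f, eps_r = params.T
    return np.sin(3 * f) / (1 + eps_r) + 0.2 * np.cos(f * eps_r)

rng = np.random.default_rng(11)
train = rng.uniform([0, 1], [np.pi, 5], size=(40, 2))   # sampled configs
surrogate = RBFInterpolator(train, expensive_solver(train))

test = rng.uniform([0, 1], [np.pi, 5], size=(1000, 2))  # a cheap sweep
err = np.abs(surrogate(test) - expensive_solver(test)).max()
print(f"max surrogate error over the sweep: {err:.3e}")
```

An adaptive variant would grow the training set greedily, adding the parameter point where a leave-one-out or residual error estimate is largest until the requested tolerance is met.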
NASA Astrophysics Data System (ADS)
Ayatollahy Tafti, Tayeb
We develop a new method for integrating information and data from different sources. We also construct a comprehensive workflow for characterizing and modeling a fracture network in unconventional reservoirs, using microseismic data. The methodology is based on a combination of several mathematical and artificial intelligence techniques, including geostatistics, fractal analysis, fuzzy logic, and neural networks. The study contributes to the scholarly knowledge base on the characterization and modeling of fractured reservoirs in several ways, including a versatile workflow with novel objective functions. Some of the characteristics of the methods are listed below: 1. The new method is an effective fracture characterization procedure that estimates different fracture properties. Unlike the existing methods, the new approach is not dependent on the location of events. It is able to integrate all multi-scaled and diverse fracture information from different methodologies. 2. It offers an improved procedure to create compressional and shear velocity models as a preamble for delineating anomalies, mapping structures of interest, and correlating velocity anomalies with fracture swarms and other reservoir properties of interest. 3. It offers an effective way to obtain the fractal dimension of microseismic events and identify the pattern complexity, connectivity, and mechanism of the created fracture network. 4. It offers an innovative method for monitoring the fracture movement in different stages of stimulation that can be used to optimize the process. 5. Our newly developed MDFN approach allows the creation of a discrete fracture network model using only microseismic data, with potential cost reduction. It also imposes fractal dimension as a constraint on other fracture modeling approaches, which increases the visual similarity between the modeled networks and the real network over the simulated volume.
Scalable direct Vlasov solver with discontinuous Galerkin method on unstructured mesh.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J.; Ostroumov, P. N.; Mustapha, B.
2010-12-01
This paper presents the development of parallel direct Vlasov solvers with the discontinuous Galerkin (DG) method for beam and plasma simulations in four dimensions. Both physical and velocity spaces are in two dimensions (2P2V) with unstructured mesh. Contrary to the standard particle-in-cell (PIC) approach for kinetic space plasma simulations, i.e., solving Vlasov-Maxwell equations, a direct method has been used in this paper. There are several benefits to solving a Vlasov equation directly, such as avoiding noise associated with a finite number of particles and the capability to capture fine structure in the plasma. The most challenging part of a direct Vlasov solver comes from higher dimensions, as the computational cost increases as N^{2d}, where d is the dimension of the physical space. Recently, due to the fast development of supercomputers, the possibility has become more realistic. Many efforts have been made to solve Vlasov equations in low dimensions before; now more interest has focused on higher dimensions. Different numerical methods have been tried so far, such as the finite difference method, Fourier spectral method, finite volume method, and spectral element method. This paper is based on our previous efforts to use the DG method. The DG method has been proven to be very successful in solving Maxwell equations, and this paper is our first effort in applying the DG method to Vlasov equations. DG has shown several advantages, such as local mass matrix, strong stability, and easy parallelization. These are particularly suitable for Vlasov equations. Domain decomposition in high dimensions has been used for parallelization; these include a highly scalable parallel two-dimensional Poisson solver. Benchmark results have been shown and simulation results will be reported.
Auerbach, Benjamin M
2011-05-01
One of the greatest limitations to the application of the revised Fully anatomical stature estimation method is the inability to measure some of the skeletal elements required in its calculation. These element dimensions cannot be obtained due to taphonomic factors, incomplete excavation, or disease processes, and result in missing data. This study examines methods of imputing these missing dimensions using observable Fully measurements from the skeleton, and the accuracy of incorporating these missing element estimations into anatomical stature reconstruction. These are further assessed against stature estimations obtained from mathematical regression formulae for the lower limb bones (femur and tibia). Two thousand seven hundred and seventeen North and South American indigenous skeletons were measured, and subsets of these with observable Fully dimensions were used to simulate missing elements and create estimation methods and equations. Comparisons were made directly between anatomically reconstructed statures and mathematically derived statures, as well as with anatomically derived statures with imputed missing dimensions. These analyses demonstrate that, while mathematical stature estimations are more accurate, anatomical statures incorporating missing dimensions are not appreciably less accurate and are more precise. The anatomical stature estimation method using imputed missing dimensions is supported. Missing element estimation, however, is limited to the vertebral column (only when lumbar vertebrae are present) and to talocalcaneal height (only when femora and tibiae are present). Crania, entire vertebral columns, and femoral or tibial lengths cannot be reliably estimated. The applicability of these methods is discussed further.
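The regression-imputation logic can be illustrated with a toy sketch; the element names, sample sizes, and coefficients below are invented for illustration and are not the study's validated Fully equations.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
# Hypothetical skeletal element heights (mm), correlated by construction.
femur = rng.normal(450, 25, n)
tibia = femur * 0.83 + rng.normal(0, 8, n)
lumbar = femur * 0.62 + rng.normal(0, 10, n)       # element that may be missing

# Fit the imputation model on skeletons where every element is observable.
model = LinearRegression().fit(np.column_stack([femur, tibia]), lumbar)

# Impute the missing lumbar dimension for a new skeleton and fold it into
# a (toy) partial anatomical sum; the real method also adds skull height,
# the remaining vertebrae, and talocalcaneal height with corrections.
new = np.array([[460.0, 390.0]])
lumbar_hat = model.predict(new)[0]
partial_sum = new[0, 0] + new[0, 1] + lumbar_hat
print(f"imputed lumbar height: {lumbar_hat:.1f} mm; partial sum: {partial_sum:.1f} mm")
```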
Higuchi Dimension of Digital Images
Ahammer, Helmut
2011-01-01
There exist several methods for calculating the fractal dimension of objects represented as 2D digital images. For example, box counting, Minkowski dilation or Fourier analysis can be employed. However, there appear to be some limitations. It is not possible to calculate only the fractal dimension of an irregular region of interest in an image or to perform the calculations in a particular direction along a line on an arbitrary angle through the image. The calculations must be made for the whole image. In this paper, a new method to overcome these limitations is proposed. 2D images are appropriately prepared in order to apply 1D signal analyses, originally developed to investigate nonlinear time series. The Higuchi dimension of these 1D signals is calculated using Higuchi's algorithm, and it is shown that both regions of interest and directional dependencies can be evaluated independently of the whole picture. A thorough validation of the proposed technique and a comparison of the new method to the Fourier dimension, a common two-dimensional method for digital images, are given. The main result is that Higuchi's algorithm allows a direction dependent as well as direction independent analysis. Actual values for the fractal dimensions are reliable and an effective treatment of regions of interest is possible. Moreover, the proposed method is not restricted to Higuchi's algorithm, as any 1D method of analysis can be applied. PMID:21931854
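For readers unfamiliar with the 1D algorithm at the core of the method, a minimal implementation of Higuchi's dimension estimate follows; applying it to an image would first require extracting row, column, or ROI signals as the paper describes. The white-noise test signal and k_max are illustrative choices.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the Higuchi fractal dimension of a 1D signal: average
    normalized curve length L(k) over k coarse-grained series, then fit
    log L(k) ~ -D log k."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_k, log_L = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):                         # series x[m], x[m+k], ...
            idx = np.arange(m, n, k)
            n_int = len(idx) - 1                   # intervals in this series
            if n_int < 1:
                continue
            # Higuchi's normalization of the summed absolute increments.
            lengths.append(np.abs(np.diff(x[idx])).sum() * (n - 1) / (n_int * k * k))
        log_k.append(np.log(k))
        log_L.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_k, log_L, 1)
    return -slope                                  # dimension is minus the slope

rng = np.random.default_rng(0)
signal = rng.standard_normal(2000)                 # white noise, D close to 2
print(f"Higuchi dimension: {higuchi_fd(signal):.2f}")
```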
Conductivity of higher dimensional holographic superconductors with nonlinear electrodynamics
NASA Astrophysics Data System (ADS)
Sheykhi, Ahmad; Hashemi Asl, Doa; Dehyadegari, Amin
2018-06-01
We investigate analytically as well as numerically the properties of s-wave holographic superconductors in d-dimensional spacetime and in the presence of logarithmic nonlinear electrodynamics. We study three aspects of this kind of superconductors. First, we obtain, by employing the analytical Sturm-Liouville method as well as the numerical shooting method, the relation between the critical temperature and the charge density, ρ, and disclose the effects of both the nonlinear parameter b and the dimension of spacetime, d, on the critical temperature Tc. We find that in each dimension, Tc/ρ^{1/(d-2)} decreases with increasing nonlinear parameter b, while it increases with increasing dimension of spacetime for a fixed value of b. Then, we calculate the condensation value and the critical exponent of the system analytically and numerically and observe that in each dimension, the dimensionless condensation gets larger with increasing nonlinear parameter b. Besides, for a fixed value of b, it increases with increasing spacetime dimension. We confirm that the results obtained from our analytical method are in agreement with the results obtained from the numerical shooting method. This fact further supports the correctness of our analytical method. Finally, we explore the holographic conductivity of this system and find that the superconducting gap increases with increasing either the nonlinear parameter or the spacetime dimension.
NASA Astrophysics Data System (ADS)
Lin, Liangjie; Wei, Zhiliang; Yang, Jian; Lin, Yanqin; Chen, Zhong
2014-11-01
The spatial encoding technique can be used to accelerate the acquisition of multi-dimensional nuclear magnetic resonance spectra. However, with this technique, we have to make trade-offs between the spectral width and the resolution in the spatial encoding dimension (F1 dimension), resulting in the difficulty of covering large spectral widths while preserving acceptable resolutions for spatial encoding spectra. In this study, a selective shifting method is proposed to overcome the aforementioned drawback. This method is capable of narrowing spectral widths and improving spectral resolutions in spatial encoding dimensions by selectively shifting certain peaks in spectra of the ultrafast version of spin echo correlated spectroscopy (UFSECSY). This method can also serve as a powerful tool to obtain high-resolution correlated spectra in inhomogeneous magnetic fields for its resistance to any inhomogeneity in the F1 dimension inherited from UFSECSY. Theoretical derivations and experiments have been carried out to demonstrate performances of the proposed method. Results show that the spectral width in spatial encoding dimension can be reduced by shortening distances between cross peaks and axial peaks with the proposed method and the expected resolution improvement can be achieved. Finally, the shifting-absent spectrum can be recovered readily by post-processing.
Scholte, Marijn; Calsbeek, Hilly; Nijhuis-van der Sanden, Maria W G; Braspenning, Jozé
2014-06-18
Assessing quality of care from the patient's perspective has changed from patient satisfaction to the more general term patient experience, as satisfaction measures turned out to be less discriminative due to high scores. The literature describes four to ten dimensions of patient experience, tailored to specific conditions or types of care. Given the administrative burden on patients, fewer dimensions and items could increase feasibility. Ten dimensions of patient experiences with physical therapy (PT) were proposed in the Netherlands in a consensus-based process with patients, physical therapists, health insurers, and policy makers. The aim of this paper is to detect the number of dimensions from data of a field study using factor analysis at item level. A web-based survey yielded data of 2,221 patients from 52 PT practices on 41 items. Principal component factor analysis at item level was used to assess the proposed distinction between the ten dimensions. Factor analysis revealed two dimensions: 'personal interaction' and 'practice organisation'. The dimension 'patient reported outcome' was artificially established. The three dimensions 'personal interaction' (14 items) (median(practice level) = 91.1; IQR = 2.4), 'practice organisation' (9 items) (median(practice level) = 88.9; IQR = 6.0) and 'outcome' (3 items) (median(practice level) = 80.6; IQR = 19.5) reduced the number of dimensions from ten to three and the number of items by more than a third. Factor analysis revealed three dimensions and achieved an item reduction of more than a third. It is a relevant step in the development process of a quality measurement tool to reduce respondent burden, increase clarity, and promote feasibility.
Gradient optimization of finite projected entangled pair states
NASA Astrophysics Data System (ADS)
Liu, Wen-Yuan; Dong, Shao-Jun; Han, Yong-Jian; Guo, Guang-Can; He, Lixin
2017-05-01
Projected entangled pair states (PEPS) methods have been proven to be powerful tools to solve strongly correlated quantum many-body problems in two dimensions. However, due to the high computational scaling with the virtual bond dimension D , in a practical application, PEPS are often limited to rather small bond dimensions, which may not be large enough for some highly entangled systems, for instance, frustrated systems. Optimization of the ground state using the imaginary time evolution method with a simple update scheme may go to a larger bond dimension. However, the accuracy of the rough approximation to the environment of the local tensors is questionable. Here, we demonstrate that by combining the imaginary time evolution method with a simple update, Monte Carlo sampling techniques and gradient optimization will offer an efficient method to calculate the PEPS ground state. By taking advantage of massive parallel computing, we can study quantum systems with larger bond dimensions up to D =10 without resorting to any symmetry. Benchmark tests of the method on the J1-J2 model give impressive accuracy compared with exact results.
Exploring multicollinearity using a random matrix theory approach.
Feher, Kristen; Whelan, James; Müller, Samuel
2012-01-01
Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but rather characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with 'low' correlation may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
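The model's central effect is easy to reproduce: project one latent signal into many noisy variables and inspect the correlation eigenspectrum. The sizes, loadings, and noise level below are arbitrary illustrative choices, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_genes, noise = 200, 50, 1.0

# One latent signal projected into many dimensions (the multicollinear
# model): every "gene" is the same 1D signal plus independent noise.
signal = rng.standard_normal(n_samples)
loadings = rng.uniform(0.5, 1.5, n_genes)
data = np.outer(signal, loadings) + noise * rng.standard_normal((n_samples, n_genes))

corr = np.corrcoef(data, rowvar=False)             # genes as variables
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print("top five eigenvalues:", np.round(eigvals[:5], 2))
# A single eigenvalue standing well above the noise bulk betrays the
# one-dimensional signal, even when individual pairwise correlations
# look unremarkable because of the added noise.
```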
Operationalizing clean development mechanism baselines: A case study of China's electrical sector
NASA Astrophysics Data System (ADS)
Steenhof, Paul A.
The global carbon market is rapidly developing as the first commitment period of the Kyoto Protocol draws closer and Parties to the Protocol with greenhouse gas (GHG) emission reduction targets seek alternative ways to reduce their emissions. The Protocol includes the Clean Development Mechanism (CDM), a tool that encourages project-based investments to be made in developing nations that will lead to an additional reduction in emissions. Due to China's economic size and rate of growth, technological characteristics, and its reliance on coal, it contains a large proportion of the global CDM potential. As China's economy modernizes, more technologies and processes are requiring electricity and demand for this energy source is accelerating rapidly. Relatively inefficient technology to generate electricity in China thereby results in the electrical sector having substantial GHG emission reduction opportunities as related to the CDM. In order to ensure the credibility of the CDM in leading to a reduction in GHG emissions, it is important that the baseline method used in the CDM approval process is scientifically sound and accessible for both others to use and for evaluation purposes. Three different methods for assessing CDM baselines and environmental additionality are investigated in the context of China's electrical sector: a method based on a historical perspective of the electrical sector (factor decomposition), a method structured upon a current perspective (operating and build margins), and a simulation of the future (dispatch analysis). Assessing future emission levels for China's electrical sector is a very challenging task given the complexity of the system, its dynamics, and that it is heavily influenced by internal and external forces, but of the different baseline methods investigated, dispatch modelling is best suited for the Chinese context as it is able to consider the important regional and temporal dimensions of its economy and its future development. For China, the most promising options for promoting sustainable development, one of the goals of the Kyoto Protocol, appear to be tied to increasing electrical end-use and generation efficiency, particularly clean coal technology for electricity generation since coal will likely continue to be a dominant primary fuel.
NASA Astrophysics Data System (ADS)
Schmidt, Burkhard; Hartmann, Carsten
2018-07-01
WavePacket is an open-source program package for numeric simulations in quantum dynamics. It can solve time-independent or time-dependent linear Schrödinger and Liouville-von Neumann equations in one or more dimensions. Coupled equations can also be treated, which allows one, e.g., to simulate molecular quantum dynamics beyond the Born-Oppenheimer approximation. Optionally accounting for the interaction with external electric fields within the semi-classical dipole approximation, WavePacket can be used to simulate experiments involving tailored light pulses in photo-induced physics or chemistry. Being highly versatile and offering visualization of quantum dynamics 'on the fly', WavePacket is well suited for teaching or research projects in atomic, molecular and optical physics as well as in physical or theoretical chemistry. Building on the previous Part I [Comp. Phys. Comm. 213, 223-234 (2017)], which dealt with closed quantum systems and discrete variable representations, the present Part II focuses on the dynamics of open quantum systems, with Lindblad operators modeling dissipation and dephasing. This part also describes the WavePacket function for optimal control of quantum dynamics, building on rapid monotonically convergent iteration methods. Furthermore, two different approaches to dimension reduction implemented in WavePacket are documented here. In the first one, a balancing transformation based on the concepts of controllability and observability Gramians is used to identify states that are neither well controllable nor well observable. Those states are either truncated or averaged out. In the other approach, the H2-error for a given reduced dimensionality is minimized by H2 optimal model reduction techniques, utilizing a bilinear iterative rational Krylov algorithm. The present work describes the MATLAB version of WavePacket 5.3.0, which is hosted and further developed at the Sourceforge platform, where extensive Wiki-documentation as well as numerous worked-out demonstration examples with animated graphics can be found.
AKSZ construction from reduction data
NASA Astrophysics Data System (ADS)
Bonechi, Francesco; Cabrera, Alejandro; Zabzine, Maxim
2012-07-01
We discuss a general procedure to encode the reduction of the target space geometry into AKSZ sigma models. This is done by considering the AKSZ construction with target the BFV model for constrained graded symplectic manifolds. We investigate the relation between this sigma model and the one with the reduced structure. We also discuss several examples in dimension two and three when the symmetries come from Lie group actions and systematically recover models already proposed in the literature.
Roland Hernandez; Jerrold E. Winandy
2005-01-01
A quantitative model is presented for evaluating the effects of incising on the bending strength and stiffness of structural dimension lumber. This model is based on the premise that bending strength and stiffness are reduced when lumber is incised, and the extent of this reduction is related to the reduction in moment of inertia of the bending members. Measurements of...
A massive Feynman integral and some reduction relations for Appell functions
NASA Astrophysics Data System (ADS)
Shpot, M. A.
2007-12-01
New explicit expressions are derived for the one-loop two-point Feynman integral with arbitrary external momentum and masses m_1^2 and m_2^2 in D dimensions. The results are given in terms of Appell functions, manifestly symmetric with respect to the masses m_i^2. Equating our expressions with previously known results in terms of Gauss hypergeometric functions yields reduction relations for the involved Appell functions that are apparently new mathematical results.
Topological electronic liquids: Electronic physics of one dimension beyond the one spatial dimension
NASA Astrophysics Data System (ADS)
Wiegmann, P. B.
1999-06-01
There is a class of electronic liquids in dimensions greater than 1 that shows all essential properties of one-dimensional electronic physics. These are topological liquids: correlated electronic systems with a spectral flow. Compressible topological electronic liquids are superfluids. In this paper we present a study of a conventional model of a topological superfluid in two spatial dimensions. This model is thought to be relevant to a doped Mott insulator. We show how the spectral flow leads to the superfluid hydrodynamics and how the orthogonality catastrophe affects off-diagonal matrix elements. We also compute the major electronic correlation functions. Among them are the spectral function, the pair wave function, and various tunneling amplitudes. To compute correlation functions we develop a method of current algebra-an extension of the bosonization technique of one spatial dimension. In order to emphasize a similarity between electronic liquids in one dimension and topological liquids in dimensions greater than 1, we first review the Fröhlich-Peierls mechanism of ideal conductivity in one dimension and then extend the physics and the methods into two spatial dimensions.
Diffusion maps for high-dimensional single-cell analysis of differentiation data.
Haghverdi, Laleh; Buettner, Florian; Theis, Fabian J
2015-09-15
Single-cell technologies have recently gained popularity in cellular differentiation studies regarding their ability to resolve potential heterogeneities in cell populations. Analyzing such high-dimensional single-cell data has its own statistical and computational challenges. Popular multivariate approaches are based on data normalization, followed by dimension reduction and clustering to identify subgroups. However, in the case of cellular differentiation, we would not expect clear clusters to be present but instead expect the cells to follow continuous branching lineages. Here, we propose the use of diffusion maps to deal with the problem of defining differentiation trajectories. We adapt this method to single-cell data by adequate choice of kernel width and inclusion of uncertainties or missing measurement values, which enables the establishment of a pseudotemporal ordering of single cells in a high-dimensional gene expression space. We expect this output to reflect cell differentiation trajectories, where the data originates from intrinsic diffusion-like dynamics. Starting from a pluripotent stage, cells move smoothly within the transcriptional landscape towards more differentiated states with some stochasticity along their path. We demonstrate the robustness of our method with respect to extrinsic noise (e.g. measurement noise) and sampling density heterogeneities on simulated toy data as well as two single-cell quantitative polymerase chain reaction datasets (i.e. mouse haematopoietic stem cells and mouse embryonic stem cells) and RNA-Seq data of human pre-implantation embryos. We show that diffusion maps perform considerably better than Principal Component Analysis and are advantageous over other techniques for non-linear dimension reduction such as t-distributed Stochastic Neighbour Embedding for preserving the global structures and pseudotemporal ordering of cells. The Matlab implementation of diffusion maps for single-cell data is available at https://www.helmholtz-muenchen.de/icb/single-cell-diffusion-map. Supplementary data are available at Bioinformatics online.
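A minimal, self-contained sketch of the basic diffusion-map construction on toy trajectory data follows; it is not the authors' implementation, which additionally handles kernel-width selection and uncertain or missing measurements, and the curve, kernel width, and sample size are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def diffusion_map(X, sigma=0.5, n_components=1):
    """Basic diffusion map: Gaussian kernel, row-stochastic normalization,
    leading nontrivial eigenvectors as diffusion coordinates."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    P = K / K.sum(axis=1, keepdims=True)           # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)[1:n_components + 1]  # skip trivial eigenvector
    return vecs.real[:, order] * vals.real[order]

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 1, 150))                # "pseudotime" along a lineage
X = np.column_stack([np.cos(3 * t), np.sin(3 * t), t]) + 0.02 * rng.standard_normal((150, 3))

psi = diffusion_map(X)
rho, _ = spearmanr(psi[:, 0], t)                   # sign of psi is arbitrary
print(f"|rank correlation| with true ordering: {abs(rho):.2f}")
```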
Detecting the subtle shape differences in hemodynamic responses at the group level
Chen, Gang; Saad, Ziad S.; Adleman, Nancy E.; Leibenluft, Ellen; Cox, Robert W.
2015-01-01
The nature of the hemodynamic response (HDR) is still not fully understood due to the multifaceted processes involved. Aside from the overall amplitude, the response may vary across cognitive states, tasks, brain regions, and subjects with respect to characteristics such as rise and fall speed, peak duration, undershoot shape, and overall duration. Here we demonstrate that the fixed-shape (FSM) or adjusted-shape (ASM) methods may fail to detect some shape subtleties (e.g., speed of rise or recovery, or undershoot). In contrast, the estimated-shape method (ESM) through multiple basis functions can provide the opportunity to identify some subtle shape differences and achieve higher statistical power at both individual and group levels. Previously, some dimension reduction approaches focused on the peak magnitude, or made inferences based on the area under the curve (AUC) or interaction, which can lead to potential misidentifications. By adopting a generic framework of multivariate modeling (MVM), we showcase a hybrid approach that is validated by simulations and real data. With the whole HDR shape integrity maintained as input at the group level, the approach allows the investigator to substantiate these more nuanced effects through the unique HDR shape features. Unlike the few analyses that were limited to main effects or two- or three-way interactions, we extend the modeling approach to an inclusive platform that is more adaptable than the conventional GLM. With multiple effect estimates from ESM for each condition, linear mixed-effects (LME) modeling should be used at the group level when there is only one group of subjects without any other explanatory variables. Under other situations, an approximate approach through dimension reduction within the MVM framework can be adopted to achieve a practical equipoise among representation, false positive control, statistical power, and modeling flexibility. The associated program 3dMVM is publicly available as part of the AFNI suite. PMID:26578853
Alternative method for variable aspect ratio vias using a vortex mask
NASA Astrophysics Data System (ADS)
Schepis, Anthony R.; Levinson, Zac; Burbine, Andrew; Smith, Bruce W.
2014-03-01
Historically IC (integrated circuit) device scaling has bridged the gap between technology nodes. Device size reduction is enabled by increased pattern density, enhancing functionality and effectively reducing cost per chip. Exemplifying this trend are aggressive reductions in memory cell sizes that have resulted in systems with diminishing area between bit/word lines. This affords an even greater challenge in the patterning of contact level features that are inherently difficult to resolve because of their relatively small area and complex aerial image. To accommodate these trends, semiconductor device design has shifted toward the implementation of elliptical contact features. This empowers designers to maximize the use of free device space, preserving contact area and effectively reducing the via dimension along a single axis. It is therefore critical to provide methods that enhance the resolving capacity of varying aspect ratio vias for implementation in electronic design systems. Vortex masks, characterized by their helically induced propagation of light and consequent dark core, afford great potential for the patterning of such features when coupled with a high resolution negative tone resist system. This study investigates the integration of a vortex mask in a 193nm immersion (193i) lithography system and qualifies its ability to augment aspect ratio through feature density using aerial image vector simulation. It was found that vortex fabricated vias provide a distinct resolution advantage over traditionally patterned contact features employing a 6% attenuated phase shift mask (APM). 1:1 features were resolvable at 110nm pitch with a 38nm critical dimension (CD) and 110nm depth of focus (DOF) at 10% exposure latitude (EL). Furthermore, iterative source-mask optimization was executed as a means to augment aspect ratio. By employing mask asymmetries and directionally biased sources, aspect ratios ranging between 1:1 and 2:1 were achievable; however, this range is ultimately dictated by the pitch employed.
Incremental online learning in high dimensions.
Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan
2005-12-01
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
Exploring dimensions of access to medical care.
Andersen, R M; McCutcheon, A; Aday, L A; Chiu, G Y; Bell, R
1983-01-01
This paper examines the dimensions of the access concept with particular attention to the extent to which more parsimonious indicators of access can be developed. This process is especially useful to health policy makers, planners and researchers in need of cost-effective social indicators of access to monitor the need for and impact of innovative health care programs. Three stages of data reduction are used in the analysis, resulting in a reduced set of key indicators of the concept. Implications for subsequent data collection and measurement of access are discussed. PMID:6841113
Gravity and the Spin-2 Planar Schrödinger Equation
NASA Astrophysics Data System (ADS)
Bergshoeff, Eric A.; Rosseel, Jan; Townsend, Paul K.
2018-04-01
A Schrödinger equation proposed for the Girvin-MacDonald-Platzman gapped spin-2 mode of fractional quantum Hall states is found from a novel nonrelativistic limit, applicable only in 2+1 dimensions, of the massive spin-2 Fierz-Pauli field equations. It is also found from a novel null reduction of the linearized Einstein field equations in 3+1 dimensions, and in this context a uniform distribution of spin-2 particles implies, via a Brinkmann-wave solution of the nonlinear Einstein equations, a confining harmonic oscillator potential for the individual particles.
Model reduction for Space Station Freedom
NASA Technical Reports Server (NTRS)
Williams, Trevor
1992-01-01
Model reduction is an important practical problem in the control of flexible spacecraft, and a considerable amount of work has been carried out on this topic. Two of the best known methods developed are modal truncation and internal balancing. Modal truncation is simple to implement but can give poor results when the structure possesses clustered natural frequencies, as often occurs in practice. Balancing avoids this problem but has the disadvantages of high computational cost, possible numerical sensitivity problems, and no physical interpretation for the resulting balanced 'modes'. The purpose of this work is to examine the performance of the subsystem balancing technique developed by the investigator when tested on a realistic flexible space structure, in this case a model of the Permanently Manned Configuration (PMC) of Space Station Freedom. This method retains the desirable properties of standard balancing while overcoming the three difficulties listed above. It achieves this by first decomposing the structural model into subsystems of highly correlated modes. Each subsystem is approximately uncorrelated from all others, so balancing them separately and then combining yields comparable results to balancing the entire structure directly. The operation count reduction obtained by the new technique is considerable: a factor of roughly r^2 if the system decomposes into r equal subsystems. Numerical accuracy is also improved significantly, as the matrices being operated on are of reduced dimension, and the modes of the reduced-order model now have a clear physical interpretation; they are, to first order, linear combinations of repeated-frequency modes.
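For reference, the standard balancing computation that the subsystem technique accelerates can be sketched directly from the Gramians. This is textbook balanced truncation on a toy stable system, not the investigator's subsystem algorithm; the system matrices and retained order are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

def balanced_truncation(A, B, C, r):
    """Balanced truncation of a stable LTI system (A, B, C): balance the
    controllability/observability Gramians, keep the r states with the
    largest Hankel singular values."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    Lc = cholesky(Wc, lower=True)                   # Wc = Lc Lc^T
    U, s, _ = svd(Lc.T @ Wo @ Lc)                   # symmetric PSD: U diag(s) U^T
    T = Lc @ U / s ** 0.25                          # balancing transformation
    Tinv = np.linalg.inv(T)
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
    return Ab[:r, :r], Bb[:r], Cb[:, :r], np.sqrt(s)  # reduced model + HSVs

rng = np.random.default_rng(4)
n = 10
N = np.diag(rng.uniform(0.9, 1.1, n - 1), 1)
A = -0.05 * np.eye(n) + N - N.T                     # stable: symmetric part -0.05*I
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=4)
print("Hankel singular values:", np.round(hsv, 4))
# States with small Hankel singular values are neither well controllable
# nor well observable, and truncating them changes the response little.
```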
Generalized Full-Information Item Bifactor Analysis
Cai, Li; Yang, Ji Seung; Hansen, Mark
2011-01-01
Full-information item bifactor analysis is an important statistical method in psychological and educational measurement. Current methods are limited to single group analysis and inflexible in the types of item response models supported. We propose a flexible multiple-group item bifactor analysis framework that supports a variety of multidimensional item response theory models for an arbitrary mixing of dichotomous, ordinal, and nominal items. The extended item bifactor model also enables the estimation of latent variable means and variances when data from more than one group are present. Generalized user-defined parameter restrictions are permitted within or across groups. We derive an efficient full-information maximum marginal likelihood estimator. Our estimation method achieves substantial computational savings by extending Gibbons and Hedeker’s (1992) bifactor dimension reduction method so that the optimization of the marginal log-likelihood only requires two-dimensional integration regardless of the dimensionality of the latent variables. We use simulation studies to demonstrate the flexibility and accuracy of the proposed methods. We apply the model to study cross-country differences, including differential item functioning, using data from a large international education survey on mathematics literacy. PMID:21534682
Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhbardeh, Alireza; Jacobs, Michael A.
2012-04-15
Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment both synthetic and clinical data. In the synthetic data, the authors demonstrated the performance of the NLDR method compared with conventional linear DR methods. The NLDR approach enabled successful segmentation of the structures, whereas, in most cases, PCA and MDS failed. The NLDR approach was able to segment different breast tissue types with a high accuracy and the embedded image of the breast MRI data demonstrated fuzzy boundaries between the different types of breast tissue, i.e., fatty, glandular, and tissue with lesions (>86%). Conclusions: The proposed hybrid NLDR methods were able to segment clinical breast data with a high accuracy and construct an embedded image that visualized the contribution of different radiological parameters.
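A toy sketch of the embedding step follows, mapping co-registered parameter images to a single embedded image with one of the named NLDR techniques (Isomap, via scikit-learn). The synthetic two-tissue data and all parameter values are stand-ins for the MRI pipeline, which additionally includes registration, inhomogeneity correction, and wavelet denoising.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(5)
h, w, n_params = 32, 32, 4          # toy stand-in for co-registered MRI parameters

# Two synthetic "tissue classes" with different multiparametric signatures.
labels = (np.arange(h)[:, None] + np.arange(w)[None, :] > h).astype(float)
stack = np.stack([labels * mu + 0.3 * rng.standard_normal((h, w))
                  for mu in (1.0, 2.0, 0.5, 1.5)], axis=-1)

# Each pixel becomes a point in parameter space; Isomap maps these
# n_params-dimensional points down to a single embedded intensity.
pixels = stack.reshape(-1, n_params)
embedded = Isomap(n_components=1, n_neighbors=10).fit_transform(pixels)
embedded_image = embedded.reshape(h, w)

separation = abs(embedded_image[labels == 1].mean() - embedded_image[labels == 0].mean())
print(f"class separation in embedded image: {separation:.2f}")
```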
Dimensional Stabilization of Wood In Use
R. M. Rowell; R. L. Youngs
1981-01-01
Many techniques have been devised to reduce the tendency of wood to change dimensions in contact with moisture. Treatments such as cross-lamination, water-resistant coatings, hygroscopicity reduction, crosslinking, and bulking are reviewed and recommendations for future research are given.
Identifying influential nodes in complex networks: A node information dimension approach
NASA Astrophysics Data System (ADS)
Bian, Tian; Deng, Yong
2018-04-01
In the field of complex networks, how to identify influential nodes is a significant issue in analyzing the structure of a network. In the existing method proposed to identify influential nodes based on the local dimension, the global structure information in complex networks is not taken into consideration. In this paper, a node information dimension is proposed by synthesizing the local dimensions at different topological distance scales. A case study of the Netscience network is used to illustrate the efficiency and practicability of the proposed method.
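For context, here is a small sketch of the local dimension that the paper builds on: fit the scaling of the number of nodes within topological distance r of a node. The graph model, radii, and plain least-squares fit are illustrative choices; the paper's contribution, synthesizing such scale-by-scale information into a node information dimension and ranking nodes by it, is not reproduced here.

```python
import networkx as nx
import numpy as np

def local_dimension(G, node):
    """Local dimension of `node`: slope of ln N(r) versus ln r, where
    N(r) counts the nodes within topological distance r."""
    dist = nx.single_source_shortest_path_length(G, node)
    r_max = max(dist.values())                     # eccentricity of the node
    rs = np.arange(1, r_max + 1)
    Nr = np.array([sum(1 for d in dist.values() if d <= r) for r in rs])
    slope, _ = np.polyfit(np.log(rs), np.log(Nr), 1)
    return slope

G = nx.barabasi_albert_graph(500, 3, seed=0)       # toy scale-free network
for node in list(G)[:5]:
    print(node, round(local_dimension(G, node), 2))
```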
Dimension from covariance matrices.
Carroll, T L; Byers, J M
2017-02-01
We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
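The eigenvalue comparison at the heart of the method can be sketched in a few lines: delay-embed a signal, compute the covariance eigenvalues, and compare the spectrum with that of a featureless noise process. The signals and embedding dimension below are illustrative, and the paper's statistical test against the Gaussian reference is not reproduced, only the qualitative contrast.

```python
import numpy as np

def embedded_cov_eigs(x, dim):
    """Eigenvalues of the covariance matrix of a delay embedding of x."""
    n = len(x) - dim + 1
    emb = np.column_stack([x[i:i + n] for i in range(dim)])
    return np.sort(np.linalg.eigvalsh(np.cov(emb, rowvar=False)))[::-1]

rng = np.random.default_rng(6)
t = np.arange(5000) * 0.1
signal = np.sin(t) + 0.5 * np.sin(0.7 * t)         # needs few embedding dimensions
noise = rng.standard_normal(5000)                   # fills every dimension

for name, x in [("signal", signal), ("noise ", noise)]:
    eigs = embedded_cov_eigs(x, dim=8)
    print(name, "normalized eigenvalues:", np.round(eigs / eigs.sum(), 3))
# For the deterministic signal the spectrum collapses after a few
# eigenvalues, unlike the flat spectrum of the noise reference; the
# paper turns this comparison into a probability that the estimate holds.
```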
Geometric mean for subspace selection.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2009-02-01
Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge classes that are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and several of its representative extensions.
Compressed learning and its applications to subcellular localization.
Zheng, Zhong-Long; Guo, Li; Jia, Jiong; Xie, Chen-Mao; Zeng, Wen-Cai; Yang, Jie
2011-09-01
One of the main challenges faced by biological applications is to predict protein subcellular localization automatically and accurately. To achieve this, a wide variety of machine learning methods have been proposed in recent years. Most of them focus on finding the optimal classification scheme, and fewer of them take into account simplifying the complexity of biological systems. Traditionally, such bio-data are analyzed by first performing feature selection before classification. Motivated by CS (compressed sensing) theory, we propose a methodology that performs compressed learning with a sparseness criterion, such that feature selection and dimension reduction are merged into one analysis. The proposed methodology decreases the complexity of the biological system while increasing protein subcellular localization accuracy. Experimental results are quite encouraging, indicating that the aforementioned sparse methods are quite promising in dealing with complicated biological problems, such as predicting the subcellular localization of Gram-negative bacterial proteins.
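The compress-then-learn idea can be sketched with a random projection placed in front of an ordinary classifier; the synthetic features, projected dimension, and use of scikit-learn below are illustrative stand-ins, not the paper's sparse coding scheme.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection

# Stand-in for protein feature vectors: 1000 features, few informative.
X, y = make_classification(n_samples=300, n_features=1000,
                           n_informative=20, random_state=0)

# Compress first, learn after: the random projection plays the role of
# the CS-style measurement, merging dimension reduction into learning.
clf = make_pipeline(GaussianRandomProjection(n_components=64, random_state=0),
                    LogisticRegression(max_iter=2000))
score = cross_val_score(clf, X, y, cv=5).mean()
print(f"5-fold accuracy in 64 compressed dimensions: {score:.2f}")
```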
Martin, Bryan D.; Wolfson, Julian; Adomavicius, Gediminas; Fan, Yingling
2017-01-01
We propose and compare combinations of several methods for classifying transportation activity data from smartphone GPS and accelerometer sensors. We have two main objectives. First, we aim to classify our data as accurately as possible. Second, we aim to reduce the dimensionality of the data as much as possible in order to reduce the computational burden of the classification. We combine dimension reduction and classification algorithms and compare them with a metric that balances accuracy and dimensionality. In doing so, we develop a classification algorithm that accurately classifies five different modes of transportation (i.e., walking, biking, car, bus and rail) while being computationally simple enough to run on a typical smartphone. Further, we use data that required no behavioral changes from the smartphone users to collect. Our best classification model uses the random forest algorithm to achieve 96.8% accuracy. PMID:28885550
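A rough sketch of the accuracy-versus-dimensionality comparison described above, using one PCA-plus-random-forest pipeline on synthetic features; the authors compare several reduction and classification combinations on real sensor windows, and the feature counts below are invented.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Stand-in for windowed GPS/accelerometer features with five travel modes.
X, y = make_classification(n_samples=1000, n_features=60, n_informative=15,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

for n_dims in (60, 10, 5):
    model = make_pipeline(PCA(n_components=n_dims),
                          RandomForestClassifier(n_estimators=100, random_state=0))
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{n_dims:>2} dimensions -> accuracy {acc:.2f}")
# Sweeping the retained dimension makes the accuracy/dimensionality
# trade-off explicit, the balance the authors optimize for phone use.
```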
Functional feature embedded space mapping of fMRI data.
Hu, Jin; Tian, Jie; Yang, Lei
2006-01-01
We have proposed a new method for fMRI data analysis called Functional Feature Embedded Space Mapping (FFESM). Our work mainly focuses on experimental designs with periodic stimuli, which can be described by a number of Fourier coefficients in the frequency domain. A nonlinear dimension reduction technique, Isomap, is applied to the high-dimensional features obtained from the frequency domain of the fMRI data for the first time. Finally, the presence of activated time series is identified by a clustering method in which the information theoretic criterion of minimum description length (MDL) is used to estimate the number of clusters. The feasibility of our algorithm is demonstrated by real human experiments. Although we focus on analyzing periodic fMRI data, the approach can be extended to analyze non-periodic fMRI data (event-related fMRI) by replacing the Fourier analysis with a wavelet analysis.
Data-based adjoint and H2 optimal control of the Ginzburg-Landau equation
NASA Astrophysics Data System (ADS)
Banks, Michael; Bodony, Daniel
2017-11-01
Equation-free, reduced-order methods of control are desirable when the governing system of interest is of very high dimension or the control is to be applied to a physical experiment. Two-phase flow optimal control problems, our target application, fit these criteria. Dynamic Mode Decomposition (DMD) is a data-driven method for model reduction that can be used to resolve the dynamics of very high dimensional systems and project the dynamics onto a smaller, more manageable basis. We evaluate the effectiveness of DMD-based forward and adjoint operator estimation when applied to H2 optimal control approaches applied to the linear and nonlinear Ginzburg-Landau equation. Perspectives on applying the data-driven adjoint to two phase flow control will be given. Office of Naval Research (ONR) as part of the Multidisciplinary University Research Initiatives (MURI) Program, under Grant Number N00014-16-1-2617.
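A compact sketch of exact DMD, the decomposition named above, applied to synthetic traveling-wave snapshots; the data, truncation rank, and diagnostic are illustrative assumptions, and the coupling to an H2 control loop is only indicated in the comments.

```python
import numpy as np

def dmd(X1, X2, r):
    """Exact DMD: fit a rank-r linear operator A with X2 ~ A X1, where
    the columns of X1 and X2 are successive snapshots."""
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]           # rank-r truncation
    A_tilde = U.conj().T @ X2 @ Vt.conj().T / s     # operator in POD coordinates
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vt.conj().T @ (W / s[:, None])     # exact DMD modes
    return eigvals, modes

# Snapshots of two toy traveling waves standing in for flow-field data.
x = np.linspace(0, 10, 200)[:, None]
t = np.arange(100) * 0.1
data = np.sin(x - 2.0 * t) + 0.5 * np.sin(0.5 * x + t)

eigvals, modes = dmd(data[:, :-1], data[:, 1:], r=4)
print("DMD eigenvalue magnitudes:", np.round(np.abs(eigvals), 3))
# |lambda| = 1 indicates purely oscillatory dynamics; the same low-rank
# operator (and its transpose as an adjoint surrogate) can replace the
# expensive forward/adjoint solves inside an H2 optimal control loop.
```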
Quantitative imaging of aggregated emulsions.
Penfold, Robert; Watson, Andrew D; Mackie, Alan R; Hibberd, David J
2006-02-28
Noise reduction, restoration, and segmentation methods are developed for the quantitative structural analysis in three dimensions of aggregated oil-in-water emulsion systems imaged by fluorescence confocal laser scanning microscopy. Mindful of typical industrial formulations, the methods are demonstrated for concentrated (30% volume fraction) and polydisperse emulsions. Following a regularized deconvolution step using an analytic optical transfer function and appropriate binary thresholding, novel application of the Euclidean distance map provides effective discrimination of closely clustered emulsion droplets with size variation over at least 1 order of magnitude. The a priori assumption of spherical nonintersecting objects provides crucial information to combat the ill-posed inverse problem presented by locating individual particles. Position coordinates and size estimates are recovered with sufficient precision to permit quantitative study of static geometrical features. In particular, aggregate morphology is characterized by a novel void distribution measure based on the generalized Apollonius problem. This is also compared with conventional Voronoi/Delauney analysis.
NASA Astrophysics Data System (ADS)
Fragkoulis, Alexandros; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2015-03-01
We propose a method for the fair and efficient allocation of wireless resources over a cognitive radio network to transmit multiple scalable video streams to multiple users. The method exploits the dynamic architecture of the Scalable Video Coding extension of the H.264 standard, along with the diversity that OFDMA networks provide. We use a game-theoretic Nash Bargaining Solution (NBS) framework to ensure that each user receives the minimum video quality requirements, while maintaining fairness over the cognitive radio system. An optimization problem is formulated, where the objective is the maximization of the Nash product while minimizing the waste of resources. The problem is solved by using a Swarm Intelligence optimizer, namely Particle Swarm Optimization. Due to the high dimensionality of the problem, we also introduce a dimension-reduction technique. Our experimental results demonstrate the fairness imposed by the employed NBS framework.
Mulhern, Brendan; Shah, Koonal; Janssen, Mathieu F Bas; Longworth, Louise; Ibbotson, Rachel
2016-01-01
Health states defined by multiattribute instruments such as the EuroQol five-dimensional questionnaire with five response levels (EQ-5D-5L) can be valued using time trade-off (TTO) or discrete choice experiment (DCE) methods. A key feature of the tasks is the order in which the health state dimensions are presented. Respondents may use various heuristics to complete the tasks, and therefore the order of the dimensions may impact on the importance assigned to particular states. To assess the impact of different EQ-5D-5L dimension orders on health state values. Preferences for EQ-5D-5L health states were elicited from a broadly representative sample of members of the UK general public. Respondents valued EQ-5D-5L health states using TTO and DCE methods across one of three dimension orderings via face-to-face computer-assisted personal interviews. Differences in mean values and the size of the health dimension coefficients across the arms were compared using difference testing and regression analyses. Descriptive analysis suggested some differences between the mean TTO health state values across the different dimension orderings, but these were not systematic. Regression analysis suggested that the magnitude of the dimension coefficients differs across the different dimension orderings (for both TTO and DCE), but there was no clear pattern. There is some evidence that the order in which the dimensions are presented impacts on the coefficients, which may impact on the health state values provided. The order of dimensions is a key consideration in the design of health state valuation studies.
Epistemic uncertainty propagation in energy flows between structural vibrating systems
NASA Astrophysics Data System (ADS)
Xu, Menghui; Du, Xiaoping; Qiu, Zhiping; Wang, Chong
2016-03-01
A dimension-wise method for predicting fuzzy energy flows between structural vibrating systems coupled by joints with epistemic uncertainties is established. Based on the Legendre polynomial approximation at α=0, both the minimum and maximum point vectors of the energy flow of interest are calculated dimension by dimension within the space spanned by the interval parameters, which are determined by the fuzzy parameters at α=0, and the resulting interval bounds are used to assemble the fuzzy energy flows of interest. Besides the proposed method, the vertex method as well as two current methods are also applied. Comparisons among the results of the different methods are made in two numerical examples, and the accuracy of all methods is simultaneously verified by Monte Carlo simulation.
[Lithology feature extraction of CASI hyperspectral data based on fractal signal algorithm].
Tang, Chao; Chen, Jian-Ping; Cui, Jing; Wen, Bo-Tao
2014-05-01
Hyperspectral data are characterized by the combination of image and spectral information and by large data volume, so dimension reduction is a main research direction. Band selection and feature extraction are the primary methods used for this objective. In the present article, the authors tested methods for lithology feature extraction from hyperspectral data. Based on the self-similarity of hyperspectral data, the authors explored the application of a fractal algorithm to lithology feature extraction from CASI hyperspectral data. The "carpet method" was corrected and then applied to calculate the fractal value of every pixel in the hyperspectral data. The results show that fractal information highlights the exposed bedrock lithology better than the original hyperspectral data. The fractal signal and characteristic scale are influenced by the spectral curve shape, the initial scale selection, and the iteration step. At present, research on the fractal signal of spectral curves is rare, implying the necessity of further quantitative analysis and investigation of its physical implications.
Basak, Subhash C; Majumdar, Subhabrata
2015-01-01
Variation in high-dimensional data is often caused by a few latent factors, and hence dimension reduction or variable selection techniques are often useful in gathering useful information from the data. In this paper we consider two such recent methods: interrelated two-way clustering and envelope models. We couple these methods with traditional statistical procedures like ridge regression and linear discriminant analysis, and apply them to two data sets which have more predictors than samples (i.e., the n < p scenario) and several types of molecular descriptors. One of these datasets consists of a congeneric group of amines, while the other contains a much more diverse collection of compounds. The difference in prediction results between these two datasets for both methods supports the hypothesis that for a congeneric set of compounds, descriptors of a certain type are enough to provide good QSAR models, but as the data set grows more diverse, including a variety of descriptors can improve model quality considerably.
NASA Astrophysics Data System (ADS)
Ehler, Martin; Rajapakse, Vinodh; Zeeberg, Barry; Brooks, Brian; Brown, Jacob; Czaja, Wojciech; Bonner, Robert F.
The gene networks underlying closure of the optic fissure during vertebrate eye development are poorly understood. We used a novel clustering method based on Laplacian Eigenmaps, a nonlinear dimension reduction method, to analyze microarray data from laser capture microdissected (LCM) cells at the site and developmental stages (days 10.5 to 12.5) of optic fissure closure. Our new method provided greater biological specificity than classical clustering algorithms in terms of identifying more biological processes and functions related to eye development as defined by Gene Ontology at lower false discovery rates. This new methodology builds on the advantages of LCM to isolate pure phenotypic populations within complex tissues and allows improved ability to identify critical gene products expressed at lower copy number. The combination of LCM of embryonic organs, gene expression microarrays, and extracting spatial and temporal co-variations appear to be a powerful approach to understanding the gene regulatory networks that specify mammalian organogenesis.
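A small sketch in the spirit of the approach: embed expression profiles with Laplacian Eigenmaps (scikit-learn's SpectralEmbedding) and cluster in the embedded coordinates rather than in the raw space. The synthetic two-program data and all parameters are stand-ins; the authors' specific clustering method and Gene Ontology evaluation are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(7)
# Stand-in for LCM microarray profiles: two gene "programs" that vary
# along nonlinear trajectories, separated by an expression offset.
t1, t2 = rng.uniform(0, 1, 100), rng.uniform(0, 1, 100)
prog1 = np.column_stack([np.cos(2 * t1), np.sin(2 * t1), t1])
prog2 = np.column_stack([np.cos(2 * t2), np.sin(2 * t2), t2]) + np.array([2.0, 0.0, 0.0])
genes = np.vstack([prog1, prog2]) + 0.05 * rng.standard_normal((200, 3))

# Laplacian Eigenmaps embedding, then clustering in the low dimension.
coords = SpectralEmbedding(n_components=2, affinity='rbf',
                           random_state=0).fit_transform(genes)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

truth = np.repeat([0, 1], 100)                      # cluster labels are arbitrary
agreement = max((clusters == truth).mean(), (clusters != truth).mean())
print(f"cluster/program agreement: {agreement:.2f}")
```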
Improved dense trajectories for action recognition based on random projection and Fisher vectors
NASA Astrophysics Data System (ADS)
Ai, Shihui; Lu, Tongwei; Xiong, Yudian
2018-03-01
As an important application of intelligent monitoring systems, action recognition in video has become an important research area of computer vision. In order to improve the accuracy of action recognition in video with improved dense trajectories, an advanced encoding method is introduced that combines Fisher vectors with random projection. The method reduces the dimensionality of the trajectory features by projecting the high-dimensional trajectory descriptors into a low-dimensional subspace after defining and analyzing a Gaussian mixture model, and a GMM-FV hybrid model is introduced to encode the trajectory feature vectors and reduce their dimension. The computational complexity is reduced by the random projection, which compresses the Fisher coding vector. Finally, a linear SVM is used as the classifier to predict labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with some existing algorithms, the results showed that the method not only reduces the computational complexity but also improves the accuracy of action recognition.
Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data
Ching, Travers; Zhu, Xun
2018-01-01
Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high-throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox proportional hazards regression (with LASSO, ridge, and minimax concave penalty), random survival forests and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer nodes provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high-throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet. PMID:29634719
16 CFR 1512.19 - Instructions and labeling.
Code of Federal Regulations, 2012 CFR
2012-01-01
... assembly and adjustment, (2) a drawing illustrating the minimum leg-length dimension of a rider and a method of measurement of this dimension. (c) The minimum leg-length dimension shall be readily...
Singularities at the contact point of two kissing Neumann balls
NASA Astrophysics Data System (ADS)
Nazarov, Sergey A.; Taskinen, Jari
2018-02-01
We investigate eigenfunctions of the Neumann Laplacian in a bounded domain Ω ⊂ ℝ^d, where a cuspidal singularity is caused by a cavity consisting of two touching balls, or discs in the planar case. We prove that the eigenfunctions, with all of their derivatives, are bounded in the closure of Ω if the dimension d equals 2, but in dimension d ≥ 3 their gradients have a strong singularity O(|x − O|^{−α}), α ∈ (0, 2 − √2], at the point of tangency O. Our study is based on dimension reduction and other asymptotic procedures, as well as the Kondratiev theory applied to the limit differential equation in the punctured hyperplane ℝ^{d−1} ∖ O. We also discuss other shapes producing thinning gaps between touching cavities.
Chen, Dong; Eisley, Noel A.; Steinmacher-Burow, Burkhard; Heidelberger, Philip
2013-01-29
A computer implemented method and a system for routing data packets in a multi-dimensional computer network. The method comprises routing a data packet among nodes along one dimension towards a root node, each node having input and output communication links and the root node having no outgoing uplinks; determining at each node whether the data packet has reached a predefined coordinate for the dimension or an edge of the subrectangle for the dimension; and, if so, determining whether the data packet has reached the root node and, if it has not, routing the data packet among nodes along another dimension towards the root node.
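The routing rule described, finish one dimension before turning to the next, is essentially dimension-ordered routing. A minimal sketch follows, with node coordinates as plain tuples; the link structure, subrectangle edges, and uplink bookkeeping of the patented system are omitted.

```python
def route_to_root(node, root):
    """Route a packet one hop at a time, finishing each dimension before
    moving to the next, until the root coordinates are reached.
    `node` and `root` are coordinate tuples in a multi-dimensional mesh."""
    path = [tuple(node)]
    current = list(node)
    for dim in range(len(root)):             # handle one dimension at a time
        while current[dim] != root[dim]:     # advance toward the root coordinate
            current[dim] += 1 if root[dim] > current[dim] else -1
            path.append(tuple(current))
    return path

# Example: packet at (2, 5, 1) in a 3-D mesh routed to root (0, 0, 0)
print(route_to_root((2, 5, 1), (0, 0, 0)))
```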
Positive and negative dimensions of weight control motivation.
Stotland, S; Larocque, M; Sadikaj, G
2012-01-01
This study examined weight control motivation among patients (N=5460 females and 547 males) who sought weight loss treatment with family physicians. An eight-item measure assessed the frequency of thoughts and feelings related to weight control "outcome" (e.g. expected physical and psychological benefits) and "process" (e.g. resentment and doubt). Factor analysis supported the existence of two factors, labeled Positive and Negative motivation. Positive motivation was high (average frequency of thoughts about benefits was 'every day') and stable throughout treatment, while Negative motivation declined rapidly and then stabilized. The determinants of changes in the Positive and Negative dimensions during treatment were examined within 3 time frames: first month, months 2-6, and 6-12. Maintenance of high scores on Positive motivation was associated with higher BMI and more disturbed eating habits. Early reductions in Negative motivation were greater for those starting treatment with higher weight and more disturbed eating habits, but less depression and stress, while later reductions in Negative motivation were predicted by improvements in eating habits, weight, stress and perfectionism. Clinicians treating obesity should be sensitive to fluctuations in both motivational dimensions, as they are likely to play a central role in determining long-term behavior and weight change. Copyright © 2011 Elsevier Ltd. All rights reserved.
Zhang, Peng; Hou, Xiuli; Mi, Jianli; He, Yanqiong; Lin, Lin; Jiang, Qing; Dong, Mingdong
2014-09-07
For the goal of practical industrial development of fuel cells, inexpensive, sustainable, and highly efficient electrocatalysts for oxygen reduction reactions (ORR) are highly desirable alternatives to platinum (Pt) and other rare metals. In this work, based on density functional theory, silicon (Si)-doped carbon nanotubes (CNTs) and graphene as metal-free, low cost, and high-performance electrocatalysts for ORR are studied systematically. It is found that the curvature effect plays an important role in the adsorption and reduction of oxygen. The adsorption of O2 becomes weaker as the curvature varies from positive values (outside CNTs) to negative values (inside CNTs). The free energy change of the rate-determining step of ORR on the concave inner surface of Si-doped CNTs is smaller than that on the counterpart of Si-doped graphene, while that on the convex outer surface of Si-doped CNTs is larger than that on Si-doped graphene. Uncovering this new ORR mechanism on silicon-doped carbon electrodes is significant as the same principle could be applied to the development of various other metal-free efficient ORR catalysts for fuel cell applications.
Content Abstract Classification Using Naive Bayes
NASA Astrophysics Data System (ADS)
Latif, Syukriyanto; Suwardoyo, Untung; Aldrin Wihelmus Sanadi, Edwin
2018-03-01
This study aims to classify abstracts based on the most frequently used words in abstracts from English-language journals. The research uses text mining, which extracts text data to find information in a set of documents. 120 abstracts were downloaded from www.computer.org and grouped into three categories: DM (Data Mining), ITS (Intelligent Transport System) and MM (Multimedia). The system was built using the naive Bayes algorithm to classify the abstracts, with a feature selection process using term weighting to assign a weight to each word. A dimension reduction technique removes words that rarely appear in the documents, with reduction parameters tested from 10% to 90% of the 5,344 words. The performance of the classification system was tested using a confusion matrix over training and test data. The results showed that the best classification was obtained with 75% of the total data for training and 25% for testing. Accuracy rates for the DM, ITS and MM categories were 100%, 100% and 86%, respectively, with a dimension reduction parameter of 30% and a learning rate between 0.1 and 0.5.
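A minimal version of the described pipeline, term weighting followed by a naive Bayes classifier and a confusion-matrix evaluation, can be sketched with scikit-learn. The six toy abstracts and the TF-IDF weighting below are illustrative assumptions standing in for the 120 downloaded abstracts and the paper's exact term-weighting and dimension reduction scheme.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

abstracts = [
    "mining frequent patterns in large data sets",
    "vehicle detection for intelligent transport systems",
    "video and audio streaming in multimedia networks",
    "classification and clustering of data records",
    "traffic flow prediction on road networks",
    "image and sound compression for multimedia",
]
labels = ["DM", "ITS", "MM", "DM", "ITS", "MM"]

# Term weighting (TF-IDF); raising min_df would drop rarely occurring words,
# mirroring the paper's dimension reduction step.
X = TfidfVectorizer(min_df=1).fit_transform(abstracts)

# 75% / 25% split, as in the best-performing configuration reported above.
Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = MultinomialNB().fit(Xtr, ytr)
pred = clf.predict(Xte)
print(accuracy_score(yte, pred))
print(confusion_matrix(yte, pred))
```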
DOT National Transportation Integrated Search
1999-07-01
This document presents human factors guidelines for designers, owners, operators, and planners involved in the development and operation of traffic management centers. Dimensions of the work environment affecting operator and system performance are ad...
Environmental barriers and social participation in individuals with spinal cord injury.
Tsai, I-Hsuan; Graves, Daniel E; Chan, Wenyaw; Darkoh, Charles; Lee, Meei-Shyuan; Pompeii, Lisa A
2017-02-01
The study aimed to examine the relationship between environmental barriers and social participation among individuals with spinal cord injury (SCI). Individuals admitted to regional centers of the Model Spinal Cord Injury System in the United States due to traumatic SCI were interviewed and included in the National Spinal Cord Injury Database. This cross-sectional study applied a secondary analysis with a mixed effect model to the data from 3,162 individuals interviewed from 2000 through 2005. Five dimensions of environmental barriers were estimated using the Craig Hospital Inventory of Environmental Factors-Short Form (CHIEF-SF). Social participation was measured with the Craig Handicap Assessment and Reporting Technique-Short Form (CHART-SF) and employment status. Subscales of environmental barriers were negatively associated with the social participation measures. Each 1-point increase in CHIEF-SF total score (indicating greater environmental barriers) was associated with a 0.82-point reduction in CHART-SF total score (95% CI: -1.07, -0.57) (decreased social participation) and a 4% reduction in the odds of being employed. Among the five CHIEF-SF dimensions, assistance barriers exhibited the strongest negative association with the CHART-SF social participation score, while the work/school dimension demonstrated the weakest association with CHART-SF. Environmental barriers are negatively associated with social participation in the SCI population. Working toward eliminating environmental barriers, especially assistance/service barriers, may help enhance social participation for people with SCI. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Kakagia, Despoina D; Kazakos, Konstantinos J; Xarchas, Konstantinos C; Karanikas, Michael; Georgiadis, George S; Tripsiannis, Gregory; Manolas, Constantinos
2007-01-01
This study tests the hypothesis that the addition of a protease-modulating matrix enhances the efficacy of autologous growth factors in diabetic ulcers. Fifty-one patients with chronic diabetic foot ulcers were managed as outpatients at the Democritus University Hospital of Alexandroupolis and followed up for 8 weeks. All target ulcers were ≥ 2.5 cm in any one dimension and had been previously treated only with moist gauze. Patients were randomly allocated to three groups of 17 patients each: Group A was treated only with the oxidized regenerated cellulose/collagen biomaterial (Promogran, Johnson & Johnson, New Brunswick, NJ), Group B was treated only with autologous growth factors delivered by the Gravitational Platelet Separation System (GPS, Biomet), and Group C was managed with a combination of both. All ulcers were digitally photographed at initiation of the study and then at each change of dressings once weekly. Computerized planimetry (Texas Health Science Center ImageTool, Version 3.0) was used to assess ulcer dimensions, which were analyzed for homogeneity and significance using the Statistical Package for Social Sciences, Version 13.0. Post hoc analysis revealed a significantly greater reduction of all three dimensions of the ulcers in Group C compared to Groups A and B (all P < .001). Although the reduction of ulcer dimensions was greater in Group A than in Group B, these differences did not reach statistical significance. It is concluded that protease-modulating dressings act synergistically with autologous growth factors and enhance their efficacy in diabetic foot ulcers.
Load-bearing capacity of all-ceramic posterior inlay-retained fixed dental prostheses.
Puschmann, Djamila; Wolfart, Stefan; Ludwig, Klaus; Kern, Matthias
2009-06-01
The purpose of this in vitro study was to compare the quasi-static load-bearing capacity of all-ceramic resin-bonded three-unit inlay-retained fixed dental prostheses (IRFDPs) made from computer-aided design/computer-aided manufacturing (CAD/CAM) yttria-stabilized tetragonal zirconia polycrystal (Y-TZP) frameworks with two different connector dimensions, with and without fatigue loading. Twelve IRFDPs each were made with connector dimensions of 3 × 3 mm² (width × height) (control group) and 3 × 2 mm² (test group). The IRFDPs were adhesively cemented on identical metal models using composite resin cement. Subgroups of six specimens each were fatigued with a maximum of 1,200,000 loading cycles in a chewing simulator with a weight load of 25 kg and a load frequency of 1.5 Hz. The load-bearing capacity was tested in a universal testing machine for IRFDPs without fatigue loading and for IRFDPs that had not already fractured during fatigue loading. During fatigue testing, one IRFDP (17%) of the test group failed. Under both loading conditions, IRFDPs of the control group exhibited statistically significantly higher load-bearing capacities than the test group. Fatigue loading reduced the load-bearing capacity in both groups. Considering the maximum chewing forces in the molar region, it seems possible to use zirconia ceramic as a core material for IRFDPs with a minimum connector dimension of 9 mm². A further reduction of the connector dimensions to 6 mm² results in a significant reduction of the load-bearing capacity.
A spectrum fractal feature classification algorithm for agriculture crops with hyper spectrum image
NASA Astrophysics Data System (ADS)
Su, Junying
2011-11-01
A fractal dimension feature analysis method in the spectral domain is proposed for agricultural crop classification with hyperspectral images. Firstly, a fractal dimension calculation algorithm in the spectral domain is presented, together with a fast fractal dimension calculation algorithm using the step measurement method. Secondly, the hyperspectral image classification algorithm and flowchart based on fractal dimension feature analysis in the spectral domain are presented. Finally, experimental results are reported for agricultural crop classification on the FCL1 hyperspectral image set with the proposed method and SAM (spectral angle mapper). The experiments show that the proposed method obtains better classification results than traditional SAM feature analysis, since it makes full use of the spectral information in the hyperspectral image to realize precision agricultural crop classification.
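The step measurement (divider) idea behind the fast fractal dimension calculation can be sketched directly: measure the length of the spectral curve at several step sizes and fit the power law L(s) ~ s^(1−D). The code below is a minimal illustration on a toy reflectance curve, not the paper's algorithm; the step sizes and the synthetic spectrum are assumptions.

```python
import numpy as np

def curve_length(y, step):
    """Length of the piecewise-linear spectral curve sampled every `step` bands."""
    idx = np.arange(0, len(y), step)
    dy = np.diff(y[idx])
    dx = np.diff(idx).astype(float)
    return np.sum(np.sqrt(dx**2 + dy**2))

def spectral_fractal_dimension(spectrum, steps=(1, 2, 4, 8, 16)):
    """Step-measurement estimate: L(s) ~ s**(1-D), so D = 1 - slope of
    log L versus log s."""
    lengths = [curve_length(spectrum, s) for s in steps]
    slope, _ = np.polyfit(np.log(steps), np.log(lengths), 1)
    return 1.0 - slope

rng = np.random.default_rng(1)
spectrum = np.cumsum(rng.normal(size=256))   # toy rough reflectance curve
print(spectral_fractal_dimension(spectrum))
```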
How to Construct a Mixed Methods Research Design.
Schoonenboom, Judith; Johnson, R Burke
2017-01-01
This article provides researchers with knowledge of how to design a high quality mixed methods research study. To design a mixed study, researchers must understand and carefully consider each of the dimensions of mixed methods design, and always keep an eye on the issue of validity. We explain the seven major design dimensions: purpose, theoretical drive, timing (simultaneity and dependency), point of integration, typological versus interactive design approaches, planned versus emergent design, and design complexity. There also are multiple secondary dimensions that need to be considered during the design process. We explain ten secondary dimensions of design to be considered for each research study. We also provide two case studies showing how the mixed designs were constructed.
Social dimensions of science-humanitarian collaboration: lessons from Padang, Sumatra, Indonesia.
Shannon, Rachel; Hope, Max; McCloskey, John; Crowley, Dominic; Crichton, Peter
2014-07-01
This paper contains a critical exploration of the social dimensions of the science-humanitarian relationship. Drawing on literature on the social role of science and on the social dimensions of humanitarian practice, it analyses a science-humanitarian partnership for disaster risk reduction (DRR) in Padang, Sumatra, Indonesia, an area threatened by tsunamigenic earthquakes. The paper draws on findings from case study research that was conducted between 2010 and 2011. The case study illustrates the social processes that enabled and hindered collaboration between the two spheres, including the informal partnership of local people and scientists that led to the co-production of earthquake and tsunami DRR and limited organisational capacity and support in relation to knowledge exchange. The paper reflects on the implications of these findings for science-humanitarian partnering in general, and it assesses the value of using a social dimensions approach to understand scientific and humanitarian dialogue. © 2014 The Author(s). Disasters © Overseas Development Institute, 2014.
The classification of two-loop integrand basis in pure four-dimension
NASA Astrophysics Data System (ADS)
Feng, Bo; Huang, Rijun
2013-02-01
In this paper, we have made an attempt to classify the integrand bases of all two-loop diagrams in pure four-dimensional space-time. The first step of our classification is to determine all different topologies of two-loop diagrams, i.e., the structures of the denominators. The second step is to determine the set of independent numerators for each topology using the Gröbner basis method. For the second step, the varieties defined by putting all propagators on shell play an important role. We discuss the structures of these varieties and how they split into various irreducible branches under specific kinematic configurations of the external momenta. The structures of the varieties are crucial for determining the coefficients of the integrand basis in reductions, both numerical and analytical.
Fractal analysis of bone structure with applications to osteoporosis and microgravity effects
NASA Astrophysics Data System (ADS)
Acharya, Raj S.; LeBlanc, Adrian; Shackelford, Linda; Swarnakar, Vivek; Krishnamurthy, Ram; Hausman, E.; Lin, Chin-Shoou
1995-05-01
We characterize the trabecular structure with the aid of fractal dimension. We use alternating sequential filters (ASF) to generate a nonlinear pyramid for fractal dimension computations. We do not make any assumptions of the statistical distributions of the underlying fractal bone structure. The only assumption of our scheme is the rudimentary definition of self-similarity. This allows us the freedom of not being constrained by statistical estimation schemes. With mathematical simulations, we have shown that the ASF methods outperform other existing methods for fractal dimension estimation. We have shown that the fractal dimension remains the same when computed with both the x-ray images and the MRI images of the patella. We have shown that the fractal dimension of osteoporotic subjects is lower than that of the normal subjects. In animal models, we have shown that the fractal dimension of osteoporotic rats was lower than that of the normal rats. In a 17 week bedrest study, we have shown that the subject's prebedrest fractal dimension is higher than that of the postbedrest fractal dimension.
Fractal analysis of bone structure with applications to osteoporosis and microgravity effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acharya, R.S.; Swarnarkar, V.; Krishnamurthy, R.
1995-12-31
The authors characterize the trabecular structure with the aid of fractal dimension. The authors use alternating sequential filters (ASF) to generate a nonlinear pyramid for fractal dimension computations. The authors do not make any assumptions of the statistical distributions of the underlying fractal bone structure. The only assumption of the scheme is the rudimentary definition of self-similarity. This allows them the freedom of not being constrained by statistical estimation schemes. With mathematical simulations, the authors have shown that the ASF methods outperform other existing methods for fractal dimension estimation. They have shown that the fractal dimension remains the same when computed with both the X-ray images and the MRI images of the patella. They have shown that the fractal dimension of osteoporotic subjects is lower than that of the normal subjects. In animal models, the authors have shown that the fractal dimension of osteoporotic rats was lower than that of the normal rats. In a 17-week bedrest study, they have shown that the subject's prebedrest fractal dimension is higher than the postbedrest fractal dimension.
Ghosh, Tonmoy; Fattah, Shaikh Anowarul; Wahid, Khan A
2018-01-01
Wireless capsule endoscopy (WCE) is the most advanced technology to visualize the whole gastrointestinal (GI) tract in a non-invasive way. Its major disadvantage is the long reviewing time, which is laborious because continuous manual intervention is necessary. In order to reduce the burden on the clinician, in this paper an automatic bleeding detection method for WCE video is proposed based on a color histogram of block statistics, namely CHOBS. A single pixel in a WCE image may be distorted due to the capsule motion in the GI tract. Instead of considering individual pixel values, a block surrounding each pixel is chosen for extracting local statistical features. By combining the local block features of the three color planes of RGB color space, an index value is defined. A color histogram, extracted from those index values, provides a distinguishable color texture feature. A feature reduction technique utilizing the color histogram pattern and principal component analysis is proposed, which can drastically reduce the feature dimension. For bleeding zone detection, blocks are classified using the extracted local features, which adds no computational burden for feature extraction. From extensive experimentation on several WCE videos and 2300 images collected from a publicly available database, very satisfactory bleeding frame and zone detection performance is achieved in comparison with some existing methods. For bleeding frame detection, the accuracy, sensitivity, and specificity obtained by the proposed method are 97.85%, 99.47%, and 99.15%, respectively, and for bleeding zone detection a precision of 95.75% is achieved. The proposed method offers not only a low feature dimension but also highly satisfactory bleeding detection performance, and can effectively detect bleeding frames and zones in continuous WCE video data.
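A rough sketch of the block-statistics color histogram idea follows: each RGB plane is smoothed with a block mean (a local statistic robust to single distorted pixels), the three planes are quantized into one index per pixel, and the image is summarized by the index histogram, which PCA then compresses. The block size, quantization levels, and random frames are illustrative assumptions; the paper's exact CHOBS feature and classifier details differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA

def chobs_like_features(img, block=3):
    """Block-mean each RGB plane, quantize the three planes into one index
    per pixel (4 levels per plane -> 64 indices), and histogram the indices."""
    planes = [uniform_filter(img[..., c].astype(float), size=block) for c in range(3)]
    q = [np.clip((p / 256.0 * 4).astype(int), 0, 3) for p in planes]
    index = q[0] * 16 + q[1] * 4 + q[2]
    hist, _ = np.histogram(index, bins=64, range=(0, 64), density=True)
    return hist

# Feature matrix over many frames, then PCA to shrink the dimension further,
# loosely mirroring the histogram-pattern-plus-PCA reduction described above.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(50, 64, 64, 3))
F = np.array([chobs_like_features(f) for f in frames])
F_reduced = PCA(n_components=10).fit_transform(F)
print(F_reduced.shape)   # (50, 10)
```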
Sorensen, Matthew; Harmes, David C; Stoll, Dwight R; Staples, Gregory O; Fekete, Szabolcs; Guillarme, Davy; Beck, Alain
2016-10-01
As research, development, and manufacturing of biosimilar protein therapeutics proliferates, there is great interest in the continued development of a portfolio of complementary analytical methods that can be used to efficiently and effectively characterize biosimilar candidate materials relative to the respective reference (i.e., originator) molecule. Liquid phase separation techniques such as liquid chromatography and capillary electrophoresis are powerful tools that can provide both qualitative and quantitative information about similarities and differences between reference and biosimilar materials, especially when coupled with mass spectrometry. However, the inherent complexity of these protein materials challenges even the most modern one-dimensional (1D) separation methods. Two-dimensional (2D) separations present a number of potential advantages over 1D methods, including increased peak capacity, 2D peak patterns that can facilitate unknown identification, and improvement in the compatibility of some separation methods with mass spectrometry. In this study, we demonstrate the use of comprehensive 2D-LC separations involving cation-exchange (CEX) and reversed-phase (RP) separations in the first and second dimensions to compare three reference/biosimilar pairs of monoclonal antibodies (cetuximab, trastuzumab and infliximab) that cover a range of similarity/dissimilarity in a middle-up approach. The second-dimension RP separations are coupled to time-of-flight mass spectrometry, which enables direct identification of features in the chromatograms obtained from mAbs digested with the IdeS enzyme, or digested with IdeS and then reduced with dithiothreitol. As many as 23 chemically unique mAb fragments were detected in a single sample. Our results demonstrate that these rich datasets enable facile assessment of the degree of similarity between reference and biosimilar materials.
Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan
2015-10-21
The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and thus should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion, and the high-dimensional data are then visualized in a 2-dimensional space. Tests on UCI datasets show that FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large-power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347
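The core of the approach, score features, keep a subset, then run t-SNE, can be sketched as follows. The ANOVA F statistic stands in for the paper's feature subset score criterion, and scikit-learn's digits data stands in for the diesel engine signals; both are assumptions for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.feature_selection import f_classif
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

# Stand-in for the feature subset score: keep features that score well on a
# simple relevance criterion before running t-SNE on the reduced set.
scores, _ = f_classif(X, y)
keep = np.argsort(np.nan_to_num(scores))[-32:]   # best-scoring subset

emb = TSNE(n_components=2, random_state=0).fit_transform(X[:, keep])
plt.scatter(emb[:, 0], emb[:, 1], c=y, s=5)
plt.show()
```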
Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao
2012-01-01
Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimension of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case, the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case, the results showed that sufficient parallel channels should be reserved in the model to avoid pattern-space crowding. We concluded that 6-8 channels of the model, with a principal component feature vector capturing at least 90% cumulative variance, are adequate for a classification task of 3-5 pattern classes, considering the trade-off between time consumption and classification rate.
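Selecting the number of principal components by a cumulative-variance threshold, as the conclusion recommends, is short in practice. A sketch with placeholder sensor-array data follows; the 90% threshold comes from the abstract, while the data shape is invented.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 24))      # e.g. 60 sniffs x 24 sensor-array features

# Keep the smallest number of principal components whose cumulative
# explained variance reaches 90%.
pca = PCA().fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cum, 0.90) + 1)
X_reduced = PCA(n_components=n_components).fit_transform(X)
print(n_components, X_reduced.shape)
```

scikit-learn also accepts a fractional argument, PCA(n_components=0.90), which performs the same selection internally.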
Automated diagnosis of Alzheimer's disease with multi-atlas based whole brain segmentations
NASA Astrophysics Data System (ADS)
Luo, Yuan; Tang, Xiaoying
2017-03-01
Voxel-based analysis is widely used in quantitative analysis of structural brain magnetic resonance imaging (MRI) and automated disease detection, such as for Alzheimer's disease (AD). However, noise at the voxel level may cause low sensitivity to AD-induced structural abnormalities. This can be addressed with a whole-brain structural segmentation approach, which greatly reduces the dimension of the features (the number of voxels). In this paper, we propose an automatic AD diagnosis system that combines such whole-brain segmentations with advanced machine learning methods. We used a multi-atlas segmentation technique to parcellate T1-weighted images into 54 distinct brain regions and extract their structural volumes to serve as the features for principal-component-analysis-based dimension reduction and support-vector-machine-based classification. The relationship between the number of retained principal components (PCs) and the diagnosis accuracy was systematically evaluated, in a leave-one-out fashion, based on 28 AD subjects and 23 age-matched healthy subjects. Our approach yielded good classification results, with 96.08% overall accuracy achieved using the three foremost PCs. In addition, our approach yielded 96.43% specificity, 100% sensitivity, and 0.9891 area under the receiver operating characteristic curve.
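The volumes-to-PCs-to-SVM pipeline with leave-one-out evaluation maps directly onto a scikit-learn pipeline. In the sketch below, the 51 × 54 volume matrix is random placeholder data with the paper's dimensions (28 AD, 23 controls, 54 regions, 3 PCs); the kernel choice and scaling step are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
volumes = rng.normal(size=(51, 54))        # 51 subjects x 54 regional volumes
diagnosis = np.array([1] * 28 + [0] * 23)  # 28 AD, 23 healthy controls

# Volumes -> 3 leading principal components -> SVM, scored leave-one-out.
model = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="linear"))
acc = cross_val_score(model, volumes, diagnosis, cv=LeaveOneOut()).mean()
print(acc)
```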
Variable input observer for structural health monitoring of high-rate systems
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Laflamme, Simon; Cao, Liang; Dodson, Jacob
2017-02-01
The development of high-rate structural health monitoring methods is intended to provide damage detection on timescales of 10 µs to 10 ms, where speed of detection is critical to maintain structural integrity. Here, a novel Variable Input Observer (VIO) coupled with an adaptive observer is proposed as a potential solution for complex high-rate problems. The VIO is designed to adapt its input space based on real-time identification of the system's essential dynamics. By selecting appropriate time-delayed coordinates, defined by both a time delay and an embedding dimension, the proper input space is chosen, which allows more accurate estimation of the current state and faster convergence. The optimal time delay is estimated based on mutual information, and the embedding dimension is based on false nearest neighbors. A simulation of the VIO is conducted on a two degree-of-freedom system with simulated damage. Results are compared with an adaptive Luenberger observer, a fixed time-delay observer, and a Kalman filter. Under its preliminary design, the VIO converges significantly faster than the Luenberger and fixed observers. It performed similarly to the Kalman filter in terms of convergence, but with greater accuracy.
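The time-delayed input coordinates at the heart of the VIO can be illustrated with a small embedding helper. In practice the delay would come from the first minimum of the mutual information and the dimension from the false-nearest-neighbors test, as the abstract states; here both are fixed by hand for a toy sine signal.

```python
import numpy as np

def delay_embed(x, delay, dim):
    """Time-delay coordinates: row j is (x[j], x[j+delay], ..., x[j+(dim-1)*delay]).
    `delay` and `dim` define the observer's input space."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

x = np.sin(np.linspace(0, 40 * np.pi, 4000))   # toy measurement stream
Z = delay_embed(x, delay=25, dim=3)
print(Z.shape)   # (3950, 3)
```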
Effect of whole body resistance training on arterial compliance in young men.
Rakobowchuk, M; McGowan, C L; de Groot, P C; Bruinsma, D; Hartman, J W; Phillips, S M; MacDonald, M J
2005-07-01
The effect of resistance training on arterial stiffening is controversial. We tested the hypothesis that resistance training would not alter central arterial compliance. Young healthy men (age, 23 ± 3.9 (mean ± s.e.m.) years; n = 28) were whole-body resistance trained five times a week for 12 weeks, using a rotating 3-day split-body routine. Resting brachial blood pressure (BP), carotid pulse pressure, carotid cross-sectional compliance (CSC), carotid intima-media thickness (IMT) and left ventricular dimensions were evaluated before beginning exercise (PRE), after 6 weeks of exercise (MID) and at the end of 12 weeks of exercise (POST). CSC was measured using the pressure-sonography method. Results indicate reductions in brachial (61.1 ± 1.4 versus 57.6 ± 1.2 mmHg; P < 0.01) and carotid pulse pressure (52.2 ± 1.9 versus 46.8 ± 2.0 mmHg; P < 0.01) from PRE to POST. In contrast, carotid CSC, beta-stiffness index, IMT and cardiac dimensions were unchanged. In young men, central arterial compliance is unaltered by 12 weeks of resistance training, and the mechanisms responsible for cardiac hypertrophy and reduced arterial compliance are either not inherent to all resistance-training programmes or may require a prolonged stimulus.
NASA Technical Reports Server (NTRS)
Fukumori, Ichiro; Malanotte-Rizzoli, Paola
1995-01-01
A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximation of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements are examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.
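The flavor of the approximate filter, a steady-state gain computed once in a reduced subspace and reused at every assimilation step, can be conveyed with a toy linear-Gaussian sketch. All matrices below (reduced dynamics, observation operator, noise levels) are invented placeholders, and the real scheme's asymptotic-limit and linearization details are far more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 500, 10, 40            # full state, reduced dimension, observations

B = np.linalg.qr(rng.normal(size=(n, r)))[0]   # basis of the reduced subspace
A_r = 0.95 * np.eye(r)                          # linearized reduced dynamics (toy)
H = rng.normal(size=(m, n))                     # observation operator
H_r = H @ B                                     # observations seen from the subspace
Q = 0.1 * np.eye(r)                             # model error covariance (reduced)
R = np.eye(m)                                   # measurement error covariance

# Iterate the Riccati recursion to its asymptotic steady state: the resulting
# time-invariant gain is computed once and reused thereafter.
P = np.eye(r)
for _ in range(500):
    P = A_r @ P @ A_r.T + Q
    K = P @ H_r.T @ np.linalg.solve(H_r @ P @ H_r.T + R, np.eye(m))
    P = (np.eye(r) - K @ H_r) @ P

def assimilate(x_full, y):
    """One analysis step: correct the full state through the reduced gain."""
    return x_full + B @ (K @ (y - H @ x_full))

x = rng.normal(size=n)
y = H @ x + rng.normal(size=m)
print(assimilate(x, y)[:3])
```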
NASA Astrophysics Data System (ADS)
Aouami, A. El; Feddi, E.; Talbi, A.; Dujardin, F.; Duque, C. A.
2018-06-01
In this study, we have investigated the simultaneous influence of a magnetic field combined with hydrostatic pressure and geometrical confinement on the behavior of a single dopant confined in GaN/InGaN core/shell quantum dots. Within the scheme of the effective-mass approximation, the eigenvalue equation has been solved using the variational method with one-parameter trial wavefunctions. The variation of the ground-state binding energy of the single dopant is determined as a function of the magnetic field and hydrostatic pressure for several dimensions of the heterostructure. The results show that the binding energy is strongly dependent on the core/shell sizes, the magnetic field, and the hydrostatic pressure. The analysis of the photoionization cross section, corresponding to optical transitions between the first donor energy level and the conduction band, shows clearly that the reduction of the dot dimensions and/or the simultaneous influence of the applied magnetic field combined with the hydrostatic pressure causes a shift of the resonance peaks towards higher energies, with important variations in the magnitude of the resonant peaks.
NASA Astrophysics Data System (ADS)
Fukumori, Ichiro; Malanotte-Rizzoli, Paola
1995-04-01
A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximations of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements are examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, David P.; Fishgrab, Kira L.; Greth, Karl Douglas
The present invention relates to a lateral via to provide an electrical connection to a buried conductor. In one instance, the buried conductor is a through via that extends along a first dimension, and the lateral via extends along a second dimension that is generally orthogonal to the first dimension. In another instance, the second dimension is oblique to the first dimension. Components having such lateral vias, as well as methods for creating such lateral vias are described herein.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1995-01-01
Two methods for developing high-order single-step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations, with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth-order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kovalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high-order accuracy and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher-order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high-order boundary conditions for the linearized Euler equations are developed in one space dimension and demonstrated in two space dimensions.
Katagiri, Fumiaki; Glazebrook, Jane
2003-01-01
A major task in computational analysis of mRNA expression profiles is definition of relationships among profiles on the basis of similarities among them. This is generally achieved by pattern recognition in the distribution of data points representing each profile in a high-dimensional space. Some drawbacks of commonly used pattern recognition algorithms stem from their use of a globally linear space and/or limited degrees of freedom. A pattern recognition method called Local Context Finder (LCF) is described here. LCF uses nonlinear dimensionality reduction for pattern recognition. Then it builds a network of profiles based on the nonlinear dimensionality reduction results. LCF was used to analyze mRNA expression profiles of the plant host Arabidopsis interacting with the bacterial pathogen Pseudomonas syringae. In one case, LCF revealed two dimensions essential to explain the effects of the NahG transgene and the ndr1 mutation on resistant and susceptible responses. In another case, plant mutants deficient in responses to pathogen infection were classified on the basis of LCF analysis of their profiles. The classification by LCF was consistent with the results of biological characterization of the mutants. Thus, LCF is a powerful method for extracting information from expression profile data. PMID:12960373
Oostendorp, Rob A. B.; Elvers, Hans; Mikołajewska, Emilia; Laekeman, Marjan; van Trijffel, Emiel; Samwel, Han; Duquet, William
2015-01-01
Objective. To develop and evaluate process indicators relevant to biopsychosocial history taking in patients with chronic back and neck pain. Methods. The SCEBS method, covering the Somatic, Psychological (Cognition, Emotion, and Behavior), and Social dimensions of chronic pain, was used to evaluate biopsychosocial history taking by manual physical therapists (MPTs). In Phase I, process indicators were developed, while in Phase II the indicators were tested in practice. Results. Literature-based recommendations were transformed into 51 process indicators. Twenty MPTs contributed 108 patient audio recordings. History taking was excellent (98.3%) for the Somatic dimension, very inadequate for Cognition (43.1%) and Behavior (38.3%), weak (27.8%) for Emotion, and low (18.2%) for the Social dimension. MPTs estimated their coverage of the Somatic dimension as excellent (100%), as adequate for Cognition, Emotion, and Behavior (60.1%), and as very inadequate for the Social dimension (39.8%). Conclusion. MPTs screen for musculoskeletal pain mainly through the somatic dimension of (chronic) pain. The psychological and social dimensions of chronic pain were inadequately covered by MPTs. Furthermore, a substantial discrepancy between actual and self-estimated use of biopsychosocial history taking was noted. We strongly recommend full implementation of the SCEBS method in educational programs in manual physical therapy. PMID:25945358
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medeiros, Eduardo, E-mail: emedeiros@campus.ul.pt
The use of territorial impact assessment procedures is gaining increasing relevance in European Union policy evaluation processes. However, no concrete territorial impact assessment tools have been applied to evaluating EU cross-border programmes. In this light, this article provides a pioneering analysis of how to make use of territorial impact assessment procedures on cross-border programmes. More specifically, it assesses the main territorial impacts of the Inner Scandinavian INTERREG-A sub-programme over the last 20 years (1996–2016). It focuses on its impacts on reducing the barrier effect, in all its main dimensions, posed by the presence of the administrative border. The results indicate a quite positive impact of the analysed cross-border cooperation programme in reducing the barrier effect in all its main dimensions. The obtained potential impact values for each analysed dimension indicate, however, that the 'economy-technology' dimension was particularly favoured, following its strategic intervention focus on stimulating the economic activity and the attractiveness of the border area. - Highlights: • A territorial impact assessment method to assess cross-border cooperation is proposed. • This method's rationale is based on the main dimensions of the barrier effect. • This method identified positive impacts in all analysed dimensions. • The economy-technological dimension was the most positively impacted one.
A fully 3D approach for metal artifact reduction in computed tomography.
Kratz, Barbel; Weyers, Imke; Buzug, Thorsten M
2012-11-01
In computed tomography imaging, metal objects in the region of interest introduce inconsistencies during data acquisition. Reconstructing these data leads to an image in the spatial domain containing star-shaped or stripe-like artifacts. In order to enhance the quality of the resulting image, the influence of the metal objects can be reduced. Here, a metal artifact reduction (MAR) approach is proposed that is based on a recomputation of the inconsistent projection data using a fully three-dimensional Fourier-based interpolation. The success of the projection space restoration depends sensitively on a sensible continuation of neighboring structures into the recomputed area. Fortunately, structural information of the entire data set is inherently included in the Fourier space of the data, and this can be used for a reasonable recomputation of the inconsistent projection data. The key step of the proposed MAR strategy is the recomputation of the inconsistent projection data based on an interpolation using nonequispaced fast Fourier transforms (NFFT). The NFFT interpolation can be applied in arbitrary dimension. The approach overcomes the problem of adequate neighborhood definitions on irregular grids, since these are inherently given through the usage of higher-dimensional Fourier transforms. Here, applications up to the third interpolation dimension are presented and validated. Furthermore, prior knowledge may be included by an appropriate damping of the transform during the interpolation step. This MAR method is applicable to each angular view of a detector row, to two-dimensional projection data, and to three-dimensional projection data, e.g., a set of sequential acquisitions at different spatial positions, projection data of a spiral acquisition, or cone-beam projection data. Results of the novel MAR scheme based on one-, two-, and three-dimensional NFFT interpolations are presented. All results are compared in projection data space and in the spatial domain with the well-known one-dimensional linear interpolation strategy. In conclusion, it is recommended to include as much spatial information in the recomputation step as possible, which is realized by increasing the dimension of the NFFT. The resulting image quality can be enhanced considerably.
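The recomputation idea can be miniaturized to one dimension with an ordinary equispaced FFT standing in for the NFFT: iteratively low-pass the projection row in Fourier space and re-impose the samples unaffected by metal. The band fraction, iteration count, and toy sinusoidal row are assumptions; the paper's method works on irregular grids in up to three dimensions.

```python
import numpy as np

def fourier_inpaint(signal, mask, n_iter=50, keep=0.25):
    """Re-estimate the masked samples from a low-pass Fourier model:
    transform, damp high frequencies, transform back, and re-impose the
    known samples. A crude equispaced stand-in for NFFT interpolation."""
    x = signal.copy()
    x[~mask] = signal[mask].mean()          # initial fill of the metal trace
    n = len(x)
    k = int(keep * n)
    damp = np.zeros(n)
    damp[:k] = 1.0
    damp[-k:] = 1.0                          # keep only the low frequencies
    for _ in range(n_iter):
        est = np.real(np.fft.ifft(np.fft.fft(x) * damp))
        x[~mask] = est[~mask]                # recompute inconsistent samples only
        x[mask] = signal[mask]               # known projection data stays fixed
    return x

row = np.sin(np.linspace(0, 6 * np.pi, 256))   # toy detector row
mask = np.ones(256, dtype=bool)
mask[100:130] = False                           # samples corrupted by metal
print(np.max(np.abs(fourier_inpaint(row, mask)[100:130] - row[100:130])))
```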
Index based regional vulnerability assessment to cyclones hazards of coastal area of Bangladesh
NASA Astrophysics Data System (ADS)
Mohammad, Q. A.; Kervyn, M.; Khan, A. U.
2016-12-01
Cyclones, storm surges, coastal flooding, salinity intrusion, tornadoes, nor'westers, and thunderstorms are the listed natural hazards in the coastal areas of Bangladesh. Bangladesh was hit by devastating cyclones in 1970, 1991, 2007, 2009, and 2016, and the intensity and frequency of natural hazards in the coastal area are likely to increase in future due to climate change. Risk assessment is one of the most important steps of disaster risk reduction. As a nation affected by climate change, Bangladesh claims compensation from the Green Climate Fund and has also created its own climate funds. It is therefore very important to assess the vulnerability of the coast of Bangladesh to natural hazards for efficient allocation of financial investment to support national risk reduction. This study aims at identifying the spatial variations in factors contributing to the vulnerability of the coastal inhabitants of Bangladesh to natural hazards. An exploratory factor analysis method has been used to assess the vulnerability of each local administrative unit. The 141 initially selected socio-economic indicators were reduced to 41 by converting some of them to meaningful, widely accepted indicators and removing highly correlated ones. Principal component analysis further reduced the 41 indicators to 13 dimensions, which explained 79% of the total variation. The PCA dimensions reveal three types of characteristics that may make people vulnerable: (a) demographics, education and job opportunities, (b) access to basic needs and facilities, and (c) special needs. Vulnerability maps of the study area have been prepared by weighted overlay of the dimensions. The study revealed that 29 and 8 percent of the total coastal area are very highly and highly vulnerable to natural hazards, respectively; these areas are distributed along the sea boundary and major rivers. Comparison of this spatial distribution with the capacities to face disaster shows that highly vulnerable areas are well covered by cyclone shelters but are not the zones with the most resistant buildings and the densest road networks. The findings will be helpful for policy makers to initiate, plan and implement short-, medium- and long-term DRR strategies.
Robertson, Eric P [Idaho Falls, ID; Christiansen, Richard L [Littleton, CO
2007-05-29
A method of optically determining a change in magnitude of at least one dimensional characteristic of a sample in response to a selected chamber environment. A magnitude of at least one dimension of the at least one sample may be optically determined subsequent to altering the at least one environmental condition within the chamber. A maximum change in dimension of the at least one sample may be predicted. A dimensional measurement apparatus for indicating a change in at least one dimension of at least one sample. The dimensional measurement apparatus may include a housing with a chamber configured for accommodating pressure changes and an optical perception device for measuring a dimension of at least one sample disposed in the chamber. Methods of simulating injection of a gas into a subterranean formation, injecting gas into a subterranean formation, and producing methane from a coal bed are also disclosed.
Robertson, Eric P; Christiansen, Richard L.
2007-10-23
A method of optically determining a change in magnitude of at least one dimensional characteristic of a sample in response to a selected chamber environment. A magnitude of at least one dimension of the at least one sample may be optically determined subsequent to altering the at least one environmental condition within the chamber. A maximum change in dimension of the at least one sample may be predicted. A dimensional measurement apparatus for indicating a change in at least one dimension of at least one sample. The dimensional measurement apparatus may include a housing with a chamber configured for accommodating pressure changes and an optical perception device for measuring a dimension of at least one sample disposed in the chamber. Methods of simulating injection of a gas into a subterranean formation, injecting gas into a subterranean formation, and producing methane from a coal bed are also disclosed.
Zhonggang, Liang; Hong, Yan
2006-10-01
A new method of calculating the fractal dimension of short-term heart rate variability signals is presented. The method is based on the wavelet transform and filter banks. The implementation of the method is as follows: first, we extract the fractal component from the HRV signal using the wavelet transform; next, we estimate the power spectrum distribution of the fractal component using an auto-regressive model and estimate the spectral exponent γ using the least squares method; finally, the fractal dimension of the HRV signal is estimated according to the formula D = 2 − (γ − 1)/2. To validate the stability and reliability of the proposed method, 24 fractal signals with fractal dimension 1.6 were simulated using fractional Brownian motion; the results show that the method is stable and reliable.
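The final step, turning the spectral exponent into a fractal dimension via D = 2 − (γ − 1)/2, can be sketched as follows, with a Welch periodogram standing in for the paper's autoregressive spectral estimate and the wavelet extraction step omitted. The sampling rate, fit band, and synthetic 1/f signal are assumptions.

```python
import numpy as np
from scipy.signal import welch

def hrv_fractal_dimension(x, fs=4.0):
    """Fit the 1/f**gamma power-law slope in log-log coordinates (least
    squares) and apply D = 2 - (gamma - 1) / 2."""
    f, pxx = welch(x, fs=fs, nperseg=256)
    band = (f > 0.003) & (f < 0.1)          # assumed low-frequency fractal band
    slope, _ = np.polyfit(np.log(f[band]), np.log(pxx[band]), 1)
    gamma = -slope
    return 2.0 - (gamma - 1.0) / 2.0

# Toy check with a 1/f-like signal built in the Fourier domain
rng = np.random.default_rng(0)
n = 4096
freqs = np.fft.rfftfreq(n, d=0.25)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-1.2 / 2)           # PSD ~ f**-1.2, i.e. gamma = 1.2 -> D = 1.9
phase = rng.uniform(0, 2 * np.pi, len(freqs))
x = np.fft.irfft(amp * np.exp(1j * phase), n)
print(hrv_fractal_dimension(x))
```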
CW-SSIM kernel based random forest for image classification
NASA Astrophysics Data System (ADS)
Fan, Guangzhe; Wang, Zhou; Wang, Jiheng
2010-07-01
Complex wavelet structural similarity (CW-SSIM) index has been proposed as a powerful image similarity metric that is robust to translation, scaling and rotation of images, but how to employ it in image classification applications has not been deeply investigated. In this paper, we incorporate CW-SSIM as a kernel function into a random forest learning algorithm. This leads to a novel image classification approach that does not require a feature extraction or dimension reduction stage at the front end. We use hand-written digit recognition as an example to demonstrate our algorithm. We compare the performance of the proposed approach with random forest learning based on other kernels, including the widely adopted Gaussian and inner product kernels. Empirical evidence shows that the proposed method is superior in its classification power. We also compared our proposed approach with the direct random forest method without a kernel and with the popular kernel-learning method, the support vector machine. Our test results based on both simulated and real-world data suggest that the proposed approach is superior to traditional methods without the feature selection procedure.
Multivariate Welch t-test on distances
2016-01-01
Motivation: Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that pairwise distance matrix between observations is sufficient to compute within and between group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean, but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. Results: We develop a solution in the form of a distance-based Welch t-test, TW2, for two sample potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology has originated. Availability and Implementation: The source code for methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author. Contact: alekseye@musc.edu PMID:27515741
Multivariate Welch t-test on distances.
Alekseyenko, Alexander V
2016-12-01
Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that the pairwise distance matrix between observations is sufficient to compute the within- and between-group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean, but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. We develop a solution in the form of a distance-based Welch t-test, TW2, for two-sample potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology has originated. The source code for methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author. alekseye@musc.edu. © The Author 2016. Published by Oxford University Press.
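For Euclidean-embeddable distances, the statistic can be computed from the distance matrix alone, using the identity that the sum of squared pairwise distances within a group equals n_k times the within-group sum of squares. The sketch below follows that construction; it is an illustration consistent with the description above, not a substitute for the released Tw2 code, and the permutation test is only indicated.

```python
import numpy as np

def tw2(D, labels):
    """Distance-based Welch t-statistic (squared). `D` is an n x n pairwise
    distance matrix, `labels` a boolean vector marking group membership."""
    D2 = np.asarray(D) ** 2
    g1, g2 = labels, ~labels
    n1, n2 = g1.sum(), g2.sum()
    ss1 = D2[np.ix_(g1, g1)].sum() / (2 * n1)     # within-group SS, group 1
    ss2 = D2[np.ix_(g2, g2)].sum() / (2 * n2)     # within-group SS, group 2
    # squared distance between the two group centroids
    d_cent = D2[np.ix_(g1, g2)].sum() / (n1 * n2) - ss1 / n1 - ss2 / n2
    s1, s2 = ss1 / (n1 - 1), ss2 / (n2 - 1)       # Welch-style group variances
    return d_cent / (s1 / n1 + s2 / n2)

# A permutation p-value would follow by recomputing tw2 over shuffled labels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(1, 3, (40, 5))])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
labels = np.arange(60) < 20
print(tw2(D, labels))
```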
Multivariate Phylogenetic Comparative Methods: Evaluations, Comparisons, and Recommendations.
Adams, Dean C; Collyer, Michael L
2018-01-01
Recent years have seen increased interest in phylogenetic comparative analyses of multivariate data sets, but to date the varied proposed approaches have not been extensively examined. Here we review the mathematical properties required of any multivariate method, and specifically evaluate existing multivariate phylogenetic comparative methods in this context. Phylogenetic comparative methods based on the full multivariate likelihood are robust to levels of covariation among trait dimensions and are insensitive to the orientation of the data set, but display increasing model misspecification as the number of trait dimensions increases. This is because the expected evolutionary covariance matrix (V) used in the likelihood calculations becomes more ill-conditioned as trait dimensionality increases, and as evolutionary models become more complex. Thus, these approaches are only appropriate for data sets with few traits and many species. Methods that summarize patterns across trait dimensions treated separately (e.g., SURFACE) incorrectly assume independence among trait dimensions, resulting in nearly a 100% model misspecification rate. Methods using pairwise composite likelihood are highly sensitive to levels of trait covariation, the orientation of the data set, and the number of trait dimensions. The consequences of these debilitating deficiencies are that a user can arrive at differing statistical conclusions, and therefore biological inferences, simply from a dataspace rotation, like principal component analysis. By contrast, algebraic generalizations of the standard phylogenetic comparative toolkit that use the trace of covariance matrices are insensitive to levels of trait covariation, the number of trait dimensions, and the orientation of the data set. Further, when appropriate permutation tests are used, these approaches display acceptable Type I error and statistical power. We conclude that methods summarizing information across trait dimensions, as well as pairwise composite likelihood methods should be avoided, whereas algebraic generalizations of the phylogenetic comparative toolkit provide a useful means of assessing macroevolutionary patterns in multivariate data. Finally, we discuss areas in which multivariate phylogenetic comparative methods are still in need of future development; namely highly multivariate Ornstein-Uhlenbeck models and approaches for multivariate evolutionary model comparisons. © The Author(s) 2017. Published by Oxford University Press on behalf of the Systematic Biology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A method of vehicle license plate recognition based on PCANet and compressive sensing
NASA Astrophysics Data System (ADS)
Ye, Xianyi; Min, Feng
2018-03-01
Manual feature extraction in traditional vehicle license plate recognition methods is not robust to diverse image variations, while the high dimension of features extracted with the Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from images of characters. Then, a sparse measurement matrix, a very sparse matrix satisfying the Restricted Isometry Property (RIP) condition of compressive sensing, is used to reduce the dimensions of the extracted features. Finally, a Support Vector Machine (SVM) is used to train on and recognize the dimension-reduced features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition accuracy and running time. Compared with the pipeline without compressive sensing, the proposed method works with a lower feature dimension and is correspondingly more efficient.
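A minimal sketch of the dimension reduction step described above, assuming a very sparse Achlioptas/Li-style measurement matrix stands in for the paper's RIP-satisfying matrix; the feature matrix here is random stand-in data rather than real PCANet output.

```python
import numpy as np

def sparse_measurement_matrix(k, d, s=3, seed=0):
    """Very sparse random projection (Achlioptas/Li style): entries are
    +/- sqrt(s/k) with probability 1/(2s) each and 0 otherwise. Such
    matrices satisfy RIP-like guarantees with high probability."""
    rng = np.random.default_rng(seed)
    u = rng.random((k, d))
    M = np.zeros((k, d))
    M[u < 1 / (2 * s)] = np.sqrt(s / k)
    M[u > 1 - 1 / (2 * s)] = -np.sqrt(s / k)
    return M

# Stand-in for PCANet features: reduce d = 4096 dims to k = 256 before the SVM
rng = np.random.default_rng(1)
features = rng.standard_normal((100, 4096))
Phi = sparse_measurement_matrix(256, 4096)
reduced = features @ Phi.T          # shape (100, 256)
```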
NASA Astrophysics Data System (ADS)
Fan, Y. Z.; Zuo, Z. G.; Liu, S. H.; Wu, Y. L.; Sha, Y. J.
2012-11-01
Primary formulation derivation indicates that the dimension of one existing centrifugal boiler circulation pump casing is too large. Since a large manufacturing cost can be saved by a dimension decrease, a numerical simulation study on dimension decrease is developed in this paper for the annular casing of this pump, with a specific speed equal to 189, aiming to find an appropriately smaller casing dimension while hydraulic performance and strength performance remain essentially unchanged, according to the requirements of the cooperative company. The research object is an existing centrifugal pump with a diffuser and a semi-spherical annular casing, working as the boiler circulation pump for (ultra) supercritical units in power plants. Dimension decrease, the modification method, is achieved by decreasing the existing casing's internal radius (denoted "Ri0") while keeping the wall thickness. The analysis is based on primary formulation derivation, CFD (Computational Fluid Dynamics) simulation and FEM (Finite Element Method) simulation. Primary formulation derivation estimates that the design casing's internal radius should be less than 0.75 Ri0. CFD analysis indicates that the smaller casing with 0.75 Ri0 has worse hydraulic performance at large flow rates and better hydraulic performance at small flow rates. Considering both hydraulic performance and dimension decrease, an appropriate casing internal radius equal to 0.875 Ri0 is determined. FEM analysis then confirms that the modified pump casing has nearly the same strength performance as the existing one. It is concluded that dimension decrease can be an economical as well as practical method for large pumps in engineering fields.
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can be rapidly characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
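One common way to realize the parameter-space reduction this abstract describes is a likelihood-informed-subspace style generalized eigenproblem between a data-misfit Hessian and the prior precision; the sketch below (assuming SciPy is available, and that both matrices can be formed explicitly, which large problems would instead handle matrix-free) is an illustration, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def parameter_subspace(H_misfit, prior_cov, rank):
    """Leading directions in which the data-misfit (Gauss-Newton) Hessian
    dominates the prior: solve H v = lam * inv(prior_cov) v and keep the
    eigenvectors with the largest eigenvalues."""
    evals, evecs = eigh(H_misfit, np.linalg.inv(prior_cov))
    order = np.argsort(evals)[::-1][:rank]
    return evals[order], evecs[:, order]
```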
Consistent Pauli reduction on group manifolds
Baguet, A.; Pope, Christopher N.; Samtleben, H.
2016-01-01
We prove an old conjecture by Duff, Nilsson, Pope and Warner asserting that the NSNS sector of supergravity (and more generally the bosonic string) allows for a consistent Pauli reduction on any d-dimensional group manifold G, keeping the full set of gauge bosons of the G×G isometry group of the bi-invariant metric on G. The main tool of the construction is a particular generalised Scherk–Schwarz reduction ansatz in double field theory, which we explicitly construct in terms of the group's Killing vectors. Examples include the consistent reduction from ten dimensions on S3×S3 and on similar product spaces. The construction is another example of globally geometric non-toroidal compactifications inducing non-geometric fluxes.
Mixing noise reduction for rectangular supersonic jets by nozzle shaping and induced screech mixing
NASA Technical Reports Server (NTRS)
Rice, Edward J.; Raman, Ganesh
1993-01-01
Two methods of mixing noise modification were studied for supersonic jets flowing from rectangular nozzles with an aspect ratio of about five and a small dimension of about 1.4 cm. The first involves nozzle geometry variation using either single (unsymmetrical) or double bevelled (symmetrical) thirty-degree cutbacks of the nozzle exit. Both converging (C) and converging-diverging (C-D) versions were tested. The double bevelled C-D nozzle produced a jet mixing noise reduction of about 4 dB compared to a standard rectangular C-D nozzle. In addition, all bevelled nozzles produced an upstream shift in peak mixing noise, which is conducive to improved attenuation when the nozzle is used in an acoustically treated duct. A large increase in high frequency noise also occurred near the plane of the nozzle exit. Because of near-normal incidence, this noise can be easily attenuated with wall treatment. The second approach uses paddles inserted at the edges of the two sides of the jet to induce screech and greatly enhance the jet mixing. Although screech and mixing noise levels are increased, the enhanced mixing moves the source locations upstream and may make an enclosed system more amenable to noise reduction using wall acoustic treatment.
Analysis of radiation-induced small Cu particle cluster formation in aqueous CuCl2
Jayanetti, Sumedha; Mayanovic, Robert A.; Anderson, Alan J.; Bassett, William A.; Chou, I.-Ming
2001-01-01
Radiation-induced small Cu particle cluster formation in aqueous CuCl2 was analyzed. It was noticed that the nearest neighbor distance increased with irradiation time, showing that the clusters approached the lattice dimension of bulk copper. As the average cluster size approached bulk dimensions, an increase in the nearest neighbor coordination number was found with the decrease in the surface-to-volume ratio. Radiolysis of water by the incident x-ray beam led to the reduction of copper ions in the solution to the metallic state.
Apparatus and method for tracking a molecule or particle in three dimensions
Werner, James H [Los Alamos, NM; Goodwin, Peter M [Los Alamos, NM; Lessard, Guillaume [Santa Fe, NM
2009-03-03
An apparatus and method were used to track the movement of fluorescent particles in three dimensions. Control software was used with the apparatus to implement a tracking algorithm for tracking the motion of the individual particles in glycerol/water mixtures. Monte Carlo simulations suggest that the tracking algorithms in combination with the apparatus may be used for tracking the motion of single fluorescent or fluorescently labeled biomolecules in three dimensions.
Spike Triggered Covariance in Strongly Correlated Gaussian Stimuli
Aljadeff, Johnatan; Segev, Ronen; Berry, Michael J.; Sharpee, Tatyana O.
2013-01-01
Many biological systems perform computations on inputs that have very large dimensionality. Determining the relevant input combinations for a particular computation is often key to understanding its function. A common way to find the relevant input dimensions is to examine the difference in variance between the input distribution and the distribution of inputs associated with certain outputs. In systems neuroscience, the corresponding method is known as spike-triggered covariance (STC). This method has been highly successful in characterizing relevant input dimensions for neurons in a variety of sensory systems. So far, most studies used the STC method with weakly correlated Gaussian inputs. However, it is also important to use this method with inputs that have long range correlations typical of the natural sensory environment. In such cases, the stimulus covariance matrix has one (or more) outstanding eigenvalues that cannot be easily equalized because of sampling variability. Such outstanding modes interfere with analyses of statistical significance of candidate input dimensions that modulate neuronal outputs. In many cases, these modes obscure the significant dimensions. We show that the sensitivity of the STC method in the regime of strongly correlated inputs can be improved by an order of magnitude or more. This can be done by evaluating the significance of dimensions in the subspace orthogonal to the outstanding mode(s). Analyzing the responses of retinal ganglion cells probed with Gaussian noise, we find that taking into account outstanding modes is crucial for recovering relevant input dimensions for these neurons. PMID:24039563
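For orientation, a bare-bones spike-triggered covariance computation, together with the projection orthogonal to the outstanding stimulus mode(s) that the abstract proposes, might look like the following sketch; the variable names and eigh-based diagonalization are illustrative, not the authors' pipeline.

```python
import numpy as np

def stc_eigs(stimuli, spikes):
    """Eigen-decomposition of the spike-triggered covariance difference.
    stimuli: (T, d) stimulus history vectors; spikes: (T,) spike counts."""
    sta = spikes @ stimuli / spikes.sum()            # spike-triggered average
    centered = stimuli - sta
    C_spike = (spikes[:, None] * centered).T @ centered / spikes.sum()
    C_prior = np.cov(stimuli, rowvar=False)
    w, V = np.linalg.eigh(C_spike - C_prior)         # candidate dimensions
    return w, V

def project_out_leading_modes(stimuli, n_modes=1):
    """Remove the outstanding prior covariance mode(s) so that significance
    can be evaluated in the orthogonal subspace, per the abstract."""
    w, V = np.linalg.eigh(np.cov(stimuli, rowvar=False))
    top = V[:, np.argsort(w)[::-1][:n_modes]]        # outstanding mode(s)
    return stimuli - stimuli @ top @ top.T
```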
Chen, Yifei; Sun, Yuxing; Han, Bing-Qing
2015-01-01
Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used to reduce the dimensionality of features and speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, we first design a similarity measure over context information that takes word co-occurrences and phrase chunks around the features into account. Then we introduce this context similarity into the importance measure of the features, in place of document and term frequency, and thereby propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.
NASA Astrophysics Data System (ADS)
Donovan, Brian F.; Jensen, Wade A.; Chen, Long; Giri, Ashutosh; Poon, S. Joseph; Floro, Jerrold A.; Hopkins, Patrick E.
2018-05-01
We use aluminum nano-inclusions in silicon to demonstrate the dominance of elastic modulus mismatch induced scattering in phonon transport. We use time domain thermoreflectance to measure the thermal conductivity of thin films of silicon co-deposited with aluminum via molecular beam epitaxy resulting in a Si film with 10% clustered Al inclusions with nanoscale dimensions and a reduction in thermal conductivity of over an order of magnitude. We compare these results with well-known models in order to demonstrate that the reduction in the thermal transport is driven by elastic mismatch effects induced by aluminum in the system.
ERIC Educational Resources Information Center
Kaldy, Zsuzsa; Blaser, Erik A.; Leslie, Alan M.
2006-01-01
We report a new method for calibrating differences in perceptual salience across feature dimensions, in infants. The problem of inter-dimensional salience arises in many areas of infant studies, but a general method for addressing the problem has not previously been described. Our method is based on a preferential looking paradigm, adapted to…
Instantons in Lifshitz field theories
NASA Astrophysics Data System (ADS)
Fujimori, Toshiaki; Nitta, Muneto
2015-10-01
BPS instantons are discussed in Lifshitz-type anisotropic field theories. We consider generalizations of the sigma model/Yang-Mills instantons in renormalizable higher-dimensional models with classical Lifshitz scaling invariance. In each model, the BPS instanton equation takes the form of the gradient flow equations for "the superpotential" defining "the detailed balance condition". The anisotropic Weyl rescaling and the coset space dimensional reduction are used to map rotationally symmetric instantons to vortices in two-dimensional anisotropic systems on the hyperbolic plane. As examples, we study an anisotropic BPS baby Skyrmion in 1+1 dimensions and a BPS Skyrmion in 2+1 dimensions, for which we take the Kähler 1-form and the Wess-Zumino-Witten term as the superpotentials, respectively, and an anisotropic generalized Yang-Mills instanton in 4+1 dimensions, for which we take the Chern-Simons term as the superpotential.
Agabi, J O; Akhigbe, A O
2016-01-01
The pancreas is an insulin-producing gland and is prone to varying degrees of destruction and change in patients with diabetes mellitus (DM). Various morphological changes, including reduction in pancreas dimensions, have been described in DM. The aims were to determine pancreatic anteroposterior (AP) dimensions in diabetics by sonography and compare them with nondiabetics, and to evaluate the correlation of the AP dimensions with patient anthropometry and disease duration in comparison with nondiabetics. This is a comparative cross-sectional study involving 150 diabetics and 150 sex- and age-matched healthy normoglycemic controls. Sonographic measurements of the AP dimensions of the pancreatic head, body, and tail of both study groups were performed with a 3.5 MHz curvilinear array transducer of a SonoAce X4 ultrasound machine. Data were analyzed using the Statistical Package for Social Sciences version 17 (SPSS Inc., Chicago, IL, USA). A statistical test was considered significant at P ≤ 0.05 and a 95% confidence interval. Pancreas AP dimensions were significantly smaller in diabetics compared to controls. The mean dimensions were 1.91 ± 0.26 cm, 0.95 ± 0.12 cm, and 0.91 ± 0.11 cm for the head, body, and tail, respectively, in diabetics and 2.32 ± 0.22 cm, 1.43 ± 0.19 cm, and 1.34 ± 0.20 cm in the controls (P < 0.001 in all cases). The dimensions were also significantly smaller in Type 1 diabetics compared to Type 2 (P < 0.001 in all cases). The mean durations of illness for Type 1 and Type 2 diabetics were 3.09 ± 1.38 and 3.78 ± 3.12 years, respectively. Longer duration of illness was associated with smaller pancreas body and tail dimensions, while the pancreas head dimension was not significantly affected by the duration of illness. Diabetics have smaller pancreas AP dimensions compared to the normal population.
Learning binary code via PCA of angle projection for image retrieval
NASA Astrophysics Data System (ADS)
Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong
2018-01-01
With the benefits of low storage costs and high query speeds, binary code representation methods are widely researched for efficiently retrieving large-scale data. In image hashing methods, learning a hashing function to embed high-dimensional features into Hamming space is a key step for accurate retrieval. The principal component analysis (PCA) technique is widely used in compact hashing methods; most of these methods adopt PCA projection functions to project the original data onto several dimensions of real values, and then quantize each projected dimension into one bit by thresholding. The variances of the projected dimensions differ, and the real-valued projections introduce additional quantization error. To avoid real-valued projections with large quantization error, in this paper we propose using cosine similarity (angle) projection for each dimension; the angle projection preserves the original structure and yields more compact codes. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
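A minimal sketch of the PCA-projection-plus-thresholding pipeline the abstract starts from; the rotation here is only a random orthogonal initialization, whereas full ITQ would iteratively refine it, and the paper's angle/cosine projection is not reproduced.

```python
import numpy as np

def pca_binary_codes(X, n_bits, seed=0):
    """PCA projection followed by sign quantization (a PCA-hash baseline).
    Full ITQ would refine the rotation R to minimize quantization error;
    R is left as a random orthogonal initialization in this sketch."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_bits].T                  # real-valued PCA projections
    R, _ = np.linalg.qr(rng.standard_normal((n_bits, n_bits)))
    return (Z @ R > 0).astype(np.uint8)     # one bit per projected dimension

codes = pca_binary_codes(np.random.default_rng(1).standard_normal((1000, 128)), 32)
```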
Sparse partial least squares regression for simultaneous dimension reduction and variable selection
Chun, Hyonho; Keleş, Sündüz
2010-01-01
Partial least squares regression has been an alternative to ordinary least squares for handling multicollinearity in several areas of scientific research since the 1960s. It has recently gained much attention in the analysis of high dimensional genomic data. We show that the known asymptotic consistency of the partial least squares estimator for a univariate response does not hold under the very large p and small n paradigm. We derive a similar result for multivariate response regression with partial least squares. We then propose a sparse partial least squares formulation which aims simultaneously to achieve good predictive performance and variable selection by producing sparse linear combinations of the original predictors. We provide an efficient implementation of sparse partial least squares regression and compare it with well-known variable selection and dimension reduction approaches via simulation experiments. We illustrate the practical utility of sparse partial least squares regression in a joint analysis of gene expression and genomewide binding data. PMID:20107611
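One common formulation of the first sparse PLS direction, soft-thresholding the predictor-response covariance, can be sketched as follows; this is an illustrative simplification for a univariate response, not the authors' implementation.

```python
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def first_spls_direction(X, y, lam):
    """First sparse PLS direction for a univariate response: soft-threshold
    the predictor-response covariance X'y and renormalize, so the direction
    is a sparse linear combination of the original predictors."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = soft_threshold(Xc.T @ yc, lam)
    norm = np.linalg.norm(w)
    return w / norm if norm > 0 else w
# Later directions deflate X against the scores and repeat; lam sets sparsity.
```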
Stochastic dynamic analysis of marine risers considering Gaussian system uncertainties
NASA Astrophysics Data System (ADS)
Ni, Pinghe; Li, Jun; Hao, Hong; Xia, Yong
2018-03-01
This paper performs stochastic dynamic response analysis of marine risers with material uncertainties, i.e. in the mass density and elastic modulus, using the Stochastic Finite Element Method (SFEM) and a model reduction technique. These uncertainties are assumed to have Gaussian distributions. The random mass density and elastic modulus are represented using the Karhunen-Loève (KL) expansion. The Polynomial Chaos (PC) expansion is adopted to represent the vibration response because the covariance of the output is unknown. Model reduction based on the Iterated Improved Reduced System (IIRS) technique is applied to eliminate the PC coefficients of the slave degrees of freedom and thus reduce the dimension of the stochastic system. Monte Carlo Simulation (MCS) is conducted to obtain the reference response statistics. Two numerical examples are studied in this paper. The response statistics from the proposed approach are compared with those from MCS. The computational time is significantly reduced while accuracy is maintained. The results demonstrate the efficiency of the proposed approach for stochastic dynamic response analysis of marine risers.
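As a concrete illustration of the KL step, a truncated Karhunen-Loève expansion of a 1D Gaussian field with an assumed exponential covariance can be sampled by eigendecomposing the discretized covariance; this is a sketch on a uniform grid (mass-matrix weighting omitted), not the paper's riser model.

```python
import numpy as np

def kl_expansion_samples(x, sigma, corr_len, n_terms, n_samples, seed=0):
    """Sample a 1D Gaussian random field via a truncated KL expansion:
    eigendecompose the covariance matrix on the nodes x and combine the
    leading modes with independent N(0, 1) variables."""
    rng = np.random.default_rng(seed)
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    w, V = np.linalg.eigh(C)
    idx = np.argsort(w)[::-1][:n_terms]          # keep largest eigenpairs
    w, V = np.maximum(w[idx], 0.0), V[:, idx]
    xi = rng.standard_normal((n_samples, n_terms))
    return xi * np.sqrt(w) @ V.T                 # (n_samples, len(x)) fields

fields = kl_expansion_samples(np.linspace(0.0, 100.0, 200),
                              sigma=1.0, corr_len=20.0,
                              n_terms=10, n_samples=5)
```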
Reduction of initial shock in decadal predictions using a new initialization strategy
NASA Astrophysics Data System (ADS)
He, Yujun; Wang, Bin; Liu, Mimi; Liu, Li; Yu, Yongqiang; Liu, Juanjuan; Li, Ruizhe; Zhang, Cheng; Xu, Shiming; Huang, Wenyu; Liu, Qun; Wang, Yong; Li, Feifei
2017-08-01
A novel full-field initialization strategy based on the dimension-reduced projection four-dimensional variational data assimilation (DRP-4DVar) is proposed to alleviate the well-known initial shock occurring in the early years of decadal predictions. It generates consistent initial conditions, which best fit the monthly mean oceanic analysis data along the coupled model trajectory in 1-month windows. Three indices to measure the initial shock intensity are also proposed. Results indicate that this method does reduce the initial shock in decadal predictions by the Flexible Global Ocean-Atmosphere-Land System model, Grid-point version 2 (FGOALS-g2) compared with the three-dimensional variational data assimilation-based nudging full-field initialization for the same model, and is comparable to or even better than the different initialization strategies used for other models in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Better hindcasts of global mean surface air temperature anomalies can be obtained than in other FGOALS-g2 experiments. Due to the good model response to external forcing and the reduction of initial shock, higher decadal prediction skill is achieved than in other CMIP5 models.
Finite element techniques in computational time series analysis of turbulent flows
NASA Astrophysics Data System (ADS)
Horenko, I.
2009-04-01
In recent years there has been a considerable increase of interest in the mathematical modeling and analysis of complex systems that undergo transitions between several phases or regimes. Such systems can be found, e.g., in weather forecasting (transitions between weather conditions), climate research (ice ages and warm ages), computational drug design (conformational transitions) and in econometrics (e.g., transitions between different phases of the market). In all cases, the accumulation of sufficiently detailed time series has led to the formation of huge databases, containing enormous but still undiscovered treasures of information. However, the extraction of essential dynamics and identification of the phases is usually hindered by the multidimensional nature of the signal, i.e., the information is "hidden" in the time series. The standard filtering approaches (e.g., wavelet-based spectral methods) have in general infeasible numerical complexity in high dimensions; other standard methods (e.g., Kalman filter, MVAR, ARCH/GARCH) impose strong assumptions about the type of the underlying dynamics. An approach based on optimization of a specially constructed regularized functional (describing the quality of data description in terms of a certain number of specified models) will be introduced. Based on this approach, several new adaptive mathematical methods for simultaneous EOF/SSA-like data-based dimension reduction and identification of hidden phases in high-dimensional time series will be presented. The methods exploit the topological structure of the analysed data and do not impose severe assumptions on the underlying dynamics. Special emphasis will be placed on the mathematical assumptions and numerical cost of the constructed methods. The application of the presented methods will first be demonstrated on a toy example, and the results will be compared with those obtained by standard approaches. The importance of accounting for the mathematical assumptions used in the analysis will be pointed out in this example. Finally, applications to the analysis of meteorological and climate data will be presented.
2008-01-01
A new method for extracting common themes from written text is introduced and applied to 1,165 open-ended self-descriptive narratives. Drawing on a lexical approach to personality, the most commonly used adjectives within narratives written by college students were identified using computerized text analytic tools. A factor analysis on the use of these adjectives in the self-descriptions produced a 7-factor solution consisting of psychologically meaningful dimensions. Some dimensions were unipolar (e.g., a Negativity factor, wherein most loaded items were negatively valenced adjectives); others were dimensional, in that semantically opposite words clustered together (e.g., a Sociability factor, wherein terms such as shy, outgoing, reserved, and loud all loaded in the same direction). The factors exhibited modest reliability across different types of writing samples and were correlated with self-reports and behaviors consistent with the dimensions. Similar analyses with additional content words (adjectives, adverbs, nouns, and verbs) yielded additional psychological dimensions associated with physical appearance, school, relationships, etc., in which people contextualize their self-concepts. The results suggest that the meaning extraction method is a promising strategy for determining the dimensions along which people think about themselves. PMID:18802499
Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension
NASA Astrophysics Data System (ADS)
Yan, Z.; Luan, X.
2017-12-01
Introduction: Empirical mode decomposition (EMD) is a noise suppression algorithm based on wave field separation, exploiting the scale differences between effective signal and noise. However, since the complexity of the real seismic wave field results in serious mode aliasing, denoising with this method alone is neither ideal nor effective. Based on the multi-scale decomposition characteristics of the EMD algorithm, combined with Hausdorff dimension constraints, we propose a new method for seismic random noise attenuation. First, we apply the EMD algorithm to adaptively decompose seismic data and obtain a series of intrinsic mode functions (IMFs) with different scales. Based on the difference in Hausdorff dimension between effective signal and random noise, we identify the IMF components mixed with random noise. Then we use a threshold correlation filtering process to separate the valid signal and random noise effectively. Compared with the traditional EMD method, the results show that the new method of seismic random noise attenuation has a better suppression effect. The implementation process: The EMD algorithm is used to decompose seismic signals into IMF sets and analyze their spectra. Since most of the random noise is high frequency noise, the IMF sets can be divided into three categories: the first category is the effective wave composition at the larger scales; the second category is the noise part at the smaller scales; the third category is the IMF components containing random noise. Then, the third kind of IMF component is processed by the Hausdorff dimension algorithm, and appropriate time window size, initial step and increment are selected to calculate the instantaneous Hausdorff dimension of each component. The dimension of the random noise is between 1.0 and 1.05, while the dimension of the effective wave is between 1.05 and 2.0. On this basis, according to the dimension difference between the random noise and the effective signal, we extract the sample points whose fractal dimension value is less than or equal to 1.05 from each IMF component to separate the residual noise. Using the IMF components after dimension filtering and the effective wave IMF components from the first selection for reconstruction, we obtain the de-noised result.
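Since the dimension threshold drives the filtering, the per-component fractal dimension estimate is the key ingredient; the Higuchi estimator below is a practical stand-in for the Hausdorff dimension (an assumption on our part, not the authors' stated estimator).

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Higuchi estimate of a 1D signal's fractal dimension: curve lengths
    L(k) at subsampling scales k obey L(k) ~ k^(-D), so D is the slope of
    log L(k) against log(1/k)."""
    n = len(x)
    ks = np.arange(1, k_max + 1)
    L = []
    for k in ks:
        Lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            diff = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)   # Higuchi normalization
            Lk.append(diff * norm / k)
        L.append(np.mean(Lk))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(L), 1)
    return slope

# An IMF component would then be flagged against the abstract's ~1.05 cutoff.
```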
Zhao, Yan; Chang, Cheng; Qin, Peibin; Cao, Qichen; Tian, Fang; Jiang, Jing; Li, Xianyu; Yu, Wenfeng; Zhu, Yunping; He, Fuchu; Ying, Wantao; Qian, Xiaohong
2016-01-21
Human plasma is a readily available clinical sample that reflects the status of the body in normal physiological and disease states. Although the wide dynamic range and immense complexity of plasma proteins are obstacles, comprehensive proteomic analysis of human plasma is necessary for biomarker discovery and further verification. Various methods such as immunodepletion, protein equalization and hyper fractionation have been applied to reduce the influence of high-abundance proteins (HAPs) and to reduce the high level of complexity. However, the depth at which the human plasma proteome has been explored in a relatively short time frame has been limited, which impedes the transfer of proteomic techniques to clinical research. Development of an optimal strategy is expected to improve the efficiency of human plasma proteome profiling. Here, five three-dimensional strategies combining HAP depletion (the 1st dimension) and protein fractionation (the 2nd dimension), followed by LC-MS/MS analysis (the 3rd dimension) were developed and compared for human plasma proteome profiling. Pros and cons of the five strategies are discussed for two issues: HAP depletion and complexity reduction. Strategies A and B used proteome equalization and tandem Seppro IgY14 immunodepletion, respectively, as the first dimension. Proteome equalization (strategy A) was biased toward the enrichment of basic and low-molecular weight proteins and had limited ability to enrich low-abundance proteins. By tandem removal of HAPs (strategy B), the efficiency of HAP depletion was significantly increased, whereas more off-target proteins were subtracted simultaneously. In the comparison of complexity reduction, strategy D involved a deglycosylation step before high-pH RPLC separation. However, the increase in sequence coverage did not increase the protein number as expected. Strategy E introduced SDS-PAGE separation of proteins, and the results showed oversampling of HAPs and identification of fewer proteins. Strategy C combined single Seppro IgY14 immunodepletion, high-pH RPLC fractionation and LC-MS/MS analysis. It generated the largest dataset, containing 1544 plasma protein groups and 258 newly identified proteins in a 30-h-machine-time analysis, making it the optimum three-dimensional strategy in our study. Further analysis of the integrated data from the five strategies showed identical distribution patterns in terms of sequence features and GO functional analysis with the 1929-plasma-protein dataset, further supporting the reliability of our plasma protein identifications. The characterization of 20 cytokines in the concentration range from sub-nanograms/milliliter to micrograms/milliliter demonstrated the sensitivity of the current strategies. Copyright © 2015 Elsevier B.V. All rights reserved.
Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archibald, Richard K; Fann, George I; Shelton Jr, William Allison
2011-01-01
We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.
Quantum quenches in two spatial dimensions using chain array matrix product states
A. J. A. James; Konik, R.
2015-10-15
We describe a method for simulating the real time evolution of extended quantum systems in two dimensions (2D). The method combines the benefits of integrability and matrix product states in one dimension to avoid several issues that hinder other applications of tensor based methods in 2D. In particular, it can be extended to infinitely long cylinders. As an example application we present results for quantum quenches in the 2D quantum [(2+1)-dimensional] Ising model. For quenches that cross a phase boundary, we find that the return probability shows nonanalyticities in time.
Mutually unbiased bases and semi-definite programming
NASA Astrophysics Data System (ADS)
Brierley, Stephen; Weigert, Stefan
2010-11-01
A complex Hilbert space of dimension six supports at least three but not more than seven mutually unbiased bases. Two computer-aided analytical methods to tighten these bounds are reviewed, based on a discretization of parameter space and on Gröbner bases. A third algorithmic approach is presented: the non-existence of more than three mutually unbiased bases in composite dimensions can be decided by a global optimization method known as semidefinite programming. The method is used to confirm that the spectral matrix cannot be part of a complete set of seven mutually unbiased bases in dimension six.
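The defining property being optimized is easy to state in code: two orthonormal bases of C^d are mutually unbiased when all squared overlaps equal 1/d. A small numpy check follows for illustration; the semidefinite program itself is not reproduced here.

```python
import numpy as np

def are_mutually_unbiased(B1, B2, tol=1e-10):
    """Bases given as columns of B1, B2 are mutually unbiased iff
    |<e_i|f_j>|^2 = 1/d for every pair of basis vectors."""
    d = B1.shape[0]
    overlaps = np.abs(B1.conj().T @ B2) ** 2
    return np.allclose(overlaps, 1.0 / d, atol=tol)

# The standard basis and the Fourier basis are mutually unbiased in any d
d = 6
F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)
print(are_mutually_unbiased(np.eye(d), F))   # True
```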
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of O(n^{-1/2}), the corresponding IRUQ converges at O(n^{-1}). IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
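For reference, the sliced inverse regression step that such approaches build on can be sketched in a few lines; this is an illustrative textbook version, not the paper's implementation.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Sliced inverse regression: eigen-directions of the covariance of
    slice means of the standardized predictors span the SDR estimate."""
    n, d = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    # Whiten predictors
    w, V = np.linalg.eigh(cov)
    cov_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    Z = (X - mu) @ cov_inv_sqrt
    # Slice on the sorted response and collect slice means
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((d, d))
    for s in slices:
        m = Z[s].mean(axis=0)
        M += len(s) / n * np.outer(m, m)
    # Leading eigenvectors (mapped back to the original scale)
    evals, evecs = np.linalg.eigh(M)
    top = evecs[:, np.argsort(evals)[::-1][:n_dirs]]
    return cov_inv_sqrt @ top
```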
Salvarzi, Elham; Choobineh, Alireza; Jahangiri, Mehdi; Keshavarzi, Sareh
2018-02-26
Craniometry is a subset of anthropometry, which measures the anatomical sizes of the head and face (craniofacial indicators). These dimensions are used in designing devices applied to the facial area, including respirators. This study was conducted to measure the craniofacial dimensions of Iranian male workers required for the design of face protective equipment. Facial anthropometric dimensions of 50 randomly selected Iranian male workers were measured by a photographic method using Digimizer version 4.1.1.0. Ten facial dimensions were extracted from the photographs and measured with the software. The mean, standard deviation and 5th, 50th and 95th percentiles for each dimension were determined, and the relevant data bank was established. The anthropometric data bank for the 10 dimensions required for respirator design was thus provided for the target group with photo-anthropometric methods. The results showed that Iranian face dimensions differ from those of other nations and ethnicities. In this pilot study, the anthropometric dimensions required for half-mask respirator design for Iranian male workers were obtained, and the resulting anthropometric tables could be useful for the design of personal face protective equipment.
Sarilita, Erli; Rynn, Christopher; Mossey, Peter A; Black, Sue; Oscandar, Fahmi
2018-05-01
This study investigated nose profile morphology and its relationship to the skull in Scottish subadult and Indonesian adult populations, with the aim of improving the accuracy of forensic craniofacial reconstruction. Samples of 86 lateral head cephalograms from Dundee Dental School (mean age 11.8 years) and 335 lateral head cephalograms from the Universitas Padjadjaran Dental Hospital, Bandung, Indonesia (mean age 24.2 years), were measured. The method of nose profile estimation based on skull morphology previously proposed by Rynn and colleagues in 2010 (FSMP 6:20-34) was tested in this study. Following this method, three nasal aperture-related craniometrics and six nose profile dimensions were measured from the cephalograms. To assess the accuracy of the method, the six nose profile dimensions were estimated from the three craniometric parameters using the published method and then compared to the actual nose profile dimensions. In the Scottish subadult population, no sexual dimorphism was evident in the measured dimensions. In contrast, sexual dimorphism in the Indonesian adult population was evident in all craniometric and nose profile dimensions; notably, males exhibited statistically significantly larger values than females. The published method by Rynn and colleagues (FSMP 6:20-34, 2010) performed better in the Scottish subadult population (maximum mean difference 2.35 mm) than in the Indonesian adult population (maximum mean difference 5.42 mm in males and 4.89 mm in females). In addition, regression formulae were derived to estimate nose profile dimensions from the craniometric measurements for the Indonesian adult population. The published method is not sufficiently accurate for use on the Indonesian population, so the derived formulae should be used there; in the Scottish subadult population, the published method was sufficiently reliable to be applied.
Wang, Qiuyan; Zhao, Wenxiang; Liang, Zhiqiang; Wang, Xibin; Zhou, Tianfeng; Wu, Yongbo; Jiao, Li
2018-03-01
The wear behavior of a grinding wheel has a significant influence on the work-surface topography, but a comprehensive, quantitative method for evaluating the wear condition of a grinding wheel has been lacking. In this paper, a fractal analysis method is used to investigate the wear behavior of a resin-bonded diamond wheel in Elliptical Ultrasonic Assisted Grinding (EUAG) of monocrystal sapphire, and a series of experiments on EUAG and conventional grinding (CG) is performed. The results show that the fractal dimension of the grinding wheel topography is highly correlated with the wear behavior, i.e., grain fracture, grain pullout, and wheel loading. An increase in cutting edge density on the wheel surface results in an increase in the fractal dimension, whereas an increase in grain pullout and wheel loading results in a decrease. The wheel topography in EUAG has a higher fractal dimension than that in CG before 60 passes due to better self-sharpening behavior, and then a smaller fractal dimension because of more serious wheel loading after 60 passes. Angle-dependent distribution analysis of profile fractal dimensions shows that the wheel surface topography is transformed from isotropic to anisotropic. These results indicate that the fractal analysis method can be used further in monitoring grinding wheel performance in EUAG. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krause, Josua; Dasgupta, Aritra; Fekete, Jean-Daniel
Dealing with the curse of dimensionality is a key challenge in high-dimensional data visualization. We present SeekAView to address three main gaps in the existing research literature. First, automated methods like dimensionality reduction or clustering suffer from a lack of transparency in letting analysts interact with their outputs in real-time to suit their exploration strategies. The results often suffer from a lack of interpretability, especially for domain experts not trained in statistics and machine learning. Second, exploratory visualization techniques like scatter plots or parallel coordinates suffer from a lack of visual scalability: it is difficult to present a coherent overview of interesting combinations of dimensions. Third, the existing techniques do not provide a flexible workflow that allows for multiple perspectives into the analysis process by automatically detecting and suggesting potentially interesting subspaces. In SeekAView we address these issues using suggestion based visual exploration of interesting patterns for building and refining multidimensional subspaces. Compared to the state-of-the-art in subspace search and visualization methods, we achieve higher transparency in showing not only the results of the algorithms, but also interesting dimensions calibrated against different metrics. We integrate a visually scalable design space with an iterative workflow guiding the analysts by choosing the starting points and letting them slice and dice through the data to find interesting subspaces and detect correlations, clusters, and outliers. We present two usage scenarios for demonstrating how SeekAView can be applied in real-world data analysis scenarios.
NASA Astrophysics Data System (ADS)
Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua
2017-04-01
Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. To solve this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. First, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running states. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract intrinsic structure information from both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated within the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve running state identification. The effectiveness of the proposed method is verified by a running state identification case for a gearbox, and the results confirm the improved accuracy of the running state identification.
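One ingredient of the mixed-domain feature set, the WPD energy spectrum, can be sketched with the PyWavelets package (assumed available; the wavelet choice and depth here are illustrative, not the paper's settings).

```python
import numpy as np
import pywt

def wpd_energy_spectrum(signal, wavelet="db4", level=3):
    """Wavelet packet energy spectrum: relative energy of each terminal
    node of the wavelet packet tree, ordered by frequency band."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(np.square(node.data))
                         for node in wp.get_level(level, order="freq")])
    return energies / energies.sum()   # one feature per frequency band
```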
Multi-dimension feature fusion for action recognition
NASA Astrophysics Data System (ADS)
Dong, Pei; Li, Jie; Dong, Junyu; Qi, Lin
2018-04-01
Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. The challenge for action recognition is to capture and fuse the multi-dimensional information in video data. In order to take these characteristics into account simultaneously, we present a novel method that fuses features of multiple dimensions, such as chromatic images, depth and optical flow fields. We build our model on multi-stream deep convolutional networks with the help of temporal segment networks and extract discriminative spatial and temporal features by fusing ConvNet towers across dimensions, in which different feature weights are assigned in order to take full advantage of this multi-dimensional information. Our architecture is trained and evaluated on the currently largest and most challenging benchmark, the NTU RGB-D dataset. The experiments demonstrate that our method outperforms the state-of-the-art methods.
Application of random coherence order selection in gradient-enhanced multidimensional NMR
NASA Astrophysics Data System (ADS)
Bostock, Mark J.; Nietlispach, Daniel
2016-03-01
Development of multidimensional NMR is essential to many applications, for example in high resolution structural studies of biomolecules. Multidimensional techniques enable separation of NMR signals over several dimensions, improving signal resolution, whilst also allowing identification of new connectivities. However, these advantages come at a significant cost. The Fourier transform theorem requires acquisition of a grid of regularly spaced points to satisfy the Nyquist criterion, while frequency discrimination and acquisition of a pure phase spectrum require acquisition of both quadrature components for each time point in every indirect (non-acquisition) dimension, adding a factor of 2^(N-1) to the number of free-induction decays which must be acquired, where N is the number of dimensions. Compressed sensing (CS) ℓ1-norm minimisation in combination with non-uniform sampling (NUS) has been shown to be extremely successful in overcoming the Nyquist criterion. Previously, maximum entropy reconstruction has also been used to overcome the limitation of frequency discrimination, processing data acquired with only one quadrature component at a given time interval, known as random phase detection (RPD), allowing a factor of two reduction in the number of points for each indirect dimension (Maciejewski et al., 2011, PNAS 108:16640). However, whilst this approach can be easily applied in situations where the quadrature components are acquired as amplitude-modulated data, the same principle is not easily extended to phase-modulated (P-/N-type) experiments where data are acquired in the form exp(iωt) or exp(-iωt), and which make up many of the multidimensional experiments used in modern NMR. Here we demonstrate a modification of the CS ℓ1-norm approach to allow random coherence order selection (RCS) for phase-modulated experiments; we generalise the nomenclature for RCS and RPD as random quadrature detection (RQD). With this method, the power of RQD can be extended to the full suite of experiments available to modern NMR spectroscopy, allowing resolution enhancements for all indirect dimensions; alone or in combination with NUS, RQD can be used to improve experimental resolution, or shorten experiment times, of considerable benefit to the challenging applications undertaken by modern NMR.
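The CS reconstruction underlying such approaches is, in its simplest form, ℓ1-regularized least squares. An iterative soft-thresholding sketch for complex-valued data follows; here A is assumed to be the sampling-restricted (inverse) Fourier operator as a dense matrix, which is an illustrative simplification rather than the paper's solver.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding (ISTA) for min ||Ax - b||^2 + lam*||x||_1
    with complex x, the basic form of CS l1-norm reconstruction."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - (A.conj().T @ (A @ x - b)) / L     # gradient step
        mag = np.abs(g)
        # Complex soft-thresholding: shrink magnitudes, keep phases
        x = g / np.maximum(mag, 1e-12) * np.maximum(mag - lam / L, 0.0)
    return x
```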
Birur, Badari; Thirthalli, Jagadisha; Janakiramaiah, N; Shelton, Richard C; Gangadhar, Bangalore N
2016-12-01
The pattern of symptom response to second generation antipsychotics (SGAs) has not been studied extensively. Understanding the time course of symptom response would help to rationally monitor patient progress. The aim was to determine the short-term differential time course of response of symptom dimensions of first-episode schizophrenia, viz. positive and negative symptoms and the 5 factors of anergia, thought disturbance, activation, paranoid-belligerence and depression, to treatment with the SGA olanzapine. 57 drug-naive patients with schizophrenia were treated for 4 weeks with olanzapine 10 mg/day, increased to 20 mg/day in 1 week. Weight was recorded, and ratings with the Positive and Negative Syndrome Scale (PANSS) and the Simpson Angus Scale (SAS) were performed weekly. 43 patients completed 4 weeks of assessment. Scores on all of the dimensions improved. By the end of week 1, only the positive syndrome, thought disturbance and paranoid-belligerence dimensions had improved. Maximum improvement was seen in paranoid-belligerence by week 1, followed by the positive syndrome in week 2, and depression at week 3. The percentage improvement in the positive syndrome was significantly greater than in the negative. Over 4 weeks there was a mean weight gain of 2 kg and there were significant extrapyramidal symptoms. Olanzapine produced reductions in all dimensions, but the pace of response of the individual dimensions differed. Longer-term studies comparing SGAs with first generation antipsychotics are needed. Copyright © 2016 Elsevier B.V. All rights reserved.
A k-space method for acoustic propagation using coupled first-order equations in three dimensions.
Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C
2009-09-01
A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
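The first of the listed features, spectral evaluation of spatial derivatives, reduces to multiplication by ik in Fourier space. A one-axis numpy sketch follows; the actual method applies 3D FFTs with temporal correction, staggered grids and a perfectly matched layer, none of which is reproduced here.

```python
import numpy as np

def spectral_derivative(f, dx, axis=0):
    """Fourier (spectral) evaluation of a spatial derivative on a periodic
    grid: transform, multiply by ik along the chosen axis, invert."""
    n = f.shape[axis]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    shape = [1] * f.ndim
    shape[axis] = n
    F = np.fft.fft(f, axis=axis)
    return np.real(np.fft.ifft(1j * k.reshape(shape) * F, axis=axis))
```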
Perception Expansion Training: An Approach to Conflict Reduction.
ERIC Educational Resources Information Center
Huseman, Richard C.
Interpersonal conflict in organizations is due to differences in perception of organizational sub-group systems operations. Such conflict can be reduced through implementation of "PET," perception expansion training. PET procedures will determine the dimensions of conflict situations and bring into play interacting group therapy which expands the…
ERIC Educational Resources Information Center
Lipps, Leann E. T.
To investigate two measures which have been used to assess children's attention to stimulus dimensions, component selection and dimension preference, both measures were administered to 38 3 1/2- to 5-year-olds and 20 5- to 6 1/2-year-olds. Seven to ten days after the dimension preference task was given, the component selection measure was…
Polimeni, Licia; Pastori, Daniele; Baratta, Francesco; Tozzi, Giulia; Novo, Marta; Vicinanza, Roberto; Troisi, Giovanni; Pannitteri, Gaetano; Ceci, Fabrizio; Scardella, Laura; Violi, Francesco; Angelico, Francesco; Del Ben, Maria
2017-12-01
Fatty liver and splenomegaly are typical features of genetic lysosomal acid lipase (LAL) deficiency. No data are available in adult patients with non-genetic reduction of LAL activity. We investigated the association between spleen dimensions and LAL activity in non-alcoholic fatty liver disease (NAFLD) patients, in whom a reduced LAL activity has been reported. We included 425 consecutive patients who underwent abdominal ultrasound to evaluate hepatic steatosis and spleen dimensions. LAL activity was measured with the dried blood spot method (Lalistat 2). NAFLD was present in 74.1% of screened patients. A higher median spleen longitudinal diameter (10.6 vs. 9.9 cm; p < 0.001) and spleen area (SA) (32.7 vs. 27.7 cm²; p < 0.001), together with a higher proportion of splenomegaly (17.8 vs. 5.5%, p = 0.001), were present in patients with NAFLD compared to those without. In NAFLD patients, median LAL activity was 0.9 nmol/spot/h. LAL activity was lower in the 56 patients with splenomegaly compared to those without (p = 0.009). At multivariable logistic regression analysis, age (above median, OR 0.344; p = 0.003), LAL activity (below median, OR 2.206, p = 0.028), and platelets (OR 0.101, p = 0.002) were significantly associated with splenomegaly. NAFLD patients disclose a relatively high prevalence of spleen enlargement and splenomegaly, which are significantly associated with a reduced LAL activity, suggesting that LAL may contribute to spleen enlargement in this setting.
Casimir force in Randall-Sundrum models with q+1 dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, Mariana; Turan, Ismail; Saad, Nasser
2008-09-01
We evaluate the Casimir force between two parallel plates in Randall-Sundrum (RS) scenarios extended by q compact dimensions. After giving exact expressions for one extra compact dimension (6D RS model), we generalize to an arbitrary number of compact dimensions. We present the complete calculation for both the two-brane scenario (RSI model) and the one-brane scenario (RSII model) using the method of summing over the modes. We investigate the effects of extra dimensions on the magnitude and sign of the force, and comment on limits for the size and number of the extra dimensions.
NASA Astrophysics Data System (ADS)
Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na
2016-10-01
Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on multi-fractal dimension and the harmony search algorithm is proposed. The multi-fractal dimension is adopted as the evaluation criterion of a feature subset, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI datasets. In addition, the proposed method is used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method obtains competitive results in terms of both prediction accuracy and the number of selected features.
Synthetic Minority Oversampling Technique and Fractal Dimension for Identifying Multiple Sclerosis
NASA Astrophysics Data System (ADS)
Zhang, Yu-Dong; Zhang, Yin; Phillips, Preetha; Dong, Zhengchao; Wang, Shuihua
Multiple sclerosis (MS) is a severe brain disease, and early detection allows timely treatment. The fractal dimension provides a statistical index of pattern changes with scale in a given brain image. In this study, our team used a susceptibility-weighted imaging technique to obtain 676 MS slices and 880 healthy slices. We used the synthetic minority oversampling technique to process the unbalanced dataset. Then, we used the Canny edge detector to extract distinguishing edges. The Minkowski-Bouligand dimension, a fractal dimension estimation method, was used to extract features from the edges. A single-hidden-layer neural network was used as the classifier. Finally, we proposed a three-segment representation biogeography-based optimization to train the classifier. Our method achieved a sensitivity of 97.78±1.29%, a specificity of 97.82±1.60% and an accuracy of 97.80±1.40%. The proposed method is superior to seven state-of-the-art methods in terms of sensitivity and accuracy.
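For readers unfamiliar with SMOTE, the core interpolation step can be sketched in a few lines of numpy; this is an illustrative re-implementation, not the authors' code, and k must be smaller than the minority class size.

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE: synthesize minority-class samples by interpolating
    between a minority point and one of its k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # exclude self-matches
    nn = np.argsort(d2, axis=1)[:, :k]           # k nearest neighbors
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i][rng.integers(k)]
        gap = rng.random()                       # random point on the segment
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)
```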
Laaksonen, Ari; Malila, Jussi; Nenes, Athanasios; Hung, Hui-Ming; Chen, Jen-Ping
2016-05-03
Surface porosity affects the ability of a substance to adsorb gases. The surface fractal dimension D is a measure that indicates the amount that a surface fills a space, and can thereby be used to characterize the surface porosity. Here we propose a new method for determining D, based on measuring both the water vapour adsorption isotherm of a given substance, and its ability to act as a cloud condensation nucleus when introduced to humidified air in aerosol form. We show that our method agrees well with previous methods based on measurement of nitrogen adsorption. Besides proving the usefulness of the new method for general surface characterization of materials, our results show that the surface fractal dimension is an important determinant in cloud drop formation on water insoluble particles. We suggest that a closure can be obtained between experimental critical supersaturation for cloud drop activation and that calculated based on water adsorption data, if the latter is corrected using the surface fractal dimension of the insoluble cloud nucleus.
Helicopter Control Energy Reduction Using Moving Horizontal Tail
Oktay, Tugrul; Sal, Firat
2015-01-01
The helicopter moving horizontal tail (i.e., MHT) strategy is applied in order to save helicopter flight control system (i.e., FCS) energy. For this purpose, complex, physics-based, control-oriented nonlinear helicopter models are used. The equations of the MHT are integrated into these models, and they are linearized together around a straight level flight condition. A specific variance constrained control strategy, namely output variance constrained control (i.e., OVC), is utilized for the helicopter FCS. Control energy savings due to this MHT idea with respect to a conventional helicopter are calculated. Parameters of the helicopter FCS and dimensions of the MHT are simultaneously optimized using a stochastic optimization method, namely simultaneous perturbation stochastic approximation (i.e., SPSA). Closed-loop analyses are performed in order to observe the improvement over classical control behaviors. PMID:26180841
Estimation of mean response via effective balancing score
Hu, Zonghui; Follmann, Dean A.; Wang, Naisyin
2015-01-01
Summary We introduce effective balancing scores for estimation of the mean response under a missing at random mechanism. Unlike conventional balancing scores, the effective balancing scores are constructed via dimension reduction free of model specification. Three types of effective balancing scores are introduced: those that carry the covariate information about the missingness, the response, or both. They lead to consistent estimation with little or no loss in efficiency. Compared to existing estimators, the effective balancing score based estimator relieves the burden of model specification and is the most robust. It is a near-automatic procedure which is most appealing when high dimensional covariates are involved. We investigate both the asymptotic and the numerical properties, and demonstrate the proposed method in a study on Human Immunodeficiency Virus disease. PMID:25797955
A predictor-corrector technique for visualizing unsteady flow
NASA Technical Reports Server (NTRS)
Banks, David C.; Singer, Bart A.
1995-01-01
We present a method for visualizing unsteady flow by displaying its vortices. The vortices are identified by using a vorticity-predictor pressure-corrector scheme that follows vortex cores. The cross-sections of a vortex at each point along the core can be represented by a Fourier series. A vortex can be faithfully reconstructed from the series as a simple quadrilateral mesh, or its reconstruction can be enhanced to indicate helical motion. The mesh can reduce the representation of the flow features by a factor of one thousand or more compared with the volumetric dataset. With this amount of reduction it is possible to implement an interactive system on a graphics workstation to permit a viewer to examine, in three dimensions, the evolution of the vortical structures in a complex, unsteady flow.
Multi-dimensional photonic states from a quantum dot
NASA Astrophysics Data System (ADS)
Lee, J. P.; Bennett, A. J.; Stevenson, R. M.; Ellis, D. J. P.; Farrer, I.; Ritchie, D. A.; Shields, A. J.
2018-04-01
Quantum states superposed across multiple particles or degrees of freedom offer an advantage in the development of quantum technologies. Creating these states deterministically and with high efficiency is an ongoing challenge. A promising approach is the repeated excitation of multi-level quantum emitters, which have been shown to naturally generate light with quantum statistics. Here we describe how to create one class of higher-dimensional quantum state, a so-called W-state, which is superposed across multiple time bins. We do this by repeated Raman scattering of photons from a charged quantum dot in a pillar microcavity. We show this method can be scaled to larger dimensions with no reduction in coherence or single-photon character. We explain how to extend this work to enable the deterministic creation of arbitrary time-bin encoded qudits.
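For reference, the N-bin time-bin W-state is an equal-amplitude superposition of a single excitation across N bins; a small numpy sketch of the state vector follows.

    import numpy as np

    def w_state(n):
        """N-qubit W state: equal superposition of single-excitation basis states."""
        psi = np.zeros(2 ** n)
        for k in range(n):
            psi[1 << k] = 1.0          # basis index with a single 1 in bin k
        return psi / np.sqrt(n)

    psi = w_state(3)
    print(np.nonzero(psi)[0])          # [1 2 4] -> |001>, |010>, |100>
    print(np.isclose(psi @ psi, 1.0))  # normalised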
Bittman, Barry; Bruhn, Karl T; Stevens, Christine; Westengard, James; Umbach, Paul O
2003-01-01
This controlled, prospective, randomized study examined the clinical and potential economic impact of a 6-session Recreational Music-making (RMM) protocol on burnout and mood dimensions, as well as on Total Mood Disturbance (TMD), in an interdisciplinary group of long-term care workers. A total of 112 employees participated in a 6-session RMM protocol focusing on building support, communication, and interdisciplinary respect utilizing group drumming and keyboard accompaniment. Changes in burnout and mood dimensions were assessed with the Maslach Burnout Inventory and the Profile of Mood States, respectively. Cost savings were projected by an independent consulting firm, which developed an economic impact model. Statistically significant reductions of multiple burnout and mood dimensions, as well as TMD scores, were noted. Economic-impact analysis projected cost savings of $89,100 for a single typical 100-bed facility, with total annual potential savings to the long-term care industry of $1.46 billion. A cost-effective, 6-session RMM protocol reduces burnout and mood dimensions, as well as TMD, in long-term care workers.
NASA Technical Reports Server (NTRS)
Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)
2001-01-01
The rapid increase in digital data volumes from new and existing sensors creates a need for efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates the three fractal dimension measurement methods, isarithm, variogram, and triangular prism, along with the spatial autocorrelation measurement methods Moran's I and Geary's C, as implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of surfaces with higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Similar to the fractal techniques, the spatial autocorrelation techniques are found to be useful to measure complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
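As an illustration of the spatial autocorrelation measures evaluated here, Moran's I for a raster with rook (4-neighbour) contiguity can be computed as below; this is a textbook implementation, not the ICAMS code.

    import numpy as np

    def morans_i(img):
        """Moran's I with rook (4-neighbour) contiguity on a 2-D raster."""
        z = img - img.mean()
        # Horizontal and vertical neighbour pairs, counted in both directions.
        num = 2 * (z[:, :-1] * z[:, 1:]).sum()
        num += 2 * (z[:-1, :] * z[1:, :]).sum()
        w_sum = 2 * z[:, :-1].size + 2 * z[:-1, :].size
        n = z.size
        return (n / w_sum) * num / (z ** 2).sum()

    rng = np.random.default_rng(1)
    smooth = np.cumsum(rng.normal(size=(64, 64)), axis=1)  # spatially correlated
    noise = rng.normal(size=(64, 64))                      # uncorrelated
    print(round(morans_i(smooth), 2), round(morans_i(noise), 2))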
Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.; ...
2016-09-18
This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.
Generalised Eisenhart lift of the Toda chain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cariglia, Marco, E-mail: marco@iceb.ufop.br; Gibbons, Gary, E-mail: g.w.gibbons@damtp.cam.ac.uk
The Toda chain of nearest neighbour interacting particles on a line can be described either in terms of geodesic motion on a manifold with one extra dimension, the Eisenhart lift, or in terms of geodesic motion in a symmetric space with several extra dimensions. We examine the relationship between these two realisations and discover that the symmetric space is a generalised, multi-particle Eisenhart lift of the original problem that reduces to the standard Eisenhart lift. Such a generalised Eisenhart lift acts as an inverse Kaluza-Klein reduction, promoting coupling constants to momenta in higher dimension. In particular, isometries of the generalised lift metric correspond to energy preserving transformations that mix coordinates and coupling constants. A by-product of the analysis is that the lift of the Toda Lax pair can be used to construct higher rank Killing tensors for both the standard and generalised lift metrics.
Challenges in Characterizing and Controlling Complex Cellular Systems
NASA Astrophysics Data System (ADS)
Wikswo, John
2011-03-01
Multicellular dynamic biological processes such as developmental differentiation, wound repair, disease, aging, and even homeostasis can be represented by trajectories through a phase space whose extent reflects the genetic, post-translational, and metabolic complexity of the process - easily extending to tens of thousands of dimensions. Intra- and inter-cellular sensing and regulatory systems and their nested, redundant, and non-linear feed-forward and feed-back controls create high-dimensioned attractors in this phase space. Metabolism provides free energy to drive non-equilibrium processes and dynamically reconfigure attractors. Studies of single molecules and cells provide only minimalist projections onto a small number of axes. It may be difficult to infer larger-scale emergent behavior from linearized experiments that perform only small amplitude perturbations on a limited number of the dimensions. Complete characterization may succeed for bounded component problems, such as an individual cell cycle or signaling cascade, but larger systems problems will require a coarse-grained approach. Hence a new experimental and analytical framework is needed. Possibly one could utilize high-amplitude, multi-variable driving of the system to infer coarse-grained, effective models, which in turn can be tested by their ability to control systems behavior. Navigation at will between attractors in a high-dimensioned dynamical system will provide not only detailed knowledge of the shape of attractor basins, but also measures of underlying stochastic events such as noise in gene expression or receptor binding and how both affect system stability and robustness. Needed for this are wide-bandwidth methods to sense and actuate large numbers of intracellular and extracellular variables and automatically and rapidly infer dynamic control models. The success of this approach may be determined by how broadly the sensors and actuators can span the full dimensionality of the phase space. Supported by the Defense Threat Reduction Agency HDTRA-09-1-0013, NIH National Institute on Drug Abuse RC2DA028981, the National Academies Keck Futures Initiative, and the Vanderbilt Institute for Integrative Biosystems Research and Education.
Counselling for burnout in Norwegian doctors: one year cohort study.
Rø, Karin E Isaksson; Gude, Tore; Tyssen, Reidar; Aasland, Olaf G
2008-11-11
To investigate levels and predictors of change in dimensions of burnout after an intervention for stressed doctors. Cohort study followed by self reported assessment at one year. Norwegian resource centre. 227 doctors participating in counselling intervention, 2003-5. Counselling (lasting one day (individual) or one week (group based)) aimed at motivating reflection on and acknowledgement of the doctors' situation and personal needs. Levels of burnout (Maslach burnout inventory) and predictors of reduction in emotional exhaustion investigated by linear regression. 185 doctors (81%, 88 men, 97 women) completed one year follow-up. The mean level of emotional exhaustion (scale 1-5) was significantly reduced from 3.00 (SD 0.94) to 2.53 (SD 0.76) (t=6.76, P<0.001), similar to the level found in a representative sample of 390 Norwegian doctors. Participants had reduced their working hours by 1.6 hours/week (SD 11.4). There was a considerable reduction in the proportion of doctors on full time sick leave, from 35% (63/182) at baseline to 6% (10/182) at follow-up and a parallel increase in the proportion who had undergone psychotherapy, from 20% (36/182) to 53% (97/182). In the whole cohort, reduction in emotional exhaustion was independently associated with reduced number of work hours/week (beta=0.17, P=0.03), adjusted for sex, age, and personality dimensions. Among men "satisfaction with the intervention" (beta=0.25, P=0.04) independently predicted reduction in emotional exhaustion. A short term counselling intervention could contribute to reduction in emotional exhaustion in doctors. This was associated with reduced working hours for the whole cohort and, in men, was predicted by satisfaction with the intervention.
Sparse Representation Based Classification with Structure Preserving Dimension Reduction
2014-03-13
The Multiplicative Zak Transform, Dimension Reduction, and Wavelet Analysis of LIDAR Data
2010-01-01
Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ochilov, S.; Alam, M. S.; Bal, A.
2006-05-01
The Fukunaga-Koontz Transform based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space where the feature classes have complementary eigenvectors. A dimensionality reduction technique based on this complementary eigenvector analysis can be described under two classes, desired class and background clutter, such that each basis function best represents one class while carrying the least amount of information from the second class. By selecting a few eigenvectors which are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces data size, it provides significant advantages for near real time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computation burden via the dimensionality reduction processes. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
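A compact sketch of the Fukunaga-Koontz construction: whiten the sum of the two class correlation matrices, then eigendecompose one whitened class matrix; the whitened class matrices share eigenvectors whose eigenvalues sum to one, so the leading eigenvectors for the desired class simultaneously carry the least clutter information. Normalisation details below are common-practice assumptions, not necessarily the authors' choices.

    import numpy as np

    def fkt_basis(X1, X2, k):
        """Fukunaga-Koontz transform: top-k basis vectors for class 1.

        X1, X2 : (n_samples, n_features) arrays for desired class and clutter.
        """
        S1 = X1.T @ X1 / len(X1)                  # class correlation matrices
        S2 = X2.T @ X2 / len(X2)
        d, U = np.linalg.eigh(S1 + S2)
        P = U @ np.diag(d ** -0.5) @ U.T          # whitening operator for S1+S2
        lam, V = np.linalg.eigh(P @ S1 @ P.T)     # eigenvalues of whitened S1
        # Whitened S2 has eigenvalues 1 - lam on the same eigenvectors, so the
        # largest-lam directions represent class 1 best and clutter worst.
        order = np.argsort(lam)[::-1][:k]
        return P.T @ V[:, order]                  # map back to input space

    rng = np.random.default_rng(0)
    B = fkt_basis(rng.normal(size=(200, 10)), rng.normal(size=(300, 10)), k=3)
    print(B.shape)                                # (10, 3)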
Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions
Burke, Timothy P.; Kiedrowski, Brian C.
2017-12-11
Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface, which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history-based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.
Lipok, Christian; Hippler, Jörg; Schmitz, Oliver J
2018-02-09
A two-dimensional GC (2D-GC) method was developed and coupled to an ion mobility-high resolution mass spectrometer, which enables the separation of complex samples in four dimensions (2D-GC, ion mobility spectrometry and mass spectrometry). This approach works as a continuous multiheart-cutting GC system (GC+GC), using a long modulation time of 20 s, which allows the complete transfer of most of the first-dimension peaks to the second-dimension column without fractionation, in comparison to comprehensive two-dimensional gas chromatography (GCxGC). Hence, each compound delivers only one peak in the second dimension, which simplifies the data handling even when ion mobility spectrometry as a third and mass spectrometry as a fourth dimension are introduced. The analysis of a plant extract from Calendula officinalis shows the separation power of this four-dimensional separation method. The introduction of ion mobility spectrometry provides an additional separation dimension and makes it possible to determine collision cross sections (CCS) of the analytes as a further physicochemical constant supporting the identification. A CCS database with more than 800 standard substances including drug-like compounds and pesticides was used for CCS database searching in this work. Copyright © 2017 Elsevier B.V. All rights reserved.
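Identification against a CCS library of the kind described reduces to a tolerance search over mass and collision cross section; a minimal sketch follows, with made-up entries and tolerances rather than values from the actual 800-compound database.

    # Minimal CCS library search: match a measured (m/z, CCS) pair against
    # database entries within given tolerances. Entries and tolerances are
    # illustrative only.
    LIBRARY = [
        {"name": "caffeine", "mz": 195.0877, "ccs": 138.1},
        {"name": "atrazine", "mz": 216.1010, "ccs": 149.6},
    ]

    def match(mz, ccs, mz_ppm=5.0, ccs_pct=2.0):
        hits = []
        for e in LIBRARY:
            if (abs(mz - e["mz"]) / e["mz"] * 1e6 <= mz_ppm
                    and abs(ccs - e["ccs"]) / e["ccs"] * 100 <= ccs_pct):
                hits.append(e["name"])
        return hits

    print(match(195.0879, 139.0))   # ['caffeine']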
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baguet, A.; Pope, Christopher N.; Samtleben, H.
We prove an old conjecture by Duff, Nilsson, Pope and Warner asserting that the NSNS sector of supergravity (and more generally the bosonic string) allows for a consistent Pauli reduction on any d-dimensional group manifold G, keeping the full set of gauge bosons of the G×G isometry group of the bi-invariant metric on G. The main tool of the construction is a particular generalised Scherk–Schwarz reduction ansatz in double field theory which we explicitly construct in terms of the group's Killing vectors. Examples include the consistent reduction from ten dimensions on S³×S³ and on similar product spaces. The construction is another example of globally geometric non-toroidal compactifications inducing non-geometric fluxes.
NASA Astrophysics Data System (ADS)
Chen, Da-Ming; Xu, Y. F.; Zhu, W. D.
2018-05-01
An effective and reliable damage identification method for plates with a continuously scanning laser Doppler vibrometer (CSLDV) system is proposed. A new constant-speed scan algorithm is proposed to create a two-dimensional (2D) scan trajectory and automatically scan a whole plate surface. Full-field measurement of the plate can be achieved by applying the algorithm to the CSLDV system. Based on the new scan algorithm, the demodulation method is extended from one dimension for beams to two dimensions for plates to obtain a full-field operating deflection shape (ODS) of the plate from velocity response measured by the CSLDV system. The full-field ODS of an associated undamaged plate is obtained by using polynomials with proper orders to fit the corresponding full-field ODS from the demodulation method. A curvature damage index (CDI) using differences between curvatures of ODSs (CODSs) associated with ODSs that are obtained by the demodulation method and the polynomial fit is proposed to identify damage. An auxiliary CDI obtained by averaging CDIs at different excitation frequencies is defined to further assist damage identification. An experiment on an aluminum plate with damage in the form of 10.5% thickness reduction in a damage area of 0.86% of the whole scan area is conducted to investigate the proposed method. Six frequencies close to natural frequencies of the plate and one randomly selected frequency are used as sinusoidal excitation frequencies. Two 2D scan trajectories, i.e., a horizontally moving 2D scan trajectory and a vertically moving 2D scan trajectory, are used to obtain ODSs, CODSs, and CDIs of the plate. The damage is successfully identified near areas with consistently high values of CDIs at different excitation frequencies along the two 2D scan trajectories; the damage area is also identified by auxiliary CDIs.
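A rough sketch of the curvature damage index: fit the measured full-field ODS with a low-order 2-D polynomial to stand in for the undamaged plate, take curvatures as second spatial derivatives, and square the difference; averaging the index over excitation frequencies gives the auxiliary CDI. The grid size and polynomial order below are illustrative choices, not the paper's.

    import numpy as np

    def curvature(field, dx=1.0):
        """Curvature proxy: Laplacian of an operating deflection shape."""
        d2x = np.gradient(np.gradient(field, dx, axis=0), dx, axis=0)
        d2y = np.gradient(np.gradient(field, dx, axis=1), dx, axis=1)
        return d2x + d2y

    def cdi(ods, order=5):
        """Curvature damage index from one measured full-field ODS."""
        ny, nx = ods.shape
        y, x = np.mgrid[0:ny, 0:nx]
        # Least-squares 2-D polynomial fit as the 'undamaged' reference ODS.
        terms = [x ** i * y ** j for i in range(order + 1)
                 for j in range(order + 1 - i)]
        A = np.stack([t.ravel() for t in terms], axis=1).astype(float)
        coef, *_ = np.linalg.lstsq(A, ods.ravel(), rcond=None)
        fit = (A @ coef).reshape(ods.shape)
        return (curvature(ods) - curvature(fit)) ** 2

    # Auxiliary CDI: average the index over several excitation frequencies.
    shapes = [np.sin((k + 1) * np.linspace(0, np.pi, 50))[:, None]
              * np.sin(np.linspace(0, np.pi, 60))[None, :] for k in range(3)]
    aux = np.mean([cdi(s) for s in shapes], axis=0)
    print(aux.shape)   # (50, 60)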
Modelling spatiotemporal change using multidimensional arrays
NASA Astrophysics Data System (ADS)
Lu, Meng; Appel, Marius; Pebesma, Edzer
2017-04-01
The large variety of remote sensors, model simulations, and in-situ records provide great opportunities to model environmental change. The massive amount of high-dimensional data calls for methods to integrate data from various sources and to analyse spatiotemporal and thematic information jointly. An array is a collection of elements ordered and indexed in arbitrary dimensions, which naturally represents spatiotemporal phenomena that are identified by their geographic locations and recording time. In addition, array regridding (e.g., resampling, down-/up-scaling), dimension reduction, and spatiotemporal statistical algorithms are readily applicable to arrays. However, the role of arrays in big geoscientific data analysis has not been systematically studied: How can arrays discretise continuous spatiotemporal phenomena? How can arrays facilitate the extraction of multidimensional information? How can arrays provide a clean, scalable and reproducible change modelling process that is communicable between mathematicians, computer scientists, Earth system scientists and stakeholders? This study emphasises detecting spatiotemporal change using satellite image time series. Current change detection methods using satellite image time series commonly analyse data in separate steps: 1) forming a vegetation index, 2) conducting time series analysis on each pixel, and 3) post-processing and mapping time series analysis results, which does not consider spatiotemporal correlations and ignores much of the spectral information. Multidimensional information can be better extracted by jointly considering spatial, spectral, and temporal information. To approach this goal, we use principal component analysis to extract multispectral information and spatial autoregressive models to account for spatial correlation in residual-based time series structural change modelling. We also discuss the potential of multivariate non-parametric time series structural change methods, hierarchical modelling, and extreme event detection methods to model spatiotemporal change. We show how array operations can facilitate expressing these methods, and how the open-source array data management and analytics software SciDB and R can be used to scale the process and make it easily reproducible.
Dimensions of Posttraumatic Growth in Patients With Cancer: A Mixed Method Study.
Heidarzadeh, Mehdi; Rassouli, Maryam; Brant, Jeannine M; Mohammadi-Shahbolaghi, Farahnaz; Alavi-Majd, Hamid
2017-08-12
Posttraumatic growth (PTG) refers to positive outcomes after exposure to stressful events. Previous studies suggest cross-cultural differences in the nature and amount of PTG. The aim of this study was to explore different dimensions of PTG in Iranian patients with cancer. A mixed method study with convergent parallel design was applied to clarify and determine dimensions of PTG. Using the Posttraumatic Growth Inventory (PTGI), confirmatory factor analysis was used to quantitatively identify dimensions of PTG in 402 patients with cancer. Simultaneously, phenomenological methodology (in-depth interview with 12 patients) was used to describe and interpret the lived experiences of cancer patients in the qualitative part of the study. Five dimensions of PTGI were confirmed from the original PTGI. Qualitatively, new dimensions of PTG emerged including "inner peace and other positive personal attributes," "finding meaning of life," "being a role model," and "performing health promoting behaviors." Results of the study indicated that PTG is a 5-dimensional concept with a broad range of subthemes for Iranian cancer patients and that the PTGI did not reflect all growth dimensions in Iranian cancer patients. Awareness of PTG dimensions can enable nurses to guide their use as coping strategies and provide context for positive changes in patients to promote quality care.
Zuendorf, Gerhard; Kerrouche, Nacer; Herholz, Karl; Baron, Jean-Claude
2003-01-01
Principal component analysis (PCA) is a well-known technique for reduction of dimensionality of functional imaging data. PCA can be looked at as the projection of the original images onto a new orthogonal coordinate system with lower dimensions. The new axes explain the variance in the images in decreasing order of importance, showing correlations between brain regions. We used an efficient, stable and analytical method to work out the PCA of Positron Emission Tomography (PET) images of 74 normal subjects using [(18)F]fluoro-2-deoxy-D-glucose (FDG) as a tracer. Principal components (PCs) and their relation to age effects were investigated. Correlations between the projections of the images on the new axes and the age of the subjects were carried out. The first two PCs could be identified as being the only PCs significantly correlated to age. The first principal component, which explained 10% of the data set variance, was reduced only in subjects of age 55 or older and was related to loss of signal in and adjacent to ventricles and basal cisterns, reflecting expected age-related brain atrophy with enlarging CSF spaces. The second principal component, which accounted for 8% of the total variance, had high loadings from prefrontal, posterior parietal and posterior cingulate cortices and showed the strongest correlation with age (r = -0.56), entirely consistent with previously documented age-related declines in brain glucose utilization. Thus, our method showed that the effect of aging on brain metabolism has at least two independent dimensions. This method should have widespread applications in multivariate analysis of brain functional images. Copyright 2002 Wiley-Liss, Inc.
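The analysis pattern described, PCA of a subjects-by-voxels matrix followed by correlating component scores with age, can be sketched with an SVD; the data shapes and the synthetic age effect below are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n_subj, n_vox = 74, 5000
    age = rng.uniform(20, 80, n_subj)
    images = rng.normal(size=(n_subj, n_vox))
    images[:, :100] -= 0.02 * (age[:, None] - age.mean())   # age-related signal

    X = images - images.mean(axis=0)          # centre voxel-wise
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U * s                            # subject projections on the PCs
    explained = s ** 2 / (s ** 2).sum()

    for k in range(3):
        r = np.corrcoef(scores[:, k], age)[0, 1]
        print(f"PC{k + 1}: {explained[k]:.1%} variance, r(age) = {r:+.2f}")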
Lee, Yii-Ching; Zeng, Pei-Shan; Huang, Chih-Hsuan; Wu, Hsin-Hung
2018-01-01
This study uses the decision-making trial and evaluation laboratory method to identify critical dimensions of the safety attitudes questionnaire in Taiwan in order to improve the patient safety culture from experts' viewpoints. Teamwork climate, stress recognition, and perceptions of management are three causal dimensions, while safety climate, job satisfaction, and working conditions are receiving dimensions. In practice, improvements to effect-based (receiving) dimensions may yield little effect even when a great amount of effort has been invested. In contrast, improving a causal dimension not only improves that dimension itself but also results in better performance of the other dimension(s) directly affected by it. Teamwork climate and perceptions of management are found to be the most critical dimensions because they are both causal dimensions and have significant influences on four dimensions apiece. It is worth noting that job satisfaction is the only dimension affected by the other dimensions. In order to effectively enhance the patient safety culture for healthcare organizations, teamwork climate and perceptions of management should be closely monitored. PMID:29686825
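The decision-making trial and evaluation laboratory (DEMATEL) computation behind this causal/receiving classification is standard: normalise the expert direct-influence matrix, form the total-relation matrix T = N(I - N)^-1, and read causal versus receiving roles off the sign of D - R. The 3x3 influence matrix below is made up for illustration.

    import numpy as np

    # Illustrative direct-influence matrix among three dimensions (made up).
    A = np.array([[0.0, 3.0, 2.0],
                  [1.0, 0.0, 3.0],
                  [1.0, 1.0, 0.0]])

    N = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())   # normalise
    T = N @ np.linalg.inv(np.eye(len(A)) - N)               # total relation
    D = T.sum(axis=1)        # influence given
    R = T.sum(axis=0)        # influence received
    for i, name in enumerate(["dim A", "dim B", "dim C"]):
        role = "causal" if D[i] - R[i] > 0 else "receiving"
        print(f"{name}: prominence D+R = {D[i] + R[i]:.2f}, "
              f"relation D-R = {D[i] - R[i]:+.2f} ({role})")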
The SYMLOG Dimensions and Small Group Conflict.
ERIC Educational Resources Information Center
Wall, Victor D., Jr.; Galanes, Gloria J.
1986-01-01
Explores the potential usefulness of R.F. Bales' systematic method for the multiple level observation of groups (SYMLOG) by testing the predictive capability of the three SYMLOG dimensions and the amount of member dispersion on each dimension with the amounts of conflict, reported satisfaction, styles of conflict management, and quality of…
Mathematics Teachers' Criteria of Dimension
ERIC Educational Resources Information Center
Ural, Alattin
2014-01-01
The aim of the study is to determine mathematics teachers' decisions about dimensions of the geometric figures, criteria of dimension and consistency of decision-criteria. The research is a qualitative research and the model applied in the study is descriptive method on the basis of general scanning model. 15 mathematics teachers attended the…
A Community Study of Association between Parenting Dimensions and Externalizing Behaviors
ERIC Educational Resources Information Center
Sharma, Vandana; Sandhu, Gurpreet K.
2006-01-01
Background: Association between parenting dimensions and externalizing behaviors in children was examined. Method: Data on children from the middle class families of Patiala (N = 240) were collected from schools and families. Parents completed questionnaires on parenting dimensions and externalizing behaviors of children. Results: Analysis of…
Development of the cardiovascular system: an interactive video computer program.
Smolen, A. J.; Zeiset, G. E.; Beaston-Wimmer, P.
1992-01-01
The major aim of this project is to provide interactive video computer based courseware that can be used by the medical student and others to supplement his or her learning of this very important aspect of basic biomedical education. Embryology is a science that depends on the ability of the student to visualize dynamic changes in structure which occur in four dimensions--X, Y, Z, and time. Traditional didactic methods, including lectures employing photographic slides and laboratories employing histological sections, are limited to two dimensions--X and Y. The third spatial dimension and the dimension of time cannot be readily illustrated using these methods. Computer based learning, particularly when used in conjunction with interactive video, can be used effectively to illustrate developmental processes in all four dimensions. This methodology can also be used to foster the critical skills of independent learning and problem solving. PMID:1483013
Zhang, Li; Qian, Liqiang; Ding, Chuntao; Zhou, Weida; Li, Fanzhang
2015-09-01
The family of discriminant neighborhood embedding (DNE) methods comprises typical graph-based methods for dimension reduction and has been successfully applied to face recognition. This paper proposes a new variant of DNE, called similarity-balanced discriminant neighborhood embedding (SBDNE), and applies it to cancer classification using gene expression data. By introducing a novel similarity function, SBDNE treats pairs of data points differently depending on whether they belong to the same class or to different classes. The homogeneous and heterogeneous neighbors are selected according to the new similarity function instead of the Euclidean distance. SBDNE constructs two adjacency graphs, a between-class graph and a within-class graph, using the new similarity function. From these two graphs, we can generate the local between-class scatter and the local within-class scatter, respectively. Thus, SBDNE can maximize the between-class scatter and simultaneously minimize the within-class scatter to find the optimal projection matrix. Experimental results on six microarray datasets show that SBDNE is a promising method for cancer classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
Chadeau-Hyam, Marc; Campanella, Gianluca; Jombart, Thibaut; Bottolo, Leonardo; Portengen, Lutzen; Vineis, Paolo; Liquet, Benoit; Vermeulen, Roel C H
2013-08-01
Recent technological advances in molecular biology have given rise to numerous large-scale datasets whose analysis imposes serious methodological challenges mainly relating to the size and complex structure of the data. Considerable experience in analyzing such data has been gained over the past decade, mainly in genetics, from the Genome-Wide Association Study era, and more recently in transcriptomics and metabolomics. Building upon the corresponding literature, we provide here a nontechnical overview of well-established methods used to analyze OMICS data within three main types of regression-based approaches: univariate models including multiple testing correction strategies, dimension reduction techniques, and variable selection models. Our methodological description focuses on methods for which ready-to-use implementations are available. We describe the main underlying assumptions, the main features, and advantages and limitations of each of the models. This descriptive summary constitutes a useful tool for driving methodological choices while analyzing OMICS data, especially in environmental epidemiology, where the emergence of the exposome concept clearly calls for unified methods to analyze marginally and jointly complex exposure and OMICS datasets. Copyright © 2013 Wiley Periodicals, Inc.
Diverse Power Iteration Embeddings and Its Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang H.; Yoo S.; Yu, D.
2014-12-14
Spectral embedding is one of the most effective dimension reduction algorithms in data mining. However, its computational complexity has to be mitigated in order to apply it to real-world large-scale data analysis. Much research has focused on developing approximate spectral embeddings which are more efficient but far less effective. This paper proposes Diverse Power Iteration Embeddings (DPIE), which not only retain the efficiency of power iteration methods but also produce a series of diverse and more effective embedding vectors. We test this novel method by applying it to various data mining applications (e.g. clustering, anomaly detection and feature selection) and evaluating their performance improvements. The experimental results show our proposed DPIE is more effective than popular spectral approximation methods, and obtains similar quality to classic spectral embedding derived from eigen-decompositions. Moreover, it is extremely fast on big data applications. For example, in terms of clustering results, DPIE achieves as good as 95% of classic spectral clustering on complex datasets but is 4000+ times faster in a limited-memory environment.
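The core of a power-iteration embedding is cheap to sketch: iterate the row-normalised affinity matrix on a random vector and stop early, since the intermediate iterate (unlike the converged leading eigenvector, which is constant) still separates clusters. The sketch below produces one embedding vector; the additional machinery DPIE uses to obtain several diverse vectors is not reproduced here.

    import numpy as np

    def power_iteration_embedding(X, sigma=1.0, iters=30):
        """One truncated power-iteration embedding vector for rows of X."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))              # affinity matrix
        W /= W.sum(axis=1, keepdims=True)               # row-normalise
        v = np.random.default_rng(0).random(len(X))
        for _ in range(iters):                          # truncated power iteration
            v = W @ v
            v /= np.abs(v).max()
        return v

    # Two well-separated Gaussian blobs map to two distinct value bands.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(4, 0.3, (30, 2))])
    v = power_iteration_embedding(X)
    print(v[:30].mean(), v[30:].mean())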
Cheng, Weiwei; Sun, Da-Wen; Pu, Hongbin; Wei, Qingyi
2017-04-15
The feasibility of hyperspectral imaging (HSI) (400-1000 nm) for tracing the chemical spoilage extent of the raw meat used for two kinds of processed meats was investigated. Calibration models established separately for salted and cooked meats using the full wavebands showed good results, with determination coefficients in prediction (R^2_P) of 0.887 and 0.832, respectively. To simplify the calibration models, two variable selection methods were used and compared. The results showed that genetic algorithm-partial least squares (GA-PLS), with as many continuous wavebands selected as possible, always had the better performance. The potential of HSI to develop one multispectral system for simultaneously tracing the chemical spoilage extent of the two kinds of processed meats was also studied. A good result, with an R^2_P of 0.854, was obtained using GA-PLS as the dimension reduction method, which was thus used to visualize total volatile base nitrogen (TVB-N) contents corresponding to each pixel of the image. Copyright © 2016 Elsevier Ltd. All rights reserved.
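A GA-PLS waveband selection loop rests on a fitness function like the one sketched here, which cross-validates a PLS regression restricted to the wavebands that a binary chromosome switches on; scikit-learn is assumed, and the GA operators (selection, crossover, mutation) are omitted.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    def pls_fitness(mask, X, y, n_components=5):
        """R^2_P proxy for one GA chromosome: CV of PLS on selected bands."""
        if mask.sum() < n_components:
            return -np.inf
        Xs = X[:, mask.astype(bool)]
        y_hat = cross_val_predict(PLSRegression(n_components), Xs, y, cv=5)
        ss_res = ((y - y_hat.ravel()) ** 2).sum()
        ss_tot = ((y - y.mean()) ** 2).sum()
        return 1.0 - ss_res / ss_tot

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 200))                 # 200 wavebands
    y = X[:, 50:60].sum(axis=1) + 0.1 * rng.normal(size=120)
    mask = np.zeros(200, dtype=int)
    mask[45:65] = 1                                 # one candidate chromosome
    print(round(pls_fitness(mask, X, y), 2))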
NASA Technical Reports Server (NTRS)
Olson, S. L.; Beeson, H.; Haas, J.
2001-01-01
One of the performance goals for NASA's enterprise of Human Exploration and Development of Space (HEDS) is to develop methods, databases, and validating tests for material flammability characterization, hazard reduction, and fire detection/suppression strategies for spacecraft and extraterrestrial habitats. This work addresses these needs by applying the fundamental knowledge gained from low stretch experiments to the development of a normal gravity low stretch material flammability test method. The concept of the apparatus being developed uses the low stretch geometry to simulate the conditions of the extraterrestrial environment through proper scaling of the sample dimensions to reduce the buoyant stretch in normal gravity. The apparatus uses controlled forced-air flow to augment the low stretch to levels which simulate Lunar or Martian gravity levels. In addition, the effect of imposed radiant heat flux on material flammability can be studied with the cone heater. After breadboard testing, the apparatus will be integrated into NASA's White Sands Test Facility's Atmosphere-Controlled Cone Calorimeter for evaluation as a new materials screening test method.
Modified Saez–Ballester scalar–tensor theory from 5D space-time
NASA Astrophysics Data System (ADS)
Rasouli, S. M. M.; Vargas Moniz, Paulo
2018-01-01
In this paper, we bring together the five-dimensional Saez–Ballester (SB) scalar–tensor theory (Saez and Ballester 1986 Phys. Lett. 113A 9) and the induced-matter-theory (IMT) setting (Wesson and Ponce de Leon 1992 J. Math. Phys. 33 3883), to obtain a modified SB theory (MSBT) in four dimensions. Specifically, by using an intrinsic dimensional reduction procedure on the SB field equations in five dimensions, a MSBT is obtained on a hypersurface orthogonal to the extra dimension. This four-dimensional MSBT is shown to bear distinctive new features in contrast to the usual corresponding SB theory as well as to IMT and the modified Brans–Dicke theory (MBDT) (Rasouli et al 2014 Class. Quantum Grav. 31 115002). In more detail, besides the usual induced matter terms retrieved through the IMT, the MSBT scalar field is provided with additional physically distinct (namely, SB induced) terms as well as an intrinsic self-interacting potential (interpreted as a consequence of the IMT process and the concrete geometry associated with the extra dimension). Moreover, our MSBT has four sets of field equations, with two sets having no analog in the standard SB scalar–tensor theory. It should be emphasized that these appealing solutions can emerge solely from the geometrical reduction process and the presence of the extra dimension(s), and not from any ad hoc matter either in the bulk or on the hypersurface. Subsequently, we apply the MSBT to cosmology and consider an extended spatially flat FLRW geometry in a five-dimensional vacuum space-time. After obtaining the exact solutions in the bulk, we proceed to construct, by means of the MSBT setting, the corresponding dynamics on the four-dimensional hypersurface. More precisely, we obtain the (SB) components of the induced matter, including the induced scalar potential terms. We retrieve two different classes of solutions. Concerning the first class, we show that the MSBT yields a barotropic equation of state for the induced perfect fluid. We then investigate vacuum, dust, radiation, stiff fluid and false vacuum cosmologies for this scenario and contrast the results with those obtained in the standard SB theory, IMT and BD theory. Regarding the second class of solutions, we show that the scale factor behaves in a similar way to a de Sitter (DeS) model. However, in our MSBT setting, this behavior is assisted by non-vanishing induced matter instead, without any a priori cosmological constant. Moreover, for all these solutions, we show that the extra dimension contracts with cosmic time.
Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.
Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun
2018-06-04
Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance for limited snapshots or difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established by the denoised FOCs vector, and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated in the SMV model, and are approximately estimated in a simple way. A necessary condition on the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources K must fulfill K ≤ (M^4 - 2M^3 + 7M^2 - 6M)/8. The proposed method suits any geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M^4), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.
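For reference, the identifiability bound quoted above grows as O(M^4); the small helper below tabulates the maximum number of identifiable sources K for a few array sizes under that formula.

    def max_sources(m):
        """Identifiability bound quoted above: K <= (M^4 - 2M^3 + 7M^2 - 6M)/8."""
        return (m ** 4 - 2 * m ** 3 + 7 * m ** 2 - 6 * m) // 8

    for m in (4, 6, 8, 10):
        print(m, "sensors ->", max_sources(m), "sources")   # 27, 135, 434, 1080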
Hsu, Jia-Lien; Hung, Ping-Cheng; Lin, Hung-Yen; Hsieh, Chung-Ho
2015-04-01
Breast cancer is one of the most common causes of cancer mortality. Early detection through mammography screening could significantly reduce mortality from breast cancer. However, most screening methods consume a large amount of resources. We propose a computational model, based solely on personal health information, for breast cancer risk assessment. Our model can serve as a pre-screening program in low-cost settings. In our study, the dataset, consisting of 3976 records, was collected from Taipei City Hospital from 2008/01/01 to 2008/12/31. Based on the dataset, we first apply sampling techniques and a dimension reduction method to preprocess the testing data. Then, we construct various kinds of classifiers (including basic classifiers, ensemble methods, and cost-sensitive methods) to predict the risk. The cost-sensitive method with a random forest classifier is able to achieve a recall (or sensitivity) of 100%. At a recall of 100%, the precision (positive predictive value, PPV) and specificity of the cost-sensitive method with a random forest classifier were 2.9% and 14.87%, respectively. In our study, we build a breast cancer risk assessment model by using data mining techniques. Our model has the potential to serve as an assisting tool in breast cancer screening.
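The cost-sensitive random-forest step can be sketched with scikit-learn as below: weight the positive class heavily and lower the decision threshold until recall reaches 100% on held-out data. The synthetic dataset and the class weight of 20 are illustrative, not the hospital data or the authors' exact costs.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=4000, weights=[0.95], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Class weighting penalises missed positives far more than false alarms.
    clf = RandomForestClassifier(n_estimators=200, class_weight={0: 1, 1: 20},
                                 random_state=0).fit(X_tr, y_tr)
    p = clf.predict_proba(X_te)[:, 1]

    # Lower the threshold until every positive case is caught (recall = 1).
    thr = p[y_te == 1].min()
    pred = p >= thr
    recall = pred[y_te == 1].mean()
    precision = y_te[pred].mean()
    specificity = (~pred[y_te == 0]).mean()
    print(f"recall={recall:.2f} precision={precision:.3f} "
          f"specificity={specificity:.3f}")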
Methods And System Suppressing Clutter In A Gain-Block, Radar-Responsive Tag System
Ormesher, Richard C.; Axline, Robert M.
2006-04-18
Methods and systems reduce clutter interference in a radar-responsive tag system. A radar transmits a series of linear-frequency-modulated pulses and receives echo pulses from nearby terrain and from radar-responsive tags that may be in the imaged scene. Tags in the vicinity of the radar are activated by the radar's pulses. The tags receive and remodulate the radar pulses. Tag processing reverses the direction, in time, of the received waveform's linear frequency modulation. The tag retransmits the remodulated pulses. The radar uses a reversed-chirp de-ramp pulse to process the tag's echo. The invention applies to radar systems compatible with coherent gain-block tags. The invention provides a marked reduction in the strength of residual clutter echoes on each and every echo pulse received by the radar. SAR receiver processing effectively whitens passive-clutter signatures across the range dimension. Clutter suppression of approximately 14 dB is achievable for a typical radar system.
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Astrophysics Data System (ADS)
Akbari, D.
2017-11-01
In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. To evaluate the proposed approach, the Pavia University hyperspectral data is tested. Experimental results show that the proposed approach using GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.
An Export-Marketing Model for Pharmaceutical Firms (The Case of Iran)
Mohammadzadeh, Mehdi; Aryanpour, Narges
2013-01-01
Internationalization is a matter of committed decision-making that starts with export marketing, in which an organization tries to diagnose and use opportunities in target markets based on realistic evaluation of internal strengths and weaknesses with analysis of macro and microenvironments in order to gain presence in other countries. A developed model for export and international marketing of pharmaceutical companies is introduced. The paper reviews common theories of the internationalization process, followed by examining different methods and models for assessing preparation for export activities and examining the conceptual model based on a single case study method on a basket of seven leading domestic firms, using mainly questionnaires as the data gathering tool along with interviews for bias reduction. Finally, in keeping with the study objectives, the special aspects of the pharmaceutical marketing environment have been covered, revealing special dimensions of pharmaceutical marketing that have been embedded within the appropriate base model. The new model for international activities of pharmaceutical companies was refined by expert opinions extracted from the results of the questionnaires. PMID:24250597
Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.
Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin
2017-08-29
This paper presents an effective image retrieval method by combining high-level features from a Convolutional Neural Network (CNN) model and low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed as VQ-indexed histograms from the DDBTC bitmap and the maximum and minimum quantizers. In contrast, the high-level features from the CNN effectively capture human perception. With the fusion of the DDBTC and CNN features, the extended deep learning two-layer codebook features (DL-TLCF) are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various datasets. As documented in the experimental results, the proposed schemes can achieve superior performance compared to the state-of-the-art methods with either low- or high-level features in terms of the retrieval rate. Thus, it can be a strong candidate for various image retrieval related applications.
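The two retrieval metrics used, average precision rate (APR) and average recall rate (ARR), average per-query precision and recall over the top-L returned images; a small sketch under that reading follows, with made-up ranked lists.

    import numpy as np

    def apr_arr(ranked_relevance, n_relevant, L):
        """Average precision/recall rate over queries at cutoff L.

        ranked_relevance : list of 0/1 arrays, one per query, in rank order
        n_relevant       : total relevant images per query in the database
        """
        prec = [r[:L].sum() / L for r in ranked_relevance]
        rec = [r[:L].sum() / n_relevant for r in ranked_relevance]
        return np.mean(prec), np.mean(rec)

    queries = [np.array([1, 1, 0, 1, 0, 0, 1, 0]),
               np.array([1, 0, 1, 0, 0, 1, 0, 0])]
    apr, arr = apr_arr(queries, n_relevant=4, L=5)
    print(f"APR@5 = {apr:.2f}, ARR@5 = {arr:.2f}")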
Age and gender classification in the wild with unsupervised feature learning
NASA Astrophysics Data System (ADS)
Wan, Lihong; Huo, Hong; Fang, Tao
2017-03-01
Inspired by unsupervised feature learning (UFL) within the self-taught learning framework, we propose a method based on UFL, convolution representation, and part-based dimensionality reduction to handle facial age and gender classification, which are two challenging problems under unconstrained circumstances. First, UFL is introduced to learn selective receptive fields (filters) automatically by applying a whitening transformation and spherical k-means on random patches collected from unlabeled data. The learning process is fast and has no hyperparameters to tune. Then, the input image is convolved with these filters to obtain filtering responses on which local contrast normalization is applied. Average pooling and feature concatenation are then used to form the global face representation. Finally, linear discriminant analysis with a part-based strategy is presented to reduce the dimensions of the global representation and to improve classification performances further. Experiments on three challenging databases, namely, Labeled Faces in the Wild, Gallagher group photos, and Adience, demonstrate the effectiveness of the proposed method relative to that of state-of-the-art approaches.
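The filter-learning step described, whitening followed by spherical k-means on random patches, has a compact form; the ZCA whitening variant, patch size, and filter count below are illustrative assumptions.

    import numpy as np

    def learn_filters(patches, k=32, iters=10, eps=1e-2):
        """Whitening + spherical k-means on flattened image patches."""
        X = patches - patches.mean(axis=1, keepdims=True)   # per-patch centring
        C = np.cov(X, rowvar=False)
        d, V = np.linalg.eigh(C)
        X = X @ V @ np.diag(1.0 / np.sqrt(d + eps)) @ V.T   # ZCA whitening
        rng = np.random.default_rng(0)
        D = X[rng.choice(len(X), k, replace=False)]         # init centroids
        D /= np.linalg.norm(D, axis=1, keepdims=True)
        for _ in range(iters):
            S = X @ D.T                                     # cosine similarities
            idx = S.argmax(axis=1)                          # hard assignment
            for j in range(k):                              # update centroids
                if (idx == j).any():
                    D[j] = X[idx == j].sum(axis=0)
            D /= np.linalg.norm(D, axis=1, keepdims=True)   # renormalise
        return D                                            # unit-norm filters

    patches = np.random.default_rng(1).normal(size=(5000, 64))  # 8x8 patches
    print(learn_filters(patches).shape)                          # (32, 64)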
Volume of interest CBCT and tube current modulation for image guidance using dynamic kV collimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsons, David, E-mail: david.parsons@dal.ca, E-mail: james.robar@nshealth.ca; Robar, James L., E-mail: david.parsons@dal.ca, E-mail: james.robar@nshealth.ca
2016-04-15
Purpose: The focus of this work is the development of a novel blade collimation system enabling volume of interest (VOI) CBCT with tube current modulation using the kV image guidance source on a linear accelerator. Advantages of the system are assessed, particularly with regard to reduction and localization of dose and improvement of image quality. Methods: A four blade dynamic kV collimator was developed to track a VOI during a CBCT acquisition. The current prototype is capable of tracking an arbitrary volume defined by the treatment planner for subsequent CBCT guidance. During gantry rotation, the collimator tracks the VOI with adjustment of position and dimension. CBCT image quality was investigated as a function of collimator dimension, while maintaining the same dose to the VOI, for a 22.2 cm diameter cylindrical water phantom with a 9 mm diameter bone insert centered on isocenter. Dose distributions were modeled using a dynamic BEAMnrc library and DOSXYZnrc. The resulting VOI dose distributions were compared to full-field CBCT distributions to quantify dose reduction and localization to the target volume. A novel method of optimizing x-ray tube current during CBCT acquisition was developed and assessed with regard to contrast-to-noise ratio (CNR) and imaging dose. Results: Measurements show that the VOI CBCT method using the dynamic blade system yields an increase in contrast-to-noise ratio by a factor of approximately 2.2. Depending upon the anatomical site, dose was reduced to 15%–80% of the full-field CBCT value along the central axis plane and down to less than 1% out of plane. The use of tube current modulation allowed for specification of a desired SNR within projection data. For approximately the same dose to the VOI, CNR was further increased by a factor of 1.2 for modulated VOI CBCT, giving a combined improvement of 2.6 compared to full-field CBCT. Conclusions: The present dynamic blade system provides significant improvements in CNR for the same imaging dose and localization of imaging dose to a predefined volume of interest. The approach is compatible with tube current modulation, allowing optimization of the imaging protocol.
Agustini, Deonir; Mangrich, Antonio Salvio; Bergamini, Márcio F; Marcolino-Junior, Luiz Humberto
2015-09-01
A simple and sensitive electroanalytical method was developed for determination of nanomolar levels of Pb(II) based on the voltammetric stripping response at a carbon paste electrode modified with biochar (a special charcoal) and bismuth nanostructures (nBi-BchCPE). The proposed methodology was based on spontaneous interactions between the highly functionalized biochar surface and Pb(II) ions, followed by reduction of these ions into bismuth nanodots, which promotes an improvement in the stripping anodic current. The experimental procedure can be summarized in three steps: an open-circuit pre-concentration, reduction of the accumulated lead ions at the electrode surface, and a stripping step under differential pulse voltammetric conditions (DPAdSV). SEM images revealed dimensions of the bismuth nanodots ranging from 20 nm to 70 nm. The effects of the main parameters related to the biochar and bismuth, as well as operational parameters, were examined in detail. Under the optimal conditions, the proposed sensor exhibited a linear range from 5.0 to 1000 nmol L⁻¹ and a detection limit of 1.41 nmol L⁻¹ for Pb(II). The optimized method was successfully applied for determination of Pb(II) released from overglaze-decorated ceramic dishes. Results obtained were compared with those given by inductively coupled plasma optical emission spectroscopy (ICP-OES), and they are in agreement at the 99% confidence level. Copyright © 2015. Published by Elsevier B.V.