Sample records for distance matrix methods

  1. Implementation of hierarchical clustering using k-mer sparse matrix to analyze MERS-CoV genetic relationship

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Ulul, E. D.; Hura, H. F. A.; Siswantining, T.

    2017-07-01

    Hierarchical clustering is one of the most effective methods for creating a phylogenetic tree based on the distance matrix between DNA (deoxyribonucleic acid) sequences. One of the well-known methods for calculating the distance matrix is the k-mer method, which is generally more efficient than many other distance matrix calculation techniques. The k-mer method starts by creating a k-mer sparse matrix, followed by creating k-mer singular value vectors; the last step is computing the distances among the vectors. In this paper, we analyze the sequences of MERS-CoV (Middle East Respiratory Syndrome Coronavirus) DNA by implementing hierarchical clustering using a k-mer sparse matrix in order to perform the phylogenetic analysis. Our results show that the ancestor of the analyzed MERS-CoV sequences comes from Egypt. Moreover, we found that a MERS-CoV infection occurring in one country may not necessarily come from the same country of origin, suggesting that the process of MERS-CoV mutation might not be influenced by geographical factors alone.
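
    The pipeline this abstract describes (k-mer counting, a pairwise distance matrix, then hierarchical clustering) can be sketched in a few lines of Python. This is a minimal dense illustration, not the authors' sparse-matrix implementation, and the toy sequences are invented:

```python
# Minimal sketch: k-mer count vectors for DNA sequences, pairwise
# distances between them, and hierarchical clustering of the result.
from itertools import product
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

def kmer_vector(seq, k=3):
    """Count occurrences of every k-mer (sparse in practice; dense here)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {m: i for i, m in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:          # skip ambiguous bases such as 'N'
            v[index[kmer]] += 1
    return v

seqs = {"A": "ACGTACGTGGTT", "B": "ACGTACGAGGTT", "C": "TTTTGGGGCCCC"}
X = np.array([kmer_vector(s) for s in seqs.values()])
D = pdist(X, metric="euclidean")   # condensed pairwise distance matrix
Z = linkage(D, method="average")   # hierarchical clustering (UPGMA-like)
print(Z.shape)                     # one merge step per internal node
```

    In the full method the k-mer matrix stays sparse and is reduced to singular value vectors before the distances are computed, as the abstract describes.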

  2. Protein structure estimation from NMR data by matrix completion.

    PubMed

    Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing

    2017-09-01

    Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
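
    The completion stage can be roughly illustrated as follows: a plain proximal-gradient sketch with singular-value soft-thresholding, not the authors' accelerated algorithm, and with a synthetic low-rank matrix standing in for NMR-derived distances:

```python
# Illustrative sketch: recover a low-rank matrix from partial
# observations by proximal gradient descent on
# 0.5*||P_Omega(X - M)||_F^2 + lam*||X||_*, whose proximal step is
# soft-thresholding of the singular values.
import numpy as np

rng = np.random.default_rng(0)
U, V = rng.normal(size=(20, 2)), rng.normal(size=(2, 20))
M = U @ V                                  # rank-2 ground truth
mask = rng.random(M.shape) < 0.6           # ~60% of entries observed

def complete(M, mask, lam=0.1, steps=300):
    X = np.zeros_like(M)
    for _ in range(steps):
        G = mask * (X - M)                 # gradient of the data-fit term
        Y = X - G                          # gradient step (step size 1)
        u, s, vt = np.linalg.svd(Y, full_matrices=False)
        X = u @ np.diag(np.maximum(s - lam, 0.0)) @ vt  # soft-threshold
    return X

X = complete(M, mask)
err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(f"relative error: {err:.3f}")
```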

  3. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    PubMed

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study.
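
    The PERMANOVA test statistic at the heart of this framework can be computed directly from a distance matrix. The authors' own tool is the micropower R package; the Python sketch below only illustrates the pseudo-F computation on a toy matrix:

```python
# PERMANOVA pseudo-F: partition the total sum of squared pairwise
# distances into within-group and between-group components.
import numpy as np

def permanova_pseudo_f(D, groups):
    """D: (n, n) symmetric distance matrix; groups: length-n labels."""
    n = len(groups)
    groups = np.asarray(groups)
    ss_total = np.sum(np.triu(D, 1) ** 2) / n
    ss_within = 0.0
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        sub = D[np.ix_(idx, idx)]
        ss_within += np.sum(np.triu(sub, 1) ** 2) / len(idx)
    a = len(np.unique(groups))
    ss_between = ss_total - ss_within
    return (ss_between / (a - 1)) / (ss_within / (n - a))

# Toy example: two groups with small within-group distances.
D = np.array([[0, 1, 4, 4],
              [1, 0, 4, 4],
              [4, 4, 0, 1],
              [4, 4, 1, 0]], dtype=float)
F = permanova_pseudo_f(D, ["a", "a", "b", "b"])
print(F)  # large F: between-group distances dominate
```

    In the full test, significance is assessed by recomputing F under permutations of the group labels; power estimation repeats this over many simulated distance matrices.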

  4. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA

    PubMed Central

    Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe

    2015-01-01

    Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. Results: We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. Availability and implementation: http://github.com/brendankelly/micropower. Contact: brendank@mail.med.upenn.edu or hongzhe@upenn.edu PMID:25819674

  5. Rapid construction of pinhole SPECT system matrices by distance-weighted Gaussian interpolation method combined with geometric parameter estimations

    NASA Astrophysics Data System (ADS)

    Lee, Ming-Wei; Chen, Yi-Chun

    2014-02-01

    In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations, or combinations of both. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated from the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed profiles similar to the measured PRFs. OSEM reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability in a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided detectability comparable to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in acquisition time relative to a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grids, respectively. Additionally, a finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by a further factor of 8.

  6. Person Re-Identification via Distance Metric Learning With Latent Variables.

    PubMed

    Sun, Chong; Wang, Dong; Lu, Huchuan

    2017-01-01

    In this paper, we propose an effective person re-identification method with latent variables, which represents a pedestrian as the mixture of a holistic model and a number of flexible models. Three types of latent variables are introduced to model uncertain factors in the re-identification problem, including vertical misalignments, horizontal misalignments and leg posture variations. The distance between two pedestrians can be determined by minimizing a given distance function with respect to the latent variables and then used to conduct the re-identification task. In addition, we develop a latent metric learning method for learning an effective metric matrix, which can be solved in an iterative manner: once the latent information is specified, the metric matrix can be obtained using standard metric learning methods; with the computed metric matrix, the latent variables can be determined by searching the state space exhaustively. Finally, extensive experiments are conducted on seven databases to evaluate the proposed method. The experimental results demonstrate that our method achieves better performance than other competing algorithms.

  7. A comparison of visual outcomes in three different types of monofocal intraocular lenses

    PubMed Central

    Shetty, Vijay; Haldipurkar, Suhas S; Gore, Rujuta; Dhamankar, Rita; Paik, Anirban; Setia, Maninder Singh

    2015-01-01

    AIM To compare the visual outcomes (distance and near) in patients opting for three different types of monofocal intraocular lens (IOL) (Matrix Aurium, AcrySof single piece, and AcrySof IQ lens). METHODS The present study is a cross-sectional analysis of secondary clinical data collected from 153 eyes (52 eyes in the Matrix Aurium, 48 in the AcrySof single piece, and 53 in the AcrySof IQ group) undergoing cataract surgery (2011-2012). We compared near vision, distance vision, and distance corrected near vision for these three types of lenses on day 15 (±3) post-surgery. RESULTS About 69% of the eyes in the Matrix Aurium group had good uncorrected distance vision post-surgery; the proportion was 48% and 57% in the AcrySof single piece and AcrySof IQ groups, respectively (P=0.09). The proportion of eyes with good distance corrected near vision was 38%, 33%, and 15% in the Matrix Aurium, AcrySof single piece, and AcrySof IQ groups, respectively (P=0.02). Similarly, the proportion with good “both near and distance vision” was 38%, 33%, and 15% in the Matrix Aurium, AcrySof single piece, and AcrySof IQ groups, respectively (P=0.02). Only the Matrix Aurium group had significantly better combined “distance and near vision” compared with the AcrySof IQ group (odds ratio: 5.87, 95% confidence intervals: 1.68 to 20.56). CONCLUSION Matrix Aurium monofocal lenses may be a good option for patients who desire good near as well as distance vision post-surgery. PMID:26682168

  8. Manifold Preserving: An Intrinsic Approach for Semisupervised Distance Metric Learning.

    PubMed

    Ying, Shihui; Wen, Zhijie; Shi, Jun; Peng, Yaxin; Peng, Jigen; Qiao, Hong

    2017-05-18

    In this paper, we address the semisupervised distance metric learning problem and its applications in classification and image retrieval. First, we formulate a semisupervised distance metric learning model by considering the metric information of inner classes and interclasses. In this model, an adaptive parameter is designed to balance the inner metrics and intermetrics by using the data structure. Second, we convert the model to a minimization problem whose variable is a symmetric positive-definite matrix. Third, in implementation, we derive an intrinsic steepest descent method that exploits the manifold structure of the symmetric positive-definite matrix manifold and ensures that the metric matrix remains strictly symmetric positive-definite at each iteration. Finally, we test the proposed algorithm on conventional data sets and compare it with four other representative methods. The numerical results validate that the proposed method significantly improves classification accuracy at the same computational efficiency.

  9. Automatic face naming by learning discriminative affinity matrices from weakly labeled images.

    PubMed

    Xiao, Shijie; Xu, Dong; Wu, Jianxin

    2015-10-01

    Given a collection of images, where each image contains several faces and is associated with a few names in the corresponding caption, the goal of face naming is to infer the correct name for each face. In this paper, we propose two new methods to effectively solve this problem by learning two discriminative affinity matrices from these weakly labeled images. We first propose a new method called regularized low-rank representation by effectively utilizing weakly supervised information to learn a low-rank reconstruction coefficient matrix while exploring multiple subspace structures of the data. Specifically, by introducing a specially designed regularizer to the low-rank representation method, we penalize the corresponding reconstruction coefficients related to the situations where a face is reconstructed by using face images from other subjects or by using itself. With the inferred reconstruction coefficient matrix, a discriminative affinity matrix can be obtained. Moreover, we also develop a new distance metric learning method called ambiguously supervised structural metric learning by using weakly supervised information to seek a discriminative distance metric. Hence, another discriminative affinity matrix can be obtained using the similarity matrix (i.e., the kernel matrix) based on the Mahalanobis distances of the data. Observing that these two affinity matrices contain complementary information, we further combine them to obtain a fused affinity matrix, based on which we develop a new iterative scheme to infer the name of each face. Comprehensive experiments demonstrate the effectiveness of our approach.

  10. Distance learning in discriminative vector quantization.

    PubMed

    Schneider, Petra; Biehl, Michael; Hammer, Barbara

    2009-10-01

    Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.

  11. Graph edit distance from spectral seriation.

    PubMed

    Robles-Kelly, Antonio; Hancock, Edwin R

    2005-03-01

    This paper is concerned with computing graph edit distance. One of the criticisms that can be leveled at existing methods for computing graph edit distance is that they lack some of the formality and rigor of the computation of string edit distance. Hence, our aim is to convert graphs to string sequences so that string matching techniques can be used. To do this, we use a graph spectral seriation method to convert the adjacency matrix into a string or sequence order. We show how the serial ordering can be established using the leading eigenvector of the graph adjacency matrix. We pose the problem of graph-matching as a maximum a posteriori probability (MAP) alignment of the seriation sequences for pairs of graphs. This treatment leads to an expression in which the edit cost is the negative logarithm of the a posteriori sequence alignment probability. We compute the edit distance by finding the sequence of string edit operations which minimizes the cost of the path traversing the edit lattice. The edit costs are determined by the components of the leading eigenvectors of the adjacency matrix and by the edge densities of the graphs being matched. We demonstrate the utility of the edit distance on a number of graph clustering problems.
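
    The seriation step can be sketched as follows: order a graph's nodes by the components of the leading eigenvector of its adjacency matrix, turning the graph into a sequence over which string edit operations can then be applied (the toy graph is invented):

```python
# Spectral seriation sketch: the leading eigenvector of the adjacency
# matrix induces a serial ordering of the nodes.
import numpy as np

A = np.array([[0, 1, 1, 0],      # adjacency matrix of a small graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
w, v = np.linalg.eigh(A)          # symmetric eigendecomposition
lead = v[:, np.argmax(w)]         # leading eigenvector
lead = lead if lead.sum() >= 0 else -lead  # fix the sign (Perron vector)
order = np.argsort(-lead)         # serial ordering: largest component first
print(order)                      # most central node first, leaf node last
```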

  12. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    USGS Publications Warehouse

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to fit models to each species separately and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions to outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods.
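
    The grouping pipeline can be sketched as below. The coefficient vectors are hypothetical, and a fixed assumed coefficient covariance stands in for the covariance a fitted model would supply:

```python
# Sketch: Mahalanobis distances between per-species coefficient vectors,
# then hierarchical clustering of the resulting distance matrix.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

coefs = np.array([[1.0, 0.2], [1.1, 0.3],    # species 1-2: similar habitat use
                  [-0.9, 2.0], [-1.0, 2.1]]) # species 3-4: similar habitat use
cov = np.array([[0.5, 0.1], [0.1, 0.5]])     # assumed coefficient covariance
VI = np.linalg.inv(cov)

n = len(coefs)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        d = coefs[i] - coefs[j]
        D[i, j] = np.sqrt(d @ VI @ d)        # Mahalanobis distance

Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)                                # species 1-2 vs species 3-4
```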

  13. A spectral method to detect community structure based on distance modularity matrix

    NASA Astrophysics Data System (ADS)

    Yang, Jin-Xuan; Zhang, Xiao-Dong

    2017-08-01

    Many social and biological networks contain community organizations, and how to identify such community structure in complex networks has become an active research topic. In this paper, an algorithm to detect the community structure of networks is proposed using the spectra of the distance modularity matrix. The proposed algorithm focuses on the distance of vertices within communities, rather than the most weakly connected vertex pairs or the number of edges between communities. The experimental results show that our method identifies community structure more effectively for a variety of real-world networks and computer-generated networks, at only slightly higher computational cost.

  14. A novel edge-preserving nonnegative matrix factorization method for spectral unmixing

    NASA Astrophysics Data System (ADS)

    Bao, Wenxing; Ma, Ruishi

    2015-12-01

    Spectral unmixing is one of the key techniques for identifying and classifying materials in hyperspectral image processing. A novel robust spectral unmixing method based on nonnegative matrix factorization (NMF) is presented in this paper. We use an edge-preserving function as the hypersurface cost function to be minimized in the nonnegative matrix factorization. To minimize the hypersurface cost function, we construct updating functions for the end-member signature matrix and the abundance fractions, respectively; the two functions are updated alternately. For evaluation purposes, both synthetic and real data are used. The synthetic data are based on end-members from the USGS digital spectral library, and the AVIRIS Cuprite dataset is used as real data. The spectral angle distance (SAD) and abundance angle distance (AAD) are used to assess the performance of the proposed method. The experimental results show that this method obtains better results and good accuracy for spectral unmixing compared with existing methods.

  15. Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT

    PubMed Central

    Nguyen, Thu L. N.; Shin, Yoan

    2016-01-01

    Localization in wireless sensor networks (WSNs) is one of the primary functions of the intelligent Internet of Things (IoT) that offers automatically discoverable services, while localization accuracy is a key issue in evaluating the quality of those services. In this paper, we develop a framework to solve the Euclidean distance matrix completion problem, which is an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-rank Euclidean distance matrix completion problem with known anchor nodes. The task is to find the sensor locations through recovery of the missing entries of a squared distance matrix when the dimension of the data is small compared to the number of data points. We solve a relaxed optimization problem using a modification of Newton's method, where the cost function depends on the squared distance matrix. The solution obtained by our scheme achieves lower complexity and can perform better when used as an initial guess for an iterative local search in other, higher-precision localization schemes. Simulation results show the effectiveness of our approach. PMID:27213378

  16. Determination of matrix composition based on solute-solute nearest-neighbor distances in atom probe tomography.

    PubMed

    De Geuser, F; Lefebvre, W

    2011-03-01

    In this study, we propose a fast automatic method providing the matrix concentration in an atom probe tomography (APT) data set containing two phases or more. The principle of this method relies on the calculation of the relative amount of isolated solute atoms (i.e., not surrounded by a similar solute atom) as a function of a distance d in the APT reconstruction. Simulated data sets have been generated to test the robustness of this new tool and demonstrate that rapid and reproducible results can be obtained without the need of any user input parameter. The method has then been successfully applied to a ternary Al-Zn-Mg alloy containing a fine dispersion of hardening precipitates. The relevance of this method for direct estimation of matrix concentration is discussed and compared with the existing methodologies.
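
    The core quantity, the fraction of solute atoms that remain "isolated" as a function of the distance d, can be sketched on simulated points (not real APT data): atoms inside a dense precipitate lose isolated status at much smaller d than atoms dissolved in the dilute matrix.

```python
# Simulated point cloud: a dilute random matrix plus one dense cluster.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
matrix_solute = rng.uniform(0, 100, size=(200, 3))      # dilute, random
cluster = rng.normal(loc=50, scale=1.0, size=(200, 3))  # dense precipitate
pts = np.vstack([matrix_solute, cluster])

tree = cKDTree(pts)
# distance to the nearest *other* solute atom (k=2: first hit is the atom itself)
nn = tree.query(pts, k=2)[0][:, 1]

def isolated_fraction(d):
    """Fraction of solute atoms with no other solute atom within d."""
    return np.mean(nn > d)

print(isolated_fraction(0.5), isolated_fraction(5.0))
```

    Plotting this fraction against d separates the two populations, which is what lets the matrix concentration be read off without user-tuned parameters.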

  17. A novel three-stage distance-based consensus ranking method

    NASA Astrophysics Data System (ADS)

    Aghayi, Nazila; Tavana, Madjid

    2018-05-01

    In this study, we propose a three-stage weighted sum method for identifying the group ranks of alternatives. In the first stage, a rank matrix, similar to the cross-efficiency matrix, is obtained by computing the individual rank position of each alternative based on importance weights. In the second stage, a secondary goal is defined to limit the vector of weights since the vector of weights obtained in the first stage is not unique. Finally, in the third stage, the group rank position of alternatives is obtained based on a distance of individual rank positions. The third stage determines a consensus solution for the group so that the ranks obtained have a minimum distance from the ranks acquired by each alternative in the previous stage. A numerical example is presented to demonstrate the applicability and exhibit the efficacy of the proposed method and algorithms.
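
    The third stage can be illustrated with a brute-force sketch: among all candidate group rankings, pick the one minimizing the total distance to the alternatives' individual rank positions from the earlier stages (the rank matrix here is a hypothetical toy):

```python
# Consensus ranking by minimum total distance, brute force over
# permutations (fine for a handful of alternatives).
from itertools import permutations
import numpy as np

# rank_matrix[i, j]: rank of alternative i under weighting scheme j
rank_matrix = np.array([[1, 1, 2],
                        [2, 3, 1],
                        [3, 2, 3]])

best, best_cost = None, np.inf
for perm in permutations(range(1, 4)):        # candidate group rank vectors
    cost = np.abs(rank_matrix - np.array(perm)[:, None]).sum()
    if cost < best_cost:
        best, best_cost = perm, cost
print(best, best_cost)
```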

  18. Method of assessing the state of a rolling bearing based on the relative compensation distance of multiple-domain features and locally linear embedding

    NASA Astrophysics Data System (ADS)

    Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.

    2017-03-01

    To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes can be calculated for the original vibration signal and each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix can be extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix can be constructed by arranging each feature vector of multiple samples, the dimensions of each row vector of the feature matrix can be reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing can be calculated using the support vector machine. Finally, the relative distance between different fault locations and different degrees of performance degradation and the normal-state optimal classification surface can be compensated, and on the basis of the proposed relative compensation distance, the assessment model can be constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.

  19. Generalising Ward's Method for Use with Manhattan Distances.

    PubMed

    Strauss, Trudie; von Maltitz, Michael Johan

    2017-01-01

    The claim that Ward's linkage algorithm in hierarchical clustering is limited to use with Euclidean distances is investigated. In this paper, Ward's clustering algorithm is generalised for use with the l1 norm, or Manhattan, distance. We argue that the generalisation of Ward's linkage method to incorporate Manhattan distances is theoretically sound and provide an example where this method outperforms the method using Euclidean distances. As an application, we perform statistical analyses on languages using methods normally applied to biology and genetic classification. We aim to quantify differences in character traits between languages and use a statistical language signature based on relative bi-gram (sequence of two letters) frequencies to calculate a distance matrix between 32 Indo-European languages. We then use Ward's method of hierarchical clustering to classify the languages, using both the Euclidean distance and the Manhattan distance. Results obtained from the different distance metrics are compared to show that Ward's algorithm characteristic of minimising intra-cluster variation and maximising inter-cluster variation is not violated when using the Manhattan metric.

  20. Polynomial Supertree Methods Revisited

    PubMed Central

    Brinkmeyer, Malte; Griebel, Thasso; Böcker, Sebastian

    2011-01-01

    Supertree methods reconstruct large phylogenetic trees by combining smaller trees with overlapping leaf sets into one more comprehensive supertree. The most commonly used supertree method, matrix representation with parsimony (MRP), produces accurate supertrees but is rather slow due to the underlying hard optimization problem. In this paper, we present an extensive simulation study comparing the performance of MRP and the polynomial supertree methods MinCut Supertree, Modified MinCut Supertree, Build-with-distances, PhySIC, PhySIC_IST, and super distance matrix. We consider both the quality and the resolution of the reconstructed supertrees. Our findings illustrate the tradeoff between accuracy and running time in supertree construction, as well as the pros and cons of voting- and veto-based supertree approaches. Based on our results, we make some general suggestions for supertree methods yet to come. PMID:22229028

  1. Collision for Li++He System. I. Potential Curves and Non-Adiabatic Coupling Matrix Elements

    NASA Astrophysics Data System (ADS)

    Yoshida, Junichi; O-Ohata, Kiyosi

    1984-02-01

    The potential curves and the non-adiabatic coupling matrix elements for the Li++He collision system were computed. The SCF molecular orbitals were constructed with CGTO atomic bases centered on each nucleus and on the center of mass of the two nuclei. The SCF and CI calculations were performed at various internuclear distances in the range 0.1-25.0 a.u. The potential energies and the wavefunctions were calculated to good approximation over the whole internuclear distance range. The non-adiabatic coupling matrix elements were calculated with a tentative method in which the electron translation factors (ETF) are approximately taken into account.

  2. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis

    PubMed Central

    Liu, Jingxian; Wu, Kefeng

    2017-01-01

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance, and data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety, thereby enhancing the capacities for navigation safety and maritime traffic monitoring. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex than traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, Principal Component Analysis (PCA), a widely used dimensionality reduction method, is exploited to decompose the obtained distance matrix; the top k principal components with over 95% cumulative contribution rate are extracted by PCA, and the number of centers k is chosen accordingly. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is applied to the distance matrix to obtain the final AIS trajectory clustering results. To improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in a bridge-area waterway and the Mississippi River have been carried out to compare the proposed method with traditional spectral clustering and fast affinity propagation clustering. Experimental results illustrate its superior performance in terms of both quantitative and qualitative evaluations. PMID:28777353
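
    The first step, DTW, can be sketched with a minimal dynamic-programming implementation on invented one-dimensional trajectories; the pairwise DTW values populate the distance matrix used by the later steps:

```python
# Dynamic Time Warping: align sequences of different lengths and use
# the alignment costs as a pairwise distance matrix.
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    C = np.full((n + 1, m + 1), np.inf)
    C[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            C[i, j] = cost + min(C[i - 1, j], C[i, j - 1], C[i - 1, j - 1])
    return C[n, m]

trajs = [np.array([0., 1., 2., 3.]),
         np.array([0., 0., 1., 2., 3.]),   # same shape, stretched in time
         np.array([3., 2., 1., 0.])]       # reversed trajectory
D = np.array([[dtw(a, b) for b in trajs] for a in trajs])
print(D)   # stretched copy at distance 0; reversed one far away
```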

  3. Correction of spin diffusion during iterative automated NOE assignment

    NASA Astrophysics Data System (ADS)

    Linge, Jens P.; Habeck, Michael; Rieping, Wolfgang; Nilges, Michael

    2004-04-01

    Indirect magnetization transfer increases the observed nuclear Overhauser enhancement (NOE) between two protons in many cases, leading to an underestimation of target distances. Wider distance bounds are necessary to account for this error. However, this leads to a loss of information and may reduce the quality of the structures generated from the inter-proton distances. Although several methods for spin diffusion correction have been published, they are often not employed to derive distance restraints. This prompted us to write a user-friendly and CPU-efficient method to correct for spin diffusion that is fully integrated in our program ambiguous restraints for iterative assignment (ARIA). ARIA thus allows automated iterative NOE assignment and structure calculation with spin-diffusion-corrected distances. The method relies on numerical integration of the coupled differential equations that govern relaxation, using matrix squaring and sparse matrix techniques. We derive a correction factor for the distance restraints from calculated NOE volumes and inter-proton distances. To evaluate the impact of our spin diffusion correction, we tested the new calibration process extensively with data from the Pleckstrin homology (PH) domain of Mus musculus β-spectrin. By comparing structures refined with and without spin diffusion correction, we show that spin-diffusion-corrected distance restraints give rise to structures of higher quality (notably fewer NOE violations and a more regular Ramachandran map). Furthermore, spin diffusion correction permits the use of tighter error bounds, which improves the distinction between signal and noise in an automated NOE assignment scheme.
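
    The core numerical idea — evaluating the NOE volume matrix exp(−Rτm) by matrix squaring — can be sketched for a toy two-spin relaxation matrix. The rate values and mixing time are invented for illustration, not taken from ARIA:

    ```python
    import numpy as np

    def expm_squaring(A, k=10, taylor_terms=8):
        """exp(A) via scaling (A / 2^k), a short Taylor series, then k squarings."""
        B = A / (2 ** k)
        E = np.eye(A.shape[0])
        term = np.eye(A.shape[0])
        for n in range(1, taylor_terms + 1):
            term = term @ B / n
            E = E + term
        for _ in range(k):
            E = E @ E        # the "matrix squaring" step
        return E

    # Toy 2-spin relaxation matrix (s^-1): diagonal auto-relaxation rates,
    # off-diagonal cross-relaxation rates that drive spin diffusion.
    R = np.array([[2.0, -0.5],
                  [-0.5, 2.0]])
    tau_m = 0.1                       # mixing time (s)
    V = expm_squaring(-R * tau_m)     # NOE volume matrix ~ exp(-R * tau_m)
    ```

    The off-diagonal entry `V[0, 1]` is the cross-peak volume; comparing it with the isolated-spin-pair approximation gives the kind of correction factor the abstract describes.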

  4. Automatic segmentation of right ventricular ultrasound images using sparse matrix transform and a level set

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Fei, Baowei

    2013-11-01

    An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized, region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiogram. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.

  5. Heterogeneity Measurement Based on Distance Measure for Polarimetric SAR Data

    NASA Astrophysics Data System (ADS)

    Xing, Xiaoli; Chen, Qihao; Liu, Xiuguo

    2018-04-01

    To effectively test scene heterogeneity in polarimetric synthetic aperture radar (PolSAR) data, a distance measure is introduced in this paper that utilizes the similarity between a sample and its pixels. Moreover, given the influence of the data distribution and texture modeling, the K distance measure is derived from the Wishart distance measure. Specifically, the average of the pixels in a local window replaces the class-center coherency or covariance matrix, and the Wishart and K distance measures are calculated between this average matrix and the pixels. Then, the ratio of the standard deviation to the mean is computed for both the Wishart and K distance measures, and these two features are defined and applied to reflect the complexity of the scene. The proposed heterogeneity measure is produced by integrating the two features using the Pauli basis. Experiments conducted on single-look and multilook PolSAR data demonstrate the effectiveness of the proposed method for detecting scene heterogeneity.

  6. Non-negative Matrix Factorization and Co-clustering: A Promising Tool for Multi-tasks Bearing Fault Diagnosis

    NASA Astrophysics Data System (ADS)

    Shen, Fei; Chen, Chao; Yan, Ruqiang

    2017-05-01

    Classical bearing fault diagnosis methods, designed for one specific task, always pay attention to the effectiveness of the extracted features and the final diagnostic performance. However, most of these approaches suffer from inefficiency when multiple tasks exist, especially in real-time diagnostic scenarios. A fault diagnosis method based on Non-negative Matrix Factorization (NMF) and a Co-clustering strategy is proposed to overcome this limitation. First, high-dimensional matrices are constructed from Short-Time Fourier Transform (STFT) features, where the dimension of each matrix equals the number of target tasks. Then, the NMF algorithm is carried out to obtain different components in each dimension direction through optimized matching, using measures such as Euclidean distance and divergence distance. Finally, a Co-clustering technique based on information entropy is utilized to classify each component. To verify the effectiveness of the proposed approach, a series of bearing data sets was analysed in this research. The tests indicated that although the single-task diagnostic performance is comparable to that of traditional clustering methods such as the K-means algorithm and the Gaussian Mixture Model, the accuracy and computational efficiency in multi-task fault diagnosis are improved.
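
    The NMF step can be sketched with the standard Lee–Seung multiplicative updates for the Euclidean (Frobenius) objective; the matrix `X` is a toy nonnegative feature matrix, not bearing data:

    ```python
    import numpy as np

    def nmf(X, r, iters=2000, eps=1e-9, seed=0):
        """Lee-Seung multiplicative updates minimizing ||X - W H||_F^2."""
        rng = np.random.default_rng(seed)
        n, m = X.shape
        W = rng.random((n, r)) + 0.1
        H = rng.random((r, m)) + 0.1
        for _ in range(iters):
            H *= (W.T @ X) / (W.T @ W @ H + eps)
            W *= (X @ H.T) / (W @ H @ H.T + eps)
        return W, H

    # Toy nonnegative "feature matrix" with an exact rank-2 nonnegative factorization.
    X = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],
                  [1.0, 1.0, 1.0]])
    W, H = nmf(X, r=2)
    err = np.linalg.norm(X - W @ H)
    ```

    The columns of `W` play the role of the "different components" that the co-clustering step would then classify.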

  7. Comparison of efficiency of distance measurement methodologies in mango (Mangifera indica) progenies based on physicochemical descriptors.

    PubMed

    Alves, E O S; Cerqueira-Silva, C B M; Souza, A M; Santos, C A F; Lima Neto, F P; Corrêa, R X

    2012-03-14

    We investigated seven distance measures in a set of observations of physicochemical variables of mango (Mangifera indica) submitted to multivariate analyses (distance, projection and grouping). To estimate the distance measurements, five mango progenies (25 genotypes in total) were analyzed, using six fruit physicochemical descriptors (fruit weight, equatorial diameter, longitudinal diameter, total soluble solids in °Brix, total titratable acidity, and pH). The distance measurements were compared by the Spearman correlation test, projection in two-dimensional space and grouping efficiency. The Spearman correlation coefficients between the seven distance measurements were, except for Mahalanobis' generalized distance (0.41 ≤ rs ≤ 0.63), high and significant (rs ≥ 0.91; P < 0.001). Regardless of the origin of the distance matrix, the unweighted pair-group method with arithmetic mean proved to be the most adequate grouping method. The various distance measurements and grouping methods gave different values for distortion (-116.5 ≤ D ≤ 74.5), cophenetic correlation (0.26 ≤ rc ≤ 0.76) and stress (-1.9 ≤ S ≤ 58.9). The choice of distance measurement and analysis methods therefore influences the results.
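
    The Spearman comparison of two distance measures can be sketched as follows; the two distance vectors are invented stand-ins for, e.g., Euclidean and Mahalanobis pairwise distances over the same genotypes (assuming no tied values, which the simple rank formula below requires):

    ```python
    import numpy as np

    def spearman(x, y):
        """Spearman rank correlation (no ties assumed in this sketch)."""
        rx = np.argsort(np.argsort(x)).astype(float)
        ry = np.argsort(np.argsort(y)).astype(float)
        rx -= rx.mean()
        ry -= ry.mean()
        return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

    # Hypothetical pairwise distances from two measures, flattened upper triangles.
    d_a = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    d_b = np.array([1.1, 2.2, 2.9, 4.5, 5.1, 6.3])   # monotone in d_a
    rs = spearman(d_a, d_b)
    ```

    Two measures that rank all genotype pairs identically give rs = 1 even if their raw values differ, which is why the abstract's rank-based comparison is appropriate across heterogeneous distance definitions.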

  8. Method of locating related items in a geometric space for data mining

    DOEpatents

    Hendrickson, B.A.

    1999-07-27

    A method for locating related items in a geometric space transforms relationships among items to geometric locations. The method locates items in the geometric space so that the distance between items corresponds to the degree of relatedness. The method facilitates communication of the structure of the relationships among the items. The method is especially beneficial for communicating databases with many items, and with non-regular relationship patterns. Examples of such databases include databases containing items such as scientific papers or patents, related by citations or keywords. A computer system adapted for practice of the present invention can include a processor, a storage subsystem, a display device, and computer software to direct the location and display of the entities. The method comprises assigning numeric values as a measure of similarity between each pairing of items. A matrix is constructed, based on the numeric values. The eigenvectors and eigenvalues of the matrix are determined. Each item is located in the geometric space at coordinates determined from the eigenvectors and eigenvalues. Proper construction of the matrix and proper determination of coordinates from eigenvectors can ensure that distance between items in the geometric space is representative of the numeric value measure of the items' similarity. 12 figs.
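
    The patent's pipeline — similarity matrix, eigendecomposition, coordinates from eigenvectors — can be sketched in the classical-MDS style. The similarity values are invented; the scaling of eigenvectors by the square roots of eigenvalues is one common choice, not necessarily the one claimed in the patent:

    ```python
    import numpy as np

    # Symmetric similarity matrix for four hypothetical items (e.g. papers):
    # items 0,1 are strongly related, items 2,3 are strongly related.
    S = np.array([[0.0, 0.9, 0.1, 0.1],
                  [0.9, 0.0, 0.1, 0.1],
                  [0.1, 0.1, 0.0, 0.9],
                  [0.1, 0.1, 0.9, 0.0]])

    # Eigendecomposition; coordinates from the two leading eigenvectors,
    # scaled by the square roots of their (clamped) eigenvalues.
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    coords = vecs[:, :2] * np.sqrt(np.maximum(vals[:2], 0.0))

    d01 = np.linalg.norm(coords[0] - coords[1])   # related pair
    d02 = np.linalg.norm(coords[0] - coords[2])   # unrelated pair
    ```

    Related items land close together and unrelated items far apart, which is exactly the property the patent claims for a properly constructed matrix.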

  9. Method of locating related items in a geometric space for data mining

    DOEpatents

    Hendrickson, Bruce A.

    1999-01-01

    A method for locating related items in a geometric space transforms relationships among items to geometric locations. The method locates items in the geometric space so that the distance between items corresponds to the degree of relatedness. The method facilitates communication of the structure of the relationships among the items. The method is especially beneficial for communicating databases with many items, and with non-regular relationship patterns. Examples of such databases include databases containing items such as scientific papers or patents, related by citations or keywords. A computer system adapted for practice of the present invention can include a processor, a storage subsystem, a display device, and computer software to direct the location and display of the entities. The method comprises assigning numeric values as a measure of similarity between each pairing of items. A matrix is constructed, based on the numeric values. The eigenvectors and eigenvalues of the matrix are determined. Each item is located in the geometric space at coordinates determined from the eigenvectors and eigenvalues. Proper construction of the matrix and proper determination of coordinates from eigenvectors can ensure that distance between items in the geometric space is representative of the numeric value measure of the items' similarity.

  10. Topological Distances Between Brain Networks

    PubMed Central

    Lee, Hyekyoung; Solo, Victor; Davidson, Richard J.; Pollak, Seth D.

    2018-01-01

    Many existing brain network distances are based on matrix norms. The element-wise differences may fail to capture underlying topological differences. Further, matrix norms are sensitive to outliers. A few extreme edge weights may severely affect the distance. Thus it is necessary to develop network distances that recognize topology. In this paper, we introduce Gromov-Hausdorff (GH) and Kolmogorov-Smirnov (KS) distances. GH-distance is often used in persistent homology based brain network models. The superior performance of KS-distance is contrasted against matrix norms and GH-distance in random network simulations with the ground truths. The KS-distance is then applied in characterizing the multimodal MRI and DTI study of maltreated children.
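
    A much-simplified version of a KS-style network distance compares the empirical distributions of edge weights of two networks; note the paper's KS-distance is defined over network filtrations, so this ECDF sketch on raw edge weights is only an illustrative assumption:

    ```python
    import numpy as np

    def ks_distance(w1, w2):
        """Kolmogorov-Smirnov distance between two edge-weight ECDFs."""
        grid = np.union1d(w1, w2)
        F1 = np.searchsorted(np.sort(w1), grid, side="right") / len(w1)
        F2 = np.searchsorted(np.sort(w2), grid, side="right") / len(w2)
        return float(np.max(np.abs(F1 - F2)))

    # Upper-triangular edge weights of two small toy networks.
    A = np.array([[0.0, 0.2, 0.8], [0.2, 0.0, 0.5], [0.8, 0.5, 0.0]])
    B = np.array([[0.0, 0.1, 0.9], [0.1, 0.0, 0.4], [0.9, 0.4, 0.0]])
    iu = np.triu_indices(3, k=1)
    d = ks_distance(A[iu], B[iu])
    ```

    Unlike an element-wise matrix norm, this distance depends only on the weight distributions, so a single extreme edge shifts it by at most 1/(number of edges).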

  11. A Case-Based Reasoning Method with Rank Aggregation

    NASA Astrophysics Data System (ADS)

    Sun, Jinhua; Du, Jiao; Hu, Jian

    2018-03-01

    To improve the accuracy of case-based reasoning (CBR), this paper presents a new CBR framework based on the principle of rank aggregation. First, ranking methods are applied in each attribute subspace of the cases, yielding the ordering relation between cases on each attribute and hence a ranking matrix. Second, the similar-case retrieval process on the ranking matrix is transformed into a rank aggregation optimization problem, using the Kemeny optimal aggregation. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of RA-CBR is higher than that of CBR with either Euclidean or Mahalanobis distance, so we conclude that the RA-CBR method can increase both the performance and the efficiency of CBR.
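
    Kemeny aggregation finds the permutation minimizing the total Kendall-tau disagreement with all input rankings; it is NP-hard in general, so this brute-force sketch over toy per-attribute rankings is only viable for a handful of items:

    ```python
    from itertools import permutations

    def kendall_tau(r1, r2):
        """Number of discordant item pairs between two rankings (lists of items)."""
        pos1 = {item: i for i, item in enumerate(r1)}
        pos2 = {item: i for i, item in enumerate(r2)}
        items = list(r1)
        d = 0
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                a, b = items[i], items[j]
                if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0:
                    d += 1
        return d

    def kemeny(rankings):
        """Brute-force Kemeny consensus: permutation minimizing total disagreement."""
        best, best_cost = None, float("inf")
        for perm in permutations(rankings[0]):
            cost = sum(kendall_tau(list(perm), r) for r in rankings)
            if cost < best_cost:
                best, best_cost = list(perm), cost
        return best

    # Hypothetical per-attribute rankings of three cases 'a', 'b', 'c'.
    rankings = [['a', 'b', 'c'], ['a', 'c', 'b'], ['b', 'a', 'c']]
    consensus = kemeny(rankings)
    ```

    Here the consensus ranking disagrees with each attribute ranking on at most one pair, which is the sense in which it is "optimal".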

  12. Towards a formal genealogical classification of the Lezgian languages (North Caucasus): testing various phylogenetic methods on lexical data.

    PubMed

    Kassian, Alexei

    2015-01-01

    A lexicostatistical classification is proposed for 20 languages and dialects of the Lezgian group of the North Caucasian family, based on meticulously compiled 110-item wordlists, published as part of the Global Lexicostatistical Database project. The lexical data have been subsequently analyzed with the aid of the principal phylogenetic methods, both distance-based and character-based: Starling neighbor joining (StarlingNJ), Neighbor joining (NJ), Unweighted pair group method with arithmetic mean (UPGMA), Bayesian Markov chain Monte Carlo (MCMC), Unweighted maximum parsimony (UMP). Cognation indexes within the input matrix were marked by two different algorithms: traditional etymological approach and phonetic similarity, i.e., the automatic method of consonant classes (Levenshtein distances). Due to certain reasons (first of all, high lexicographic quality of the wordlists and a consensus about the Lezgian phylogeny among Caucasologists), the Lezgian database is a perfect testing area for appraisal of phylogenetic methods. For the etymology-based input matrix, all the phylogenetic methods, with the possible exception of UMP, have yielded trees that are sufficiently compatible with each other to generate a consensus phylogenetic tree of the Lezgian lects. The obtained consensus tree agrees with the traditional expert classification as well as some of the previously proposed formal classifications of this linguistic group. Contrary to theoretical expectations, the UMP method has suggested the least plausible tree of all. In the case of the phonetic similarity-based input matrix, the distance-based methods (StarlingNJ, NJ, UPGMA) have produced the trees that are rather close to the consensus etymology-based tree and the traditional expert classification, whereas the character-based methods (Bayesian MCMC, UMP) have yielded less likely topologies.

  13. Towards a Formal Genealogical Classification of the Lezgian Languages (North Caucasus): Testing Various Phylogenetic Methods on Lexical Data

    PubMed Central

    Kassian, Alexei

    2015-01-01

    A lexicostatistical classification is proposed for 20 languages and dialects of the Lezgian group of the North Caucasian family, based on meticulously compiled 110-item wordlists, published as part of the Global Lexicostatistical Database project. The lexical data have been subsequently analyzed with the aid of the principal phylogenetic methods, both distance-based and character-based: Starling neighbor joining (StarlingNJ), Neighbor joining (NJ), Unweighted pair group method with arithmetic mean (UPGMA), Bayesian Markov chain Monte Carlo (MCMC), Unweighted maximum parsimony (UMP). Cognation indexes within the input matrix were marked by two different algorithms: traditional etymological approach and phonetic similarity, i.e., the automatic method of consonant classes (Levenshtein distances). Due to certain reasons (first of all, high lexicographic quality of the wordlists and a consensus about the Lezgian phylogeny among Caucasologists), the Lezgian database is a perfect testing area for appraisal of phylogenetic methods. For the etymology-based input matrix, all the phylogenetic methods, with the possible exception of UMP, have yielded trees that are sufficiently compatible with each other to generate a consensus phylogenetic tree of the Lezgian lects. The obtained consensus tree agrees with the traditional expert classification as well as some of the previously proposed formal classifications of this linguistic group. Contrary to theoretical expectations, the UMP method has suggested the least plausible tree of all. In the case of the phonetic similarity-based input matrix, the distance-based methods (StarlingNJ, NJ, UPGMA) have produced the trees that are rather close to the consensus etymology-based tree and the traditional expert classification, whereas the character-based methods (Bayesian MCMC, UMP) have yielded less likely topologies. PMID:25719456

  14. Distance-Dependent Multimodal Image Registration for Agriculture Tasks

    PubMed Central

    Berenstein, Ron; Hočevar, Marko; Godeša, Tone; Edan, Yael; Ben-Shahar, Ohad

    2015-01-01

    Image registration is the process of aligning two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. This research focuses on developing a practical method for automatic image registration for agricultural systems that use multimodal sensory systems and operate in natural environments. While not limited to any particular modalities, here we focus on systems with visual and thermal sensory inputs. Our approach is based on pre-calibrating a distance-dependent transformation matrix (DDTM) between the sensors, and representing it in a compact way by regressing the distance-dependent coefficients as distance-dependent functions. The DDTM is measured by calculating a projective transformation matrix for varying distances between the sensors and possible targets. To do so we designed a unique experimental setup including unique Artificial Control Points (ACPs) and their detection algorithms for the two sensors. We demonstrate the utility of our approach using different experiments and evaluation criteria. PMID:26308000
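
    Regressing a transformation coefficient as a function of distance can be sketched with least squares; the coefficient values, the 1/d basis, and the distances are all hypothetical, chosen only to illustrate the idea of a compact distance-dependent representation:

    ```python
    import numpy as np

    # Hypothetical calibration: one homography coefficient h measured at
    # several sensor-to-target distances.
    distances = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # metres
    h_measured = 2.0 + 0.5 / distances                # synthetic ground truth

    # Fit h(d) = a + b/d by linear least squares on the basis [1, 1/d].
    A = np.column_stack([np.ones_like(distances), 1.0 / distances])
    coef, *_ = np.linalg.lstsq(A, h_measured, rcond=None)

    # The regressed function then yields the coefficient at any distance,
    # e.g. interpolated at 2.5 m:
    h_at_2_5 = coef[0] + coef[1] / 2.5
    ```

    Storing the fitted `(a, b)` per coefficient replaces a whole table of per-distance matrices, which is the compactness the abstract refers to.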

  15. A matrix-inversion method for gamma-source mapping from gamma-count data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adsley, Ian; Burgess, Claire; Bull, Richard K

    In a previous paper it was proposed that a simple matrix-inversion method could be used to extract source distributions from gamma-count maps, using simple models to calculate the response matrix. The method was tested using numerically generated count maps. In the present work a 100 kBq Co-60 source has been placed on a gridded surface and the count rate measured using a NaI scintillation detector. The resulting map of gamma counts was used as input to the matrix-inversion procedure and the source position recovered. A multi-source array was simulated by superposition of several single-source count maps, and the source distribution was again recovered using matrix inversion. The measurements were performed for several detector heights. The effects of uncertainties in source-detector distances on the matrix-inversion method are also examined. The results from this work give confidence in the application of the method to practical applications, such as the segregation of highly active objects amongst fuel-element debris. (authors)
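
    The matrix-inversion idea can be sketched in one dimension with an inverse-square response model; the geometry, activities, and detector height are invented, and a real application would use a measured or simulated response matrix:

    ```python
    import numpy as np

    # Three candidate source cells and three detector positions on a line.
    # Response matrix R[i, j]: counts at detector i per unit activity in
    # cell j, modelled with a simple inverse-square fall-off.
    det = np.array([0.0, 1.0, 2.0])   # detector x-positions (m)
    src = np.array([0.0, 1.0, 2.0])   # source-cell x-positions (m)
    h = 0.5                           # detector height above the surface (m)
    R = 1.0 / ((det[:, None] - src[None, :]) ** 2 + h ** 2)

    true_activity = np.array([100.0, 0.0, 40.0])   # kBq per cell
    counts = R @ true_activity                     # simulated count map

    # Invert: recover the source distribution from the count map.
    recovered = np.linalg.solve(R, counts)
    ```

    With noisy counts or a near-singular response matrix, a regularized least-squares solve would replace the direct inversion; the abstract's study of distance uncertainties probes exactly this sensitivity.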

  16. A flexible new method for 3D measurement based on multi-view image sequences

    NASA Astrophysics Data System (ADS)

    Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu

    2016-11-01

    Three-dimensional measurement is the basis of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm, in which the Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, making the matching robust to weakly textured images. Then a new three-principle filter for the essential-matrix calculation is designed, and the essential matrix is computed using an improved a contrario RANSAC filter method; a point cloud for one view is constructed accurately from two view images. After this, the overlapping features are used to eliminate the accumulated errors introduced as views are added, which improves the precision of the camera positions. Finally, the method is verified in a dental-restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for 3D tooth measurement.
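
    The Hellinger distance used for descriptor matching can be sketched for normalized histograms; the histograms below are toy descriptors, not SIFT output:

    ```python
    import numpy as np

    def hellinger(p, q):
        """Hellinger distance between two histograms (normalized internally)."""
        p = p / p.sum()
        q = q / q.sum()
        return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

    # Toy descriptor histograms: h2 is close to h1, h3 is very different.
    h1 = np.array([4.0, 3.0, 2.0, 1.0])
    h2 = np.array([4.0, 3.0, 1.0, 2.0])
    h3 = np.array([0.0, 0.0, 1.0, 9.0])
    near = hellinger(h1, h2)
    far = hellinger(h1, h3)
    ```

    Because it compares square roots of bin masses, the Hellinger distance is bounded by 1 and down-weights large bins, which is why it behaves better than Euclidean distance on weakly textured images.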

  17. Aluminum/alkaline earth metal composites and method for producing

    DOEpatents

    Russell, Alan M; Anderson, Iver E; Kim, Hyong J; Freichs, Andrew E

    2014-02-11

    A composite is provided having an electrically conducting Al matrix and elongated filaments comprising Ca and/or Sr and/or Ba disposed in the matrix and extending along a longitudinal axis of the composite. The filaments initially comprise Ca and/or Sr and/or Ba metal or alloy and then may be reacted with the Al matrix to form a strengthening intermetallic compound comprising Al and Ca and/or Sr and/or Ba. The composite is useful as a long-distance, high voltage power transmission conductor.

  18. A comparison of visual outcomes in three different types of monofocal intraocular lenses.

    PubMed

    Shetty, Vijay; Haldipurkar, Suhas S; Gore, Rujuta; Dhamankar, Rita; Paik, Anirban; Setia, Maninder Singh

    2015-01-01

    To compare the visual outcomes (distance and near) in patients opting for three different types of monofocal intraocular lens (IOL) (Matrix Aurium, AcrySof single piece, and AcrySof IQ). The present study is a cross-sectional analysis of secondary clinical data collected from 153 eyes (52 eyes in the Matrix Aurium group, 48 in the AcrySof single piece group, and 53 in the AcrySof IQ group) undergoing cataract surgery (2011-2012). We compared near vision, distance vision, and distance-corrected near vision for these three types of lenses on day 15 (±3) post-surgery. About 69% of the eyes in the Matrix Aurium group had good uncorrected distance vision post-surgery; the proportion was 48% and 57% in the AcrySof single piece and AcrySof IQ groups (P=0.09). The proportion of eyes with good distance-corrected near vision was 38%, 33%, and 15% in the Matrix Aurium, AcrySof single piece, and AcrySof IQ groups, respectively (P=0.02); the proportion with good combined near and distance vision was likewise 38%, 33%, and 15%, respectively (P=0.02). Only the Matrix Aurium group had significantly better combined distance and near vision compared with the AcrySof IQ group (odds ratio: 5.87, 95% confidence intervals: 1.68 to 20.56). Matrix Aurium monofocal lenses may be a good option for patients who desire good near as well as distance vision post-surgery.

  19. Gradient-based stochastic estimation of the density matrix

    NASA Astrophysics Data System (ADS)

    Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton

    2018-03-01

    Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)ij decay rapidly with distance rij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^(-(d+2)/(2d)), where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.

  20. Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.

    PubMed

    Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui

    2018-03-01

    Changing the metric on data may change the data distribution, so a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by a multiple-kernel representation. With this approach, we project the data into a high-dimensional space where the data can be well represented by linear ML. Then, we reformulate the linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.

  1. Pathloss Calculation Using the Transmission Line Matrix and Finite Difference Time Domain Methods With Coarse Grids

    DOE PAGES

    Nutaro, James; Kuruganti, Teja

    2017-02-24

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.

  2. EXTENDING MULTIVARIATE DISTANCE MATRIX REGRESSION WITH AN EFFECT SIZE MEASURE AND THE ASYMPTOTIC NULL DISTRIBUTION OF THE TEST STATISTIC

    PubMed Central

    McArtor, Daniel B.; Lubke, Gitta H.; Bergeman, C. S.

    2017-01-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains. PMID:27738957

  3. Extending multivariate distance matrix regression with an effect size measure and the asymptotic null distribution of the test statistic.

    PubMed

    McArtor, Daniel B; Lubke, Gitta H; Bergeman, C S

    2017-12-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains.
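
    The permutation test whose cost the paper aims to reduce can be sketched with the standard MDMR pseudo-F statistic (Gower-centered inner-product matrix, hat matrix of the predictors); the data are synthetic, with outcome profiles deliberately driven by the predictor:

    ```python
    import numpy as np

    def mdmr_pseudo_f(D, X):
        """MDMR pseudo-F for distance matrix D and predictor matrix X (with intercept)."""
        n = D.shape[0]
        A = -0.5 * D ** 2
        J = np.eye(n) - np.ones((n, n)) / n
        G = J @ A @ J                              # Gower-centred matrix
        H = X @ np.linalg.pinv(X.T @ X) @ X.T      # hat matrix
        m = X.shape[1]
        num = np.trace(H @ G) / (m - 1)
        den = np.trace((np.eye(n) - H) @ G) / (n - m)
        return num / den

    rng = np.random.default_rng(1)
    n = 30
    x = rng.normal(size=n)
    y = np.column_stack([x, rng.normal(size=n)])   # outcome profiles driven by x
    D = np.abs(y[:, None, :] - y[None, :, :]).sum(axis=2)  # Manhattan distances
    X = np.column_stack([np.ones(n), x])

    f_obs = mdmr_pseudo_f(D, X)
    # Permutation null: shuffle rows/columns of D jointly.
    perm_f = [mdmr_pseudo_f(D[np.ix_(p, p)], X)
              for p in (rng.permutation(n) for _ in range(200))]
    p_value = (1 + sum(f >= f_obs for f in perm_f)) / 201
    ```

    The paper's contribution is precisely to replace this permutation loop with the asymptotic null distribution of the statistic, avoiding the 200 extra evaluations.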

  4. Sexual dimorphism in the human face assessed by euclidean distance matrix analysis.

    PubMed Central

    Ferrario, V F; Sforza, C; Pizzini, G; Vogel, G; Miani, A

    1993-01-01

    The form of any object can be viewed as a combination of size and shape. A recently proposed method (euclidean distance matrix analysis) can differentiate between size and shape differences. It has been applied to analyse the sexual dimorphism in facial form in a sample of 108 healthy young adults (57 men, 51 women). The face was wider and longer in men than in women. A global shape difference was demonstrated, the male face being more rectangular and the female face more square. Gender variations involved especially the lower third of the face and, in particular, the position of the pogonion relative to the other structures. PMID:8300436
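
    Euclidean distance matrix analysis represents each form by its matrix of inter-landmark distances and compares forms via ratios of corresponding distances; the 2-D "male" and "female" landmark sets below are invented to mimic the wider/longer vs. more square contrast reported:

    ```python
    import numpy as np

    def form_matrix(landmarks):
        """All pairwise Euclidean distances between landmark points."""
        diff = landmarks[:, None, :] - landmarks[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=2))

    # Hypothetical 2-D facial landmarks (four corners of the face outline).
    male = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 12.0], [0.0, 12.0]])
    female = np.array([[0.0, 0.0], [9.0, 0.0], [9.0, 9.5], [0.0, 9.5]])

    Fm, Ff = form_matrix(male), form_matrix(female)
    iu = np.triu_indices(len(male), k=1)
    ratios = Fm[iu] / Ff[iu]   # form-difference matrix, as distance ratios
    ```

    All ratios exceeding 1 indicates a size difference (the male form is larger everywhere); the ratios being unequal indicates a shape difference (rectangular vs. nearly square), which is exactly the size/shape decomposition EDMA provides.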

  5. A rough set approach for determining weights of decision makers in group decision making.

    PubMed

    Yang, Qiang; Du, Ping-An; Wang, Yong; Liang, Bin

    2017-01-01

    This study presents a novel approach for determining the weights of decision makers (DMs) based on rough group decision making in multiple attribute group decision-making (MAGDM) problems. First, we construct a rough group decision matrix from all DMs' decision matrices on the basis of rough set theory. We then derive a positive ideal solution (PIS) founded on the average matrix of the rough group decision, and negative ideal solutions (NISs) founded on the lower- and upper-limit matrices of the rough group decision. The weight of each group member and the priority order of the alternatives are then obtained with the relative closeness method, which depends on the distances from each individual member's decision to the PIS and NISs. Comparisons with existing methods and an on-line business-manager selection example show that the proposed method can provide more insight into the subjectivity and vagueness of DMs' evaluations and selections.
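
    A heavily simplified sketch of the relative-closeness idea follows. The rough-set construction of the PIS/NIS matrices is replaced here by plain element-wise average, minimum, and maximum over the DMs' matrices, and the closeness formula is a TOPSIS-style assumption, so this only illustrates the distance-to-ideal weighting, not the paper's exact method:

    ```python
    import numpy as np

    # Hypothetical decision matrices of three DMs (2 alternatives x 2 attributes).
    dms = np.array([
        [[0.8, 0.7], [0.4, 0.5]],
        [[0.7, 0.8], [0.5, 0.4]],
        [[0.3, 0.2], [0.9, 0.9]],   # an outlying decision maker
    ])

    avg = dms.mean(axis=0)     # stand-in for the PIS (average matrix)
    lower = dms.min(axis=0)    # stand-ins for the NISs (lower/upper limits)
    upper = dms.max(axis=0)

    def closeness(M):
        d_pos = np.linalg.norm(M - avg)
        d_neg = min(np.linalg.norm(M - lower), np.linalg.norm(M - upper))
        return d_neg / (d_pos + d_neg)

    weights = np.array([closeness(M) for M in dms])
    weights /= weights.sum()   # normalized DM weights
    ```

    A DM whose matrix is far from the group average and close to an extreme receives a smaller weight, which is the intended damping of outlying judgements.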

  6. Research on propagation properties of controllable hollow flat-topped beams in turbulent atmosphere based on ABCD matrix

    NASA Astrophysics Data System (ADS)

    Liu, Huilong; Lü, Yanfei; Zhang, Jing; Xia, Jing; Pu, Xiaoyun; Dong, Yuan; Li, Shutao; Fu, Xihong; Zhang, Angfeng; Wang, Changjia; Tan, Yong; Zhang, Xihe

    2015-01-01

    This paper studies the propagation properties of controllable hollow flat-topped beams (CHFBs) in a turbulent atmosphere based on the ABCD matrix, sets up a propagation model and obtains an analytical expression for the propagation. With the help of numerical simulation, the propagation properties of CHFBs with different parameters are studied. Results indicate that in a turbulent atmosphere, the dark region of a CHFB is gradually annihilated with increasing propagation distance, and the beam eventually evolves into a Gaussian beam. Compared with propagation in free space, the turbulent atmosphere enhances the diffraction effect of CHFBs and reduces the propagation distance over which they evolve into Gaussian beams. Under strong atmospheric turbulence, the Airy disk phenomenon disappears. Studying the propagation properties of CHFBs in a turbulent atmosphere using the ABCD matrix is simple and convenient, and the method can also be applied to study the propagation properties of other hollow laser beams in a turbulent atmosphere.
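
    The basic ABCD-matrix machinery the paper builds on can be sketched for a simple Gaussian beam via the complex beam parameter q; the wavelength, waist, and optical system below are hypothetical, and the paper's CHFB treatment adds turbulence terms this sketch omits:

    ```python
    import numpy as np

    # Ray-transfer (ABCD) matrices for free space of length L and a thin lens.
    def free_space(L):
        return np.array([[1.0, L], [0.0, 1.0]])

    def thin_lens(f):
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    def propagate_q(q, M):
        """Transform the complex beam parameter q through ABCD matrix M."""
        (A, B), (C, D) = M
        return (A * q + B) / (C * q + D)

    lam = 1.064e-6                  # wavelength (m)
    w0 = 1e-3                       # waist radius (m)
    zR = np.pi * w0 ** 2 / lam      # Rayleigh range
    q0 = 1j * zR                    # q at the waist

    # Hypothetical system: 1 m of free space, an f = 0.5 m lens, 0.5 m more.
    M = free_space(0.5) @ thin_lens(0.5) @ free_space(1.0)
    q1 = propagate_q(q0, M)
    w1 = np.sqrt(-lam / (np.pi * np.imag(1.0 / q1)))   # output beam radius
    ```

    Chaining matrices (rightmost element first) is what makes the ABCD formalism "simple and convenient" for multi-element propagation paths.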

  7. Three-dimensional structure of the human immunodeficiency virus type 1 matrix protein.

    PubMed

    Massiah, M A; Starich, M R; Paschall, C; Summers, M F; Christensen, A M; Sundquist, W I

    1994-11-25

    The HIV-1 matrix protein forms an icosahedral shell associated with the inner membrane of the mature virus. Genetic analyses have indicated that the protein performs important functions throughout the viral life-cycle, including anchoring the transmembrane envelope protein on the surface of the virus, assisting in viral penetration, transporting the proviral integration complex across the nuclear envelope, and localizing the assembling virion to the cell membrane. We now report the three-dimensional structure of recombinant HIV-1 matrix protein, determined at high resolution by nuclear magnetic resonance (NMR) methods. The HIV-1 matrix protein is the first retroviral matrix protein to be characterized structurally and only the fourth HIV-1 protein of known structure. NMR signal assignments required recently developed triple-resonance (1H, 13C, 15N) NMR methodologies because signals for 91% of 132 assigned H alpha protons and 74% of the 129 assignable backbone amide protons resonate within chemical shift ranges of 0.8 p.p.m. and 1 p.p.m., respectively. A total of 636 nuclear Overhauser effect-derived distance restraints were employed for distance geometry-based structure calculations, affording an average of 13.0 NMR-derived distance restraints per residue for the experimentally constrained amino acids. An ensemble of 25 refined distance geometry structures with penalties (sum of the squares of the distance violations) of 0.32 A2 or less and individual distance violations under 0.06 A was generated; best-fit superposition of ordered backbone heavy atoms relative to mean atom positions afforded root-mean-square deviations of 0.50 (+/- 0.08) A. The folded HIV-1 matrix protein structure is composed of five alpha-helices, a short 3(10) helical stretch, and a three-strand mixed beta-sheet. Helices I to III and the 3(10) helix pack about a central helix (IV) to form a compact globular domain that is capped by the beta-sheet. 
The C-terminal helix (helix V) projects away from the beta-sheet to expose carboxyl-terminal residues essential for early steps in the HIV-1 infectious cycle. Basic residues implicated in membrane binding and nuclear localization functions cluster about an extruded cationic loop that connects beta-strands 1 and 2. The structure suggests that both membrane binding and nuclear localization may be mediated by complex tertiary structures rather than simple linear determinants.

  8. Differential computation method used to calibrate the angle-centroid relationship in coaxial reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-05-01

    A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that only requires the relative distance was developed. In the proposed method, a liquid crystal display screen successively displayed two regular dot matrix patterns with different dot spacing. In a special case, images on the detector exhibited similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector were utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed the approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, we increased the precision of the traditional calibration to 10^-5 rad root mean square. The precision of the RHT was increased by approximately 100 nm.

  9. VizieR Online Data Catalog: Outliers and similarity in APOGEE (Reis+, 2018)

    NASA Astrophysics Data System (ADS)

    Reis, I.; Poznanski, D.; Baron, D.; Zasowski, G.; Shahaf, S.

    2017-11-01

    t-SNE is a dimensionality reduction algorithm that is particularly well suited for the visualization of high-dimensional datasets. We use t-SNE to visualize our distance matrix. A priori, these distances could define a space with almost as many dimensions as objects, i.e., tens of thousands of dimensions. Obviously, since many stars are quite similar, and their spectra are defined by a few physical parameters, the minimal spanning space might be smaller. By using t-SNE we can examine the structure of our sample projected into 2D. We use our distance matrix as input to the t-SNE algorithm and in return get a 2D map of the objects in our dataset. For each star in a sample of 183232 APOGEE stars, we list the APOGEE IDs of the 99 stars with the most similar spectra (according to the method described in the paper), ordered by similarity. (3 data files).
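
The distance-matrix-to-2D-map step above can be sketched as follows (a minimal illustration assuming scikit-learn, with random stand-in data rather than the authors' APOGEE spectra):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))                 # stand-in for stellar spectra

# Any pairwise distance matrix works; here, plain Euclidean distances.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# metric="precomputed" feeds the distance matrix to t-SNE directly;
# init must then be "random", since PCA initialization needs raw features.
emb = TSNE(n_components=2, metric="precomputed", init="random",
           perplexity=10, random_state=0).fit_transform(D)
print(emb.shape)                             # one 2D point per object
```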

  10. Wave bandgap formation and its evolution in two-dimensional phononic crystals composed of rubber matrix with periodic steel quarter-cylinders

    NASA Astrophysics Data System (ADS)

    Li, Peng; Wang, Guan; Luo, Dong; Cao, Xiaoshan

    2018-02-01

    The band structure of a two-dimensional phononic crystal, which is composed of four homogeneous steel quarter-cylinders immersed in a rubber matrix, is investigated and compared with the traditional steel/rubber crystal by the finite element method (FEM). It is revealed that the bandgap frequencies can be tuned by changing the distance between adjacent quarter-cylinders. When the distance is relatively small, the integrality of the scatterers makes the inner region inside them almost motionless, so that they can be viewed as a whole at high frequencies. In the case of relatively larger distance, the interaction between each quarter-cylinder and the rubber will introduce some new bandgaps at relatively low frequencies. Lastly, the point defect states induced by the four quarter-cylinders are revealed. These results will be helpful in fabricating devices, such as vibration insulators and acoustic/elastic filters, whose band frequencies can be manipulated artificially.

  11. Structural brain connectivity and cognitive ability differences: A multivariate distance matrix regression analysis.

    PubMed

    Ponsoda, Vicente; Martínez, Kenia; Pineda-Pardo, José A; Abad, Francisco J; Olea, Julio; Román, Francisco J; Barbey, Aron K; Colom, Roberto

    2017-02-01

    Neuroimaging research involves analyses of huge amounts of biological data that might or might not be related with cognition. This relationship is usually approached using univariate methods, and, therefore, correction methods are mandatory for reducing false positives. Nevertheless, the probability of false negatives is also increased. Multivariate frameworks have been proposed for helping to alleviate this balance. Here we apply multivariate distance matrix regression for the simultaneous analysis of biological and cognitive data, namely, structural connections among 82 brain regions and several latent factors estimating cognitive performance. We tested whether cognitive differences predict distances among individuals regarding their connectivity pattern. Beginning with 3,321 connections among regions, the 36 edges best predicted by the individuals' cognitive scores were selected. Cognitive scores were related to connectivity distances in both the full (3,321) and reduced (36) connectivity patterns. The selected edges connect regions distributed across the entire brain, and the network defined by these edges supports high-order cognitive processes such as (a) (fluid) executive control, (b) (crystallized) recognition, learning, and language processing, and (c) visuospatial processing. This multivariate study suggests that a widespread, but limited, number of regions in the human brain supports high-level cognitive ability differences. Hum Brain Mapp 38:803-816, 2017. © 2016 Wiley Periodicals, Inc.

  12. Tensor manifold-based extreme learning machine for 2.5-D face recognition

    NASA Astrophysics Data System (ADS)

    Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin

    2018-01-01

    We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM, since it is a special instance of a symmetric positive definite (SPD) matrix that resides in non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with the existing vector-based classifiers and distance matchers. Therefore, we bridge the gap between GRCM and the extreme learning machine (ELM), a vector-based classifier, for the 2.5-D face recognition problem. We put forward a tensor manifold-compliant ELM and its two variants by embedding the SPD matrix randomly into reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pair-wise distance of the embedded data, we orthogonalize the random-embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.

  13. Kinetic-energy matrix elements for atomic Hylleraas-CI wave functions.

    PubMed

    Harris, Frank E

    2016-05-28

    Hylleraas-CI is a superposition-of-configurations method in which each configuration is constructed from a Slater-type orbital (STO) product to which is appended (linearly) at most one interelectron distance rij. Computations of the kinetic energy for atoms by this method have been difficult due to the lack of formulas expressing these matrix elements for general angular momentum in terms of overlap and potential-energy integrals. It is shown here that a strategic application of angular-momentum theory, including the use of vector spherical harmonics, enables the reduction of all atomic kinetic-energy integrals to overlap and potential-energy matrix elements. The new formulas are validated by showing that they yield correct results for a large number of integrals published by other investigators.

  14. Estimating gene function with least squares nonnegative matrix factorization.

    PubMed

    Wang, Guoli; Ochs, Michael F

    2007-01-01

    Nonnegative matrix factorization is a machine learning algorithm that has extracted information from data in a number of fields, including imaging and spectral analysis, text mining, and microarray data analysis. One limitation with the method for linking genes through microarray data in order to estimate gene function is the high variance observed in transcription levels between different genes. Least squares nonnegative matrix factorization uses estimates of the uncertainties on the mRNA levels for each gene in each condition, to guide the algorithm to a local minimum in normalized chi2, rather than a Euclidean distance or divergence between the reconstructed data and the data itself. Herein, application of this method to microarray data is demonstrated in order to predict gene function.
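
The weighted-objective idea can be sketched with the standard weighted multiplicative update rules (a simplified illustration of the LS-NMF approach on synthetic data, not the authors' implementation):

```python
import numpy as np

def ls_nmf(V, W, k, n_iter=300, seed=0, eps=1e-9):
    """Weighted NMF: minimize chi2 = sum(W * (V - A @ H)**2), where W holds
    per-entry weights such as 1/sigma**2 from uncertainty estimates.
    A sketch using weighted multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    A = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (A.T @ (W * V)) / (A.T @ (W * (A @ H)) + eps)
        A *= ((W * V) @ H.T) / ((W * (A @ H)) @ H.T + eps)
    return A, H

rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 30))   # exact rank-3 "expression" data
W = np.full(V.shape, 4.0)                        # uniform 1/sigma**2 weights
A, H = ls_nmf(V, W, k=3)
chi2 = np.sum(W * (V - A @ H) ** 2)
print(chi2 / np.sum(W * V ** 2))                 # small relative residual
```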

  15. Distance descending ordering method: An O(n) algorithm for inverting the mass matrix in simulation of macromolecules with long branches

    NASA Astrophysics Data System (ADS)

    Xu, Xiankun; Li, Peiwen

    2017-11-01

    Fixman's work in 1974 and the follow-up studies have developed a method that factorizes the inverse of the mass matrix into an arithmetic combination of three sparse matrices, one of which is positive definite and needs to be further factorized by using the Cholesky decomposition or similar methods. When the molecule under study has a serial chain structure, this method achieves O(n) time complexity. However, for molecules with long branches, Cholesky decomposition of the corresponding positive definite matrix will introduce massive fill-in due to its nonzero structure. Although several methods can be used to reduce the amount of fill-in, none of them strictly guarantees zero fill-in for all molecules according to our tests, so O(n) time complexity cannot be obtained with these traditional methods. In this paper we present a new method that guarantees no fill-in in the Cholesky decomposition, developed based on the correlations between the mass matrix and the geometrical structure of molecules. As a result, the inversion of the mass matrix retains O(n) time complexity whether or not the molecular structure has long branches.
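
The fill-in phenomenon the paper addresses can be seen on a classic toy case: an "arrowhead" matrix factorizes with full fill-in under one elimination order and with none under the reverse order (a generic sparse-matrix illustration, not the paper's mass-matrix construction):

```python
import numpy as np

# "Arrowhead" matrix: dense first row/column, diagonal elsewhere.
n = 6
A = 10.0 * np.eye(n)
A[0, 1:] = A[1:, 0] = 1.0

def nnz_cholesky(M):
    """Count nonzeros in the Cholesky factor (above a tiny threshold)."""
    L = np.linalg.cholesky(M)
    return np.count_nonzero(np.abs(L) > 1e-12)

# Dense row first: eliminating node 0 couples all remaining nodes,
# so the factor is a fully dense lower triangle.
print(nnz_cholesky(A))                      # 21 = dense lower triangle

# Reverse the ordering (dense row last): no fill-in at all.
p = np.arange(n)[::-1]
print(nnz_cholesky(A[np.ix_(p, p)]))        # 11 = diagonal + last row
```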

  16. Unraveling the Tangles of Language Evolution

    NASA Astrophysics Data System (ADS)

    Petroni, F.; Serva, M.; Volchenkov, D.

    2012-07-01

    The relationships between languages, molded by extremely complex social, cultural and political factors, are assessed by an automated method in which the distance between languages is estimated by the average normalized Levenshtein distance between words from the list of 200 meanings maximally resistant to change. A sequential process of language classification described by random walks on the matrix of lexical distances allows complex relationships between languages to be represented geometrically, in terms of distances and angles. We have tested the method on a sample of 50 Indo-European and 50 Austronesian languages. The geometric representation of language taxonomy allows accurate inferences to be made about the most significant events of human history by tracing changes in language families through time. The Anatolian and Kurgan hypotheses of the Indo-European origin and the "express train" model of the Polynesian origin are thoroughly discussed.
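
The core distance measure is standard and easily sketched; a minimal normalized Levenshtein distance between two words (the paper additionally averages this over a 200-meaning word list per language pair):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def normalized_levenshtein(a: str, b: str) -> float:
    """Edit distance divided by the longer word's length, giving [0, 1]."""
    return levenshtein(a, b) / max(len(a), len(b), 1)

print(levenshtein("kitten", "sitting"))                      # 3
print(round(normalized_levenshtein("water", "wasser"), 3))   # 0.333
```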

  17. Distance-Based Tear Lactoferrin Assay on Microfluidic Paper Device Using Interfacial Interactions on Surface-Modified Cellulose.

    PubMed

    Yamada, Kentaro; Henares, Terence G; Suzuki, Koji; Citterio, Daniel

    2015-11-11

    "Distance-based" detection motifs on microfluidic paper-based analytical devices (μPADs) allow quantitative analysis without using signal readout instruments in a similar manner to classical analogue thermometers. To realize a cost-effective and calibration-free distance-based assay of lactoferrin in human tear fluid on a μPAD not relying on antibodies or enzymes, we investigated the fluidic mobilities of the target protein and Tb(3+) cations used as the fluorescent detection reagent on surface-modified cellulosic filter papers. Chromatographic elution experiments in a tear-like sample matrix containing electrolytes and proteins revealed a collapse of attractive electrostatic interactions between lactoferrin or Tb(3+) and the cellulosic substrate, which was overcome by the modification of the paper surface with the sulfated polysaccharide ι-carrageenan. The resulting μPAD based on the fluorescence emission distance successfully analyzed 0-4 mg mL(-1) of lactoferrin in complex human tear matrix with a lower limit of detection of 0.1 mg mL(-1) by simple visual inspection. Assay results of 18 human tear samples including ocular disease patients and healthy volunteers showed good correlation to the reference ELISA method with a slope of 0.997 and a regression coefficient of 0.948. The distance-based quantitative signal and the good batch-to-batch fabrication reproducibility relying on printing methods enable quantitative analysis by simply reading out "concentration scale marks" printed on the μPAD without performing any calibration and using any signal readout instrument.

  18. Is Hidden Crossings Theory a New MOCC Method?

    NASA Astrophysics Data System (ADS)

    Krstić, Predrag; Schultz, David

    1998-05-01

    We find a unitary transformation of the scaled adiabatic Hamiltonian of a two-center, one-electron collision system which yields a new representation for the matrix elements of nonadiabatic radial coupling, valid for low-to-intermediate collision velocities. These are given in analytic form once the topology of the branch points of the adiabatic Hamiltonian in the plane of complex internuclear distance R is known. The matrix elements do not depend on the origin of the electronic coordinates and properly vanish at large internuclear distances. The role of the rotational couplings in the new representation is also discussed. The approach is appropriately extended and compared with the PSS treatment in the fully quantal description of the collision. We apply the new radial and rotational matrix elements in the standard Molecular Orbital Close Coupling (MOCC) approach to describe excitation and ionization in collisions of antiprotons with He^+ and of alpha-particles with hydrogen(P.S. Krstić et al, J. Phys. B. 31, in press (1998).). The results are compared with those obtained from the standard MOCC method and from the direct solutions of the Schrödinger equation on a lattice (LTDSE)(D.R. Schultz et al, Phys. Rev. A 56, 3710 (1997)).

  19. Sparsity of the normal matrix in the refinement of macromolecules at atomic and subatomic resolution.

    PubMed

    Jelsch, C

    2001-09-01

    The normal matrix in the least-squares refinement of macromolecules is very sparse when the resolution reaches atomic and subatomic levels. The elements of the normal matrix, related to coordinates, thermal motion and charge-density parameters, have a global tendency to decrease rapidly with the interatomic distance between the atoms concerned. For instance, in the case of the protein crambin at 0.54 A resolution, the elements are reduced by two orders of magnitude for distances above 1.5 A. The neglect a priori of most of the normal-matrix elements according to a distance criterion represents an approximation in the refinement of macromolecules, which is particularly valid at very high resolution. The analytical expressions of the normal-matrix elements, which have been derived for the coordinates and the thermal parameters, show that the degree of matrix sparsity increases with the diffraction resolution and the size of the asymmetric unit.

  20. A generalized graph-theoretical matrix of heterosystems and its application to the VMV procedure.

    PubMed

    Mozrzymas, Anna

    2011-12-14

    The extensions of generalized (molecular) graph-theoretical matrix and vector-matrix-vector procedure are considered. The elements of the generalized matrix are redefined in order to describe molecules containing heteroatoms and multiple bonds. The adjacency, distance, detour and reciprocal distance matrices of heterosystems, and corresponding vectors are derived from newly defined generalized graph matrix. The topological indices, which are most widely used in predicting physicochemical and biological properties/activities of various compounds, can be calculated from the new generalized vector-matrix-vector invariant. Copyright © 2011 Elsevier Ltd. All rights reserved.
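
As an illustration of the vector-matrix-vector (VMV) form for topological indices, here is a simplified homoatomic example with ordinary adjacency and distance matrices (the paper's generalized matrix, which encodes heteroatoms and multiple bonds, is not reproduced here):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# Hydrogen-suppressed graph of n-butane: a path on 4 carbon atoms.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = shortest_path(A, unweighted=True)   # topological distance matrix

# VMV invariant v^T M v: with v the all-ones vector, half of 1^T D 1
# is the Wiener index (sum of distances over all atom pairs).
v = np.ones(len(A))
wiener = v @ D @ v / 2
print(int(wiener))  # 10, the known Wiener index of n-butane
```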

  1. A Calculation Method of Electric Distance and Subarea Division Application Based on Transmission Impedance

    NASA Astrophysics Data System (ADS)

    Fang, G. J.; Bao, H.

    2017-12-01

    The widely used method of calculating electric distances is the sensitivity method. The sensitivity matrix is the result of linearization and is based on the hypothesis that active power and reactive power are decoupled, so it is inaccurate. In addition, it takes the ratio of two partial derivatives as the relationship between two dependent variables, so it has no physical meaning. This paper presents a new method for calculating electrical distance, namely the transmission impedance method. It forms power supply paths based on power flow tracing, then establishes generalized branches to calculate transmission impedances. In this paper, the target of power flow tracing is S instead of Q: Q itself has no direction, and the grid delivers complex power, so S contains more electrical information than Q. By describing the power transmission relationship of each branch and drawing block diagrams in both the forward and reverse directions, it can be found that the numerators of the feedback parts of the two block diagrams are all transmission impedances. To ensure the distance is a scalar, the absolute value of the transmission impedance is defined as the electrical distance. Dividing the network according to these electric distances and comparing with the results of the sensitivity method proves that the transmission impedance method adapts better to dynamic changes of the system and reaches a reasonable subarea division scheme.

  2. A rough set approach for determining weights of decision makers in group decision making

    PubMed Central

    Yang, Qiang; Du, Ping-an; Wang, Yong; Liang, Bin

    2017-01-01

    This study aims to present a novel approach for determining the weights of decision makers (DMs) based on rough group decision in multiple attribute group decision-making (MAGDM) problems. First, we construct a rough group decision matrix from all DMs’ decision matrices on the basis of rough set theory. After that, we derive a positive ideal solution (PIS) founded on the average matrix of the rough group decision, and negative ideal solutions (NISs) founded on the lower and upper limit matrices of the rough group decision. Then, we obtain the weight of each group member and the priority order of alternatives by using the relative closeness method, which depends on the distances from each individual group member’s decision to the PIS and NISs. Comparisons with existing methods and an on-line business manager selection example show that the proposed method can provide more insights into the subjectivity and vagueness of DMs’ evaluations and selections. PMID:28234974

  3. Multidimensional Unfolding by Nonmetric Multidimensional Scaling of Spearman Distances in the Extended Permutation Polytope

    ERIC Educational Resources Information Center

    Van Deun, Katrijn; Heiser, Willem J.; Delbeke, Luc

    2007-01-01

    A multidimensional unfolding technique that is not prone to degenerate solutions and is based on multidimensional scaling of a complete data matrix is proposed: distance information about the unfolding data and about the distances both among judges and among objects is included in the complete matrix. The latter information is derived from the…

  4. Approximate string matching algorithms for limited-vocabulary OCR output correction

    NASA Astrophysics Data System (ADS)

    Lasko, Thomas A.; Hauser, Susan E.

    2000-12-01

    Five methods for matching words mistranslated by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, including an adaptation of the cross correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%), and had the advantage of producing scores that have a useful and practical interpretation.
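
The edit-distance-with-substitution-matrix variant can be sketched as follows (the confusion costs below are invented for illustration; the paper derives such costs from OCR error statistics):

```python
def weighted_edit_distance(a, b, sub_cost, indel=1.0):
    """Edit distance with a probabilistic substitution-cost matrix:
    character pairs that OCR commonly confuses cost less to substitute."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel
    for j in range(1, n + 1):
        d[0][j] = j * indel
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = 0.0 if a[i-1] == b[j-1] else sub_cost.get((a[i-1], b[j-1]), 1.0)
            d[i][j] = min(d[i-1][j] + indel,      # deletion
                          d[i][j-1] + indel,      # insertion
                          d[i-1][j-1] + s)        # (weighted) substitution
    return d[m][n]

# Hypothetical OCR confusion costs: '0'->'o' and '1'->'l' are near-free.
costs = {('0', 'o'): 0.1, ('1', 'l'): 0.1}

def best_match(word, dictionary):
    """Pick the dictionary entry with the smallest weighted distance."""
    return min(dictionary, key=lambda w: weighted_edit_distance(word, w, costs))

print(best_match("c0l0n", ["colon", "melon", "talon"]))  # colon
```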

  5. Evaluation of entropy and JM-distance criterions as features selection methods using spectral and spatial features derived from LANDSAT images

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Dutra, L. V.; Mascarenhas, N. D. A.; Mitsuo, Fernando Augusta, II

    1984-01-01

    A study area near Ribeirao Preto in Sao Paulo state, with a predominance of sugar cane, was selected. Eight features were extracted from the 4 original bands of the LANDSAT image, using low-pass and high-pass filtering to obtain spatial features. There were 5 training sites from which to acquire the necessary parameters. Two groups of four channels were selected from the 12 channels using the JM-distance and entropy criteria. The number of selected channels was defined by physical restrictions of the image analyzer and computational costs. The evaluation was performed by extracting the confusion matrix for training and test areas with a maximum likelihood classifier, and by defining performance indexes based on those matrices for each group of channels. Results show that for spatial features and supervised classification, the entropy criterion is better in the sense that it allows a more accurate and generalized definition of class signatures. On the other hand, the JM-distance criterion strongly reduces the misclassification within training areas.

  6. Invalid-point removal based on epipolar constraint in the structured-light method

    NASA Astrophysics Data System (ADS)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-06-01

    In structured-light measurement, there unavoidably exist many invalid points caused by shadows, image noise and ambient light. According to the property of the epipolar constraint, because the retrieved phase of the invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which will be used for calculating the epipolar line. Then, according to the retrieved phase map of the captured fringes, the PICs of each pixel are retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the satisfaction degree of the epipolar constraint. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.
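
The invalidation criterion, the distance from a projector image coordinate to its epipolar line, can be sketched as follows (a generic numpy example with a made-up fundamental matrix, not the calibrated system of the paper):

```python
import numpy as np

def epipolar_distance(F, x, x2):
    """Distance from point x2 (second image) to the epipolar line F @ x
    of point x (first image); points in homogeneous pixel coordinates."""
    l = F @ x                        # epipolar line (a, b, c): ax + by + c = 0
    return abs(l @ x2) / np.hypot(l[0], l[1])

# Fundamental matrix of a pure horizontal translation: F = [t]_x with
# t = (1, 0, 0), so epipolar lines are the horizontal scanlines y = v.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])

x  = np.array([100., 50., 1.])       # pixel (u, v) in the camera image
x2 = np.array([120., 53., 1.])       # candidate PIC, 3 px off its scanline
print(epipolar_distance(F, x, x2))   # 3.0 -> rejected if threshold < 3
```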

  7. Big geo data surface approximation using radial basis functions: A comparative study

    NASA Astrophysics Data System (ADS)

    Majdisova, Zuzana; Skala, Vaclav

    2017-12-01

    Approximation of scattered data is often a task in many engineering problems. The Radial Basis Function (RBF) approximation is appropriate for big scattered datasets in n-dimensional space. It is a non-separable approximation, as it is based on the distance between two points. This method leads to the solution of an overdetermined linear system of equations. In this paper the RBF approximation methods are briefly described, a new approach to the RBF approximation of big datasets is presented, and a comparison for different Compactly Supported RBFs (CS-RBFs) is made with respect to the accuracy of the computation. The proposed approach uses symmetry of a matrix, partitioning the matrix into blocks and data structures for storage of the sparse matrix. The experiments are performed for synthetic and real datasets.
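
A minimal 1-D sketch of CS-RBF least-squares approximation, using the Wendland C2 function (the data and parameters are illustrative, and the paper's block partitioning and sparse storage are omitted):

```python
import numpy as np

def wendland_c2(r):
    """Compactly supported RBF (Wendland C2): identically zero for r >= 1."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

# Scattered 1-D stand-in for a surface profile sampled at 200 sites.
rng = np.random.default_rng(0)
x = np.sort(rng.random(200))
f = np.sin(2 * np.pi * x)

centers = np.linspace(0.0, 1.0, 15)   # far fewer centers than data points
support = 0.4                          # CS-RBF support radius

# Overdetermined linear system (200 equations, 15 unknowns) -> least squares.
B = wendland_c2(np.abs(x[:, None] - centers[None, :]) / support)
coef, *_ = np.linalg.lstsq(B, f, rcond=None)

err = np.max(np.abs(B @ coef - f))
print(err)   # small approximation error at the data sites
```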

  8. Kinetic-energy matrix elements for atomic Hylleraas-CI wave functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, Frank E., E-mail: harris@qtp.ufl.edu

    Hylleraas-CI is a superposition-of-configurations method in which each configuration is constructed from a Slater-type orbital (STO) product to which is appended (linearly) at most one interelectron distance r_ij. Computations of the kinetic energy for atoms by this method have been difficult due to the lack of formulas expressing these matrix elements for general angular momentum in terms of overlap and potential-energy integrals. It is shown here that a strategic application of angular-momentum theory, including the use of vector spherical harmonics, enables the reduction of all atomic kinetic-energy integrals to overlap and potential-energy matrix elements. The new formulas are validated by showing that they yield correct results for a large number of integrals published by other investigators.

  9. Computing wave functions in multichannel collisions with non-local potentials using the R-matrix method

    NASA Astrophysics Data System (ADS)

    Bonitati, Joey; Slimmer, Ben; Li, Weichuan; Potel, Gregory; Nunes, Filomena

    2017-09-01

    The calculable form of the R-matrix method has been previously shown to be a useful tool in approximately solving the Schrodinger equation in nuclear scattering problems. We use this technique combined with the Gauss quadrature for the Lagrange-mesh method to efficiently solve for the wave functions of projectile nuclei in low energy collisions (1-100 MeV) involving an arbitrary number of channels. We include the local Woods-Saxon potential, the non-local potential of Perey and Buck, a Coulomb potential, and a coupling potential to computationally solve for the wave function of two nuclei at short distances. Object oriented programming is used to increase modularity, and parallel programming techniques are introduced to reduce computation time. We conclude that the R-matrix method is an effective method to predict the wave functions of nuclei in scattering problems involving both multiple channels and non-local potentials. Michigan State University iCER ACRES REU.

  10. Multivariate Welch t-test on distances

    PubMed Central

    2016-01-01

    Motivation: Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that pairwise distance matrix between observations is sufficient to compute within and between group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean, but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. Results: We develop a solution in the form of a distance-based Welch t-test, TW2, for two sample potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology has originated. Availability and Implementation: The source code for methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author. Contact: alekseye@musc.edu PMID:27515741

  11. Multivariate Welch t-test on distances.

    PubMed

    Alekseyenko, Alexander V

    2016-12-01

    Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that the pairwise distance matrix between observations is sufficient to compute within and between group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean, but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. We develop a solution in the form of a distance-based Welch t-test, TW2, for two sample potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology has originated. The source code for methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author. alekseye@musc.edu. © The Author 2016. Published by Oxford University Press.
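
The key observation, that within- and between-group sums of squares follow from the distance matrix alone, can be sketched as follows (this is the plain PERMANOVA pseudo-F, not the Welch-type TW2 statistic of the paper):

```python
import numpy as np

def pseudo_f(D, labels):
    """PERMANOVA-style pseudo-F from a pairwise distance matrix alone:
    SS_total = sum_{i<j} d_ij^2 / n, and SS_within analogously per group."""
    labels = np.asarray(labels)
    n = len(labels)
    groups = np.unique(labels)
    ss = lambda idx: np.sum(np.triu(D[np.ix_(idx, idx)] ** 2, 1)) / len(idx)
    ss_total = np.sum(np.triu(D ** 2, 1)) / n
    ss_within = sum(ss(np.flatnonzero(labels == g)) for g in groups)
    ss_between = ss_total - ss_within
    a = len(groups)
    return (ss_between / (a - 1)) / (ss_within / (n - a))

# Sanity check of the underlying Euclidean identity:
# sum_{i<j} ||x_i - x_j||^2 / n == sum_i ||x_i - mean||^2
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 3))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
lhs = np.sum(np.triu(D ** 2, 1)) / len(X)
rhs = np.sum((X - X.mean(0)) ** 2)
print(np.isclose(lhs, rhs))  # True
```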

  12. Clustering Tree-structured Data on Manifold

    PubMed Central

    Lu, Na; Miao, Hongyu

    2016-01-01

    Tree-structured data usually contain both topological and geometrical information, and must be considered on a manifold instead of in Euclidean space for appropriate data parameterization and analysis. In this study, we propose a novel tree-structured data parameterization, called the Topology-Attribute matrix (T-A matrix), so the data clustering task can be conducted on a matrix manifold. We incorporate the structure constraints embedded in the data into the non-negative matrix factorization method to determine meta-trees from the T-A matrix, and the signature vector of each single tree can then be extracted by meta-tree decomposition. The meta-tree space turns out to be a cone space, in which we explore the distance metric and implement the clustering algorithm based on concepts such as the Fréchet mean. Finally, the T-A matrix based clustering (TAMBAC) framework is evaluated and compared using both simulated data and real retinal images to illustrate its efficiency and accuracy. PMID:26660696

  13. EDENetworks: a user-friendly software to build and analyse networks in biogeography, ecology and population genetics.

    PubMed

    Kivelä, Mikko; Arnaud-Haond, Sophie; Saramäki, Jari

    2015-01-01

    The recent application of graph-based network theory analysis to biogeography, community ecology and population genetics has created a need for user-friendly software, which would allow a wider accessibility to and adaptation of these methods. EDENetworks aims to fill this void by providing an easy-to-use interface for the whole analysis pipeline of ecological and evolutionary networks starting from matrices of species distributions, genotypes, bacterial OTUs or populations characterized genetically. The user can choose between several different ecological distance metrics, such as Bray-Curtis or Sorensen distance, or population genetic metrics such as FST or Goldstein distances, to turn the raw data into a distance/dissimilarity matrix. This matrix is then transformed into a network by manual or automatic thresholding based on percolation theory or by building the minimum spanning tree. The networks can be visualized along with auxiliary data and analysed with various metrics such as degree, clustering coefficient, assortativity and betweenness centrality. The statistical significance of the results can be estimated either by resampling the original biological data or by null models based on permutations of the data. © 2014 John Wiley & Sons Ltd.
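
The pipeline described (raw data to dissimilarity matrix to network) can be imitated in a few lines. This is a generic sketch, not EDENetworks itself: the Bray-Curtis formula is standard, but the toy abundance table and function names are invented, and only the minimum-spanning-tree variant of network construction is shown.

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors (0 = identical)."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den if den else 0.0

def minimum_spanning_tree(D):
    """Prim's algorithm on a dense distance matrix; returns a list of (i, j) edges."""
    n = len(D)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: D[e[0]][e[1]])
        in_tree.add(j)
        edges.append((i, j))
    return edges

# Toy site-by-species abundance table (hypothetical data).
sites = [[10, 0, 5], [8, 1, 4], [0, 9, 1], [1, 10, 0]]
D = [[bray_curtis(a, b) for b in sites] for a in sites]
mst = minimum_spanning_tree(D)   # a tree on n sites has n - 1 = 3 edges
```

Thresholding the same matrix (keeping only edges below a cutoff) would give the alternative network construction mentioned in the abstract.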

  14. Automatic Configuration of Programmable Logic Controller Emulators

    DTIC Science & Technology

    2015-03-01

    Excerpt: "…appearance in the session, and then they are clustered again using the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) with a distance matrix based…" The report's figures include an example tree generated using UPGMA [Edw13] and an example sequence alignment for two…

  15. A chemogenomic analysis of the human proteome: application to enzyme families.

    PubMed

    Bernasconi, Paul; Chen, Min; Galasinski, Scott; Popa-Burke, Ioana; Bobasheva, Anna; Coudurier, Louis; Birkos, Steve; Hallam, Rhonda; Janzen, William P

    2007-10-01

    Sequence-based phylogenies (SBP) are well-established tools for describing relationships between proteins. They have been used extensively to predict the behavior and sensitivity toward inhibitors of enzymes within a family. The utility of this approach diminishes when comparing proteins with little sequence homology. Even within an enzyme family, SBPs must be complemented by an orthogonal method that is independent of sequence to better predict enzymatic behavior. A chemogenomic approach is demonstrated here that uses the inhibition profile of a 130,000 diverse molecule library to uncover relationships within a set of enzymes. The profile is used to construct a semimetric additive distance matrix. This matrix, in turn, defines a sequence-independent phylogeny (SIP). The method was applied to 97 enzymes (kinases, proteases, and phosphatases). SIP does not use structural information from the molecules used for establishing the profile, thus providing a more heuristic method than the current approaches, which require knowledge of the specific inhibitor's structure. Within enzyme families, SIP shows a good overall correlation with SBP. More interestingly, SIP uncovers distances within families that are not recognizable by sequence-based methods. In addition, SIP allows the determination of distance between enzymes with no sequence homology, thus uncovering novel relationships not predicted by SBP. This chemogenomic approach, used in conjunction with SBP, should prove to be a powerful tool for choosing target combinations for drug discovery programs as well as for guiding the selection of profiling and liability targets.

  16. Whitby Mudstone, flow from matrix to fractures

    NASA Astrophysics Data System (ADS)

    Houben, Maartje; Hardebol, Nico; Barnhoorn, Auke; Boersma, Quinten; Peach, Colin; Bertotti, Giovanni; Drury, Martyn

    2016-04-01

    Fluid flow from matrix to well in shales would be faster if we account for the duality of the permeable medium, considering a highly permeable fracture network together with a tight matrix. To investigate how long and how far a gas molecule would have to travel through the matrix until it reaches an open connected fracture, we investigated the permeability of the Whitby Mudstone (UK) matrix in combination with mapping the fracture network present in the current outcrops of the Whitby Mudstone on the Yorkshire coast. Matrix permeability was measured perpendicular to the bedding using a pressure step decay method on core samples; permeability values are in the microdarcy range. The natural fracture network present in the pavement is connected, with dominant NS and EW strikes, where the NS fractures form the main fracture set with an orthogonal EW set. Fracture spacing relations in the pavements show that the average distance to the nearest fracture varies between 7 cm (EW) and 14 cm (NS), and 90% of the matrix is within 30 cm of the nearest fracture. By making some assumptions (the fracture network at depth is similar to what is exposed in the current pavements and open to flow; the fracture network is at hydrostatic pressure at 3 km depth; the overpressure between matrix and fractures is 10%; and the matrix permeability perpendicular to the bedding is 0.1 microdarcy), we calculated the time it takes for a gas molecule to travel to the nearest fracture. These input values give travel times of up to 8 days for a distance of 14 cm. If the permeability is changed to 1 nanodarcy or 10 microdarcy, travel times change to 2.2 years or 2 hours, respectively.
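
The three travel times quoted are consistent with a simple inverse scaling of travel time with matrix permeability. A quick check of that reading (the baseline values are taken from the abstract; the scaling assumption is ours):

```python
# Baseline from the abstract: ~8 days to travel 14 cm at 0.1 microdarcy.
t0_days, k0_microdarcy = 8.0, 0.1

def travel_time_days(k_microdarcy):
    """Scale the baseline travel time, assuming time varies as 1/permeability."""
    return t0_days * k0_microdarcy / k_microdarcy

t_nanodarcy = travel_time_days(0.001)   # 1 nanodarcy = 0.001 microdarcy
t_10micro = travel_time_days(10.0)

print(t_nanodarcy / 365.25, "years")    # ~2.2 years, matching the abstract
print(t_10micro * 24, "hours")          # ~2 hours, matching the abstract
```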

  17. Turbine bucket for use in gas turbine engines and methods for fabricating the same

    DOEpatents

    Garcia-Crespo, Andres

    2014-06-03

    A turbine bucket for use with a turbine engine. The turbine bucket includes an airfoil that extends between a root end and a tip end. The airfoil includes an outer wall that defines a cavity that extends from the root end to the tip end. The outer wall includes a first ceramic matrix composite (CMC) substrate that extends a first distance from the root end to the tip end. An inner wall is positioned within the cavity. The inner wall includes a second CMC substrate that extends a second distance from the root end towards the tip end that is different than the first distance.

  18. Distance Delivery of Vocational Education Technologies and Planning Matrixes.

    ERIC Educational Resources Information Center

    Norenberg, Curtis D.; Lundblad, Larry

    This document presents a general review of distance education as it currently pertains to secondary, postsecondary, and adult education. Chapter I discusses the general concepts of distance education. It addresses the nature of distance education and distance delivery, the distance learner, the distance instructor, and distance education learning…

  19. Mining Diagnostic Assessment Data for Concept Similarity

    ERIC Educational Resources Information Center

    Madhyastha, Tara; Hunt, Earl

    2009-01-01

    This paper introduces a method for mining multiple-choice assessment data for similarity of the concepts represented by the multiple choice responses. The resulting similarity matrix can be used to visualize the distance between concepts in a lower-dimensional space. This gives an instructor a visualization of the relative difficulty of concepts…
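
Projecting a concept (dis)similarity matrix into a lower-dimensional space is conventionally done with multidimensional scaling; the abstract does not say which variant the authors use, so the sketch below shows classical (Torgerson) MDS on a toy distance matrix:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) MDS: embed a distance matrix in `dims` dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)            # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:dims]       # keep the largest `dims` of them
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Four "concepts" at known 2-D positions; MDS should recover their distances.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D, dims=2)
D_hat = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
```

For exact Euclidean input the embedding reproduces the distance matrix; for a similarity matrix mined from assessment data, one would first convert similarities to dissimilarities.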

  20. A method to calculate synthetic waveforms in stratified VTI media

    NASA Astrophysics Data System (ADS)

    Wang, W.; Wen, L.

    2012-12-01

    Transverse isotropy with a vertical axis of symmetry (VTI) may be an important material property in the Earth's interior. In this presentation, we develop a method to calculate synthetic seismograms for wave propagation in stratified VTI media. Our method is based on the generalized reflection and transmission method (GRTM) (Luco & Apsel 1983), which we extend to transversely isotropic (VTI) media. GRTM has the advantage of remaining stable in high frequency calculations compared to the Haskell matrix method (Haskell 1964), which explicitly excludes the exponential growth terms in the propagation matrix and is limited to low frequency computation. In the implementation, we also improve GRTM in two aspects. 1) We apply the Shanks transformation (Bender & Orszag 1999) to improve the rate of convergence. This improvement is especially important when the depths of source and receiver are close. 2) We adopt a self-adaptive Simpson integration method (Chen & Zhang 2001) in the discrete wavenumber integration so that the integration can still be carried out efficiently at large epicentral distances. Because the calculation for each frequency is independent, the program can also be effectively implemented in parallel computing. Our method provides a powerful tool to synthesize broadband seismograms of VTI media over a large range of epicentral distances. We will present examples of using the method to study possible transverse isotropy in the upper mantle and the lowermost mantle.
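
The Shanks transformation mentioned above accelerates a slowly converging sequence of partial sums; a self-contained illustration on the alternating series for ln 2 (the seismological wavenumber integrals themselves are beyond a short sketch):

```python
import math

def shanks(a):
    """One Shanks transformation pass over a sequence of partial sums."""
    return [(a[k + 1] * a[k - 1] - a[k] ** 2) / (a[k + 1] + a[k - 1] - 2 * a[k])
            for k in range(1, len(a) - 1)]

# Partial sums of the slowly converging series ln 2 = 1 - 1/2 + 1/3 - ...
partial = []
s = 0.0
for k in range(1, 12):
    s += (-1) ** (k + 1) / k
    partial.append(s)

accelerated = shanks(shanks(partial))   # two passes sharpen the estimate further
err_raw = abs(partial[-1] - math.log(2))
err_acc = abs(accelerated[-1] - math.log(2))
```

Two short passes reduce the error by orders of magnitude compared with simply taking more terms.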

  1. Insight on agglomerates of gold nanoparticles in glass based on surface plasmon resonance spectrum: study by multi-spheres T-matrix method

    NASA Astrophysics Data System (ADS)

    Avakyan, L. A.; Heinz, M.; Skidanenko, A. V.; Yablunovski, K. A.; Ihlemann, J.; Meinertz, J.; Patzig, C.; Dubiel, M.; Bugaev, L. A.

    2018-01-01

    The formation of a localized surface plasmon resonance (SPR) spectrum of randomly distributed gold nanoparticles in the surface layer of silicate float glass, generated and implanted by UV ArF-excimer laser irradiation of a thin gold layer sputter-coated on the glass surface, was studied by the T-matrix method, which enables particle agglomeration to be taken into account. The experimental technique used is promising for the production of submicron patterns of plasmonic nanoparticles (given by laser masks or gratings) without damage to the glass surface. Analysis of the applicability of the multi-spheres T-matrix (MSTM) method to the studied material was performed through calculations of SPR characteristics for differently arranged and structured gold nanoparticles (gold nanoparticles in solution, particle pairs, and core-shell silver-gold nanoparticles) for which either experimental data or results of modeling by other methods are available. For the studied gold nanoparticles in glass, it was revealed that the theoretical description of their SPR spectrum requires consideration of the plasmon coupling between particles, which can be done effectively by MSTM calculations. The obtained statistical distributions over particle sizes and over interparticle distances demonstrated saturation behavior with respect to the number of particles under consideration, which enabled us to determine the effective aggregate of particles sufficient to form the SPR spectrum. The suggested technique for fitting an experimental SPR spectrum of gold nanoparticles in glass, by varying the geometrical parameters of the particle aggregate in recurring MSTM calculations of the spectrum, enabled us to determine statistical characteristics of the aggregate: the average distance between particles, and the average size and size distribution of the particles.
The fitting strategy of the SPR spectrum presented here can be applied to nanoparticles of any nature and in various substances, and, in principle, can be extended for particles with non-spherical shapes, like ellipsoids, rod-like and other T-matrix-solvable shapes.

  2. Genetic diversity of popcorn genotypes using molecular analysis.

    PubMed

    Resh, F S; Scapim, C A; Mangolin, C A; Machado, M F P S; do Amaral, A T; Ramos, H C C; Vivas, M

    2015-08-19

    In this study, we analyzed dominant molecular markers to estimate the genetic divergence of 26 popcorn genotypes and evaluate whether using various dissimilarity coefficients with these dominant markers influences the results of cluster analysis. Fifteen random amplification of polymorphic DNA primers produced 157 amplified fragments, of which 65 were monomorphic and 92 were polymorphic. To calculate the genetic distances among the 26 genotypes, the complements of the Jaccard, Dice, and Rogers and Tanimoto similarity coefficients were used. A matrix of Dij values (dissimilarity matrix) was constructed, from which the genetic distances among genotypes were represented in a more simplified manner as a dendrogram generated using the unweighted pair-group method with arithmetic average. Clusters determined by molecular analysis generally did not group material from the same parental origin together. The largest genetic distance was between varieties 17 (UNB-2) and 18 (PA-091). In the identification of genotypes with the smallest genetic distance, the 3 coefficients showed no agreement. The 3 dissimilarity coefficients showed no major differences among their grouping patterns because agreement in determining the genotypes with large, medium, and small genetic distances was high. The largest genetic distances were observed for the Rogers and Tanimoto dissimilarity coefficient (0.74), followed by the Jaccard coefficient (0.65) and the Dice coefficient (0.48). The 3 coefficients showed similar estimations for the cophenetic correlation coefficient. Correlations among the matrices generated using the 3 coefficients were positive and had high magnitudes, reflecting strong agreement among the results obtained using the 3 evaluated dissimilarity coefficients.

  3. A Comparison of Weights Matrices on Computation of Dengue Spatial Autocorrelation

    NASA Astrophysics Data System (ADS)

    Suryowati, K.; Bekti, R. D.; Faradila, A.

    2018-04-01

    Spatial autocorrelation is a form of spatial analysis used to identify patterns of relationship or correlation between locations. The method is important for obtaining information on the dispersal pattern characteristics of a region and linkages between locations. In this study, it is applied to the incidence of Dengue Hemorrhagic Fever (DHF) in 17 sub-districts in Sleman, Daerah Istimewa Yogyakarta Province. The links among locations are indicated by a spatial weight matrix, which describes the neighbourhood structure and reflects spatial influence. According to the spatial data, weighting matrices can be divided into two types: point type (distance) and neighbourhood area (contiguity). The choice of weighting function is one determinant of the results of the spatial analysis. This study uses queen contiguity weights based on first-order neighbours, queen contiguity weights based on second-order neighbours, and inverse distance weights. Queen contiguity first order and inverse distance weights show significant spatial autocorrelation in DHF, but queen contiguity second order does not. Queen contiguity first and second order yield 68 and 86 neighbour-list entries, respectively.
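
The abstract does not name the autocorrelation statistic; Moran's I is the standard choice, and the sketch below computes it with inverse-distance weights on invented one-dimensional data (a clustered pattern should give a positive value):

```python
def morans_i(values, coords):
    """Moran's I spatial autocorrelation with inverse-distance weights."""
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]
    num = s0 = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            w = 1.0 / abs(coords[i] - coords[j])   # 1-D locations for simplicity
            num += w * z[i] * z[j]
            s0 += w
    return (n / s0) * num / sum(zi ** 2 for zi in z)

# Clustered pattern along a line: similar values sit near each other.
I = morans_i([1, 1, 1, 9, 9, 9], [0, 1, 2, 3, 4, 5])
```

Swapping the inverse-distance weights for a 0/1 contiguity matrix reproduces the queen-contiguity variants compared in the study.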

  4. Color normalization of histology slides using graph regularized sparse NMF

    NASA Astrophysics Data System (ADS)

    Sha, Lingdao; Schonfeld, Dan; Sethi, Amit

    2017-03-01

    Computer based automatic medical image processing and quantification are becoming popular in digital pathology. However, preparation of histology slides can vary widely due to differences in staining equipment, procedures and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To re- duce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised color normalization methods, unsupervised color normalization methods have advantages of time and cost efficient and universal applicability. Most of the unsupervised color normaliza- tion methods for histology are based on stain separation. Based on the fact that stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and particular its sparse version (SNMF), are good candidates for stain separation. However, most of the existing unsupervised color normalization method like PCA, ICA, NMF and SNMF fail to consider important information about sparse manifolds that its pixels occupy, which could potentially result in loss of texture information during color normalization. Manifold learning methods like Graph Laplacian have proven to be very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in keeping connected texture information. To utilized the texture information, we construct a nearest neighbor graph between pixels within a spatial area of an image based on their distances using heat kernal in lαβ space. 
The representation of a pixel in the stain density space is constrained to follow the feature distance of the pixel to pixels in the neighborhood graph. Utilizing color matrix transfer method with the stain concentrations found using our GSNMF method, the color normalization performance was also better than existing methods.

  5. Distance dependence in photo-induced intramolecular electron transfer

    NASA Astrophysics Data System (ADS)

    Larsson, Sven; Volosov, Andrey

    1986-09-01

    The distance dependence of the rate of photo-induced electron transfer reactions is studied. A quantum mechanical method, CNDO/S, is applied to a series of molecules recently investigated experimentally by Hush et al. The calculations show a large interaction through the saturated bridge which connects the two chromophores. The electronic matrix element HAB decreases by a factor of 10 in about 4 Å. There is also a decrease of the rate due to less exothermicity for the longer molecule. The results are in fair agreement with the experimental results.
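
The quoted falloff (a factor of 10 in about 4 Å) corresponds to an exponential decay constant that can be read off directly; note that the transfer rate, going as the square of HAB, decays twice as fast. A small sketch of that arithmetic (the exponential form is the standard assumption, not stated in the abstract):

```python
import math

# |H_AB| drops by a factor of 10 over ~4 angstroms (from the abstract).
beta_H = math.log(10) / 4.0    # decay constant of |H_AB|, per angstrom (~0.58)
beta_rate = 2 * beta_H         # the rate goes as H_AB**2

def attenuation(distance_angstrom):
    """Relative |H_AB| after `distance_angstrom` of saturated bridge."""
    return math.exp(-beta_H * distance_angstrom)
```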

  6. Multispectral Palmprint Recognition Using a Quaternion Matrix

    PubMed Central

    Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng

    2012-01-01

    Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix, then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively on the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest neighborhood classifier were employed for recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%. PMID:22666049

  7. Multispectral palmprint recognition using a quaternion matrix.

    PubMed

    Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng

    2012-01-01

    Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix, then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively on the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest neighborhood classifier were employed for recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%.

  8. Learning object correspondences with the observed transport shape measure.

    PubMed

    Pitiot, Alain; Delingette, Hervé; Toga, Arthur W; Thompson, Paul M

    2003-07-01

    We propose a learning method which introduces explicit knowledge to the object correspondence problem. Our approach uses an a priori learning set to compute a dense correspondence field between two objects, where the characteristics of the field bear close resemblance to those in the learning set. We introduce a new local shape measure we call the "observed transport measure", whose properties make it particularly amenable to the matching problem. From the values of our measure obtained at every point of the objects to be matched, we compute a distance matrix which embeds the correspondence problem in a highly expressive and redundant construct and facilitates its manipulation. We present two learning strategies that rely on the distance matrix and discuss their applications to the matching of a variety of 1-D, 2-D and 3-D objects, including the corpus callosum and ventricular surfaces.

  9. General transfer matrix formalism to calculate DNA-protein-drug binding in gene regulation: application to OR operator of phage lambda.

    PubMed

    Teif, Vladimir B

    2007-01-01

    The transfer matrix methodology is proposed as a systematic tool for the statistical-mechanical description of DNA-protein-drug binding involved in gene regulation. We show that a genetic system of several cis-regulatory modules is calculable using this method, considering explicitly the site-overlapping, competitive, cooperative binding of regulatory proteins, their multilayer assembly and DNA looping. In the methodological section, the matrix models are solved for the basic types of short- and long-range interactions between DNA-bound proteins, drugs and nucleosomes. We apply the matrix method to gene regulation at the OR operator of phage lambda. The transfer matrix formalism allowed the description of the lambda-switch at a single-nucleotide resolution, taking into account the effects of a range of inter-protein distances. Our calculations confirm previously established roles of the contact CI-Cro-RNAP interactions. Concerning long-range interactions, we show that while the DNA loop between the OR and OL operators is important at the lysogenic CI concentrations, the interference between the adjacent promoters PR and PRM becomes more important at small CI concentrations. A large change in the expression pattern may arise in this regime due to anticooperative interactions between DNA-bound RNA polymerases. The applicability of the matrix method to more complex systems is discussed.
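
The transfer-matrix idea can be shown on a minimal 1-D lattice model of cooperative binding, cross-checked against brute-force enumeration. This is a generic sketch, not the paper's DNA-protein-drug model; the statistical weights s (for a bound site) and w (for an adjacent bound pair) are illustrative.

```python
from itertools import product

def partition_transfer(n, s, w):
    """Partition function for n sites via a transfer matrix over site states
    (0 = empty, 1 = bound); T[prev][cur] is the weight of cur given prev."""
    T = [[1.0, s], [1.0, s * w]]
    v = [1.0, s]                      # statistical weights of the first site
    for _ in range(n - 1):
        v = [v[0] * T[0][0] + v[1] * T[1][0],
             v[0] * T[0][1] + v[1] * T[1][1]]
    return sum(v)

def partition_bruteforce(n, s, w):
    """Sum s**(#bound) * w**(#adjacent bound pairs) over all 2**n configurations."""
    Z = 0.0
    for cfg in product((0, 1), repeat=n):
        pairs = sum(cfg[i] * cfg[i + 1] for i in range(n - 1))
        Z += s ** sum(cfg) * w ** pairs
    return Z
```

The matrix version is linear in the number of sites, which is what makes single-nucleotide-resolution calculations over whole operators feasible.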

  10. General transfer matrix formalism to calculate DNA–protein–drug binding in gene regulation: application to OR operator of phage λ

    PubMed Central

    Teif, Vladimir B.

    2007-01-01

    The transfer matrix methodology is proposed as a systematic tool for the statistical–mechanical description of DNA–protein–drug binding involved in gene regulation. We show that a genetic system of several cis-regulatory modules is calculable using this method, considering explicitly the site-overlapping, competitive, cooperative binding of regulatory proteins, their multilayer assembly and DNA looping. In the methodological section, the matrix models are solved for the basic types of short- and long-range interactions between DNA-bound proteins, drugs and nucleosomes. We apply the matrix method to gene regulation at the OR operator of phage λ. The transfer matrix formalism allowed the description of the λ-switch at a single-nucleotide resolution, taking into account the effects of a range of inter-protein distances. Our calculations confirm previously established roles of the contact CI–Cro–RNAP interactions. Concerning long-range interactions, we show that while the DNA loop between the OR and OL operators is important at the lysogenic CI concentrations, the interference between the adjacent promoters PR and PRM becomes more important at small CI concentrations. A large change in the expression pattern may arise in this regime due to anticooperative interactions between DNA-bound RNA polymerases. The applicability of the matrix method to more complex systems is discussed. PMID:17526526

  11. Mixed Pattern Matching-Based Traffic Abnormal Behavior Recognition

    PubMed Central

    Cui, Zhiming; Zhao, Pengpeng

    2014-01-01

    A motion trajectory is an intuitive representation, in the time-space domain, of the micromotion behavior of a moving target. Trajectory analysis is an important approach to recognizing abnormal behaviors of moving targets. To address the complexity of vehicle trajectories, this paper first proposes a trajectory pattern learning method based on dynamic time warping (DTW) and spectral clustering. It introduces the DTW distance to measure the distances between vehicle trajectories and determines the number of clusters automatically by a spectral clustering algorithm based on the distance matrix. It then clusters sample data points into different clusters. After the spatial patterns and direction patterns are learned from the clusters, a recognition method for detecting vehicle abnormal behaviors based on mixed pattern matching is proposed. The experimental results show that the proposed technical scheme can recognize the main types of traffic abnormal behaviors effectively and has good robustness. A real-world application verified its feasibility and validity. PMID:24605045
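
The DTW distance at the core of the method is a short dynamic program; a minimal sketch (the spectral clustering step is omitted, and the toy trajectories are invented):

```python
def dtw_distance(s, t):
    """Dynamic time warping distance between two numeric sequences."""
    n, m = len(s), len(t)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Pairwise DTW distance matrix over toy 1-D trajectories, as input to clustering.
trajs = [[1.0, 2.0, 3.0], [1.0, 2.0, 2.0, 3.0], [5.0, 6.0, 7.0]]
M = [[dtw_distance(a, b) for b in trajs] for a in trajs]
```

Unlike the Euclidean distance, DTW tolerates trajectories of different lengths and local time shifts: the first two trajectories above differ in length yet have zero DTW distance.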

  12. New numerical method for radiation heat transfer in nonhomogeneous participating media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, J.R.; Tan, Zhiqiang

    A new numerical method, which solves the exact integral equations of distance-angular integration form for radiation transfer, is introduced in this paper. By constructing and prestoring the numerical integral formulas for the distance integral for appropriate kernel functions, this method eliminates the time-consuming evaluations of the kernels of the space integrals in the formal computations. In addition, when the number of elements in the system is large, the resulting coefficient matrix is quite sparse. Thus, either considerable time or much storage can be saved. A weakness of the method is discussed, and some remedies are suggested. As illustrations, some one-dimensional and two-dimensional problems in both homogeneous and inhomogeneous emitting, absorbing, and linear anisotropic scattering media are studied. Some results are compared with available data. 13 refs.

  13. Acceleration of intensity-modulated radiotherapy dose calculation by importance sampling of the calculation matrices.

    PubMed

    Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas

    2002-05-01

    In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a big dose calculation matrix. Then the dose calculation during the iterative optimization process consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a lot of computer memory and is still very time consuming, making it not practical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff, and has very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of just cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan.
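
The sampling idea is a standard unbiased-estimator trick: store a low-dose matrix entry with some probability p and weight it by 1/p, so the expected total energy is unchanged. A self-contained sketch with an invented exponential pencil-beam profile (not one of the paper's three tested probability distributions):

```python
import math
import random

def sample_pencil_beam(doses, threshold, rng):
    """Keep each low-dose entry with probability dose/threshold, reweighted
    by the reciprocal so the expected total energy is preserved."""
    kept = []
    for d in doses:
        if d >= threshold:
            kept.append(d)            # high-dose core: always stored
        else:
            p = d / threshold
            if rng.random() < p:
                kept.append(d / p)    # stored with weight 1/p
    return kept

rng = random.Random(0)
doses = [math.exp(-0.05 * r) for r in range(200)]   # rapid radial falloff
true_sum = sum(doses)

# Averaging many sampled matrices: the unbiased estimate converges to true_sum.
estimates = [sum(sample_pencil_beam(doses, 0.2, rng)) for _ in range(2000)]
mean_est = sum(estimates) / len(estimates)
```

Simply cutting the beam off below the threshold would instead bias every estimate low, which is the failure mode of the cutoff method noted in the abstract.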

  14. Derivation of stiffness matrix in constitutive modeling of magnetorheological elastomer

    NASA Astrophysics Data System (ADS)

    Leng, D.; Sun, L.; Sun, J.; Lin, Y.

    2013-02-01

    Magnetorheological elastomers (MREs) are a class of smart materials whose mechanical properties change instantly upon the application of a magnetic field. Based on the specially orthotropic, transversely isotropic stress-strain relationships and an effective permeability model, the stiffness matrix of the constitutive equations for deformable chain-like MREs is considered. To validate the shear modulus components in this stiffness matrix, magnetic-structural simulations with the finite element method (FEM) are presented. An acceptable agreement is illustrated between the analytical equations and the numerical simulations. For the specified magnetic field, sphere particle radius, distance between adjacent particles in chains, and volume fraction of ferrous particles, this constitutive equation is useful in engineering applications for estimating the elastic behaviour of chain-like MREs in an external magnetic field.

  15. Distance matrix-based approach to protein structure prediction.

    PubMed

    Kloczkowski, Andrzej; Jernigan, Robert L; Wu, Zhijun; Song, Guang; Yang, Lei; Kolinski, Andrzej; Pokarowski, Piotr

    2009-03-01

    Much structural information is encoded in the internal distances; a distance matrix-based approach can be used to predict protein structure and dynamics, and for structural refinement. Our approach is based on the square distance matrix D = [r_ij²] containing all square distances between residues in proteins. This distance matrix contains more information than the contact matrix C, whose elements are either 0 or 1 depending on whether the distance r_ij is greater or less than a cutoff value r_cutoff. We have performed spectral decomposition of the distance matrix, D = Σ_k λ_k v_k v_kᵀ, in terms of eigenvalues λ_k and the corresponding eigenvectors v_k, and found that it contains at most five nonzero terms. The dominant eigenvector is proportional to r², the square distance of points from the center of mass, and the next three are the principal components of the system of points. By predicting r² from the sequence we can approximate the distance matrix of a protein with an expected RMSD of about 7.3 Å, and by combining it with a prediction of the first principal component we can improve this approximation to 4.0 Å. We can also explain the role of hydrophobic interactions in protein structure, because r is highly correlated with the hydrophobic profile of the sequence. Moreover, r is highly correlated with several sequence profiles that are useful in protein structure prediction, such as contact number, residue-wise contact order (RWCO), and mean square fluctuations (i.e., crystallographic temperature factors). We have also shown that the next three components are related to the spatial directionality of the secondary structure elements, and that they too may be predicted from the sequence, improving overall structure prediction.
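    The rank statement above (at most five nonzero terms in the spectral decomposition of the squared distance matrix of points in three dimensions) is easy to check numerically; this sketch uses random 3-D points as stand-ins for residue coordinates.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))            # 100 "residues" in 3-D

# squared distance matrix D_ij = |x_i - x_j|^2
diff = X[:, None, :] - X[None, :, :]
D = (diff ** 2).sum(axis=-1)

# count eigenvalues that are nonzero relative to the largest one
eigvals = np.linalg.eigvalsh(D)
rank = int((np.abs(eigvals) > 1e-8 * np.abs(eigvals).max()).sum())
# D = s 1^T + 1 s^T - 2 X X^T with s_i = |x_i|^2, so rank <= 1 + 1 + 3 = 5
```

    The identity in the last comment explains the bound: two rank-one terms plus a Gram matrix of 3-D coordinates, whose rank is at most three, so for generic point sets the rank is exactly five.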
We have also shown that the large number of available HIV-1 protease structures provides a remarkable sampling of conformations, which can be viewed as direct structural information about the dynamics. After structure matching, we apply principal component analysis (PCA) to obtain the important apparent motions for both bound and unbound structures. There are significant similarities between the first few key motions and the first few low-frequency normal modes calculated from a static representative structure with an elastic network model (ENM) that is based on the contact matrix C (related to D), strongly suggesting that the variations among the observed structures and the corresponding conformational changes are facilitated by the low-frequency, global motions intrinsic to the structure. Similarities are also found when the approach is applied to an NMR ensemble, as well as to atomic molecular dynamics (MD) trajectories. Thus, a sufficiently large number of experimental structures can directly provide important information about protein dynamics, but ENM can also provide a similar sampling of conformations. Finally, we use distance constraints from databases of known protein structures for structure refinement. We use the distributions of distances of various types in known protein structures to obtain the most probable ranges or the mean-force potentials for the distances. We then impose these constraints on structures to be refined or include the mean-force potentials directly in the energy minimization so that more plausible structural models can be built. This approach has been successfully used by us in 2006 in the CASPR structure refinement (http://predictioncenter.org/caspR).

  16. Breeding Guild Determines Frog Distributions in Response to Edge Effects and Habitat Conversion in the Brazil's Atlantic Forest.

    PubMed

    Ferreira, Rodrigo B; Beard, Karen H; Crump, Martha L

    2016-01-01

    Understanding the response of species with differing life-history traits to habitat edges and habitat conversion helps predict their likelihood of persistence across changing landscapes. In Brazil's Atlantic Forest, we evaluated frog richness and abundance by breeding guild at four distances from the edge of a reserve: i) 200 m inside the forest, ii) 50 m inside the forest, iii) at the forest edge, and iv) 50 m inside three different converted habitats (coffee plantation, non-native Eucalyptus plantation, and abandoned pastures; hereafter, matrix types). By sampling a dry and a wet season, we recorded 622 individual frogs representing 29 species, of which three were undescribed. Breeding guild (i.e., bromeliad, leaf-litter, and water-body breeders) was the most important variable explaining frog distributions in relation to edge effects and matrix types. Leaf-litter and bromeliad breeders decreased in richness and abundance from the forest interior toward the matrix habitats. Water-body breeders increased in richness toward the matrix and remained relatively stable in abundance across distances. The number of large trees (i.e., DBH > 15 cm) and bromeliads best explained frog richness and abundance across distances. Twenty species found in the interior of the forest were not found in any matrix habitat. Richness and abundance across breeding guilds were higher in the rainy season, but frog distributions were similar across the four distances in the two seasons. Across matrix types, leaf-litter species primarily used Eucalyptus plantations, whereas water-body species primarily used coffee plantations. Bromeliad breeders were not found inside any matrix habitat. Our study highlights the importance of primary forest for bromeliad and leaf-litter breeders. We propose that water-body breeders use edge and matrix habitats to reach breeding habitats along the valleys. Including life-history characteristics, such as breeding guild, can improve predictions of frog distributions in response to edge effects and matrix types, and can guide more effective management and conservation actions.

  17. Breeding Guild Determines Frog Distributions in Response to Edge Effects and Habitat Conversion in the Brazil’s Atlantic Forest

    PubMed Central

    Ferreira, Rodrigo B.; Beard, Karen H.; Crump, Martha L.

    2016-01-01

    Understanding the response of species with differing life-history traits to habitat edges and habitat conversion helps predict their likelihood of persistence across changing landscapes. In Brazil’s Atlantic Forest, we evaluated frog richness and abundance by breeding guild at four distances from the edge of a reserve: i) 200 m inside the forest, ii) 50 m inside the forest, iii) at the forest edge, and iv) 50 m inside three different converted habitats (coffee plantation, non-native Eucalyptus plantation, and abandoned pastures; hereafter, matrix types). By sampling a dry and a wet season, we recorded 622 individual frogs representing 29 species, of which three were undescribed. Breeding guild (i.e., bromeliad, leaf-litter, and water-body breeders) was the most important variable explaining frog distributions in relation to edge effects and matrix types. Leaf-litter and bromeliad breeders decreased in richness and abundance from the forest interior toward the matrix habitats. Water-body breeders increased in richness toward the matrix and remained relatively stable in abundance across distances. The number of large trees (i.e., DBH > 15 cm) and bromeliads best explained frog richness and abundance across distances. Twenty species found in the interior of the forest were not found in any matrix habitat. Richness and abundance across breeding guilds were higher in the rainy season, but frog distributions were similar across the four distances in the two seasons. Across matrix types, leaf-litter species primarily used Eucalyptus plantations, whereas water-body species primarily used coffee plantations. Bromeliad breeders were not found inside any matrix habitat. Our study highlights the importance of primary forest for bromeliad and leaf-litter breeders. We propose that water-body breeders use edge and matrix habitats to reach breeding habitats along the valleys. Including life-history characteristics, such as breeding guild, can improve predictions of frog distributions in response to edge effects and matrix types, and can guide more effective management and conservation actions. PMID:27272328

  18. Symmetric nonnegative matrix factorization: algorithms and applications to probabilistic clustering.

    PubMed

    He, Zhaoshui; Xie, Shengli; Zdunek, Rafal; Zhou, Guoxu; Cichocki, Andrzej

    2011-12-01

    Nonnegative matrix factorization (NMF) is an unsupervised learning method useful in various applications, including image processing and semantic analysis of documents. This paper focuses on symmetric NMF (SNMF), a special case of NMF decomposition. Three parallel multiplicative update algorithms that directly use level-3 Basic Linear Algebra Subprograms are developed for this problem. First, by minimizing the Euclidean distance, a multiplicative update algorithm is proposed and its convergence under mild conditions is proved. Based on it, we further propose two fast parallel methods: the α-SNMF and β-SNMF algorithms. All of them are easy to implement. These algorithms are applied to probabilistic clustering. We demonstrate their effectiveness for facial image clustering, document categorization, and pattern clustering in gene expression.
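    A minimal sketch of symmetric NMF by multiplicative updates, using the classic damped Euclidean-distance rule (in the spirit of the algorithms described, though not necessarily the exact α-/β-SNMF variants); the toy similarity matrix is an assumption for illustration.

```python
import numpy as np

def snmf(A, k, iters=500, eps=1e-9, seed=0):
    """Symmetric NMF: find W >= 0 with A ≈ W @ W.T.

    Uses the damped multiplicative update
        W <- W * (1/2 + (A W) / (2 W W^T W)),
    which decreases ||A - W W^T||_F for symmetric nonnegative A
    while keeping W entrywise nonnegative.
    """
    rng = np.random.default_rng(seed)
    W = rng.random((A.shape[0], k))
    for _ in range(iters):
        WWtW = W @ (W.T @ W)
        W *= 0.5 + (A @ W) / (2.0 * WWtW + eps)
    return W

# block-diagonal similarity matrix: two clear clusters of three items
A = np.zeros((6, 6))
A[:3, :3] = 1.0
A[3:, 3:] = 1.0

W = snmf(A, k=2)
err = np.linalg.norm(A - W @ W.T)
labels = W.argmax(axis=1)   # probabilistic-clustering style assignment
```

    With W restricted to be nonnegative, the row-wise argmax of W gives a cluster assignment, which is the probabilistic-clustering use the paper targets.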

  19. Discriminant projective non-negative matrix factorization.

    PubMed

    Guan, Naiyang; Zhang, Xiang; Luo, Zhigang; Tao, Dacheng; Yang, Xuejun

    2013-01-01

    Projective non-negative matrix factorization (PNMF) projects high-dimensional non-negative examples X onto a lower-dimensional subspace spanned by a non-negative basis W and considers WᵀX as their coefficients, i.e., X ≈ WWᵀX. Since PNMF learns the natural parts-based representation W of X, it has been widely used in many fields such as pattern recognition and computer vision. However, PNMF does not perform well in classification tasks because it completely ignores the label information of the dataset. This paper proposes a Discriminant PNMF method (DPNMF) to overcome this deficiency. In particular, DPNMF applies Fisher's criterion to PNMF in order to utilize the label information. Like PNMF, DPNMF learns a single non-negative basis matrix and incurs a lower computational burden than NMF. In contrast to PNMF, DPNMF maximizes the distance between the centers of any two classes of examples while minimizing the distance between any two examples of the same class in the lower-dimensional subspace, and thus has more discriminant power. We develop a multiplicative update rule to solve DPNMF and prove its convergence. Experimental results on four popular face image datasets confirm its effectiveness compared with representative NMF and PNMF algorithms.

  20. Discriminant Projective Non-Negative Matrix Factorization

    PubMed Central

    Guan, Naiyang; Zhang, Xiang; Luo, Zhigang; Tao, Dacheng; Yang, Xuejun

    2013-01-01

    Projective non-negative matrix factorization (PNMF) projects high-dimensional non-negative examples X onto a lower-dimensional subspace spanned by a non-negative basis W and considers WᵀX as their coefficients, i.e., X ≈ WWᵀX. Since PNMF learns the natural parts-based representation W of X, it has been widely used in many fields such as pattern recognition and computer vision. However, PNMF does not perform well in classification tasks because it completely ignores the label information of the dataset. This paper proposes a Discriminant PNMF method (DPNMF) to overcome this deficiency. In particular, DPNMF applies Fisher's criterion to PNMF in order to utilize the label information. Like PNMF, DPNMF learns a single non-negative basis matrix and incurs a lower computational burden than NMF. In contrast to PNMF, DPNMF maximizes the distance between the centers of any two classes of examples while minimizing the distance between any two examples of the same class in the lower-dimensional subspace, and thus has more discriminant power. We develop a multiplicative update rule to solve DPNMF and prove its convergence. Experimental results on four popular face image datasets confirm its effectiveness compared with representative NMF and PNMF algorithms. PMID:24376680

  1. Application of Image Analysis for Characterization of Spatial Arrangements of Features in Microstructure

    NASA Technical Reports Server (NTRS)

    Louis, Pascal; Gokhale, Arun M.

    1995-01-01

    A number of microstructural processes are sensitive to the spatial arrangements of features in a microstructure. However, very little attention has been given in the past to experimental measurement of the descriptors of microstructural distance distributions, due to the lack of practically feasible methods. We present a digital image analysis procedure to estimate microstructural distance distributions. The application of the technique is demonstrated by estimating the K function, the radial distribution function, and the nearest-neighbor distribution function of hollow spherical carbon particulates in a polymer matrix composite, observed in a metallographic section.
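    A minimal sketch of estimating one such descriptor, the nearest-neighbor distribution function G(r), from detected feature centers; the random point pattern here stands in for segmented particle centroids (for large images a k-d tree, e.g. SciPy's cKDTree, would be used instead of a dense distance matrix).

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.random((200, 2))               # feature centers in a unit-square section

# pairwise distances; mask the self-distances on the diagonal
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nn = d.min(axis=1)                        # nearest-neighbor distance per feature

# empirical nearest-neighbor distribution function G(r) = P(nn <= r)
r = np.linspace(0.0, 0.2, 50)
G = (nn[None, :] <= r[:, None]).mean(axis=1)
```

    Comparing the empirical G(r) with the form expected for complete spatial randomness is one standard way to detect clustering or ordering of the particulates.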

  2. Molecular analysis of genetic diversity among vine accessions using DNA markers.

    PubMed

    da Costa, A F; Teodoro, P E; Bhering, L L; Tardin, F D; Daher, R F; Campos, W F; Viana, A P; Pereira, M G

    2017-04-13

    Viticulture presents a number of economic and social advantages, such as increasing employment levels and fixing the labor force in rural areas. With the aim of initiating a grapevine genetic improvement program at the Darcy Ribeiro North Fluminense State University (state of Rio de Janeiro, Brazil), genetic diversity among 40 genotypes (varieties, rootstocks, and species of different subgenera) was evaluated using random amplified polymorphic DNA (RAPD) molecular markers. We built a matrix of binary data, whereby the presence of a band was assigned "1" and its absence "0." The genetic distance between pairs of genotypes was calculated as the arithmetic complement of the Jaccard index. The results revealed considerable variability in the collection. Analysis of the genetic dissimilarity matrix showed that the most dissimilar genotypes were Rupestris du Lot and Vitis rotundifolia, which were the most genetically distant (0.5972). The most similar were genotype 31 (unidentified) and Rupestris du Lot, which showed zero distance, confirming the results of field observations. A duplicate was thus confirmed, consistent with field observations, and a short distance was found between the variety 'Italy' and its mutation, 'Ruby'. The grouping methods used were somewhat concordant.
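    The distance measure used above, the arithmetic complement of the Jaccard index on 0/1 band data, can be sketched as follows; the tiny band matrix is illustrative, not the study's data.

```python
import numpy as np

def jaccard_distance_matrix(B):
    """Pairwise 1 - Jaccard index for the rows of a binary band matrix B.

    For two genotypes, the Jaccard index is |shared bands| / |bands in
    either genotype|; its arithmetic complement is the genetic distance.
    """
    B = np.asarray(B, dtype=bool)
    inter = (B[:, None, :] & B[None, :, :]).sum(axis=-1)
    union = (B[:, None, :] | B[None, :, :]).sum(axis=-1)
    J = np.where(union > 0, inter / np.maximum(union, 1), 1.0)
    return 1.0 - J

# three genotypes scored for presence (1) / absence (0) of four bands
B = np.array([[1, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 0, 1, 1]])
D = jaccard_distance_matrix(B)
# identical band patterns give distance 0; disjoint patterns give 1
```

    The resulting symmetric dissimilarity matrix D is exactly the kind of input fed to the clustering ("grouping") methods mentioned in the abstract.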

  3. Degree of coherence for vectorial electromagnetic fields as the distance between correlation matrices.

    PubMed

    Luis, Alfredo

    2007-04-01

    We assess the degree of coherence of vectorial electromagnetic fields in the space-frequency domain as the distance between the cross-spectral density matrix and the identity matrix representing completely incoherent light. This definition is compared with previous approaches. It is shown that this distance provides an upper bound for the degree of coherence and visibility for any pair of scalar waves obtained by linear combinations of the original fields. This same approach emerges when applying a previous definition of global coherence to a Young interferometer.

  4. Generalized Mulliken-Hush analysis of electronic coupling interactions in compressed pi-stacked porphyrin-bridge-quinone systems.

    PubMed

    Zheng, Jieru; Kang, Youn K; Therien, Michael J; Beratan, David N

    2005-08-17

    Donor-acceptor interactions were investigated in a series of unusually rigid, cofacially compressed pi-stacked porphyrin-bridge-quinone systems. The two-state generalized Mulliken-Hush (GMH) approach was used to compute the coupling matrix elements. The theoretical coupling values evaluated with the GMH method were obtained from configuration interaction calculations using the INDO/S method. The results of this analysis are consistent with the comparatively soft distance dependences observed for both the charge separation and charge recombination reactions. Theoretical studies of model structures indicate that the phenyl units dominate the mediation of the donor-acceptor coupling and that the relatively weak exponential decay of rate with distance arises from the compression of this pi-electron stack.

  5. Dependence of Sum Frequency Generation (SFG) Spectral Features on the Mesoscale Arrangement of SFG-Active Crystalline Domains Interspersed in SFG-Inactive Matrix: A Case Study with Cellulose in Uniaxially Aligned Control Samples and Alkali-Treated Secondary Cell Walls of Plants

    DOE PAGES

    Makarem, Mohamadamin; Sawada, Daisuke; O'Neill, Hugh M.; ...

    2017-04-21

    Vibrational sum frequency generation (SFG) spectroscopy can selectively detect not only molecules at two-dimensional (2D) interfaces but also noncentrosymmetric domains interspersed in amorphous three-dimensional (3D) matrices. However, SFG analysis of 3D systems is more complicated than that of 2D systems because more variables are involved. One such variable is the distance between SFG-active domains in the SFG-inactive matrix. In this study, we fabricated control samples in which SFG-active cellulose crystals were uniaxially aligned in an amorphous matrix. Assuming uniform separation distances between cellulose crystals, the relative intensities of the alkyl (CH) and hydroxyl (OH) SFG peaks of cellulose could be related to the intercrystallite distance. The experimentally measured CH/OH intensity ratio as a function of intercrystallite distance could be explained reasonably well with a model constructed using the theoretically calculated hyperpolarizabilities of cellulose and the symmetry-cancellation principle for mutually antiparallel dipoles. This comparison revealed physical insights into the intercrystallite-distance dependence of the CH/OH SFG intensity ratio of cellulose, which can be used to interpret the SFG spectral features of plant cell walls in terms of the mesoscale packing of cellulose microfibrils.

  6. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d⁴) for the basis change plus O(d³) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis consists of strings of Pauli operators, the basis change takes only O(d³) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
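    The "nearest physical state" step can be sketched as an eigenvalue truncation: zero out the most negative eigenvalues of μ and spread the deficit uniformly over the remaining ones, which yields the closest trace-one positive-semidefinite matrix in 2-norm. This is our reading of the procedure, simplified, with an assumed trace-one input.

```python
import numpy as np

def nearest_density_matrix(mu):
    """Project a Hermitian, trace-one matrix onto the physical states.

    Zeroes the most negative eigenvalues and distributes their total
    weight uniformly over the remaining ones, preserving the trace.
    """
    vals, vecs = np.linalg.eigh(mu)
    lam = vals[::-1].copy()            # eigenvalues in descending order
    a, i = 0.0, len(lam)
    while i > 0 and lam[i - 1] + a / i < 0:
        a += lam[i - 1]                # accumulate the negative weight
        lam[i - 1] = 0.0
        i -= 1
    lam[:i] += a / i                   # spread deficit over the survivors
    lam = lam[::-1]                    # back to eigh's ascending order
    return (vecs * lam) @ vecs.conj().T

# a "measured" candidate matrix with one small negative eigenvalue
mu = np.diag([0.7, 0.35, -0.05])
rho = nearest_density_matrix(mu)
```

    For this input the negative eigenvalue -0.05 is zeroed and its weight split between the other two, giving eigenvalues 0.675, 0.325, and 0.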

  7. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    PubMed

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. Arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). We therefore question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examination of simulated data suggests that for sets of orientation data with low dispersion, an isotropic distribution, and second and third angle parameters below 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. Beyond these limits, however, arithmetic averaging can have substantive nonlinear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize the limitations of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
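    One matrix-based Euclidean (chordal) averaging scheme of the kind advocated above averages the rotation matrices arithmetically and then projects the result back onto the rotation group via the SVD; this sketch is an illustration of the idea, not the authors' exact implementation.

```python
import numpy as np

def rotation_about_z(theta):
    """Rotation matrix for an angle theta (radians) about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def euclidean_mean_rotation(Rs):
    """Chordal (Euclidean) mean of rotation matrices: arithmetic mean
    followed by projection onto SO(3) via the SVD (sign-corrected so the
    determinant is +1)."""
    M = np.mean(Rs, axis=0)
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

# three segment orientations: 10, 20, 30 degrees about a common axis
angles = np.deg2rad([10.0, 20.0, 30.0])
Rs = [rotation_about_z(a) for a in angles]
R_mean = euclidean_mean_rotation(Rs)
theta = np.degrees(np.arctan2(R_mean[1, 0], R_mean[0, 0]))
```

    For rotations about a common axis the chordal mean recovers the middle rotation, whereas naive per-angle averaging can fail badly near parameterization singularities of the Euler/Cardan angles.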

  8. Atmospheric pressure matrix-assisted laser desorption ionization as a plume diagnostic tool in laser evaporation methods

    NASA Astrophysics Data System (ADS)

    Callahan, John H.; Galicia, Marsha C.; Vertes, Akos

    2002-09-01

    Laser evaporation techniques, including matrix-assisted pulsed laser evaporation (MAPLE), are attracting increasing attention due to their ability to deposit thin layers of undegraded synthetic polymers and biopolymers. Laser evaporation methods can be implemented in reflection geometry, with the laser and the substrate positioned on the same side of the target. In some applications (e.g. direct write, DW), however, transmission geometry is used, i.e. the thin target is placed between the laser and the substrate. In this case, the laser pulse perforates the target and transfers some target material to the substrate. In order to optimize evaporation processes it is important to know the composition of the target plume and of the material deposited from the plume. We used a recently introduced analytical method, atmospheric pressure matrix-assisted laser desorption ionization (AP-MALDI), to characterize the ionic components of the plume in both reflection and transmission geometry. This technique can also be used to directly probe materials deposited on surfaces (such as glass slides) by laser evaporation methods. The test compound (a small peptide, e.g. Angiotensin I (ATI) or Substance P) was mixed with a MALDI matrix (α-cyano-4-hydroxycinnamic acid (CHCA), sinapinic acid (SA), or 2,5-dihydroxybenzoic acid (DHB)) and applied to the stainless steel (reflection geometry) or transparent conducting (transmission geometry) target holder. In addition to the classical dried-droplet method, we also used electrospray target deposition to gain better control of crystallite size, thickness, and homogeneity. The target was mounted in front of the inlet orifice of an ion trap mass spectrometer (IT-MS) that sampled the ionic components of the plume generated by a nitrogen laser. We studied the effects of several parameters, such as the orifice-to-target distance, illumination geometry, extraction voltage distribution, and sample preparation, on the generated ions.
Various analyte-matrix and matrix-matrix cluster ions were observed with relatively low abundance of the matrix ions.

  9. An active learning representative subset selection method using net analyte signal.

    PubMed

    He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi

    2018-05-05

    To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference in the Euclidean norm of the net analyte signal (NAS) vector between candidate and already-selected samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vectors, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying the projection matrix by the sample spectra; the scalar NAS value is obtained by computing the norm. The distance between the candidate set and the selected set is then computed, and the samples with the largest distance are added to the selected set sequentially. Last, the concentration of the analyte is measured so that the sample can be used as a calibration sample. A validation test shows that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced.
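    The selection loop described above can be sketched as a greedy max-min procedure on scalar NAS values. Everything here is a hypothetical stand-in: the random spectra, and in particular the placeholder projection matrix built from assumed interferent spectra, are not the paper's actual NAS computation.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical data: 100 candidate spectra over 50 wavelengths
spectra = rng.random((100, 50))

# assumed NAS projection: an orthogonal projector onto the complement of
# a few "interferent" spectra (placeholder for the calibration-derived one)
interf = rng.random((50, 3))
P = np.eye(50) - interf @ np.linalg.pinv(interf)

# scalar NAS value per sample = Euclidean norm of the projected spectrum
nas_norm = np.linalg.norm(spectra @ P.T, axis=1)

def select_representative(values, n_select):
    """Greedy max-min selection on scalar NAS values: seed with the two
    extremes, then repeatedly add the candidate whose value is farthest
    from every already-selected sample."""
    chosen = [int(values.argmin()), int(values.argmax())]
    while len(chosen) < n_select:
        dist = np.min(np.abs(values[:, None] - values[chosen][None, :]), axis=1)
        dist[chosen] = -1.0            # never re-pick a selected sample
        chosen.append(int(dist.argmax()))
    return chosen

subset = select_representative(nas_norm, 10)
```

    Only the samples in `subset` would then be sent for reference concentration measurement, which is where the claimed time and cost savings come from.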

  10. An active learning representative subset selection method using net analyte signal

    NASA Astrophysics Data System (ADS)

    He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi

    2018-05-01

    To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference in the Euclidean norm of the net analyte signal (NAS) vector between candidate and already-selected samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vectors, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying the projection matrix by the sample spectra; the scalar NAS value is obtained by computing the norm. The distance between the candidate set and the selected set is then computed, and the samples with the largest distance are added to the selected set sequentially. Last, the concentration of the analyte is measured so that the sample can be used as a calibration sample. A validation test shows that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced.

  11. Universality of quantum information in chaotic CFTs

    NASA Astrophysics Data System (ADS)

    Lashkari, Nima; Dymarsky, Anatoly; Liu, Hong

    2018-03-01

    We study the Eigenstate Thermalization Hypothesis (ETH) in chaotic conformal field theories (CFTs) of arbitrary dimensions. Assuming local ETH, we compute the reduced density matrix of a ball-shaped subsystem of finite size in the infinite volume limit when the full system is an energy eigenstate. This reduced density matrix is close in trace distance to a density matrix, to which we refer as the ETH density matrix, that is independent of all the details of an eigenstate except its energy and charges under global symmetries. In two dimensions, the ETH density matrix is universal for all theories with the same value of central charge. We argue that the ETH density matrix is close in trace distance to the reduced density matrix of the (micro)canonical ensemble. We support the argument in higher dimensions by comparing the von Neumann entropy of the ETH density matrix with the entropy of a black hole in holographic systems in the low temperature limit. Finally, we generalize our analysis to coherent states with energy density that varies slowly in space, and show that locally such states are well described by the ETH density matrix.

  12. Microarray-based Resequencing of Multiple Bacillus anthracis Isolates

    DTIC Science & Technology

    2004-12-17

    generated an Unweighted Pair Group Method with Arithmetic Mean (UPGMA) tree (see methods [56]; Figure 3). The strains group together in a manner broadly similar... was created using DNADIST, plotted as a UPGMA tree using NEIGHBOR, and the tree plotted using DRAWGRAM [56]. The B1 strain A0465 was used as an... distance matrix was created using DNADIST, plotted as a UPGMA tree using NEIGHBOR, and the tree plotted using DRAWGRAM [57]. Additional data files: The
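    Since UPGMA is average-linkage hierarchical clustering on a distance matrix, the DNADIST/NEIGHBOR step mentioned in the snippet above can be approximated in SciPy as follows; the strain names other than A0465 and all distance values are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# hypothetical pairwise genetic distance matrix for four strains
names = ["A0465", "strain_B", "strain_C", "strain_D"]
D = np.array([[0.00, 0.10, 0.40, 0.42],
              [0.10, 0.00, 0.41, 0.43],
              [0.40, 0.41, 0.00, 0.05],
              [0.42, 0.43, 0.05, 0.00]])

# UPGMA = average linkage on the condensed (upper-triangle) distances
Z = linkage(squareform(D), method="average")
groups = fcluster(Z, t=2, criterion="maxclust")   # cut into two clades
```

    The linkage matrix Z plays the role of the NEIGHBOR/DRAWGRAM tree: `scipy.cluster.hierarchy.dendrogram(Z, labels=names)` would plot it.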

  13. IR-MALDESI MASS SPECTROMETRY IMAGING OF BIOLOGICAL TISSUE SECTIONS USING ICE AS A MATRIX

    PubMed Central

    Robichaud, Guillaume; Barry, Jeremy A.; Muddiman, David C.

    2014-01-01

    Infrared matrix-assisted laser desorption electrospray ionization (IR-MALDESI) mass spectrometry imaging of biological tissue sections, using a layer of deposited ice as an energy-absorbing matrix, was investigated. The dynamics of plume ablation were first explored using a nanosecond-exposure shadowgraphy system designed to simultaneously photograph the plume and collect the FT-ICR mass spectrum corresponding to the same ablation event. Ablation of fresh tissue with and without ice as a matrix was compared using this technique. The effects of spot-to-spot distance, number of laser shots per pixel, and tissue condition (matrix) on ion abundance were also investigated for 50 µm thick tissue sections. Finally, the statistical method of design of experiments was used to compare source parameters and determine the optimal conditions for IR-MALDESI of tissue sections using deposited ice as a matrix. With a better understanding of the fundamentals of ablation dynamics and a systematic approach to exploring the experimental space, it was possible to improve ion abundance by nearly one order of magnitude. PMID:24385399

  14. Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations.

    PubMed

    Ahrari, Ali; Deb, Kalyanmoy; Preuss, Mike

    2017-01-01

    During the recent decades, many niching methods have been proposed and empirically verified on some available test problems. They often rely on some particular assumptions associated with the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, normalized Mahalanobis distance, and the Ursem's hill-valley function in order to develop a new tool for multimodal optimization, which does not make any of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance to the center of fitter subpopulations and the previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resultant method, called the covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared to several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed which assumes a rough estimation of the desired/expected number of minima available. Performance sensitivity to the accuracy of this estimation is also studied by introducing the concept of robust mean peak ratio. 
Based on the numerical results using the available and the introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered at the same time.
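    The core repelling rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the fixed repelling radius, and passing the covariance and step size explicitly are all assumptions for the sketch.

    ```python
    import numpy as np

    def normalized_mahalanobis(x, center, cov, sigma):
        """Mahalanobis distance of x from center, normalized by the step size sigma."""
        diff = np.asarray(x, float) - np.asarray(center, float)
        return float(np.sqrt(diff @ np.linalg.solve(cov, diff))) / sigma

    def is_tabooed(offspring, taboo_points, cov, sigma, radius=1.0):
        """Reject an offspring that falls inside the repelling radius of any taboo point."""
        return any(normalized_mahalanobis(offspring, t, cov, sigma) < radius
                   for t in taboo_points)
    ```

    In RS-CMSA the repelling power (here the fixed `radius`) is adapted per taboo point to cope with basins of dissimilar size; the sketch keeps it constant for clarity.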

  15. An Upper Bound on Orbital Debris Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on the high speed satellite collision probability, P (sub c), have been investigated. Previous methods assume that an individual position error covariance matrix is available for each object, the two matrices being combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P (sub c). If error covariance information was available for only one of the two objects, either some default shape was used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful P (sub c) upper bound. There are various avenues along which an upper bound on the high speed satellite collision probability has been pursued. Typically, for the collision plane representation of the high speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of the ellipse), the size (scaling of the standard deviations), or the orientation (rotation of the ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum P (sub c). Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and that the two are combined into a single, relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum P (sub c). But what if error covariance information for one of the two objects is not available? In that case the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known, without any corresponding state error covariance information. 
The usual methods of finding a maximum P (sub c) are then of no use, because the analyst defaults to no knowledge of the combined, relative position error covariance matrix. It is reasonable to think that, even with no covariance information, an analyst might still attempt to determine the error covariance matrix that results in an upper bound on the P (sub c). Without some guidance on limits to the shape, size, and orientation of the unknown covariance matrix, the limiting case is a degenerate ellipse lying along the relative miss vector in the collision plane. Unless the miss position is exceptionally large or the at-risk object exceptionally small, this method results in a maximum P (sub c) too large to be of practical use. For example, assume that the miss distance equals the current ISS alert volume along-track (+ or -) distance of 25 kilometers and that the at-risk area has a 70 meter radius; the maximum (degenerate ellipse) P (sub c) is then about 0.00136. At 40 kilometers, the maximum P (sub c) would be 0.00085, still almost an order of magnitude larger than the ISS maneuver threshold of 0.0001. In fact, a miss distance of almost 340 kilometers is necessary to reduce the maximum P (sub c) associated with this degenerate ellipse to the ISS maneuver threshold value. Such a result is frequently of no practical value to the analyst. Some improvement may be made by realizing that, while the position error covariance matrix of one of the objects (usually the debris object) may not be known, the position error covariance matrix of the other object (usually the asset) is almost always available. Making use of the position error covariance information for the one object provides an improvement in finding a maximum P (sub c) which, in some cases, may offer real utility. The equations to be used are presented and their use discussed.
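    The degenerate-ellipse bound quoted in the abstract follows from a simple 1-D model: for a Gaussian of standard deviation sigma along the miss vector, Pc(sigma) is maximized at sigma equal to the miss distance. A hedged sketch of that closed form (the function name is illustrative, and the small-target approximation R << d is assumed):

    ```python
    import math

    def max_pc_degenerate(miss_distance_m, hard_body_radius_m):
        """Upper-bound Pc for a degenerate error ellipse along the miss vector.

        For a 1-D Gaussian of std sigma along the miss direction,
        Pc(sigma) ~ 2 R exp(-d^2 / (2 sigma^2)) / (sigma sqrt(2 pi))
        (valid for R << d); maximizing over sigma gives sigma = d, hence
        Pc_max = 2 R / (d sqrt(2 pi e)).
        """
        d, r = miss_distance_m, hard_body_radius_m
        return 2.0 * r / (d * math.sqrt(2.0 * math.pi * math.e))
    ```

    With a 70 m at-risk radius this reproduces the abstract's figures: about 0.00136 at a 25 km miss distance and about 0.00085 at 40 km.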

  16. Similarities among receptor pockets and among compounds: analysis and application to in silico ligand screening.

    PubMed

    Fukunishi, Yoshifumi; Mikami, Yoshiaki; Nakamura, Haruki

    2005-09-01

    We developed a new method to evaluate the distances and similarities between receptor pockets or chemical compounds based on a multi-receptor versus multi-ligand docking affinity matrix. The receptors were classified by a cluster analysis based on calculations of the distance between receptor pockets. A set of receptors with low sequence homology that bind a similar compound could be classified into one cluster. Based on this line of reasoning, we proposed a new in silico screening method. According to this method, compounds in a database were docked to multiple targets. The new docking score was a slightly modified version of the multiple active site correction (MASC) score. Receptors that were at a set distance from the target receptor were not included in the analysis, and the modified MASC scores were calculated for the selected receptors. The choice of receptors is important for achieving a good screening result, and our clustering of receptors is useful for this purpose. This method was applied to the analysis of a set of 132 receptors and 132 compounds, and the results demonstrated that it achieves a high hit ratio compared to uniform sampling, using the receptor-ligand docking program Sievgene, which was newly developed and shows good docking performance, reconstructing 50.8% of the complexes to within 2 Å RMSD.
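    The original MASC idea standardizes each compound's docking score against a panel of reference receptors; a plausible sketch of such a rescoring step follows. The exact modification used in the paper is not reproduced here, and the function name, the row/column layout of the affinity matrix, and the standardization formula are assumptions.

    ```python
    import numpy as np

    def modified_masc(affinity, target_idx, selected):
        """MASC-style rescoring sketch: standardize each compound's score
        against the target receptor by the mean and standard deviation of its
        scores over a selected set of reference receptors."""
        A = np.asarray(affinity, float)   # rows: compounds, cols: receptors
        ref = A[:, selected]
        mu = ref.mean(axis=1)
        sd = ref.std(axis=1)
        sd[sd == 0] = 1.0                 # guard against constant rows
        return (A[:, target_idx] - mu) / sd
    ```

    Excluding receptors too close to the target (as the abstract describes) corresponds to choosing `selected` from the receptor clustering before rescoring.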

  17. Structural model of dioxouranium(VI) with hydrazono ligands.

    PubMed

    Mubarak, Ahmed T

    2005-04-01

    Synthesis and characterization of several new coordination compounds of dioxouranium(VI) heterochelates with bidentate hydrazono compounds derived from 1-phenyl-3-methyl-5-pyrazolone are described. The ligands and uranyl complexes have been characterized by various physico-chemical techniques. The bond lengths and the force constants have been calculated from the asymmetric stretching frequency of the O-U-O groups. The infrared spectral studies showed a monobasic bidentate behaviour with the oxygen and hydrazo nitrogen donor system. The effect of Hammett's constant on the bond distances and the force constants was also discussed and illustrated. Wilson's matrix method, Badger's formula, and the Jones and El-Sonbati equations were used to determine the stretching and interaction force constants, from which the U-O bond distances were calculated. The bond distances of these complexes were also investigated.

  18. Structural model of dioxouranium(VI) with hydrazono ligands

    NASA Astrophysics Data System (ADS)

    Mubarak, Ahmed T.

    2005-04-01

    Synthesis and characterization of several new coordination compounds of dioxouranium(VI) heterochelates with bidentate hydrazono compounds derived from 1-phenyl-3-methyl-5-pyrazolone are described. The ligands and uranyl complexes have been characterized by various physico-chemical techniques. The bond lengths and the force constants have been calculated from the asymmetric stretching frequency of the O-U-O groups. The infrared spectral studies showed a monobasic bidentate behaviour with the oxygen and hydrazo nitrogen donor system. The effect of Hammett's constant on the bond distances and the force constants was also discussed and illustrated. Wilson's matrix method, Badger's formula, and the Jones and El-Sonbati equations were used to determine the stretching and interaction force constants, from which the U-O bond distances were calculated. The bond distances of these complexes were also investigated.

  19. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL

    PubMed Central

    Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi

    2017-01-01

    A phylogenetic tree is a visual diagram of the relationships between a set of biological species, which scientists use to analyze many characteristics of the species. The distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa. These methods suffer from computational performance issues, and although several new methods using high-performance hardware and frameworks have been proposed, the issue persists. In this work, a novel parallel Unweighted Pair Group Method with Arithmetic Mean approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately a 3-fold to 7-fold speedup over implementations of the Unweighted Pair Group Method with Arithmetic Mean on a modern CPU and a single GPU, respectively. PMID:29051701

  20. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL.

    PubMed

    Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi

    2017-01-01

    A phylogenetic tree is a visual diagram of the relationships between a set of biological species, which scientists use to analyze many characteristics of the species. The distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa. These methods suffer from computational performance issues, and although several new methods using high-performance hardware and frameworks have been proposed, the issue persists. In this work, a novel parallel Unweighted Pair Group Method with Arithmetic Mean approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately a 3-fold to 7-fold speedup over implementations of the Unweighted Pair Group Method with Arithmetic Mean on a modern CPU and a single GPU, respectively.
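    For reference, the serial UPGMA algorithm that MGUPGMA parallelizes can be written in a few lines: repeatedly merge the closest pair of clusters and recompute distances as size-weighted averages. This is a plain CPU sketch (function name and output format are illustrative), not the GPU/NCCL implementation of the paper.

    ```python
    def upgma(dist, labels):
        """Plain single-threaded UPGMA: merge the closest pair of clusters,
        then average their distances to every other cluster, weighted by
        cluster size. Returns the merge order with merge distances."""
        clusters = {i: [lab] for i, lab in enumerate(labels)}
        d = {(i, j): dist[i][j] for i in clusters for j in clusters if i < j}
        merges, nxt = [], len(labels)
        while len(clusters) > 1:
            a, b = min(d, key=d.get)                       # closest pair
            merges.append((sorted(clusters[a] + clusters[b]), d[(a, b)]))
            na, nb = len(clusters[a]), len(clusters[b])
            new = clusters[a] + clusters[b]
            del clusters[a], clusters[b]
            d2 = {k: v for k, v in d.items() if a not in k and b not in k}
            for c in clusters:                             # weighted average
                da = d[(min(a, c), max(a, c))]
                db = d[(min(b, c), max(b, c))]
                d2[(c, nxt)] = (na * da + nb * db) / (na + nb)
            clusters[nxt] = new
            d, nxt = d2, nxt + 1
        return merges
    ```

    On the distance matrix for taxa A, B, C with d(A,B)=2 and d(A,C)=d(B,C)=6, this first merges A and B at distance 2, then joins C at distance 6.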

  1. Lead-lag cross-sectional structure and detection of correlated anticorrelated regime shifts: Application to the volatilities of inflation and economic growth rates

    NASA Astrophysics Data System (ADS)

    Zhou, Wei-Xing; Sornette, Didier

    2007-07-01

    We have recently introduced the “thermal optimal path” (TOP) method to investigate the real-time lead-lag structure between two time series. The TOP method consists in searching for a robust noise-averaged optimal path of the distance matrix along which the two time series have the greatest similarity. Here, we generalize the TOP method by introducing a more general definition of distance which takes into account possible regime shifts between positive and negative correlations. This generalization to track possible changes of correlation signs is able to identify possible transitions from one convention (or consensus) to another. Numerical simulations on synthetic time series verify that the new TOP method performs as expected even in the presence of substantial noise. We then apply it to investigate changes of convention in the dependence structure between the historical volatilities of the USA inflation rate and economic growth rate. Several measures show that the new TOP method significantly outperforms standard cross-correlation methods.
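    The generalization described above amounts to a distance that is small when the two (standardized) series are either strongly correlated or strongly anticorrelated. A hedged sketch of one such sign-tolerant distance matrix follows; the function name and the exact form are assumptions for illustration, not the paper's definition.

    ```python
    import numpy as np

    def signed_distance_matrix(x, y):
        """Distance matrix between two standardized series that tolerates
        regime shifts between positive and negative correlation: each entry
        is the smaller of |x_i - y_j| (correlated regime) and |x_i + y_j|
        (anticorrelated regime)."""
        x = np.asarray(x, float)[:, None]
        y = np.asarray(y, float)[None, :]
        return np.minimum(np.abs(x - y), np.abs(x + y))
    ```

    The thermal optimal path is then the noise-averaged path through this matrix along which the accumulated distance is smallest.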

  2. A new method for photon transport in Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sato, T.; Ogawa, K.

    1999-12-01

    Monte Carlo methods are used to evaluate data-processing methods such as scatter and attenuation compensation in single photon emission CT (SPECT), treatment planning in radiation therapy, and many industrial applications. In Monte Carlo simulation, photon transport requires calculating the distance from the location of the emitted photon to the nearest boundary of each uniform attenuating medium along its path of travel, and comparing this distance with the length of its path generated at emission. Here, the authors propose a new method that omits the calculation of the location of the exit point of the photon from each voxel and of the distance between the exit point and the original position. The method only checks the medium of each voxel along the photon's path. If the medium differs from that of the voxel from which the photon was emitted, the authors calculate the location of the entry point into the voxel, and the length of the path is compared with the mean free path length generated by a random number. Simulations using the MCAT phantom show that the ratios of calculation time were 1.0 for the voxel-based method and 0.51 for the proposed method with a 256×256×256 matrix image, confirming the effectiveness of the algorithm.
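    The key shortcut is that exit-point geometry is only computed when the medium actually changes; until then, the traversal is a cheap per-voxel medium comparison. A minimal sketch of that marching loop (function name, unit step, and nested-list volume layout are assumptions for illustration):

    ```python
    def first_medium_change(volume, start, direction, step=1.0, max_steps=1000):
        """March along the photon path voxel by voxel, comparing only the
        medium index of each visited voxel with that of the emission voxel.
        Geometry (entry point, remaining path length) would be computed only
        when a different medium is found; here we just return that voxel."""
        x, y, z = start
        medium0 = volume[int(x)][int(y)][int(z)]
        for n in range(1, max_steps):
            px = x + n * step * direction[0]
            py = y + n * step * direction[1]
            pz = z + n * step * direction[2]
            i, j, k = int(px), int(py), int(pz)
            if not (0 <= i < len(volume)
                    and 0 <= j < len(volume[0])
                    and 0 <= k < len(volume[0][0])):
                return None          # photon escaped the volume
            if volume[i][j][k] != medium0:
                return (i, j, k)     # first voxel with a different medium
        return None
    ```

    In the full simulation, reaching a different medium triggers the entry-point calculation and the comparison against the sampled mean free path; in a uniform region no geometry is computed at all, which is where the reported speedup comes from.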

  3. Methods for tape fabrication of continuous filament composite parts and articles of manufacture thereof

    DOEpatents

    Weisberg, Andrew H

    2013-10-01

    A method for forming a composite structure according to one embodiment includes forming a first ply; and forming a second ply above the first ply. Forming each ply comprises: applying a bonding material to a tape, the tape comprising a fiber and a matrix, wherein the bonding material has a curing time of less than about 1 second; and adding the tape to a substrate for forming adjacent tape winds having about a constant distance therebetween. Additional systems, methods and articles of manufacture are also presented.

  4. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.

  5. Size and shape measurement in contemporary cephalometrics.

    PubMed

    McIntyre, Grant T; Mossey, Peter A

    2003-06-01

    The traditional method of analysing cephalograms--conventional cephalometric analysis (CCA)--involves the calculation of linear distance measurements, angular measurements, area measurements, and ratios. Because shape information cannot be determined from these 'size-based' measurements, an increasing number of studies employ geometric morphometric tools in the cephalometric analysis of craniofacial morphology. Most of the discussions surrounding the appropriateness of CCA, Procrustes superimposition, Euclidean distance matrix analysis (EDMA), thin-plate spline analysis (TPS), finite element morphometry (FEM), elliptical Fourier functions (EFF), and medial axis analysis (MAA) have centred upon mathematical and statistical arguments. Surprisingly, little information is available to assist the orthodontist in judging the clinical relevance of each technique. This article evaluates the advantages and limitations of the above methods currently used to analyse craniofacial morphology on cephalograms and investigates their clinical relevance and possible applications.

  6. A Simple Method for Computing Resistance Distance

    NASA Astrophysics Data System (ADS)

    Bapat, Ravindra B.; Gutman, Ivan; Xiao, Wenjun

    2003-10-01

    The resistance distance r_ij between two vertices v_i and v_j of a (connected, molecular) graph G is equal to the effective resistance between the respective two points of an electrical network, constructed so as to correspond to G, such that the resistance of any edge is unity. We show how r_ij can be computed from the Laplacian matrix L of the graph G: Let L(i) and L(i, j) be obtained from L by deleting its i-th row and column, and by deleting its i-th and j-th rows and columns, respectively. Then r_ij = det L(i, j) / det L(i).
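    The determinant formula above translates directly into code. The sketch below builds the Laplacian from an adjacency matrix and evaluates r_ij = det L(i, j) / det L(i); the function name is illustrative.

    ```python
    import numpy as np

    def resistance_distance(adj, i, j):
        """Resistance distance r_ij = det L(i, j) / det L(i), where L is the
        graph Laplacian, L(i) deletes row/column i, and L(i, j) deletes
        rows/columns i and j."""
        A = np.asarray(adj, dtype=float)
        L = np.diag(A.sum(axis=1)) - A
        Li = np.delete(np.delete(L, i, 0), i, 1)
        rows = [k for k in range(L.shape[0]) if k not in (i, j)]
        Lij = L[np.ix_(rows, rows)]
        return np.linalg.det(Lij) / np.linalg.det(Li)
    ```

    Sanity checks match circuit intuition: the endpoints of a 3-vertex path are two unit resistors in series (r = 2), and any two vertices of a triangle see one resistor in parallel with two in series (r = 2/3).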

  7. Robust Image Regression Based on the Extended Matrix Variate Power Exponential Distribution of Dependent Noise.

    PubMed

    Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu

    2017-09-01

    Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of the error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This distribution has heavy-tailed regions and can be used to describe a matrix pattern of l×m dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with L_q regularization. The alternating direction method of multipliers is applied to solve this model. To obtain a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm is much more robust than some existing regression-based methods.

  8. CD-Based Indices for Link Prediction in Complex Network.

    PubMed

    Wang, Tao; Wang, Hongjue; Wang, Xiaoxia

    2016-01-01

    Many similarity-based algorithms have been designed to deal with the problem of link prediction over the past decade. To improve prediction accuracy, a novel cosine similarity index CD, based on the distance between nodes and the cosine value between vectors, is proposed in this paper. First, a node coordinate matrix is obtained from node distances (which differ from the distance matrix), and the row vectors of this matrix are regarded as node coordinates. The cosine value between node coordinates is then used as their similarity index. A local community density index LD is also proposed. A series of CD-based indices, including CD-LD-k, CD*LD-k, CD-k, and CDI, is then presented and applied to ten real networks. Experimental results demonstrate the effectiveness of the CD-based indices. The effects of the network clustering coefficient and assortative coefficient on the prediction accuracy of the indices are analyzed. CD-LD-k and CD*LD-k can improve prediction accuracy regardless of whether the network's assortative coefficient is negative or positive. According to the analysis of the relative precision of each method on each network, the CD-LD-k and CD*LD-k indices have excellent average performance and robustness. The CD and CD-k indices perform better on positive assortative networks than on negative assortative networks. For negative assortative networks, we improve and refine the CD index, referred to as the CDI index, combining the advantages of the CD index and the evolutionary mechanism of the network model BA. Experimental results reveal that the CDI index can increase the prediction accuracy of CD on negative assortative networks.
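    The cosine step of the CD idea can be sketched as follows: treat each row of a node-distance matrix as that node's coordinate vector and score pairs by the cosine between rows. This is an illustrative sketch only; the paper derives its coordinate matrix differently, and the function name is assumed.

    ```python
    import numpy as np

    def cd_similarity(coord_matrix):
        """Cosine similarity between all pairs of row vectors: each row is
        taken as a node's coordinate vector, and entry (i, j) of the result
        is the cosine of the angle between rows i and j."""
        X = np.asarray(coord_matrix, dtype=float)
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        norms[norms == 0] = 1.0        # guard against all-zero rows
        U = X / norms
        return U @ U.T
    ```

    Nodes with proportional coordinate rows score 1, orthogonal rows score 0; the CD-based indices then combine this score with the local community density LD.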

  9. CD-Based Indices for Link Prediction in Complex Network

    PubMed Central

    Wang, Tao; Wang, Hongjue; Wang, Xiaoxia

    2016-01-01

    Many similarity-based algorithms have been designed to deal with the problem of link prediction over the past decade. To improve prediction accuracy, a novel cosine similarity index CD, based on the distance between nodes and the cosine value between vectors, is proposed in this paper. First, a node coordinate matrix is obtained from node distances (which differ from the distance matrix), and the row vectors of this matrix are regarded as node coordinates. The cosine value between node coordinates is then used as their similarity index. A local community density index LD is also proposed. A series of CD-based indices, including CD-LD-k, CD*LD-k, CD-k, and CDI, is then presented and applied to ten real networks. Experimental results demonstrate the effectiveness of the CD-based indices. The effects of the network clustering coefficient and assortative coefficient on the prediction accuracy of the indices are analyzed. CD-LD-k and CD*LD-k can improve prediction accuracy regardless of whether the network's assortative coefficient is negative or positive. According to the analysis of the relative precision of each method on each network, the CD-LD-k and CD*LD-k indices have excellent average performance and robustness. The CD and CD-k indices perform better on positive assortative networks than on negative assortative networks. For negative assortative networks, we improve and refine the CD index, referred to as the CDI index, combining the advantages of the CD index and the evolutionary mechanism of the network model BA. Experimental results reveal that the CDI index can increase the prediction accuracy of CD on negative assortative networks. PMID:26752405

  10. Detection of white spot lesions by segmenting laser speckle images using computer vision methods.

    PubMed

    Gavinho, Luciano G; Araujo, Sidnei A; Bussadori, Sandra K; Silva, João V P; Deana, Alessandro M

    2018-05-05

    This paper aims to develop a method for laser speckle image segmentation of tooth surfaces for the diagnosis of early-stage caries. The method, applied directly to a raw image obtained by digital photography, is based on the difference between the speckle pattern of a tooth surface area with a carious lesion and that of a sound area. Each image is divided into blocks, which are identified in a working matrix by the χ² distance between the block histograms of the analyzed image and the reference histograms previously obtained by K-means from healthy (h_Sound) and lesioned (h_Decay) areas, separately. If the χ² distance between a block histogram and h_Sound is greater than the distance to h_Decay, the block is marked as decayed. The experiments showed that the method provides effective segmentation for initial lesions. We used 64 images to test the algorithm and achieved 100% accuracy in segmentation. Differences between the speckle pattern of a sound tooth surface region and a carious region, even at an early stage, can be evidenced by the χ² distance between histograms. This method proves more effective for segmenting the laser speckle image, enhancing the contrast between sound and lesioned tissues. The results were obtained at low computational cost. The method has potential for early diagnosis in a clinical environment through the development of low-cost portable equipment.
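    The decision rule for each block reduces to comparing two χ² histogram distances. A minimal sketch, assuming normalized histograms and a standard symmetric χ² distance (the function names and the epsilon guard are illustrative):

    ```python
    import numpy as np

    def chi2_distance(h1, h2, eps=1e-10):
        """Symmetric chi-square distance between two normalized histograms."""
        h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    def classify_block(block_hist, h_sound, h_decay):
        """Mark a block as decayed when its histogram is farther (in chi-square
        distance) from the sound reference than from the decay reference."""
        return chi2_distance(block_hist, h_sound) > chi2_distance(block_hist, h_decay)
    ```

    Running this rule over every block of the speckle image yields the segmentation mask described in the abstract.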

  11. Comparative test on several forms of background error covariance in 3DVar

    NASA Astrophysics Data System (ADS)

    Shao, Aimei

    2013-04-01

    The background error covariance matrix (hereinafter the B matrix) plays an important role in the three-dimensional variational (3DVar) data assimilation method. However, it is difficult to obtain the B matrix accurately because the true atmospheric state is unknown. Therefore, several methods have been developed to estimate it (e.g. the NMC method, the innovation analysis method, recursive filters, and ensemble methods such as EnKF). Prior to further development and application of these methods, the behaviour in 3DVar of the B matrices they produce is worth studying and evaluating. For this reason, NCEP reanalysis and forecast data are used to test the effectiveness of several B matrices with the VAF method (Huang, 1999). Here the NCEP analysis is treated as the truth, so the forecast error is known. Data from 2006 to 2007 are used as samples to estimate the B matrix, and data from 2008 are used to verify the assimilation effects. The 48-h and 24-h forecasts valid at the same time are used to estimate the B matrix with the NMC method. The B matrix can be represented by a correlation part (a non-diagonal matrix) and a variance part (a diagonal matrix of variances). In numerous 3DVar systems, a Gaussian filter function is used as an approximation to represent the variation of correlation coefficients with distance. 
On the basis of this assumption, the following forms of the B matrix are designed and tested with VAF in comparative experiments: (1) the error variance and characteristic lengths are fixed and set to their mean values averaged over the analysis domain; (2) as in (1), but the mean characteristic lengths are reduced to 50 percent (height) and 60 percent (temperature) of the original; (3) as in (2), but the error variance calculated directly from the historical data is space-dependent; (4) the error variance and characteristic lengths are all calculated directly from the historical data; (5) the B matrix is estimated directly from the historical data; (6) as in (5), but with a localization step; (7) the B matrix is estimated by the NMC method but the error variance is reduced by a factor of 1.7 so that it is close to the value calculated from the true forecast-error samples; (8) as in (7), but with the localization of (6). Experimental results with the different B matrices show that, for the Gaussian-type B matrix, the characteristic lengths calculated from the true error samples do not yield good analysis results, whereas characteristic lengths reduced to about half of the original do. If the B matrix estimated directly from the historical data is used in 3DVar, the assimilation effect is not optimal; better results are obtained with reduced characteristic lengths and localization. Even so, this has no obvious advantage over a Gaussian-type B matrix with the optimal characteristic lengths. This implies that the Gaussian-type B matrix widely used in operational 3DVar systems can yield a good analysis with appropriate characteristic lengths; the crucial problem is how to determine them. 
(This work is supported by the National Natural Science Foundation of China (41275102, 40875063) and the Fundamental Research Funds for the Central Universities (lzujbky-2010-9).)
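    The Gaussian-type B matrix described above (a diagonal variance part times a correlation part decaying with distance) can be sketched for a 1-D grid as follows. This is an illustrative construction under the stated assumptions; the function name and the 1-D coordinate layout are not from the paper.

    ```python
    import numpy as np

    def gaussian_b_matrix(coords, variances, length_scale):
        """Gaussian-type background error covariance for a 1-D grid:
        B = D^(1/2) C D^(1/2), where D holds the error variances and the
        correlation part decays with distance as exp(-d^2 / (2 L^2))."""
        x = np.asarray(coords, float)
        d = np.abs(x[:, None] - x[None, :])
        corr = np.exp(-0.5 * (d / length_scale) ** 2)
        sig = np.sqrt(np.asarray(variances, float))
        return corr * np.outer(sig, sig)
    ```

    Experiments (1)-(4) in the abstract correspond to different choices of `variances` (domain-mean vs space-dependent) and `length_scale` (original vs reduced) in exactly this construction.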

  12. Positive edge effects on forest-interior cryptogams in clear-cuts.

    PubMed

    Caruso, Alexandro; Rudolphi, Jörgen; Rydin, Håkan

    2011-01-01

    Biological edge effects are often assessed in high quality focal habitats that are negatively influenced by human-modified low quality matrix habitats. A deeper understanding of the possibilities for positive edge effects in matrix habitats bordering focal habitats (e.g. spillover effects) is, however, essential for enhancing landscape-level resilience to human alterations. We surveyed epixylic (dead wood inhabiting) forest-interior cryptogams (lichens, bryophytes, and fungi) associated with mature old-growth forests in 30 young managed Swedish boreal forest stands bordering a mature forest of high conservation value. In each young stand we registered species occurrences on coarse dead wood in transects 0-50 m from the border between stand types. We quantified the effect of distance from the mature forest on the occurrence of forest-interior species in the young stands, while accounting for local environment and propagule sources. For comparison we also surveyed epixylic open-habitat (associated with open forests) and generalist cryptogams. Species composition of epixylic cryptogams in young stands differed with distance from the mature forest: the frequency of occurrence of forest-interior species decreased with increasing distance whereas it increased for open-habitat species. Generalists were unaffected by distance. Epixylic, boreal forest-interior cryptogams do occur in matrix habitats such as clear-cuts. In addition, they are associated with the matrix edge because of a favourable microclimate closer to the mature forest on southern matrix edges. Retention and creation of dead wood in clear-cuts along the edges to focal habitats is a feasible way to enhance the long-term persistence of epixylic habitat specialists in fragmented landscapes. The proposed management measures should be performed in the whole stand as it matures, since microclimatic edge effects diminish as the matrix habitat matures. 
We argue that management that aims to increase habitat quality in matrix habitats bordering focal habitats should increase the probability of long-term persistence of habitat specialists.

  13. Positive Edge Effects on Forest-Interior Cryptogams in Clear-Cuts

    PubMed Central

    Caruso, Alexandro; Rudolphi, Jörgen; Rydin, Håkan

    2011-01-01

    Biological edge effects are often assessed in high quality focal habitats that are negatively influenced by human-modified low quality matrix habitats. A deeper understanding of the possibilities for positive edge effects in matrix habitats bordering focal habitats (e.g. spillover effects) is, however, essential for enhancing landscape-level resilience to human alterations. We surveyed epixylic (dead wood inhabiting) forest-interior cryptogams (lichens, bryophytes, and fungi) associated with mature old-growth forests in 30 young managed Swedish boreal forest stands bordering a mature forest of high conservation value. In each young stand we registered species occurrences on coarse dead wood in transects 0–50 m from the border between stand types. We quantified the effect of distance from the mature forest on the occurrence of forest-interior species in the young stands, while accounting for local environment and propagule sources. For comparison we also surveyed epixylic open-habitat (associated with open forests) and generalist cryptogams. Species composition of epixylic cryptogams in young stands differed with distance from the mature forest: the frequency of occurrence of forest-interior species decreased with increasing distance whereas it increased for open-habitat species. Generalists were unaffected by distance. Epixylic, boreal forest-interior cryptogams do occur in matrix habitats such as clear-cuts. In addition, they are associated with the matrix edge because of a favourable microclimate closer to the mature forest on southern matrix edges. Retention and creation of dead wood in clear-cuts along the edges to focal habitats is a feasible way to enhance the long-term persistence of epixylic habitat specialists in fragmented landscapes. The proposed management measures should be performed in the whole stand as it matures, since microclimatic edge effects diminish as the matrix habitat matures. 
We argue that management that aims to increase habitat quality in matrix habitats bordering focal habitats should increase the probability of long-term persistence of habitat specialists. PMID:22114728

  14. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    NASA Astrophysics Data System (ADS)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2017-06-01

A target-based MADM method covers beneficial and non-beneficial attributes as well as target values for some attributes. Such techniques are considered the comprehensive forms of MADM approaches. Target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values. The values of the decision matrix and the target-based attributes can be provided as intervals in some such problems. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the distance between interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of the degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding biomaterials selection for hip and knee prostheses are discussed. Preference degree-based ranking lists for the subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings for the problem are compared with the outcomes of other target-based models in the literature.
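
The two interval-number tools the abstract relies on, a distance between intervals and a degree of preference of one interval over another, can be sketched as follows. The formulas below are common choices from the interval-analysis literature and are not necessarily the exact ones introduced in the paper.

```python
def interval_distance(a, b):
    """Euclidean-type distance between intervals a = [a1, a2] and b = [b1, b2]."""
    return (((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) / 2.0) ** 0.5

def preference_degree(a, b):
    """Degree to which interval a is preferred over (exceeds) interval b,
    a value in [0, 1] with preference_degree(a, b) + preference_degree(b, a) = 1."""
    wa, wb = a[1] - a[0], b[1] - b[0]
    if wa + wb == 0:  # both intervals degenerate to crisp numbers
        return 1.0 if a[0] > b[0] else (0.5 if a[0] == b[0] else 0.0)
    p = (max(0.0, a[1] - b[0]) - max(0.0, a[0] - b[1])) / (wa + wb)
    return min(1.0, max(0.0, p))
```

A pairwise preference matrix for a set of interval assessments is then obtained by evaluating `preference_degree` for every ordered pair.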

  15. System matrix computation vs storage on GPU: A comparative study in cone beam CT.

    PubMed

    Matenine, Dmitri; Côté, Geoffroi; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe

    2018-02-01

Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersection distances between the trajectories of photons and the object, also called ray tracing or system matrix computation. This work, focused on the thin-ray model, is aimed at comparing different system matrix handling strategies using graphics processing units (GPUs). In this work, the system matrix is modeled by thin rays intersecting a regular grid of box-shaped voxels, known to be an accurate representation of the forward projection operator in CT. However, an uncompressed system matrix exceeds the random access memory (RAM) capacities of typical computers by one order of magnitude or more. Considering the RAM limitations of GPU hardware, several system matrix handling methods were compared: full storage of a compressed system matrix, on-the-fly computation of its coefficients, and partial storage of the system matrix with partial on-the-fly computation. These methods were tested on geometries mimicking a cone beam CT (CBCT) acquisition of a human head. Execution times of three routines of interest were compared: forward projection, backprojection, and ordered-subsets convex (OSC) iteration. A fully stored system matrix yielded the shortest backprojection and OSC iteration times, with a 1.52× acceleration for OSC when compared to the on-the-fly approach. Nevertheless, the maximum problem size was bound by the available GPU RAM and geometrical symmetries. On-the-fly coefficient computation did not require symmetries and was shown to be the fastest for forward projection. It also offered reasonable execution times of about 176.4 ms per view per OSC iteration for a detector of 512 × 448 pixels and a volume of 384³ voxels, using commodity GPU hardware. Partial system matrix storage has shown a performance similar to the on-the-fly approach, while still relying on symmetries.
Partial system matrix storage was shown to yield the lowest relative performance. On-the-fly ray tracing was shown to be the most flexible method, yielding reasonable execution times. A fully stored system matrix allowed for the lowest backprojection and OSC iteration times and may be of interest for certain performance-oriented applications. © 2017 American Association of Physicists in Medicine.

  16. Analysis of geological materials containing uranium using laser-induced breakdown spectroscopy (LIBS)

    NASA Astrophysics Data System (ADS)

    Barefield, James E.; Judge, Elizabeth J.; Campbell, Keri R.; Colgan, James P.; Kilcrease, David P.; Johns, Heather M.; Wiens, Roger C.; McInroy, Rhonda E.; Martinez, Ronald K.; Clegg, Samuel M.

    2016-06-01

Laser-induced breakdown spectroscopy (LIBS) is a rapid atomic emission spectroscopy technique that can be configured for a variety of applications including space, forensics, and industry. LIBS can also be configured for stand-off distances or in-situ, under vacuum, high pressure, atmospheric or different gas environments, and with different resolving-power spectrometers. The detection of uranium in a complex geological matrix under different measurement schemes is explored in this paper. Although many investigations have been completed in an attempt to detect and quantify uranium in different matrices at in-situ and standoff distances, this work detects and quantifies uranium in a complex matrix under Martian and ambient air conditions. Investigation of uranium detection using a low resolving-power LIBS system at stand-off distances (1.6 m) is also reported. The results are compared to an in-situ LIBS system with medium resolving power and under ambient air conditions. Uranium has many thousands of emission lines in the 200-800 nm spectral region. In the presence of other matrix elements and at lower concentrations, the ability to detect uranium is significantly reduced. The two measurement methods (low and high resolving-power spectrometers) are compared for limit of detection (LOD). Of the twenty-one potential diagnostic uranium emission lines, seven (409, 424, 434, 435, 436, 591, and 682 nm) have been used to determine the LOD for pitchblende in a dunite matrix using the ChemCam test bed LIBS system. The LOD values determined for uranium transitions in air are 409.013 nm (24,700 ppm), 424.167 nm (23,780 ppm), 434.169 nm (24,390 ppm), 435.574 nm (35,880 ppm), 436.205 nm (19,340 ppm), 591.539 nm (47,310 ppm), and 682.692 nm (18,580 ppm). The corresponding LOD values determined for uranium transitions in 7 Torr CO2 are 424.167 nm (25,760 ppm), 434.169 nm (40,800 ppm), 436.205 nm (32,050 ppm), 591.539 nm (15,340 ppm), and 682.692 nm (29,080 ppm).
The LOD values determined for uranium emission lines using the medium resolving-power (10,000 λ/Δλ) LIBS system for the dunite matrix in air are 409.013 nm (6120 ppm), 424.167 nm (5356 ppm), 434.169 nm (5693 ppm), 435.574 nm (6329 ppm), 436.205 nm (2142 ppm), and 682.692 nm (10,741 ppm). The corresponding LOD values determined for uranium transitions in a SiO2 matrix are 409.013 nm (272 ppm), 424.167 nm (268 ppm), 434.169 nm (402 ppm), 435.574 nm (1067 ppm), 436.205 nm (482 ppm), and 682.692 nm (720 ppm). The impact of spectral resolution, atmospheric conditions, matrix elements, and measurement distances on LOD is discussed. The measurements will assist in selecting the proper system components based upon the application and the required analytical performance.
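
The abstract reports LOD values without stating the formula; a common definition is LOD = 3·σ_blank/m, where m is the slope of a linear calibration curve of line intensity versus concentration. A minimal sketch with hypothetical calibration data:

```python
import numpy as np

# Hypothetical calibration data: uranium concentration (ppm) versus
# background-corrected intensity of one emission line (arbitrary units).
conc = np.array([0.0, 5000.0, 10000.0, 20000.0, 40000.0])
intensity = np.array([0.02, 0.55, 1.01, 2.05, 3.98])

slope, intercept = np.polyfit(conc, intensity, 1)  # linear calibration curve
sigma_blank = 0.03  # standard deviation of the blank signal (assumed value)

lod = 3.0 * sigma_blank / slope  # 3-sigma limit of detection, in ppm
```

Lower-resolution spectrometers and interfering matrix lines reduce the effective slope for a resolvable line, which is one way the resolution-dependent LOD differences above can arise.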

  17. Optimization and Analysis of Laser Beam Machining Parameters for Al7075-TiB2 In-situ Composite

    NASA Astrophysics Data System (ADS)

    Manjoth, S.; Keshavamurthy, R.; Pradeep Kumar, G. S.

    2016-09-01

The paper focuses on laser beam machining (LBM) of in-situ synthesized Al7075-TiB2 metal matrix composite. Optimization and influence of laser machining process parameters on surface roughness, volumetric material removal rate (VMRR) and dimensional accuracy of composites were studied. Al7075-TiB2 metal matrix composite was synthesized by the in-situ reaction technique using the stir casting process. Taguchi's L9 orthogonal array was used to design experimental trials. Standoff distance (SOD) (0.3-0.5 mm), cutting speed (1000-1200 m/hr) and gas pressure (0.5-0.7 bar) were considered as variable input parameters at three different levels, while power and nozzle diameter were maintained constant with air as the assisting gas. Optimized process parameters for surface roughness, volumetric material removal rate (VMRR) and dimensional accuracy were calculated by generating the main effects plot for the signal-to-noise ratio (S/N ratio) of surface roughness, VMRR and dimensional error using Minitab software (version 16). The significance of standoff distance (SOD), cutting speed and gas pressure for surface roughness, volumetric material removal rate (VMRR) and dimensional error was calculated using the analysis of variance (ANOVA) method. Results indicate that, for surface roughness, cutting speed (56.38%) is the most significant parameter, followed by standoff distance (41.03%) and gas pressure (2.6%). For volumetric material removal rate (VMRR), gas pressure (42.32%) is the most significant parameter, followed by cutting speed (33.60%) and standoff distance (24.06%). For dimensional error, standoff distance (53.34%) is the most significant parameter, followed by cutting speed (34.12%) and gas pressure (12.53%). Further, verification experiments were carried out to confirm the performance of the optimized process parameters.
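
The optimization above rests on Taguchi S/N ratios. A minimal sketch of the two standard forms, "smaller is better" (surface roughness, dimensional error) and "larger is better" (VMRR); the trial values below are hypothetical:

```python
import math

def sn_smaller_is_better(ys):
    """Taguchi S/N ratio for a response to be minimized, e.g. surface roughness:
    S/N = -10 * log10(mean(y^2))."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def sn_larger_is_better(ys):
    """Taguchi S/N ratio for a response to be maximized, e.g. VMRR:
    S/N = -10 * log10(mean(1 / y^2))."""
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in ys) / len(ys))

# Hypothetical surface-roughness replicates (um) for one parameter level.
sn_rough = sn_smaller_is_better([1.2, 1.4, 1.3])
```

In a Taguchi study the S/N ratio is averaged per factor level, and the level with the highest mean S/N is selected; ANOVA on the same responses apportions the percentage contributions quoted in the abstract.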

  18. Learning a Mahalanobis Distance-Based Dynamic Time Warping Measure for Multivariate Time Series Classification.

    PubMed

    Mei, Jiangyuan; Liu, Meizhu; Wang, Yuan-Fang; Gao, Huijun

    2016-06-01

    Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or with different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning with triplet constraint model which can learn Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied on nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.
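
The core of the proposed measure, DTW with a Mahalanobis local distance parameterized by a learned positive semi-definite matrix M, can be sketched as follows; the LogDet-divergence metric-learning step that produces M is omitted:

```python
import numpy as np

def mahalanobis_dtw(X, Y, M):
    """DTW distance between multivariate series X (n x d) and Y (m x d),
    using the local distance d(x, y) = sqrt((x - y)^T M (x - y))."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = X[i - 1] - Y[j - 1]
            cost = np.sqrt(diff @ M @ diff)  # Mahalanobis local distance
            # Standard DTW recursion over insertion, deletion, and match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

With M equal to the identity this reduces to ordinary Euclidean DTW; the learned M reweights and correlates the variables before alignment.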

  19. Application of agglomerative clustering for analyzing phylogenetically on bacterium of saliva

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Fitria, I.; Umam, K.

    2017-07-01

Analyzing populations of Streptococcus bacteria is important since these species can cause dental caries, periodontal disease, halitosis (bad breath) and other problems. This paper discusses the phylogenetic relation between Streptococcus bacteria in saliva using a phylogenetic tree built by agglomerative clustering methods. Starting with Streptococcus DNA sequences obtained from GenBank, feature extraction of the DNA sequences is performed. The extraction yields a feature matrix, which is then normalized using min-max normalization; genetic distances are computed using the Manhattan distance. The agglomerative clustering techniques consist of single linkage, complete linkage and average linkage. In this agglomerative algorithm, the number of groups starts at the number of individual species; the most similar species are merged repeatedly, at decreasing levels of similarity, until a single group is formed. The result of the grouping is a phylogenetic tree whose branches join at established distance levels: the smaller the distance, the greater the similarity between species. The implementation uses R, an open-source program.
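
The distance computation described above, min-max normalization followed by Manhattan distances and agglomerative linkage, can be sketched in Python (the paper's implementation is in R; the feature matrix below is hypothetical):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Hypothetical feature matrix: rows = sequences, columns = extracted characteristics.
X = np.array([[2.0, 10.0], [2.2, 11.0], [8.0, 1.0], [8.5, 0.5]])

# Min-max normalization of each feature to [0, 1].
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Pairwise Manhattan (city-block) genetic distances.
dist = pdist(Xn, metric="cityblock")

# Average-linkage agglomerative clustering; Z encodes the phylogenetic tree.
Z = linkage(dist, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 groups
```

Replacing `method="average"` with `"single"` or `"complete"` gives the other two linkage rules mentioned in the abstract.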

  20. A novel point cloud registration using 2D image features

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng

    2017-01-01

Since a 3D scanner captures only one scene of a 3D object at a time, multi-scene 3D registration is the key issue in 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing angle images and then uses the 2D feature-based matching method SURF to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between their matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (SVD-based approach). Therefore, the 3D model of an object can be reconstructed by aligning those point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of ICP, but the computation cost is reduced significantly; the performance is six times faster than the generalized-ICP algorithm. Furthermore, while ICP requires high alignment similarity between two scenes, the proposed method is robust to larger differences in viewing angle.
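
The alignment step, the orthogonal Procrustes (SVD-based) solution for the optimal rotation and translation given corresponding 3D points, can be sketched as:

```python
import numpy as np

def procrustes_align(P, Q):
    """Rotation R and translation t minimizing ||R @ p + t - q|| over
    corresponding 3D point sets P, Q (each n x 3), via SVD (orthogonal
    Procrustes / Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

Given the matched SURF correspondences, the recovered (R, t) is the optimal rigid transformation in the least-squares sense for those pairs.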

  1. A novel manifold-manifold distance index applied to looseness state assessment of viscoelastic sandwich structures

    NASA Astrophysics Data System (ADS)

    Sun, Chuang; Zhang, Zhousuo; Guo, Ting; Luo, Xue; Qu, Jinxiu; Zhang, Chenxuan; Cheng, Wei; Li, Bing

    2014-06-01

Viscoelastic sandwich structures (VSS) are widely used in mechanical equipment; their state assessment is necessary to detect structural states and to keep equipment running with high reliability. This paper proposes a novel manifold-manifold distance-based assessment (M2DBA) method for assessing the looseness state in VSSs. In the M2DBA method, a manifold-manifold distance is viewed as a health index. To design the index, response signals from the structure are first acquired by condition monitoring technology, and a Hankel matrix is constructed by using the response signals to describe state patterns of the VSS. Thereafter, a subspace analysis method, that is, principal component analysis (PCA), is performed to extract the condition subspace hidden in the Hankel matrix. From the subspace, pattern changes in dynamic structural properties are characterized. Further, a Grassmann manifold (GM) is formed by organizing a set of subspaces. The manifold is mapped to a reproducing kernel Hilbert space (RKHS), where support vector data description (SVDD) is used to model the manifold as a hypersphere. Finally, a health index is defined as the cosine of the angle between the hypersphere centers corresponding to the structural baseline state and the looseness state. The defined health index contains similarity information existing in the two structural states, so structural looseness states can be effectively identified. Moreover, the health index is derived by analysis of the global properties of subspace sets, which is different from traditional subspace analysis methods. The effectiveness of the health index for state assessment is validated by test data collected from a VSS subjected to different degrees of looseness. The results show that the health index is a very effective metric for detecting the occurrence and extent of structural looseness. Comparison results indicate that the defined index outperforms some existing state-of-the-art ones.
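
The first two steps of the M2DBA pipeline, embedding a response signal in a Hankel matrix and extracting a condition subspace by PCA, can be sketched as follows; the subsequent Grassmann/SVDD modeling is omitted, and the centering detail is an assumption:

```python
import numpy as np

def hankel_subspace(signal, rows, k):
    """Build a Hankel matrix from a response signal (constant anti-diagonals,
    H[i, j] = signal[i + j]) and extract a k-dimensional condition subspace
    via SVD-based PCA. Returns an orthonormal basis of shape (rows, k)."""
    cols = len(signal) - rows + 1
    H = np.array([signal[i:i + cols] for i in range(rows)])
    H = H - H.mean(axis=1, keepdims=True)  # center each row (assumed step)
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    return U[:, :k]  # leading left singular vectors span the subspace
```

Bases extracted from different monitoring windows form the set of subspaces that the paper organizes into a Grassmann manifold.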

  2. Phylogeny of metabolic networks: a spectral graph theoretical approach.

    PubMed

    Deyasi, Krishanu; Banerjee, Anirban; Deb, Bony

    2015-10-01

    Many methods have been developed for finding the commonalities between different organisms in order to study their phylogeny. The structure of metabolic networks also reveals valuable insights into metabolic capacity of species as well as into the habitats where they have evolved. We constructed metabolic networks of 79 fully sequenced organisms and compared their architectures. We used spectral density of normalized Laplacian matrix for comparing the structure of networks. The eigenvalues of this matrix reflect not only the global architecture of a network but also the local topologies that are produced by different graph evolutionary processes like motif duplication or joining. A divergence measure on spectral densities is used to quantify the distances between various metabolic networks, and a split network is constructed to analyse the phylogeny from these distances. In our analysis, we focused on the species that belong to different classes, but appear more related to each other in the phylogeny. We tried to explore whether they have evolved under similar environmental conditions or have similar life histories. With this focus, we have obtained interesting insights into the phylogenetic commonality between different organisms.
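
The comparison pipeline, eigenvalues of the normalized Laplacian smoothed into spectral densities that are then compared by a divergence, can be sketched as follows; the Jensen-Shannon divergence is used here for illustration and may differ from the paper's choice of measure:

```python
import numpy as np

def normalized_laplacian_spectrum(A):
    """Eigenvalues of L = I - D^{-1/2} A D^{-1/2}; they all lie in [0, 2]."""
    d = A.sum(axis=1)
    inv_sqrt = np.where(d > 0, d, 1.0) ** -0.5 * (d > 0)
    L = np.eye(len(A)) - inv_sqrt[:, None] * A * inv_sqrt[None, :]
    return np.linalg.eigvalsh(L)

def spectral_density(eigs, grid, sigma=0.05):
    """Gaussian-kernel smoothed spectral density, normalized to sum to 1."""
    rho = np.exp(-0.5 * ((grid[:, None] - eigs[None, :]) / sigma) ** 2).sum(axis=1)
    return rho / rho.sum()

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete densities."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(np.where(a > 0, a * np.log(a / np.where(b > 0, b, 1.0)), 0.0))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Pairwise divergences over a set of metabolic networks yield the distance matrix from which a split network (phylogeny) can be built.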

  3. Does silvoagropecuary landscape fragmentation affect the genetic diversity of the sigmodontine rodent Oligoryzomys longicaudatus?

    PubMed Central

    Lazo-Cancino, Daniela; Musleh, Selim S.; Hernandez, Cristian E.; Palma, Eduardo

    2017-01-01

Background Fragmentation of native forests is a highly visible result of human land-use throughout the world. In this study, we evaluated the effects of landscape fragmentation and matrix features on the genetic diversity and structure of Oligoryzomys longicaudatus, the natural reservoir of Hantavirus in southern South America. We focused our work on the Valdivian Rainforest, where human activities have produced strong changes to natural habitats and where an important number of human Hantavirus cases occur. Methods We sampled specimens of O. longicaudatus from five native forest patches surrounded by silvoagropecuary matrix from Panguipulli, Los Rios Region, Chile. Using the hypervariable domain I (mtDNA), we characterized the genetic diversity and evaluated the effect of fragmentation and landscape matrix on the genetic structure of O. longicaudatus. For the latter, we used three approaches: (i) Isolation by Distance (IBD) as a null model, (ii) Least-cost Path (LCP), where genetic distances between patch pairs increase with cost-weighted distances, and (iii) Isolation by Resistance (IBR), where the resistance distance is the average number of steps needed to commute between the patches during a random walk. Results We found low values of nucleotide diversity (π) for the five patches surveyed, ranging from 0.012 to 0.015, revealing that the 73 sampled specimens of this study belong to two populations, but with low values of genetic distance (γST) ranging from 0.022 to 0.099. Likewise, we found that there are no significant associations between genetic distance and geographic distance for IBD and IBR. However, we found for the LCP approach a significant positive relationship (r = 0.737, p = 0.05), with the shortest least-cost paths traced through native forest and arborescent shrublands. Discussion In this work we found that, at this reduced geographical scale, Oligoryzomys longicaudatus shows genetic signs of fragmentation.
In addition, we found that connectivity between full-growth native forest remnants is mediated by the presence of dense shrublands and native forest corridors. In this sense, our results are important because they show how native forest patches and associated routes act as a source of vector species in silvoagropecuary landscapes, increasing the infection risk for the human population. This study is a first approach to understanding the epidemiological spatial context of the silvoagropecuary risk of Hantavirus emergence. Further studies are needed to elucidate the effects of landscape fragmentation in order to generate new predictive models based on vector intrinsic attributes and landscape features. PMID:28975057

  4. A hybrid method for determination of the acoustic impedance of an unflanged cylindrical duct for multimode wave

    NASA Astrophysics Data System (ADS)

    Snakowska, Anna; Jurkiewicz, Jerzy; Gorazd, Łukasz

    2017-05-01

The paper presents the derivation of the impedance matrix based on the rigorous solution of the wave equation obtained by the Wiener-Hopf technique for a semi-infinite unflanged cylindrical duct. The impedance matrix makes it possible, in turn, to calculate the acoustic impedance along the duct and, as a special case, the radiation impedance. The analysis is carried out for a multimode incident wave, accounting for mode coupling at the duct outlet not only qualitatively but also quantitatively for a selected source operating inside. The quantitative evaluation of the acoustic impedance requires setting the mode amplitudes, which have been obtained by applying the mode decomposition method to far-field pressure radiation measurements and theoretical formulae for single-mode directivity characteristics of an unflanged duct. Calculation of the acoustic impedance for a non-uniform distribution of the sound pressure and the sound velocity on a duct cross section requires determination of the acoustic power transmitted along/radiated from the duct. In the paper, the impedance matrix, the power, and the acoustic impedance were derived as functions of the Helmholtz number and the distance from the outlet.

  5. A General Exponential Framework for Dimensionality Reduction.

    PubMed

    Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan

    2014-02-01

As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low-dimensional representations from high-dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the neighborhood size; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix, and thus is more robust. The positive definite property of the matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthesized data, UCI data sets, and the Georgia Tech face database show that the proposed framework addresses the issues mentioned above well.
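
As a minimal illustration of the framework, here is an exponential analogue of Fisher discriminant analysis: the scatter matrices are replaced by their matrix exponentials, which are always positive definite and therefore sidestep the SSS singularity. This is a sketch of the idea, not the paper's exact algorithm:

```python
import numpy as np
from scipy.linalg import eigh, expm

def exponential_discriminant(X, y, k):
    """Top-k discriminant directions from the generalized eigenproblem
    expm(Sb) v = lambda expm(Sw) v, where Sb/Sw are the between- and
    within-class scatter matrices of data X (n x d) with labels y."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)               # within-class scatter
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)  # between-class scatter
    # expm(Sw) is positive definite even when Sw is singular (SSS case).
    w, V = eigh(expm(Sb), expm(Sw))
    return V[:, np.argsort(w)[::-1][:k]]
```

The same substitution, a matrix exponential in place of the original (possibly singular) matrix, is what the framework applies to the other Laplacian-embedding objectives listed above.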

  6. Relation of Cloud Occurrence Frequency, Overlap, and Effective Thickness Derived from CALIPSO and CloudSat Merged Cloud Vertical Profiles

    NASA Technical Reports Server (NTRS)

    Kato, Seiji; Sun-Mack, Sunny; Miller, Walter F.; Rose, Fred G.; Chen, Yan; Minnis, Patrick; Wielicki, Bruce A.

    2009-01-01

A cloud frequency of occurrence matrix is generated using merged cloud vertical profiles derived from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Cloud Profiling Radar (CPR). The matrix contains vertical profiles of cloud occurrence frequency as a function of the uppermost cloud top. It is shown that the cloud fraction and uppermost cloud top vertical profiles can be related by a set of equations when the correlation distance of cloud occurrence, which is interpreted as an effective cloud thickness, is introduced. The underlying assumption in establishing the above relation is that cloud overlap approaches random overlap with increasing distance separating cloud layers and that the probability of deviating from random overlap decreases exponentially with distance. One month of CALIPSO and CloudSat data supports these assumptions. However, the correlation distance sometimes becomes large, which might be an indication of precipitation. The cloud correlation distance is equivalent to the de-correlation distance introduced by Hogan and Illingworth [2000] when the cloud fractions of both layers in a two-cloud-layer system are the same.
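
The overlap assumption, a blend of maximum and random overlap whose weight decays exponentially with layer separation (as in Hogan and Illingworth [2000]), can be sketched as:

```python
import math

def combined_cloud_fraction(c1, c2, dz, decorr_length):
    """Total cloud cover of two layers with fractions c1, c2 separated by dz.
    The blending weight alpha = exp(-dz / decorr_length) gives maximum
    overlap at dz = 0 and random overlap in the large-separation limit."""
    alpha = math.exp(-dz / decorr_length)
    c_max = max(c1, c2)             # maximum (fully correlated) overlap
    c_rand = c1 + c2 - c1 * c2      # random (uncorrelated) overlap
    return alpha * c_max + (1.0 - alpha) * c_rand
```

Fitting `decorr_length` so that this rule reproduces observed combined cloud cover is one way the correlation (effective thickness) distance in the abstract can be estimated from CALIPSO/CloudSat profiles.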

  7. Manifold learning-based subspace distance for machinery damage assessment

    NASA Astrophysics Data System (ADS)

    Sun, Chuang; Zhang, Zhousuo; He, Zhengjia; Shen, Zhongjie; Chen, Binqiang

    2016-03-01

Damage assessment is essential for maintaining the safety and reliability of machinery components, and vibration analysis is an effective way to carry it out. In this paper, a damage index is designed by performing manifold distance analysis on the vibration signal. To calculate the index, the vibration signal is collected first, and feature extraction is carried out to obtain statistical features that capture the signal characteristics comprehensively. Then, a manifold learning algorithm is utilized to decompose the feature matrix into a subspace, that is, a manifold subspace. The manifold learning algorithm seeks to keep the local relationships of the feature matrix, which is more meaningful for damage assessment. Finally, the Grassmann distance between manifold subspaces is defined as a damage index. The Grassmann distance, reflecting manifold structure, is a suitable metric for measuring the distance between subspaces in the manifold. The defined damage index is applied to damage assessment of a rotor and a bearing, and the results validate its effectiveness for damage assessment of machinery components.
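
A common realization of the Grassmann distance between two subspaces uses the principal angles obtained from the SVD of the product of their orthonormal bases; the paper may use a different Grassmann metric, so treat this as an illustrative sketch:

```python
import numpy as np

def grassmann_distance(A, B):
    """Geodesic Grassmann distance between the subspaces spanned by the
    columns of A and B (same dimension): the 2-norm of the principal
    angles between the subspaces."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    theta = np.arccos(s)  # principal angles in [0, pi/2]
    return np.linalg.norm(theta)
```

Identical subspaces give distance 0, and the distance grows as the learned manifold subspace of a damaged state rotates away from the baseline subspace.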

  8. The kinetic energy operator for distance-dependent effective nuclear masses: Derivation for a triatomic molecule.

    PubMed

    Khoma, Mykhaylo; Jaquet, Ralph

    2017-09-21

The kinetic energy operator for triatomic molecules with coordinate or distance-dependent nuclear masses has been derived. By combination of the chain rule method and the analysis of infinitesimal variations of molecular coordinates, a simple and general technique for the construction of the kinetic energy operator has been proposed. The asymptotic properties of the Hamiltonian have been investigated with respect to the ratio of the electron and proton mass. We have demonstrated that an ad hoc introduction of distance (and direction) dependent nuclear masses in Cartesian coordinates preserves the total rotational invariance of the problem. With the help of Wigner rotation functions, an effective Hamiltonian for nuclear motion can be derived. In the derivation, we have focused on the effective trinuclear Hamiltonian. All necessary matrix elements are given in closed analytical form. Preliminary results for the influence of non-adiabaticity on vibrational band origins are presented for H3+.

  9. Cultural interaction and biological distance in postclassic period Mexico.

    PubMed

    Ragsdale, Corey S; Edgar, Heather J H

    2015-05-01

Economic, political, and cultural relationships connected virtually every population throughout Mexico during the Postclassic period (AD 900-1520). Much of what is known about population interaction in prehistoric Mexico is based on archaeological or ethnohistoric data. What is unclear, especially for the Postclassic period, is how these data correlate with biological population structure. We address this by assessing biological (phenotypic) distances among 28 samples based upon a comparison of dental morphology trait frequencies, which serve as a proxy for genetic variation, from 810 individuals. These distances were compared with models representing geographic and cultural relationships among the same groups. Results of Mantel and partial Mantel matrix correlation tests show that shared migration and trade are correlated with biological distances, but geographic distance is not. Trade and political interaction are also correlated with biological distance when combined in a single matrix. These results indicate that trade and political relationships affected population structure among Postclassic Mexican populations. We suggest that trade likely played a major role in shaping patterns of interaction between populations. This study also shows that the biological distance data support the migration histories described in ethnohistoric sources. © 2015 Wiley Periodicals, Inc.
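
The Mantel test used above correlates two distance matrices and assesses significance by permuting the rows and columns of one of them. A minimal sketch:

```python
import numpy as np

def mantel(D1, D2, n_perm=999, seed=0):
    """Mantel test between two symmetric distance matrices: Pearson
    correlation of their upper triangles, with a one-sided permutation
    p-value from random relabelings of D2."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(D1, k=1)
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(len(D2))
        r = np.corrcoef(D1[iu], D2[np.ix_(p, p)][iu])[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```

A partial Mantel test additionally holds a third matrix (e.g. geographic distance) fixed by correlating residuals; the plain version above conveys the permutation logic.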

  10. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
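
The populate-and-filter idea can be sketched over GF(2) for d = 4 (the SECDED case): any 3 columns are linearly independent exactly when every column is nonzero, no two columns are equal, and no column equals the XOR of two already-chosen columns. Columns are represented as r-bit integers below; the patent itself covers general GF(q) and the subsequent logic-optimization step:

```python
from itertools import combinations

def populate_secded_columns(r, n_cols):
    """Greedily select n_cols columns (r-bit integers) for a binary check
    matrix of a distance-4 code: after each choice, filter out every vector
    that would break 3-wise linear independence of the chosen columns."""
    candidates = set(range(1, 2 ** r))  # all nonzero vectors in GF(2)^r
    chosen = []
    while candidates and len(chosen) < n_cols:
        col = min(candidates)  # selection rule is arbitrary in this sketch
        chosen.append(col)
        # Banned: repeats of chosen columns, and XORs of any chosen pair
        # (a future column equal to a ^ b would make {a, b, col} dependent).
        banned = set(chosen) | {a ^ b for a, b in combinations(chosen, 2)}
        candidates -= banned
    return chosen
```

The loop terminates early when the filtered candidate set runs dry, mirroring the iterate-until-populated-or-exhausted behavior described in the claims.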

  11. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  12. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loef, P.A.; Smed, T.; Andersson, G.

    The minimum singular value of the power flow Jacobian matrix has been used as a static voltage stability index, indicating the distance between the studied operating point and the steady state voltage stability limit. In this paper a fast method to calculate the minimum singular value and the corresponding (left and right) singular vectors is presented. The main advantages of the developed algorithm are the small amount of computation time needed, and that it only requires information available from an ordinary program for power flow calculations. Furthermore, the proposed method fully utilizes the sparsity of the power flow Jacobian matrix and hence the memory requirements for the computation are low. These advantages are preserved when applied to various submatrices of the Jacobian matrix, which can be useful in constructing special voltage stability indices. The developed algorithm was applied to small test systems as well as to a large (real size) system with over 1000 nodes, with satisfactory results.

  14. Limited Rank Matrix Learning, discriminative dimension reduction and visualization.

    PubMed

    Bunte, Kerstin; Schneider, Petra; Hammer, Barbara; Schleif, Frank-Michael; Villmann, Thomas; Biehl, Michael

    2012-02-01

    We present an extension of the recently introduced Generalized Matrix Learning Vector Quantization algorithm. In the original scheme, adaptive square matrices of relevance factors parameterize a discriminative distance measure. We extend the scheme to matrices of limited rank corresponding to low-dimensional representations of the data. This allows us to incorporate prior knowledge of the intrinsic dimension and to reduce the number of adaptive parameters efficiently. In particular, for very high-dimensional data, the limitation of the rank can reduce computation time and memory requirements significantly. Furthermore, two- or three-dimensional representations constitute an efficient visualization method for labeled data sets. The identification of a suitable projection is not treated as a pre-processing step but as an integral part of the supervised training. Several real world data sets serve as an illustration and demonstrate the usefulness of the suggested method. Copyright © 2011 Elsevier Ltd. All rights reserved.
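
    The adaptive distance with a rank-limited relevance matrix can be sketched in a few lines (shapes and values are hypothetical; the rectangular matrix omega is what the algorithm learns during training):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, rank = 10, 2                      # limited rank: 2-D view of 10-D data

omega = rng.normal(size=(rank, dim))   # adaptive rectangular relevance matrix
x = rng.normal(size=dim)               # a data point
w = rng.normal(size=dim)               # a prototype

# Discriminative distance of limited-rank matrix LVQ:
# d(x, w) = (x - w)^T Omega^T Omega (x - w) = || Omega (x - w) ||^2
diff = x - w
d = diff @ omega.T @ omega @ diff

# Equivalently, project into the low-dimensional space first; the rows of
# omega also supply the 2-D visualization coordinates omega @ x.
d_proj = np.sum((omega @ diff) ** 2)
```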

  15. Surname distribution in France: a distance analysis by a distorted geographical map.

    PubMed

    Mourrieras, B; Darlu, P; Hochez, J; Hazout, S

    1995-01-01

    The distribution of surnames in 90 distinct regions in France during two successive periods, 1889-1915 and 1916-1940, is analysed from the civil birth registers of the 36,500 administrative units in France. A new approach, called the 'Mobile Site Method' (MSM), is developed to allow representation of a surname distance matrix by a distorted geographical map. A surname distance matrix between the various regions in France is first calculated, then a distorted geographical map called the 'surname similarity map' is built up from the surname distances between regions. To interpret this map we draw (a) successive map contours obtained during the step-by-step distortion process, revealing zones of high surname dissimilarity, and (b) maps in grey levels representing the displacement magnitude, allowing the segmentation of the geographical and surname maps into 'homogeneous surname zones'. By integrating geography and surname information in the same analysis, and by comparing results obtained for the two successive periods, the MSM approach produces convenient maps showing: (a) 'regionalism' of some peripheral populations such as Pays Basque, Alsace, Corsica and Brittany; (b) the presence of preferential axes of communications (Rhodanian corridor, Garonne valley); (c) barriers such as the Central Massif and the Vosges; (d) weak modifications of the distorted maps between the two periods studied, suggesting a limited extension of the tendency towards surname uniformity in France. These results are interpreted, in the nineteenth- and twentieth-century context, as the consequences of a slow process of local migrations occurring over a long period of time.

  16. Sorting points into neighborhoods (SPIN): data analysis and visualization by ordering distance matrices.

    PubMed

    Tsafrir, D; Tsafrir, I; Ein-Dor, L; Zuk, O; Notterman, D A; Domany, E

    2005-05-15

    We introduce a novel unsupervised approach for the organization and visualization of multidimensional data. At the heart of the method is a presentation of the full pairwise distance matrix of the data points, viewed in pseudocolor. The ordering of points is iteratively permuted in search of a linear ordering, which can be used to study embedded shapes. Several examples indicate how the shapes of certain structures in the data (elongated, circular and compact) manifest themselves visually in our permuted distance matrix. It is important to identify the elongated objects since they are often associated with a set of hidden variables, underlying continuous variation in the data. The problem of determining an optimal linear ordering is shown to be NP-complete, and therefore an iterative search algorithm with O(n³) step-complexity is suggested. By using sorting points into neighborhoods (SPIN) to analyze colon cancer expression data, we were able to address the serious problem of sample heterogeneity, which hinders identification of metastasis related genes in our data. Our methodology brings to light the continuous variation of heterogeneity--starting with homogeneous tumor samples and gradually increasing the amount of another tissue. Ordering the samples according to their degree of contamination by unrelated tissue allows the separation of genes associated with irrelevant contamination from those related to cancer progression. A software package will be available for academic users upon request.
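
    A cheap stand-in for the ordering search (classical multidimensional scaling rather than SPIN's iterative permutation search, offered only to show how reordering a distance matrix reveals an elongated structure):

```python
import numpy as np

rng = np.random.default_rng(1)
# Points along an elongated one-dimensional structure, presented in a
# random order, so the raw distance matrix shows no visible pattern.
t = rng.permutation(np.linspace(0.0, 1.0, 30))
D = np.abs(t[:, None] - t[None, :])      # full pairwise distance matrix

# SPIN looks for a permutation that concentrates small distances near the
# diagonal. Here we instead sort points by the leading coordinate of
# classical scaling, which recovers the linear order exactly for points
# on a line (up to reversal).
n = len(t)
H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
B = -0.5 * H @ (D ** 2) @ H              # Gram matrix of classical scaling
coord = np.linalg.eigh(B)[1][:, -1]      # leading MDS coordinate
order = np.argsort(coord)

D_perm = D[order][:, order]              # small distances now band the diagonal
```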

  17. Gray-level co-occurrence matrix analysis of several cell types in mouse brain using resolution-enhanced photothermal microscopy

    NASA Astrophysics Data System (ADS)

    Kobayashi, Takayoshi; Sundaram, Durga; Nakata, Kazuaki; Tsurui, Hiromichi

    2017-03-01

    Quantification of intracellular structure was performed for the first time using the gray-level co-occurrence matrix (GLCM) method on images of cells obtained by resolution-enhanced photothermal imaging. The GLCM method has been used to extract five texture-feature parameters for five different types of cells in mouse brain: pyramidal neurons and glial cells in the basal nucleus (BGl), dentate gyrus granule cells, cerebellar Purkinje cells, and cerebellar granule cells. The parameters are correlation, contrast, angular second moment (ASM), inverse difference moment (IDM), and entropy for the images of cells of interest in a mouse brain. The parameters vary depending on the pixel distance used in the analysis. Based on the obtained results, we identified that the most suitable GLCM parameter is IDM for pyramidal neurons and BGl, granule cells in the dentate gyrus, and Purkinje cells and granule cells in the cerebellum. It was also found that the ASM is the most appropriate for neurons in the basal nucleus.
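
    The GLCM and the five texture parameters named above can be computed with plain NumPy. This sketch uses a single horizontal pixel offset; real analyses typically average over several directions:

```python
import numpy as np

def glcm_features(img, distance=1):
    """Gray-level co-occurrence matrix for horizontal pixel pairs at the
    given pixel distance, plus the five GLCM texture features."""
    levels = int(img.max()) + 1
    P = np.zeros((levels, levels))
    a, b = img[:, :-distance].ravel(), img[:, distance:].ravel()
    np.add.at(P, (a, b), 1)            # count co-occurring gray levels
    P = P + P.T                        # symmetrize
    P /= P.sum()                       # normalize to joint probabilities

    i, j = np.indices(P.shape)
    mu = np.sum(i * P)
    var = np.sum((i - mu) ** 2 * P)
    return {
        "contrast":    np.sum((i - j) ** 2 * P),
        "ASM":         np.sum(P ** 2),                    # angular 2nd moment
        "IDM":         np.sum(P / (1 + (i - j) ** 2)),    # inverse diff. moment
        "entropy":     -np.sum(P[P > 0] * np.log2(P[P > 0])),
        "correlation": np.sum((i - mu) * (j - mu) * P) / var if var > 0 else 1.0,
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
f = glcm_features(img, distance=1)
```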

  18. Synthesis and crystalline properties of CdS incorporated polyvinylidene fluoride (PVDF) composite film

    NASA Astrophysics Data System (ADS)

    Patel, Arunendra Kumar; Sunder, Aishwarya; Mishra, Shweta; Bajpai, Rakesh

    2018-05-01

    This paper gives an insight on the synthesis and crystalline properties of Polyvinylidene Fluoride (PVDF) (host matrix) composites impregnated with Cadmium Sulphide (CdS) using Dimethyl formamide (DMF) as the base, prepared by the well known solvent casting technique. The effect of doping concentration of CdS in to the PVDF matrix was studied using X-ray diffraction technique. The structural properties like crystallinity Cr, interplanar distance d, average size of the crystalline region (D), and average inter crystalline separation (R) have been estimated for the developed composite. The crystallinity index, crystallite size and inter crystalline separation is increasing with increase in the concentration of CdS in to the PVDF matrix while the interplanar distance d is decreasing.

  19. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
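
    A quick numerical experiment in this spirit: draw two induced random mixed states by partially tracing Gaussian bipartite pure states, then read the trace and operator distances off the spectrum of their difference (a sampling sketch, not the paper's closed-form analytics):

```python
import numpy as np

rng = np.random.default_rng(7)

def random_mixed_state(n, m):
    """Reduced density matrix of a random bipartite pure state: an n*m
    complex Gaussian state vector, normalized, with the m-dimensional
    environment traced out."""
    psi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T          # partial trace over the 2nd factor

n, m = 8, 32
delta = random_mixed_state(n, m) - random_mixed_state(n, m)

eigs = np.linalg.eigvalsh(delta)       # spectrum of the difference matrix
trace_distance = 0.5 * np.sum(np.abs(eigs))
operator_norm = np.max(np.abs(eigs))
```

Averaging such samples over many draws approximates the asymptotic eigenvalue density studied in the paper.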

  20. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min–max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
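
    A toy heterogeneity measure in the spirit of an entropy-based patch index (a hypothetical re-implementation for illustration, not the authors' exact HIP definition): quantize each patch to a signature and take the Shannon entropy of the signature distribution, so heterogeneous frames score high and flat frames score zero.

```python
import numpy as np

def hip_like_index(frame, patch=4, bins=8):
    """Entropy of the distribution of quantized patch signatures.
    Homogeneous frames put all mass in one bin -> zero entropy."""
    h, w = frame.shape
    sigs = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            p = frame[r:r + patch, c:c + patch]
            sigs.append(int(p.mean() * bins / 256))   # quantized signature
    counts = np.bincount(sigs, minlength=bins).astype(float)
    probs = counts[counts > 0] / counts.sum()
    return -np.sum(probs * np.log2(probs))

flat = np.full((32, 32), 128, dtype=np.uint8)          # homogeneous frame
noisy = np.random.default_rng(0).integers(0, 256, (32, 32), dtype=np.uint8)
```

Evaluating such an index frame by frame yields a curve over the video, which is the object the key-frame and skimming stages operate on.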

  1. Analytical Wave Functions for Ultracold Collisions.

    NASA Astrophysics Data System (ADS)

    Cavagnero, M. J.

    1998-05-01

    Secular perturbation theory of long-range interactions (M. J. Cavagnero, PRA 50, 2841 (1994)) has been generalized to yield accurate wave functions for near-threshold processes, including low-energy scattering processes of interest at ultracold temperatures. In particular, solutions of Schrödinger's equation have been obtained for motion in the combined r^-6, r^-8, and r^-10 potentials appropriate for describing an ultracold collision of two neutral ground-state atoms. Scattering lengths and effective ranges appropriate to such potentials are readily calculated at distances comparable to the LeRoy radius, where exchange forces can be neglected, thereby eliminating the need to integrate Schrödinger's equation to large internuclear distances. Our method yields accurate base pair solutions well beyond the energy range of effective range theories, making possible the application of multichannel quantum defect theory (MQDT) and R-matrix methods to the study of ultracold collisions.

  2. EEG character identification using stimulus sequences designed to maximize minimal Hamming distance.

    PubMed

    Fukami, Tadanori; Shimada, Takamasa; Forney, Elliott; Anderson, Charles W

    2012-01-01

    In this study, we have improved upon the P300 speller Brain-Computer Interface paradigm by introducing a new character encoding method. Our approach to detecting the intended character is not based on a classification of target and nontarget responses, but on identification of the character that maximizes the difference between P300 amplitudes for target and nontarget stimuli. Each bit of a character's code corresponds to a stimulus in which that character flashes ('1') or does not flash ('0'). The codes were constructed to maximize the minimum Hamming distance between characters. Electroencephalography was used to identify the characters using a waveform calculated by adding and subtracting the responses to target and non-target stimuli according to the codes. This stimulus presentation method was applied to a 3×3 character matrix, and the results were compared with those of a conventional P300 speller of the same size. Our method reduced the time until the correct character was obtained by 24%.
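
    The encoding step can be sketched as a greedy search for binary codewords with a guaranteed minimum Hamming distance (the parameters below are illustrative: 9 characters of a 3×3 matrix, 8 flash sequences per trial):

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two codewords stored as integers."""
    return bin(a ^ b).count("1")

def greedy_codebook(n_chars, n_bits, min_dist):
    """Greedily collect codewords whose pairwise Hamming distance is at
    least min_dist; a '1' bit marks a stimulus in which the character
    flashes, a '0' bit one in which it does not."""
    book = []
    for word in range(2 ** n_bits):
        if all(hamming(word, c) >= min_dist for c in book):
            book.append(word)
        if len(book) == n_chars:
            return book
    return None

# 9 characters, 8 flashes, minimum distance 4 between any two codes:
codes = greedy_codebook(n_chars=9, n_bits=8, min_dist=4)
```

A larger minimum distance makes the per-character difference waveforms easier to tell apart in the presence of EEG noise.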

  3. Predictive model to describe water migration in cellular solid foods during storage.

    PubMed

    Voogt, Juliën A; Hirte, Anita; Meinders, Marcel B J

    2011-11-01

    Water migration in cellular solid foods during storage causes loss of crispness. To improve crispness retention, physical understanding of this process is needed. Mathematical models are suitable tools to gain this physical knowledge. Water migration in cellular solid foods involves migration through both the air cells and the solid matrix. For systems in which the water migration distance is large compared with the cell wall thickness of the solid matrix, the overall water flux through the system is dominated by the flux through the air. For these systems, water migration can be approximated well by a Fickian diffusion model. The effective diffusion coefficient can be expressed in terms of the material properties of the solid matrix (i.e. the density, sorption isotherm and diffusion coefficient of water in the solid matrix) and the morphological properties of the cellular structure (i.e. water vapour permeability and volume fraction of the solid matrix). The water vapour permeability is estimated from finite element method modelling using a simplified model for the cellular structure. It is shown that experimentally observed dynamical water profiles of bread rolls that differ in crust permeability are predicted well by the Fickian diffusion model. Copyright © 2011 Society of Chemical Industry.
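
    In one dimension, the Fickian approximation described above reduces to the standard diffusion equation; an explicit finite-difference sketch (parameter values are illustrative, not the paper's fitted material properties):

```python
import numpy as np

L, nx = 1e-2, 51                 # slab thickness (m) and grid points
D_eff = 1e-9                     # effective diffusion coefficient, m^2/s
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D_eff       # explicit-scheme stability: D*dt/dx^2 <= 0.5

c = np.zeros(nx)                 # dry interior (crumb)
c[0] = 1.0                       # crust side held at ambient water activity

for _ in range(2000):
    c_new = c.copy()
    # Fick's second law, central differences in space, forward in time:
    c_new[1:-1] = c[1:-1] + D_eff * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c_new[0] = 1.0               # fixed-humidity boundary
    c_new[-1] = c_new[-2]        # zero-flux (sealed) boundary
    c = c_new
# c now holds the moisture profile after 2000 time steps
```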

  4. A novel chaos-based image encryption algorithm using DNA sequence operations

    NASA Astrophysics Data System (ADS)

    Chai, Xiuli; Chen, Yiran; Broyde, Lucie

    2017-01-01

    An image encryption algorithm based on chaotic system and deoxyribonucleic acid (DNA) sequence operations is proposed in this paper. First, the plain image is encoded into a DNA matrix, and then a new wave-based permutation scheme is performed on it. The chaotic sequences produced by 2D Logistic chaotic map are employed for row circular permutation (RCP) and column circular permutation (CCP). Initial values and parameters of the chaotic system are calculated by the SHA 256 hash of the plain image and the given values. Then, a row-by-row image diffusion method at DNA level is applied. A key matrix generated from the chaotic map is used to fuse the confused DNA matrix; also the initial values and system parameters of the chaotic system are renewed by the hamming distance of the plain image. Finally, after decoding the diffused DNA matrix, we obtain the cipher image. The DNA encoding/decoding rules of the plain image and the key matrix are determined by the plain image. Experimental results and security analyses both confirm that the proposed algorithm has not only an excellent encryption result but also resists various typical attacks.
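
    The DNA-level encoding and the Hamming-distance feedback can be illustrated with one fixed coding rule (the scheme actually selects among rules based on the plain image; the rule and names below are ours):

```python
import numpy as np

# One common Watson-Crick-compliant DNA coding rule: 00->A, 01->C, 10->G, 11->T
ENC = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
DEC = {v: k for k, v in ENC.items()}

def encode(pixels):
    """Encode 8-bit pixel values as 4-base DNA strings, two bits per base."""
    return ["".join(ENC[(p >> s) & 3] for s in (6, 4, 2, 0)) for p in pixels]

def decode(strands):
    """Inverse of encode: recover pixel values from DNA strings."""
    return np.array([sum(DEC[b] << s for b, s in zip(d, (6, 4, 2, 0)))
                     for d in strands], dtype=np.uint8)

def hamming(a, b):
    """Bitwise Hamming distance between two equal-length pixel arrays;
    the abstract uses such a distance to renew the chaotic system's keys."""
    return sum(bin(int(x) ^ int(y)).count("1") for x, y in zip(a, b))

plain = np.array([150, 23, 255], dtype=np.uint8)
dna = encode(plain)                          # 150 = 10010110 -> "GCCG"
key_tweak = hamming(plain, np.roll(plain, 1))
```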

  5. Protein structure similarity from Principle Component Correlation analysis.

    PubMed

    Zhou, Xiaobo; Chou, James; Wong, Stephen T C

    2006-01-25

    Owing to rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square-deviation (RMSD) in their best-superimposed atomic coordinates. RMSD is the golden rule of measuring structural similarity when the structures are nearly identical; it, however, fails to detect the higher order topological similarities in proteins evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrix. In our approach, the Principle Component Correlation (PCC) analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction in the presence or absence of encoded N to C terminal sense, there are strong correlations between the principle components of interaction matrices of structurally or topologically similar proteins. The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. 
Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum eigenvalues can be highly effective in clustering structurally or topologically similar proteins. We believe that the PCC analysis of interaction matrix is highly flexible in adopting various structural parameters for protein structure comparison.
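
    A toy version of the PCC idea in NumPy: build a distance-based interaction matrix for each structure, extract the leading principal component, and correlate (the data and simplifications here are hypothetical; the paper works with secondary-structure elements and richer relationship parameters):

```python
import numpy as np

def interaction_matrix(centroids):
    """Symmetric element-element interaction matrix built from pairwise
    distances between element centroids (a stand-in for the paper's
    secondary-structure construction)."""
    return np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)

def leading_pc(M):
    """Principal component of the interaction matrix, sign-fixed."""
    vals, vecs = np.linalg.eigh(M)
    v = vecs[:, np.argmax(np.abs(vals))]
    return v * np.sign(v[np.argmax(np.abs(v))])

rng = np.random.default_rng(3)
A = rng.normal(size=(6, 3))             # centroids of protein A's elements
B = A + 0.05 * rng.normal(size=(6, 3))  # a structurally similar protein
C = rng.normal(size=(6, 3))             # an unrelated fold

pcc_sim = abs(np.corrcoef(leading_pc(interaction_matrix(A)),
                          leading_pc(interaction_matrix(B)))[0, 1])
pcc_diff = abs(np.corrcoef(leading_pc(interaction_matrix(A)),
                           leading_pc(interaction_matrix(C)))[0, 1])
```

Structures that are near-identical yield a correlation close to 1 even when their coordinate RMSD is nonzero, which is the property the PCC analysis exploits.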

  6. A robust bi-orthogonal/dynamically-orthogonal method using the covariance pseudo-inverse with application to stochastic flow problems

    NASA Astrophysics Data System (ADS)

    Babaee, Hessam; Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em

    2017-09-01

    We develop a new robust methodology for the stochastic Navier-Stokes equations based on the dynamically-orthogonal (DO) and bi-orthogonal (BO) methods [1-3]. Both approaches are variants of a generalized Karhunen-Loève (KL) expansion in which both the stochastic coefficients and the spatial basis evolve according to system dynamics, hence capturing the low-dimensional structure of the solution. The DO and BO formulations are mathematically equivalent [3], but they exhibit computationally complementary properties. Specifically, the BO formulation may fail due to crossing of the eigenvalues of the covariance matrix, while both BO and DO become unstable when the covariance matrix has a high condition number or zero eigenvalues. To this end, we combine the two methods into a robust hybrid framework and in addition we employ a pseudo-inverse technique to invert the covariance matrix. The robustness of the proposed method stems from addressing the following issues in the DO/BO formulation: (i) eigenvalue crossing: we resolve the issue of eigenvalue crossing in the BO formulation by switching to DO near eigenvalue crossing using the equivalence theorem and switching back to BO when the distance between eigenvalues is larger than a threshold value; (ii) ill-conditioned covariance matrix: we utilize a pseudo-inverse strategy to invert the covariance matrix; (iii) adaptivity: we utilize an adaptive strategy to add/remove modes to resolve the covariance matrix up to a threshold value. In particular, we introduce a soft-threshold criterion to allow the system to adapt to the newly added/removed mode and therefore avoid repetitive and unnecessary mode addition/removal. When the total variance approaches zero, we show that the DO/BO formulation becomes equivalent to the evolution equation of the Optimally Time-Dependent modes [4].
We demonstrate the capability of the proposed methodology with several numerical examples, namely (i) stochastic Burgers equation: we analyze the performance of the method in the presence of eigenvalue crossing and zero eigenvalues; (ii) stochastic Kovasznay flow: we examine the method in the presence of a singular covariance matrix; and (iii) we examine the adaptivity of the method for an incompressible flow over a cylinder where for large stochastic forcing thirteen DO/BO modes are active.

  7. Probing the smearing effect by a pointlike graviton in the plane-wave matrix model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Bum-Hoon; Nam, Siyoung; Shin, Hyeonjoon

    2010-08-15

    We investigate the interaction between a flat membrane and a pointlike graviton in the plane-wave matrix model. The one-loop effective potential in the large-distance limit is computed and is shown to be of r^-3 type, where r is the distance between the two objects. This type of interaction has been interpreted as the one incorporating the smearing effect due to the configuration of a flat membrane in a plane-wave background. Our results support this interpretation and provide more evidence for it.

  8. Supertrees Based on the Subtree Prune-and-Regraft Distance

    PubMed Central

    Whidden, Christopher; Zeh, Norbert; Beiko, Robert G.

    2014-01-01

    Supertree methods reconcile a set of phylogenetic trees into a single structure that is often interpreted as a branching history of species. A key challenge is combining conflicting evolutionary histories that are due to artifacts of phylogenetic reconstruction and phenomena such as lateral gene transfer (LGT). Many supertree approaches use optimality criteria that do not reflect underlying processes, have known biases, and may be unduly influenced by LGT. We present the first method to construct supertrees by using the subtree prune-and-regraft (SPR) distance as an optimality criterion. Although calculating the rooted SPR distance between a pair of trees is NP-hard, our new maximum agreement forest-based methods can reconcile trees with hundreds of taxa and > 50 transfers in fractions of a second, which enables repeated calculations during the course of an iterative search. Our approach can accommodate trees in which uncertain relationships have been collapsed to multifurcating nodes. Using a series of benchmark datasets simulated under plausible rates of LGT, we show that SPR supertrees are more similar to correct species histories than supertrees based on parsimony or Robinson–Foulds distance criteria. We successfully constructed an SPR supertree from a phylogenomic dataset of 40,631 gene trees that covered 244 genomes representing several major bacterial phyla. Our SPR-based approach also allowed direct inference of highways of gene transfer between bacterial classes and genera. A small number of these highways connect genera in different phyla and can highlight specific genes implicated in long-distance LGT. [Lateral gene transfer; matrix representation with parsimony; phylogenomics; prokaryotic phylogeny; Robinson–Foulds; subtree prune-and-regraft; supertrees.] PMID:24695589

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James; Kuruganti, Teja

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
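
    For reference, the free-space pathloss that such coarse-grid simulations are calibrated against follows from the Friis formula:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space pathloss in dB (Friis): the reference value that the
    coarse-grid FDTD/TLM result is kept close to."""
    c = 299_792_458.0   # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# A 900 MHz link at 1 km:
loss = fspl_db(1000.0, 900e6)   # about 91.5 dB
```

Doubling the distance adds about 6 dB, the familiar inverse-square behavior.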

  10. Locating sources within a dense sensor array using graph clustering

    NASA Astrophysics Data System (ADS)

    Gerstoft, P.; Riahi, N.

    2017-12-01

    We develop a model-free technique to identify weak sources within dense sensor arrays using graph clustering. No knowledge about the propagation medium is needed except that signal strengths decay to insignificant levels within a scale that is shorter than the aperture. We then reinterpret the spatial coherence matrix of a wave field as a matrix whose support is a connectivity matrix of a graph with sensors as vertices. In a dense network, well-separated sources induce clusters in this graph. The geographic spread of these clusters can serve to localize the sources. The support of the covariance matrix is estimated from limited-time data using a hypothesis test with a robust phase-only coherence test statistic combined with a physical distance criterion. The latter criterion ensures graph sparsity and thus prevents clusters from forming by chance. We verify the approach and quantify its reliability on a simulated dataset. The method is then applied to data from a dense 5200-element geophone array that blanketed the city of Long Beach, CA. The analysis exposes a helicopter traversing the array and oil production facilities.
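
    The clustering step can be mimicked on synthetic data: threshold a coherence-like support into a sensor graph, prune physically distant edges, and localize sources from the connected components (all geometry and thresholds below are illustrative):

```python
import numpy as np

# Toy 10x10 sensor grid with two localized sources; coherence support is
# high only among sensors near the same source (a stand-in for the
# estimated phase-only coherence test).
pos = np.array([(x, y) for x in range(10) for y in range(10)], dtype=float)
sources = np.array([[2.0, 2.0], [7.0, 7.0]])

A = np.zeros((len(pos), len(pos)), dtype=bool)
for s in sources:
    mask = np.linalg.norm(pos - s, axis=1) < 2.0
    A |= mask[:, None] & mask[None, :]
# physical-distance criterion: drop edges between far-apart sensors
A &= np.linalg.norm(pos[:, None] - pos[None, :], axis=-1) < 4.0
np.fill_diagonal(A, False)

def connected_components(adj):
    """Clusters of the sensor graph via depth-first search."""
    n, seen, comps = len(adj), set(), []
    for start in range(n):
        if start in seen or not adj[start].any():
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(np.flatnonzero(adj[v]))
        comps.append(comp)
        seen |= comp
    return comps

clusters = connected_components(A)
centroids = [pos[list(c)].mean(axis=0) for c in clusters]   # source estimates
```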

  11. Opportunity potential matrix for Atlantic Canadians

    Treesearch

    Greg Danchuk; Ed Thomson

    1992-01-01

    Opportunity for provision of Parks Service benefit to Atlantic Canadians was investigated by mapping travel behaviour into a matrix in terms of origin, season, purpose, distance, time, and destination. Findings identified potential for benefit in several activity areas, particularly within residents' own province.

  12. Double-β decay matrix elements from lattice quantum chromodynamics

    NASA Astrophysics Data System (ADS)

    Tiburzi, Brian C.; Wagman, Michael L.; Winter, Frank; Chang, Emmanuel; Davoudi, Zohreh; Detmold, William; Orginos, Kostas; Savage, Martin J.; Shanahan, Phiala E.; Nplqcd Collaboration

    2017-09-01

    A lattice quantum chromodynamics (LQCD) calculation of the nuclear matrix element relevant to the nn → pp e⁻e⁻ ν̄_e ν̄_e transition is described in detail, expanding on the results presented in Ref. [P. E. Shanahan et al., Phys. Rev. Lett. 119, 062003 (2017), 10.1103/PhysRevLett.119.062003]. This matrix element, which involves two insertions of the weak axial current, is an important input for phenomenological determinations of double-β decay rates of nuclei. From this exploratory study, performed using unphysical values of the quark masses, the long-distance deuteron-pole contribution to the matrix element is separated from shorter-distance hadronic contributions. This polarizability, which is only accessible in double-weak processes, cannot be constrained from single-β decay of nuclei, and is found to be smaller than the long-distance contributions in this calculation, but non-negligible. In this work, technical aspects of the LQCD calculations, and of the relevant formalism in the pionless effective field theory, are described. Further calculations of the isotensor axial polarizability, in particular near and at the physical values of the light-quark masses, are required for precise determinations of both two-neutrino and neutrinoless double-β decay rates in heavy nuclei.

  13. Quantitative evaluation of polymer concentration profile during swelling of hydrophilic matrix tablets using 1H NMR and MRI methods.

    PubMed

    Baumgartner, Sasa; Lahajnar, Gojmir; Sepe, Ana; Kristl, Julijana

    2005-02-01

    Many pharmaceutical tablets are based on hydrophilic polymers, which, after exposure to water, form a gel layer around the tablet that limits the dissolution and diffusion of the drug and provides a mechanism for controlled drug release. Our aim was to determine the thickness of the swollen gel layer of matrix tablets and to develop a method for calculating the polymer concentration profile across the gel layer. MR imaging has been used to investigate the in situ swelling behaviour of cellulose ether matrix tablets, and NMR spectroscopy experiments were performed on homogeneous hydrogels with known polymer concentration. The MRI results show that the thickest gel layer was observed for hydroxyethylcellulose tablets, followed by distinctly thinner but almost equal gel layers for hydroxypropylcellulose and for hydroxypropylmethylcellulose of both molecular weights. The water proton NMR relaxation parameters were combined with the MRI data to obtain a quantitative description of the swelling process on the basis of the concentrations and mobilities of water and polymer as functions of time and distance. The different concentration profiles observed after the same swelling time are the consequence of the different polymer characteristics. The procedure developed here could be used as a general method for calculating polymer concentration profiles in other similar polymeric systems.

  14. Environmental Impact Assessment of the Industrial Estate Development Plan with the Geographical Information System and Matrix Methods

    PubMed Central

    Ghasemian, Mohammad; Poursafa, Parinaz; Amin, Mohammad Mehdi; Ziarati, Mohammad; Ghoddousi, Hamid; Momeni, Seyyed Alireza; Rezaei, Amir Hossein

    2012-01-01

    Background. The purpose of this study is the environmental impact assessment of an industrial estate development plan. Methods. This cross-sectional study was conducted in 2010 in Isfahan province, Iran. GIS and matrix methods were applied. Data analysis was done to identify the current situation of the region, zone vulnerable areas, and scope the region. Quantitative evaluation was done by using the matrix of Wooten and Rau. Results. The net score for the impact of industrial units' operation on air quality of the project area was (−3). Given the transport of industrial estate pollutants, residential areas located within a radius of 2500 meters of the city were expected to be affected most. The net score for the impact of construction of industrial units on plant species of the project area was (−2). Environmentally protected areas were not affected by the air and soil pollutants because of their distance from the industrial estate. Conclusion. The positive effects of the project activities outweigh the drawbacks, and the sum of the scores allocated to the project activities on environmental factors was (+37). Overall, the project does not have detrimental effects on the environment and the residential neighborhood. EIA should be considered as an anticipatory, participatory environmental management tool before determining a plan application. PMID:22272210

  15. Short-distance matrix elements for D-meson mixing for 2+1 lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Chia Cheng

    2015-01-01

    We study the short-distance hadronic matrix elements for D-meson mixing with partially quenched N_f = 2+1 lattice QCD. We use a large set of the MIMD Lattice Computation (MILC) Collaboration's gauge configurations with a² tadpole-improved staggered sea quarks and tadpole-improved Lüscher-Weisz gluons. We use the a² tadpole-improved action for valence light quarks and the Sheikholeslami-Wohlert action with the Fermilab interpretation for the valence charm quark. Our calculation covers the complete set of five operators needed to constrain new physics models for D-meson mixing. We match our matrix elements to the MS-NDR scheme evaluated at 3 GeV. We report values for the Beneke-Buchalla-Greub-Lenz-Nierste choice of evanescent operators.

  16. New way for determining electron energy levels in quantum dots arrays using finite difference method

    NASA Astrophysics Data System (ADS)

    Dujardin, F.; Assaid, E.; Feddi, E.

    2018-06-01

    Electronic states are investigated in quantum dot arrays, depending on the type of cubic Bravais lattice (primitive, body-centered, or face-centered) according to which the dots are arranged, the size of the dots, and the interdot distance. It is shown that the ground state energy level can undergo significant variations when these parameters are modified. The results were obtained by means of the finite difference method, which has proved to be easily adaptable, efficient, and precise. The symmetry properties of the lattice have been used to reduce the size of the Hamiltonian matrix.
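
    The finite difference idea above can be illustrated on the simplest possible case, a particle confined in a one-dimensional box: discretize the kinetic operator on a grid, build the Hamiltonian matrix, and diagonalize. This is only a minimal sketch (the paper treats three-dimensional dot arrays with lattice symmetry reduction), in dimensionless units where ħ = m = 1:

    ```python
    import numpy as np

    def ground_state_energy(n=500, L=1.0):
        """Ground-state energy of a particle in a 1D box of width L,
        computed by central finite differences (hbar = m = 1).
        The wavefunction is pinned to zero at both walls."""
        h = L / (n + 1)                              # grid spacing
        main = np.full(n, 1.0) / h**2                # diagonal of -(1/2) d^2/dx^2
        off = np.full(n - 1, -0.5) / h**2            # off-diagonals
        H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        return np.linalg.eigvalsh(H)[0]

    # analytic ground state in these units is pi^2 / (2 L^2) ~ 4.9348 for L = 1
    print(ground_state_energy())
    ```

    The same pattern, a sparse Hamiltonian assembled on a grid and handed to an eigensolver, carries over to the three-dimensional dot-array problem, where exploiting lattice symmetry shrinks the matrix that must be diagonalized.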

  17. Preparation of laser micropore porcine acellular dermal matrix for skin graft: an experimental study.

    PubMed

    Chai, Jia-Ke; Liang, Li-Ming; Yang, Hong-Ming; Feng, Rui; Yin, Hui-Nan; Li, Feng-Yu; Sheng, Zhi-Yong

    2007-09-01

    In our previous study, we used composite grafts consisting of meshed porcine acellular dermal matrix (PADM) and thin split-thickness autologous epidermis to cover full-thickness burn wounds in clinical practice. However, a certain degree of contraction might occur because the distribution of dermal matrix was not uniform in the burn wound. In this study, we prepared a composite skin graft consisting of PADM perforated with a laser to improve the quality of burn wound healing. PADM was prepared by the trypsin/Triton X-100 method. Micropores were produced on the PADM with a laser punch, with inter-micropore distances of 0.8, 1.0, 1.2, and 1.5 mm. Full-thickness defect wounds were created on the backs of 144 SD rats. The rats were randomly divided into six groups: micropore groups I-IV, in which the wounds were grafted with micropored PADM at the four different distances, respectively, plus a split-thickness autograft; a mesh group, which received meshed PADM plus a split-thickness autograft; and a control group, which received simple split-thickness autografting. The status of wound healing was histologically observed at regular time points after surgery, and the wound healing rate and contraction rate were calculated. The wound healing rate in micropore groups I and II was not statistically different from that in the control group, but was significantly higher than that in the mesh group 6 weeks after grafting. The wound healing rate in micropore groups III and IV was lower than that in the mesh and control groups 4 and 6 weeks after grafting. The wound contraction rate in micropore groups I and II was remarkably lower than that in the control group 4 and 6 weeks after surgery, and significantly lower than that in the mesh group 6 weeks after surgery. Histological examination revealed good epithelization, regularly arranged collagenous fibers, and an integral basement membrane structure. Laser micropore PADM (0.8 or 1.0 mm spacing) grafting in combination with split-thickness autografting can improve wound healing; PADM with laser micropores at 1.0 mm spacing is the better choice.

  18. Investigation of orifice aeroacoustics by means of multi-port methods

    NASA Astrophysics Data System (ADS)

    Sack, Stefan; Åbom, Mats

    2017-10-01

    Comprehensive methods to cascade active multi-ports, e.g., for acoustic network prediction, have until now only been available for plane waves. This paper presents procedures to combine multi-ports with an arbitrary number of considered duct modes. A multi-port method is used to extract complex mode amplitudes from experimental data of single and tandem in-duct orifice plates for Helmholtz numbers up to around 4 and, hence, beyond the cut-on of several higher order modes. The theory of connecting single multi-ports to linear cascades is derived for the passive properties (the scattering of the system) and the active properties (the source cross-spectrum matrix of the system). One scope of this paper is to investigate the influence of the hydrodynamic near field on the accuracy of both the passive and the active predictions in multi-port cascades. The scattering and the source cross-spectrum matrix of tandem orifice configurations are measured for three cases, namely, with a distance between the plates of 10 duct diameters, for which the downstream orifice is outside the jet of the upstream orifice, and of 4 and 2 duct diameters (both inside the jet). The results are compared with predictions from single orifice measurements. It is shown that the scattering is only sensitive to disturbed inflow in certain frequency ranges where coupling between the flow and sound field exists, whereas the source cross-spectrum matrix is very sensitive to disturbed inflow at all frequencies. An important part of the analysis is based on an eigenvalue analysis of the scattering matrix and the source cross-spectrum matrix to evaluate the potential of sound amplification and dominant source mechanisms.

  19. A Study of Feature Extraction Using Divergence Analysis of Texture Features

    NASA Technical Reports Server (NTRS)

    Hallada, W. A.; Bly, B. G.; Boyd, R. K.; Cox, S.

    1982-01-01

    An empirical study of texture analysis for feature extraction and classification of high spatial resolution remotely sensed imagery (10 meters) is presented in terms of specific land cover types. The principal method examined is the use of spatial gray tone dependence (SGTD). The SGTD method reduces the gray levels within a moving window into a two-dimensional spatial gray tone dependence matrix, which can be interpreted as a probability matrix of gray tone pairs. Haralick et al. (1973) used a number of information theory measures to extract texture features from these matrices, including angular second moment, inertia, correlation, entropy, homogeneity, and energy. The derivation of the SGTD matrix is a function of: (1) the number of gray tones in an image; (2) the angle along which the frequency of SGTD is calculated; (3) the size of the moving window; and (4) the distance between gray tone pairs. The first three parameters were varied and tested on a 10-meter resolution panchromatic image of Maryville, Tennessee using the five SGTD measures. A transformed divergence measure was used to determine the statistical separability between four land cover categories (forest, new residential, old residential, and industrial) for each variation in texture parameters.
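
    The construction of an SGTD (co-occurrence) matrix and a few of the texture measures derived from it can be sketched for a single window at one angle (horizontal) and distance 1. The tiny image, the gray-level count, and the feature subset below are illustrative only, not the parameters used in the study:

    ```python
    import numpy as np

    def sgtd_matrix(img, levels):
        """Symmetric, normalized spatial gray tone dependence matrix
        for horizontal neighbor pairs at distance 1."""
        P = np.zeros((levels, levels))
        for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
            P[i, j] += 1
            P[j, i] += 1          # count each pair in both orders
        return P / P.sum()

    def texture_features(P):
        """A subset of Haralick-style measures from the matrix P."""
        i, j = np.indices(P.shape)
        asm = (P ** 2).sum()                           # angular second moment
        inertia = (P * (i - j) ** 2).sum()             # a.k.a. contrast
        homogeneity = (P / (1.0 + (i - j) ** 2)).sum()
        entropy = -(P[P > 0] * np.log(P[P > 0])).sum()
        return asm, inertia, homogeneity, entropy

    img = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [0, 2, 2, 2],
                    [2, 2, 3, 3]])
    P = sgtd_matrix(img, levels=4)
    print(texture_features(P))
    ```

    In a full pipeline these features would be computed per moving window and per angle, then fed to the divergence-based separability analysis described above.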

  1. Resonant electronic excitation energy transfer by exchange mechanism in the quantum dot system

    NASA Astrophysics Data System (ADS)

    Chikalova-Luzina, O. P.; Samosvat, D. M.; Vyatkin, V. M.; Zegrya, G. G.

    2017-11-01

    A microscopic theory of nonradiative resonance energy transfer between spherical A3B5 semiconductor quantum dots by the exchange mechanism is suggested. The interdot Coulomb interaction is taken into consideration. It is assumed that the quantum dot donor and the quantum dot acceptor are made from the same A3B5 compound and are embedded in a matrix of another material that produces potential barriers for electrons and holes. The dependences of the energy transfer rate on the quantum-dot system parameters are found in the frame of the Kane model, which provides the most adequate description of the real spectra of A3B5 semiconductors. The analytical treatment is carried out using the density matrix method, which enabled us to perform an energy transfer analysis both in the weak-interaction approximation and in the strong-interaction approximation. The numerical calculations showed saturation of the energy transfer rate as the distance between the donor and the acceptor approaches the contact distance. The contributions of the exchange and direct Coulomb interactions can be of the same order at small distances and can have the same value in the saturation range.

  2. SU-E-T-644: Evaluation of Angular Dependence Correction for 2D Array Detector Using for Quality Assurance of Volumetric Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karthikeyan, N; Ganesh, K M; Vikraman, S

    2014-06-15

    Purpose: To evaluate the angular dependence correction for the Matrix Evolution 2D array detector in quality assurance of volumetric modulated arc therapy (VMAT). Methods: Ten patients comprising different sites were planned for VMAT and taken for the study. Each plan was delivered on the Matrix Evolution 2D array detector with Omnipro IMRT software using 6 MV photon beams from an Elekta Synergy linear accelerator, following three different methods. In the first method, the VMAT plan was delivered on the Matrix Evolution detector gantry-mounted with a dedicated holder and a build-up of 2.3 cm. In the second, the VMAT plan was delivered with static gantry angles in a table-mounted setup. In the third, the VMAT plan was delivered with actual gantry angles on the Matrix Evolution detector fixed in a Multicube phantom with a gantry angle sensor, and angular dependence corrections were applied to quantify the plan quality. For all these methods, the corresponding QA plans were generated in the TPS, and dose verification was done for both point dose and 2D fluence analysis with pass criteria of 3% dose difference and 3 mm distance to agreement. Results: The measured point dose variation for the first method was 1.58±0.6% (mean and SD) relative to the TPS calculation. For the second and third methods, the mean and standard deviation (SD) were 1.67±0.7% and 1.85±0.8%, respectively. The 2D fluence analysis of measured versus TPS-calculated doses had a mean and SD of 97.9±1.1%, 97.88±1.2%, and 97.55±1.3% for the first, second, and third methods, respectively. The calculated two-tailed P values for point dose and 2D fluence analysis show no significant difference among the different QA methods, with values of 0.9316 and 0.9015, respectively. Conclusion: The qualitative evaluation of the angular dependence correction for the Matrix Evolution 2D array detector shows its competency in accurate quality assurance measurement of the composite dose distribution of volumetric modulated arc therapy.

  3. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    NASA Astrophysics Data System (ADS)

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract the image of damaged buildings after an earthquake. The overall extraction accuracy reaches 83.1% and the kappa coefficient 0.813. The new method greatly improves extraction accuracy and efficiency compared with the traditional object-oriented method, and has good potential for application in the information extraction of damaged buildings. In addition, the new method can be used for information extraction from images of damaged buildings at different resolutions, and thus to seek the optimal observation scale of damaged buildings through accuracy evaluation. The optimal observation scale of damaged buildings is estimated to be between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.

  4. IVisTMSA: Interactive Visual Tools for Multiple Sequence Alignments.

    PubMed

    Pervez, Muhammad Tariq; Babar, Masroor Ellahi; Nadeem, Asif; Aslam, Naeem; Naveed, Nasir; Ahmad, Sarfraz; Muhammad, Shah; Qadri, Salman; Shahid, Muhammad; Hussain, Tanveer; Javed, Maryam

    2015-01-01

    IVisTMSA is a software package of seven graphical tools for multiple sequence alignments. MSApad is an editing and analysis tool that can load 409% more data than Jalview, STRAP, CINEMA, and Base-by-Base. MSA comparator allows the user to visualize consistent and inconsistent regions of reference and test alignments of more than 21 MB in less than 12 seconds; it is 5,200% more efficient than the BAliBASE C program and more than 40% more efficient than FastSP. The MSA reconstruction tool provides graphical user interfaces for four popular aligners and allows the user to load several sequence files at a time. FASTA generator converts seven alignment formats of unlimited size into FASTA format in a few seconds. MSA ID calculator computes the identity matrix of more than 11,000 sequences with a sequence length of 2,696 base pairs in less than 100 seconds. The Tree and Distance Matrix calculation tools generate a phylogenetic tree and a distance matrix, respectively, using neighbor joining, percent identity, and the BLOSUM62 matrix.

  5. Robust infrared targets tracking with covariance matrix representation

    NASA Astrophysics Data System (ADS)

    Cheng, Jian

    2009-07-01

    Robust infrared target tracking is an important and challenging research topic in many military and security applications, such as infrared imaging guidance, infrared reconnaissance, and scene surveillance. To effectively tackle the nonlinear and non-Gaussian state estimation problems, particle filtering is introduced to construct the theoretical framework of infrared target tracking. Under this framework, the observation probabilistic model is one of the main factors determining infrared target tracking performance. In order to improve the tracking performance, covariance matrices are introduced to represent infrared targets with multiple features. The observation probabilistic model can be constructed by computing the distance between the reference target's covariance matrix and those of the target samples. Because the covariance matrix provides a natural tool for integrating multiple features, and is scale and illumination independent, target representation with covariance matrices has strong discriminating ability and robustness. Two experimental results demonstrate that the proposed method is effective and robust for different infrared target tracking scenarios, such as the sensor ego-motion scene and the sea-clutter scene.
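
    A common way to compare covariance descriptors is a metric on symmetric positive-definite matrices. The sketch below uses the Förstner-Moonen generalized-eigenvalue metric on synthetic feature sets; the abstract does not specify which distance the paper uses, so treat this as one representative choice:

    ```python
    import numpy as np

    def covariance_descriptor(features):
        """Covariance matrix of per-pixel feature vectors (rows = pixels),
        e.g. intensity, gradients, and coordinates in a target region."""
        return np.cov(features, rowvar=False)

    def cov_distance(A, B):
        """Forstner-Moonen metric between SPD matrices:
        sqrt of the sum of squared log generalized eigenvalues."""
        lam = np.linalg.eigvals(np.linalg.solve(B, A)).real
        return np.sqrt(np.sum(np.log(lam) ** 2))

    # synthetic stand-ins for a reference target and a candidate sample
    rng = np.random.default_rng(0)
    ref = covariance_descriptor(rng.normal(size=(200, 3)))
    cand = covariance_descriptor(rng.normal(size=(200, 3)) * 2.0)
    print(cov_distance(ref, ref), cov_distance(ref, cand))
    ```

    In a particle-filter tracker, this distance would be mapped through a likelihood (e.g. a Gaussian kernel) to weight each candidate sample against the reference descriptor.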

  6. Transport properties of dilute α -Fe (X ) solid solutions (X = C, N, O)

    NASA Astrophysics Data System (ADS)

    Schuler, Thomas; Nastar, Maylise

    2016-06-01

    We extend the self-consistent mean field (SCMF) method to the calculation of the Onsager matrix of Fe-based interstitial solid solutions. Both interstitial jumps and substitutional atom-vacancy exchanges are accounted for. A general procedure is introduced to split the Onsager matrix of a dilute solid solution into intrinsic cluster Onsager matrices, and extract from them flux-coupling ratios, mobilities, and association-dissociation rates for each cluster. The formalism is applied to vacancy-interstitial solute pairs in α -Fe (V X pairs, X = C, N, O), with ab initio based thermodynamic and kinetic parameters. Convergence of the cluster mobility contribution gives a controlled estimation of the cluster definition distance, taking into account both its thermodynamic and kinetic properties. Then, the flux-coupling behavior of each V X pair is discussed, and qualitative understanding is achieved from the comparison between various contributions to the Onsager matrix. Also, the effect of low-activation energy second-nearest-neighbor interstitial solute jumps around a vacancy on these results is addressed.

  7. Giant oscillating magnetoresistance in silicene-based structures

    NASA Astrophysics Data System (ADS)

    Oubram, O.; Navarro, O.; Rodríguez-Vargas, I.; Guzman, E. J.; Cisneros-Villalobos, L.; Velásquez-Aguilar, J. G.

    2018-02-01

    Ballistic electron transport in a silicene structure composed of a pair of magnetic gates, in the ferromagnetic and antiferromagnetic configurations, is studied. This theoretical study uses the transfer matrix method to calculate the transmission, the conductance for parallel and antiparallel magnetic alignment, and the magnetoresistance. Results show that the conductance and magnetoresistance oscillate as a function of the length between the two magnetic domains. The forbidden transmission region also increases as a function of the barrier separation distance.

  8. Continuous filament composite parts and articles of manufacture thereof

    DOEpatents

    Weisberg, Andrew H.

    2016-06-28

    An article of manufacture according to one embodiment includes a plurality of plies in a stacked configuration, where each ply includes a plurality of tape winds having edges. A distance between the edges of adjacent tape winds in the same ply is about constant along a length of the wind. Each tape wind comprises elongated fibers and a matrix, axes of the fibers being oriented about parallel to a longitudinal axis of the tape wind. Additional systems, methods and articles of manufacture are also presented.

  9. Research in Computational Aeroscience Applications Implemented on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Wigton, Larry

    1996-01-01

    Improvements to the numerical linear algebra routines for use in new Navier-Stokes codes, specifically Tim Barth's unstructured grid code, with spin-offs to TRANAIR, are reported. A fast distance calculation routine for Navier-Stokes codes using the new one-equation turbulence models was written. The primary focus of this work was improving matrix-iterative methods. New algorithms have been developed which activate the full potential of classical Cray-class computers as well as distributed-memory parallel computers.

  10. Basis for substrate recognition and distinction by matrix metalloproteinases

    PubMed Central

    Ratnikov, Boris I.; Cieplak, Piotr; Gramatikoff, Kosi; Pierce, James; Eroshkin, Alexey; Igarashi, Yoshinobu; Kazanov, Marat; Sun, Qing; Godzik, Adam; Osterman, Andrei; Stec, Boguslaw; Strongin, Alex; Smith, Jeffrey W.

    2014-01-01

    Genomic sequencing and structural genomics produced a vast amount of sequence and structural data, creating an opportunity for structure–function analysis in silico [Radivojac P, et al. (2013) Nat Methods 10(3):221–227]. Unfortunately, only a few large experimental datasets exist to serve as benchmarks for function-related predictions. Furthermore, currently there are no reliable means to predict the extent of functional similarity among proteins. Here, we quantify structure–function relationships among three phylogenetic branches of the matrix metalloproteinase (MMP) family by comparing their cleavage efficiencies toward an extended set of phage peptide substrates that were selected from ∼64 million peptide sequences (i.e., a large unbiased representation of substrate space). The observed second-order rate constants [k(obs)] across the substrate space provide a distance measure of functional similarity among the MMPs. These functional distances directly correlate with MMP phylogenetic distance. There is also a remarkable and near-perfect correlation between the MMP substrate preference and sequence identity of 50–57 discontinuous residues surrounding the catalytic groove. We conclude that these residues represent the specificity-determining positions (SDPs) that allowed for the expansion of MMP proteolytic function during evolution. A transmutation of only a few selected SDPs proximal to the bound substrate peptide, and contributing the most to selectivity among the MMPs, is sufficient to enact a global change in the substrate preference of one MMP to that of another, indicating the potential for the rational and focused redesign of cleavage specificity in MMPs. PMID:25246591

  11. Development of an in-situ multi-component reinforced Al-based metal matrix composite by direct metal laser sintering technique — Optimization of process parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Subrata Kumar, E-mail: subratagh82@gmail.com; Bandyopadhyay, Kaushik; Saha, Partha

    2014-07-01

    In the present investigation, an in-situ multi-component reinforced aluminum-based metal matrix composite was fabricated by the combination of self-propagating high-temperature synthesis (SHS) and the direct metal laser sintering process. Different mixtures of Al, TiO2, and B4C powders were used to initiate and maintain the self-propagating high-temperature synthesis by laser during the sintering process. X-ray diffraction analysis and scanning electron microscopy showed that reinforcements such as Al2O3, TiC, and TiB2 were formed in the composite. The scanning electron microscopy revealed the distribution of the reinforcement phases in the composite and their phase identities. The variable parameters such as powder layer thickness, laser power, scanning speed, hatching distance, and composition of the powder mixture were optimized for higher density, lower porosity, and higher microhardness using the Taguchi method. Experimental investigation shows that the density of the specimen mainly depends upon the hatching distance, composition, and layer thickness. On the other hand, hatching distance, layer thickness, and laser power are the significant parameters which influence the porosity. The composition, laser power, and layer thickness are the key influencing parameters for microhardness. - Highlights: • The reinforcements Al2O3, TiC, and TiB2 were produced in the Al-MMC through SHS. • The density is mainly influenced by the material composition and hatching distance. • Hatching distance is the major influencing parameter on porosity. • The material composition is the significant parameter to enhance the microhardness. • The SEM micrographs reveal the distribution of TiC, TiB2, and Al2O3 in the composite.

  12. Clustering Multivariate Time Series Using Hidden Markov Models

    PubMed Central

    Ghassempour, Shima; Girosi, Federico; Maeder, Anthony

    2014-01-01

    In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistical knowledge, and therefore are accessible to a wide range of researchers. PMID:24662996
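
    The final stage of such a pipeline, clustering directly from a precomputed distance matrix (which would here hold pairwise HMM distances), can be sketched with a small average-linkage agglomerative routine. The toy matrix below is illustrative; the paper's own HMM distance is not reproduced:

    ```python
    import numpy as np

    def agglomerative(D, k):
        """Average-linkage agglomerative clustering from a precomputed
        distance matrix D; merges the closest pair of clusters until
        only k clusters remain."""
        clusters = [[i] for i in range(len(D))]
        while len(clusters) > k:
            best = None
            for a in range(len(clusters)):
                for b in range(a + 1, len(clusters)):
                    # average linkage: mean pairwise distance between clusters
                    d = np.mean([D[i][j] for i in clusters[a] for j in clusters[b]])
                    if best is None or d < best[0]:
                        best = (d, a, b)
            _, a, b = best
            clusters[a] += clusters.pop(b)
        return clusters

    # toy distance matrix: items 0 and 1 are close, 2 and 3 are close
    D = np.array([[0., 1., 9., 8.],
                  [1., 0., 8., 9.],
                  [9., 8., 0., 1.],
                  [8., 9., 1., 0.]])
    print(agglomerative(D, 2))  # → [[0, 1], [2, 3]]
    ```

    In practice one would use a library routine (e.g. hierarchical clustering in R or Matlab, as the authors note) on the HMM distance matrix rather than this hand-rolled loop.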

  13. GAGA: a new algorithm for genomic inference of geographic ancestry reveals fine level population substructure in Europeans.

    PubMed

    Lao, Oscar; Liu, Fan; Wollstein, Andreas; Kayser, Manfred

    2014-02-01

    Attempts to detect genetic population substructure in humans are troubled by the fact that the vast majority of the total amount of observed genetic variation is present within populations rather than between populations. Here we introduce a new algorithm for transforming a genetic distance matrix that reduces the within-population variation considerably. Extensive computer simulations revealed that the transformed matrix captured the genetic population differentiation better than the original one which was based on the T1 statistic. In an empirical genomic data set comprising 2,457 individuals from 23 different European subpopulations, the proportion of individuals that were determined as a genetic neighbour to another individual from the same sampling location increased from 25% with the original matrix to 52% with the transformed matrix. Similarly, the percentage of genetic variation explained between populations by means of Analysis of Molecular Variance (AMOVA) increased from 1.62% to 7.98%. Furthermore, the first two dimensions of a classical multidimensional scaling (MDS) using the transformed matrix explained 15% of the variance, compared to 0.7% obtained with the original matrix. Application of MDS with Mclust, SPA with Mclust, and GemTools algorithms to the same dataset also showed that the transformed matrix gave a better association of the genetic clusters with the sampling locations, and particularly so when it was used in the AMOVA framework with a genetic algorithm. Overall, the new matrix transformation introduced here substantially reduces the within population genetic differentiation, and can be broadly applied to methods such as AMOVA to enhance their sensitivity to reveal population substructure. 
We herewith provide a publicly available (http://www.erasmusmc.nl/fmb/resources/GAGA) model-free method for improved genetic population substructure detection that can be applied to human as well as any other species data in future studies relevant to evolutionary biology, behavioural ecology, medicine, and forensics.
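
The classical MDS step applied to such a distance matrix can be sketched as follows. Note this reproduces only the generic Torgerson embedding, not the GAGA matrix transformation itself, which is described in the paper:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) multidimensional scaling: embed points
    in dims dimensions from a pairwise distance matrix D."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dims]             # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# distances between points on a line at 0, 1, 3 are recovered exactly
# (up to sign and translation) by a 1D embedding
D = np.array([[0., 1., 3.],
              [1., 0., 2.],
              [3., 2., 0.]])
X = classical_mds(D, dims=1)
print(X[:, 0])
```

The fraction of variance captured by the leading eigenvalues of B is what the abstract reports when comparing the original and transformed matrices (0.7% versus 15% for the first two dimensions).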

  14. Analysis of a Non-resonant Ultrasonic Levitation Device

    NASA Astrophysics Data System (ADS)

    Andrade, Marco A. B.; Pérez, Nicolás; Adamowski, Julio C.

    In this study, a non-resonant configuration of an ultrasonic levitation device is presented, formed by a small-diameter ultrasonic transducer and a concave reflector. The influence of different levitator parameters on the levitation performance is investigated using a numerical model that combines the Gor'kov theory with a matrix method based on the Rayleigh integral. In contrast with traditional acoustic levitators, the non-resonant ultrasonic levitation device allows the separation distance between the transducer and the reflector to be adjusted continuously, without requiring the separation distance to be set to a multiple of a half-wavelength. It is also demonstrated, both numerically and experimentally, that the levitating particle can be manipulated by maintaining the transducer in a fixed position in space and moving the reflector with respect to the transducer.

  15. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In classic methods, the camera parameters are usually calculated and optimized using the reprojection error. However, for a system designed for 3D optical measurement, this error does not directly reflect the quality of 3D reconstruction. The presented method uses a planar calibration plate. First, images of the calibration plate are captured from several orientations within the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method, and the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of 3D reconstruction and is thus better suited for assessing dual-camera calibration. The experiments show that the proposed method is convenient and accurate: there is no strict requirement on the calibration plate position during calibration, and accuracy is improved significantly.

  16. Effects of landscape matrix on population connectivity of an arboreal mammal, Petaurus breviceps.

    PubMed

    Malekian, Mansoureh; Cooper, Steven J B; Saint, Kathleen M; Lancaster, Melanie L; Taylor, Andrea C; Carthew, Susan M

    2015-09-01

    Ongoing habitat loss and fragmentation is considered a threat to biodiversity as it can create small, isolated populations that are at increased risk of extinction. Tree-dependent species are predicted to be highly sensitive to forest and woodland loss and fragmentation, but few studies have tested the influence of different types of landscape matrix on gene flow and population structure of arboreal species. Here, we examine the effects of landscape matrix on population structure of the sugar glider (Petaurus breviceps) in a fragmented landscape in southeastern South Australia. We collected 250 individuals across 12 native Eucalyptus forest remnants surrounded by cleared agricultural land or exotic Pinus radiata plantations and a large continuous eucalypt forest. Fifteen microsatellite loci were genotyped and analyzed to infer levels of population differentiation and dispersal. Genetic differentiation among most forest patches was evident. We found evidence for female philopatry and restricted dispersal distances for females relative to males, suggesting there is male-biased dispersal. Among the environmental variables, spatial variables including geographic location, minimum distance to neighboring patch, and degree of isolation were the most important in explaining genetic variation. The permeability of a cleared agricultural matrix to dispersing gliders was significantly higher than that of a pine matrix, with the gliders dispersing shorter distances across the latter. Our results added to previous findings for other species of restricted dispersal and connectivity due to habitat fragmentation in the same region, providing valuable information for the development of strategies to improve the connectivity of populations in the future.

  17. Lexical evolution rates derived from automated stability measures

    NASA Astrophysics Data System (ADS)

    Petroni, Filippo; Serva, Maurizio

    2010-03-01

    Phylogenetic trees can be reconstructed from the matrix which contains the distances between all pairs of languages in a family. Recently, we proposed a new method which uses normalized Levenshtein distances among words with the same meaning and averages over all the items of a given list. Decisions about the number of items in the input lists for language comparison have been debated since the beginning of glottochronology. The point is that words associated with some of the meanings have a rapid lexical evolution. Therefore, a large vocabulary comparison is only apparently more accurate than a smaller one, since many of the words do not carry any useful information. In principle, one should find the optimal length of the input lists, studying the stability of the different items. In this paper we tackle the problem with an automated methodology based only on our normalized Levenshtein distance. With this approach, the program of an automated reconstruction of language relationships is completed.
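    The averaged normalized Levenshtein distance described above can be sketched in a few lines; the two word lists below are toy examples invented for illustration, not the authors' data.

```python
# Sketch of the normalized Levenshtein distance between two aligned word
# lists (the word lists here are hypothetical, not the paper's data).

def levenshtein(a, b):
    """Classic edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def normalized_levenshtein(a, b):
    """Edit distance divided by the length of the longer word."""
    return levenshtein(a, b) / max(len(a), len(b))

def language_distance(list1, list2):
    """Average normalized distance over items with the same meaning."""
    return sum(normalized_levenshtein(a, b)
               for a, b in zip(list1, list2)) / len(list1)

italian = ["acqua", "cane", "notte"]
spanish = ["agua", "can", "noche"]
print(round(language_distance(italian, spanish), 3))   # 0.35
```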

  18. Clinical evaluation of a collagen matrix to enhance the width of keratinized gingiva around dental implants

    PubMed Central

    Lee, Kang-Ho; Kim, Byung-Ock

    2010-01-01

    Purpose The purpose of this study was to evaluate the effect of collagen matrix with an apically positioned flap (APF) on the width of keratinized gingiva, compared with the results of APF alone and APF combined with free gingival graft (FGG) at the second implant surgery. Methods Nine patients were selected from those who had received treatments at the Department of Periodontics, Chosun University Dental Hospital, Gwangju, Korea. We performed APF alone, APF combined with FGG, and APF combined with collagen matrix coverage, respectively. Clinical evaluation of keratinized gingiva was performed by measuring the distance from the gingival crest to the mucogingival junction at the mid-buccal point, using a periodontal probe before and after the surgery. Results The ratio of increase was 0.3, 0.6, and 0.6 for the three subjects in the APF cases; 3, 5, and 7 for the three in the APF combined with FGG cases; and 1.5, 0.5, and 3 for the three in the APF combined with collagen matrix coverage cases. Conclusions This study suggests that the collagen matrix, when used as a soft tissue substitute with the aim of increasing the width of keratinized tissue or mucosa, was as effective and predictable as the FGG. PMID:20498767

  19. A comparison of human and porcine acellularized dermis: interactions with human fibroblasts in vitro.

    PubMed

    Armour, Alexis D; Fish, Joel S; Woodhouse, Kimberly A; Semple, John L

    2006-03-01

    Dermal substitutes derived from xenograft materials require elaborate processing at considerable cost. Acellularized porcine dermis is a readily available material associated with minimal immunogenicity. The objective of this study was to evaluate acellularized pig dermis as a scaffold for human fibroblasts. In vitro methods were used to evaluate fibroblast adherence, proliferation, and migration on pig acellularized dermal matrix. Acellular human dermis was used as a control. Pig acellularized dermal matrix was found to be inferior to human acellularized dermal matrix as a scaffold for human fibroblasts. Significantly more samples of human acellularized dermal matrix (83 percent, n = 24; p < 0.05) demonstrated fibroblast infiltration below the cell-seeded surface than pig acellularized dermal matrix (31 percent, n = 49). Significantly more (p < 0.05) fibroblasts infiltrated below the surface of human acellularized dermal matrix (mean, 1072 +/- 80 cells per section; n = 16 samples) than pig acellularized dermal matrix (mean, 301 +/- 48 cells per section; n = 16 samples). Fibroblasts migrated a significantly shorter distance (p < 0.05) from the cell-seeded surface in pig acellularized dermal matrix than in human acellularized dermal matrix (78.8 percent versus 38.3 percent of cells within 150 μm of the surface, respectively; n = 5). Fibroblasts proliferated more rapidly (p < 0.05) on pig acellularized dermal matrix (n = 9) than on human acellularized dermal matrix (7.4-fold increase in cell number versus 1.8-fold increase, respectively; n = 9 for human acellularized dermal matrix). There was no difference between the two materials with respect to fibroblast adherence (8120 versus 7436 average adherent cells per section for pig and human acellularized dermal matrix, respectively; n = 20 in each group; p > 0.05).
Preliminary findings suggest that substantial differences may exist between human fibroblast behavior in cell-matrix interactions of porcine and human acellularized dermis.

  20. A Process for Manufacturing Metal-Ceramic Cellular Materials with Designed Mesostructure

    NASA Astrophysics Data System (ADS)

    Snelling, Dean Andrew, Jr.

    The goal of this work is to develop and characterize a manufacturing process that is able to create metal matrix composites (MMCs) with complex cellular geometries. The novel manufacturing method uses two distinct additive manufacturing (AM) processes: i) fabrication of patternless molds for cellular metal castings and ii) printing an advanced cellular ceramic for embedding in a metal matrix. However, while the use of AM greatly improves the freedom in the design of MMCs, it is important to identify the constraints imposed by the process and its process relationships. First, the author investigates potential differences in material properties (microstructure, porosity, mechanical strength) of A356 - T6 castings resulting from two different commercially available Binder Jetting media and traditional "no-bake" silica sand. It was determined that they yielded statistically equivalent results in four of the seven tests performed: dendrite arm spacing, porosity, surface roughness, and tensile strength. They differed in sand tensile strength, hardness, and density. Additionally, two critical sources of process constraints on part geometry are examined: (i) depowdering unbound material from intricate casting channels and (ii) metal flow and solidification distances through complex mold geometries. A Taguchi Design of Experiments is used to determine the relationships of important independent variables for each constraint. For depowdering, a minimum cleaning diameter of 3 mm was determined, along with an equation relating cleaning distance as a function of channel diameter. Furthermore, for metal flow, choke diameter was found to be a statistically significant variable. Finally, the author presents methods to process complex ceramic structures from precursor powders via Binder Jetting AM technology for incorporation into a bonded sand mold and the subsequently cast metal matrix.
    Through sintering experiments, a sintering temperature of 1375°C was established for the ceramic insert (78% cordierite). Upon printing and sintering the ceramic, three-point bend tests showed the MMCs had less strength than the matrix material, likely due to the relatively high porosity developed in the body. Additionally, it was found that the ceramic-metal interface had minimal mechanical interlocking and chemical bonding, limiting the strength of the final MMCs.

  1. Mirroring co-evolving trees in the light of their topologies.

    PubMed

    Hajirasouliha, Iman; Schönhuth, Alexander; de Juan, David; Valencia, Alfonso; Sahinalp, S Cenk

    2012-05-01

    Determining the interaction partners among protein/domain families poses hard computational problems, in particular in the presence of paralogous proteins. Available approaches aim to identify interaction partners among protein/domain families through maximizing the similarity between trimmed versions of their phylogenetic trees. Since maximization of any natural similarity score is computationally difficult, many approaches employ heuristics to evaluate the distance matrices corresponding to the tree topologies in question. In this article, we devise an efficient deterministic algorithm which directly maximizes the similarity between two leaf labeled trees with edge lengths, obtaining a score-optimal alignment of the two trees in question. Our algorithm is significantly faster than those methods based on distance matrix comparison: 1 min on a single processor versus 730 h on a supercomputer. Furthermore, we outperform the current state-of-the-art exhaustive search approach in terms of precision, while incurring acceptable losses in recall. A C implementation of the method demonstrated in this article is available at http://compbio.cs.sfu.ca/mirrort.htm

  2. Radiofrequency exposure on fast patrol boats in the Royal Norwegian Navy--an approach to a dose assessment.

    PubMed

    Baste, Valborg; Mild, Kjell Hansson; Moen, Bente E

    2010-07-01

    Epidemiological studies related to radiofrequency (RF) electromagnetic fields (EMF) have mainly used crude proxies for exposure, such as job titles, distance to, or use of different equipment emitting RF EMF. The Royal Norwegian Navy (RNoN) has measured the RF fields emitted by high-frequency antennas and radars at several spots where the crew would most likely be located aboard fast patrol boats (FPB). These boats are small, with short distances between the crew and the equipment emitting RF fields. We describe the measured RF exposure aboard FPBs and suggest different methods for calculating total exposure and annual dose: linear and spatial averages, as well as the percentage of the ICNIRP limit and the squared deviation from it. The methods will form the basis of a job exposure matrix in which relative differences in exposure between groups of crew members can be used in further epidemiological studies of reproductive health. 2010 Wiley-Liss, Inc.

  3. Data-driven cluster reinforcement and visualization in sparsely-matched self-organizing maps.

    PubMed

    Manukyan, Narine; Eppstein, Margaret J; Rizzo, Donna M

    2012-05-01

    A self-organizing map (SOM) is a self-organized projection of high-dimensional data onto a typically 2-dimensional (2-D) feature map, wherein vector similarity is implicitly translated into topological closeness in the 2-D projection. However, when there are more neurons than input patterns, it can be challenging to interpret the results, due to diffuse cluster boundaries and limitations of current methods for displaying interneuron distances. In this brief, we introduce a new cluster reinforcement (CR) phase for sparsely-matched SOMs. The CR phase amplifies within-cluster similarity in an unsupervised, data-driven manner. Discontinuities in the resulting map correspond to between-cluster distances and are stored in a boundary (B) matrix. We describe a new hierarchical visualization of cluster boundaries displayed directly on feature maps, which requires no further clustering beyond what was implicitly accomplished during self-organization in SOM training. We use a synthetic benchmark problem and previously published microbial community profile data to demonstrate the benefits of the proposed methods.
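    The interneuron-distance displays that the boundary (B) matrix refines can be illustrated with the classical U-matrix computation on a trained SOM weight grid; the 3x3 grid of 2-D weight vectors below is invented for demonstration.

```python
import numpy as np

# Illustrative U-matrix for a SOM weight grid: mean Euclidean distance
# from each neuron's weight vector to its 4-connected grid neighbors.
# Large values mark cluster boundaries. The weight grid is made up.

def u_matrix(weights):
    rows, cols, _ = weights.shape
    U = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            dists = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    dists.append(np.linalg.norm(weights[r, c] - weights[nr, nc]))
            U[r, c] = np.mean(dists)
    return U

w = np.zeros((3, 3, 2))
w[:, 2] = [10.0, 10.0]          # last column far away: a cluster boundary
print(u_matrix(w).round(2))     # boundary shows up between columns 1 and 2
```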

  4. EvolQG - An R package for evolutionary quantitative genetics

    PubMed Central

    Melo, Diogo; Garcia, Guilherme; Hubbe, Alex; Assis, Ana Paula; Marroig, Gabriel

    2016-01-01

    We present an open source package for performing evolutionary quantitative genetics analyses in the R environment for statistical computing. Evolutionary theory shows that evolution depends critically on the available variation in a given population. When dealing with many quantitative traits this variation is expressed in the form of a covariance matrix, particularly the additive genetic covariance matrix or sometimes the phenotypic matrix, when the genetic matrix is unavailable and there is evidence the phenotypic matrix is sufficiently similar to the genetic matrix. Given this mathematical representation of available variation, the EvolQG package provides functions for calculation of relevant evolutionary statistics; estimation of sampling error; corrections for this error; matrix comparison via correlations, distances and matrix decomposition; analysis of modularity patterns; and functions for testing evolutionary hypotheses on taxa diversification. PMID:27785352

  5. Topology of foreign exchange markets using hierarchical structure methods

    NASA Astrophysics Data System (ADS)

    Naylor, Michael J.; Rose, Lawrence C.; Moyle, Brendan J.

    2007-08-01

    This paper uses two physics derived hierarchical techniques, a minimal spanning tree and an ultrametric hierarchical tree, to extract a topological influence map for major currencies from the ultrametric distance matrix for 1995-2001. We find that these two techniques generate a defined and robust scale free network with meaningful taxonomy. The topology is shown to be robust with respect to method, to time horizon and is stable during market crises. This topology, appropriately used, gives a useful guide to determining the underlying economic or regional causal relationships for individual currencies and to understanding the dynamics of exchange rate price determination as part of a complex network.
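    A minimal spanning tree can be read directly off a distance matrix with Prim's algorithm; the 4x4 matrix below is an invented stand-in for the paper's ultrametric currency distances.

```python
import numpy as np

# Minimal-spanning-tree extraction from a symmetric distance matrix via
# Prim's algorithm (a sketch; the distances below are invented, not the
# paper's FX data).

def prim_mst(D):
    """Return MST edges (i, j) for a symmetric distance matrix D."""
    n = len(D)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # cheapest edge leaving the current tree
        i, j = min(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree), key=lambda e: D[e[0]][e[1]])
        edges.append((i, j))
        in_tree.add(j)
    return edges

D = np.array([[0.0, 0.2, 0.9, 0.8],
              [0.2, 0.0, 0.7, 0.9],
              [0.9, 0.7, 0.0, 0.3],
              [0.8, 0.9, 0.3, 0.0]])
print(prim_mst(D))   # [(0, 1), (1, 2), (2, 3)]
```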

  6. `Inter-Arrival Time' Inspired Algorithm and its Application in Clustering and Molecular Phylogeny

    NASA Astrophysics Data System (ADS)

    Kolekar, Pandurang S.; Kale, Mohan M.; Kulkarni-Kale, Urmila

    2010-10-01

    Bioinformatics, being a multidisciplinary field, involves applications of various methods from allied areas of science for data mining using computational approaches. Clustering and molecular phylogeny are key areas in bioinformatics that help in the study of classification and evolution of organisms. Molecular phylogeny algorithms can be divided into distance-based and character-based methods. Most of these methods depend on pre-alignment of sequences and become computationally intensive as data size increases, and hence demand alternative efficient approaches. The `inter-arrival time distribution' (IATD) is a popular concept in the theory of stochastic system modeling, but its potential in molecular data analysis has not been fully explored. The present study reports an application of IATDs in bioinformatics for clustering and molecular phylogeny. The proposed method computes the IATDs of nucleotides in genomic sequences. A distance function based on statistical parameters of the IATDs is proposed, and the distance matrix thus obtained is used for clustering and molecular phylogeny. The method is applied to a dataset of 3' non-coding region (NCR) sequences of Dengue virus type 3 (DENV-3), subtype III, reported in 2008. The phylogram thus obtained revealed the geographical distribution of DENV-3 isolates. Sri Lankan DENV-3 isolates were further observed to cluster in two sub-clades corresponding to pre- and post-Dengue hemorrhagic fever emergence groups. These results are consistent with those reported earlier, which were obtained using pre-aligned sequence data as input. These findings encourage applications of the IATD-based method in molecular phylogenetic analysis in particular and data mining in general.
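    A simplified sketch of the IATD idea: collect the gaps between successive occurrences of each nucleotide, summarize them statistically, and compare sequences by the distance between their statistics. The choice of mean and standard deviation here is illustrative; the paper's exact statistical parameters may differ, and the sequences are toy examples.

```python
import statistics

# Inter-arrival-time distance between two DNA sequences: per-nucleotide
# gap statistics compared with a Euclidean distance (simplified sketch).

def inter_arrival_times(seq, base):
    positions = [i for i, ch in enumerate(seq) if ch == base]
    return [b - a for a, b in zip(positions, positions[1:])]

def iatd_features(seq):
    feats = []
    for base in "ACGT":
        iat = inter_arrival_times(seq, base)
        mean = statistics.mean(iat) if iat else 0.0
        sd = statistics.pstdev(iat) if len(iat) > 1 else 0.0
        feats += [mean, sd]
    return feats

def iatd_distance(s1, s2):
    f1, f2 = iatd_features(s1), iatd_features(s2)
    return sum((a - b) ** 2 for a, b in zip(f1, f2)) ** 0.5

print(round(iatd_distance("ACGTACGTACGT", "AACCGGTTAACC"), 3))   # 6.0
```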

  7. Efficient dual approach to distance metric learning.

    PubMed

    Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton

    2014-02-01

    Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst-case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can only practically solve problems with fewer than a few thousand variables. Because the number of variables is D(D+1)/2, this implies a practical limit of around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius norm regularized SDP problems approximately.
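    Independent of how the matrix is learned, the quadratic Mahalanobis metric itself is simple to evaluate; the PSD matrix below is hand-picked rather than learned, and merely illustrates the D(D+1)/2 free entries mentioned above.

```python
import numpy as np

# Evaluating a quadratic Mahalanobis metric under a given PSD matrix M.
# The toy M is hand-picked, not learned; for D = 2 a symmetric matrix
# has D*(D+1)/2 = 3 free entries, which is what metric learning fits.

def mahalanobis(x, y, M):
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ M @ d))

M = np.array([[2.0, 0.5],
              [0.5, 1.0]])                     # symmetric, positive definite

print(mahalanobis([1, 0], [0, 0], M))          # sqrt(2)
print(mahalanobis([1, 0], [0, 0], np.eye(2)))  # identity M: plain Euclidean
```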

  8. High-order distance-based multiview stochastic learning in image classification.

    PubMed

    Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng

    2014-12-01

    How do we find all images in a larger set of images which have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, such as the support vector machine (supervised) and the transductive support vector machine (semi-supervised), are invaluable tools for content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) can be obtained, and how to efficiently utilize them is a challenge. The traditional concatenation scheme, which links the features of different views into one long vector, is inappropriate because each view has its own statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with existing strategies, our approach adopts the high-order distance obtained from a hypergraph to replace pairwise distance in estimating the probability matrix of the data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternating optimization is designed to solve the objective function of HD-MSL and obtain the view combination coefficients and classification scores simultaneously. Experiments on two real-world datasets demonstrate the effectiveness of HD-MSL in image classification.

  9. Computing the shape of brain networks using graph filtration and Gromov-Hausdorff metric.

    PubMed

    Lee, Hyekyoung; Chung, Moo K; Kang, Hyejin; Kim, Boong-Nyun; Lee, Dong Soo

    2011-01-01

    The difference between networks has been often assessed by the difference of global topological measures such as the clustering coefficient, degree distribution and modularity. In this paper, we introduce a new framework for measuring the network difference using the Gromov-Hausdorff (GH) distance, which is often used in shape analysis. In order to apply the GH distance, we define the shape of the brain network by piecing together the patches of locally connected nearest neighbors using the graph filtration. The shape of the network is then transformed to an algebraic form called the single linkage matrix. The single linkage matrix is subsequently used in measuring network differences using the GH distance. As an illustration, we apply the proposed framework to compare the FDG-PET based functional brain networks out of 24 attention deficit hyperactivity disorder (ADHD) children, 26 autism spectrum disorder (ASD) children and 11 pediatric control subjects.
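    The single linkage matrix can be computed as minimax path distances over the network, here via a bottleneck variant of Floyd-Warshall; the 4-point distance matrix is invented, and the half-of-maximum-entrywise-gap bound shown is a commonly used Gromov-Hausdorff lower bound, not necessarily the paper's exact estimator.

```python
# Single linkage matrix sketch: entry (i, j) holds the minimax path
# distance (smallest possible maximum hop on any path from i to j),
# computed with a bottleneck variant of Floyd-Warshall. A simple GH
# lower bound between two networks is then half the largest entrywise
# gap between their single linkage matrices. Data below is invented.

def single_linkage_matrix(D):
    n = len(D)
    S = [row[:] for row in D]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                S[i][j] = min(S[i][j], max(S[i][k], S[k][j]))
    return S

def gh_bound(S1, S2):
    return 0.5 * max(abs(a - b)
                     for r1, r2 in zip(S1, S2) for a, b in zip(r1, r2))

D1 = [[0, 1, 4, 5],
      [1, 0, 2, 6],
      [4, 2, 0, 3],
      [5, 6, 3, 0]]
S1 = single_linkage_matrix(D1)
print(S1[0][3])   # 3: path 0-1-2-3 has max hop 3, beating the direct 5
```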

  10. Calculation of electronic coupling matrix elements for ground and excited state electron transfer reactions: Comparison of the generalized Mulliken-Hush and block diagonalization methods

    NASA Astrophysics Data System (ADS)

    Cave, Robert J.; Newton, Marshall D.

    1997-06-01

    Two independent methods are presented for the nonperturbative calculation of the electronic coupling matrix element (Hab) for electron transfer reactions using ab initio electronic structure theory. The first is based on the generalized Mulliken-Hush (GMH) model, a multistate generalization of the Mulliken Hush formalism for the electronic coupling. The second is based on the block diagonalization (BD) approach of Cederbaum, Domcke, and co-workers. Detailed quantitative comparisons of the two methods are carried out based on results for (a) several states of the system Zn2OH2+ and (b) the low-lying states of the benzene-Cl atom complex and its contact ion pair. Generally good agreement between the two methods is obtained over a range of geometries. Either method can be applied at an arbitrary nuclear geometry and, as a result, may be used to test the validity of the Condon approximation. Examples of nonmonotonic behavior of the electronic coupling as a function of nuclear coordinates are observed for Zn2OH2+. Both methods also yield a natural definition of the effective distance (rDA) between donor (D) and acceptor (A) sites, in contrast to earlier approaches which required independent estimates of rDA, generally based on molecular structure data.

  11. Specification of matrix cleanup goals in fractured porous media.

    PubMed

    Rodríguez, David J; Kueper, Bernard H

    2013-01-01

    Semianalytical transient solutions have been developed to evaluate what level of fractured porous media (e.g., bedrock or clay) matrix cleanup must be achieved in order to bring fracture pore water concentrations into compliance within a specified time at specified locations of interest. The developed mathematical solutions account for forward and backward diffusion in a fractured porous medium where the initial condition comprises a spatially uniform, nonzero matrix concentration throughout the domain. Illustrative simulations incorporating the properties of fractured mudstone bedrock demonstrate that the time required to reach a desired fracture pore water concentration is a function of the distance between the point of compliance and the upgradient face of the domain where clean groundwater is inflowing. Shorter distances correspond to reduced times required to reach compliance, implying that shorter treatment zones will respond more favorably to remediation than longer treatment zones in which back-diffusion dominates the fracture pore water response. For a specified matrix cleanup goal, compliance of fracture pore water concentrations will be reached sooner for decreased fracture spacing, increased fracture aperture, higher matrix fraction organic carbon, lower matrix porosity, shorter aqueous phase decay half-life, and a higher hydraulic gradient. The parameters dominating the response of the system can be measured using standard field and laboratory techniques. © 2012, The Author(s). Ground Water © 2012, National Ground Water Association.

  12. Two-dimensional PCA-based human gait identification

    NASA Astrophysics Data System (ADS)

    Chen, Jinyan; Wu, Rongteng

    2012-11-01

    Automatically recognizing people through visual surveillance is necessary for public security. Human gait-based identification focuses on recognizing a person automatically from walking video using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper, a human gait identification method based on two-dimensional principal component analysis (2DPCA) and temporal-space analysis is proposed. Using background estimation and image subtraction, we obtain a sequence of binary images from the surveillance video. By comparing each pair of adjacent images in the gait sequence, we obtain a sequence of binary difference images. Each binary difference image indicates the body's motion pattern as a person walks. We extract the temporal-space features from the difference image sequence as follows: projecting one difference image onto the Y axis and the X axis yields two vectors; projecting every difference image in the sequence in this way yields two matrices. These two matrices characterize the style of one walk. 2DPCA is then used to transform these two matrices into two vectors while preserving the maximum separability. Finally, the similarity of two human gaits is calculated as the Euclidean distance between the two vectors. The performance of our method is illustrated using the CASIA Gait Database.
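    The projection step can be sketched as follows, with tiny random binary frames standing in for real silhouettes and the 2DPCA compression omitted for brevity.

```python
import numpy as np

# Sketch of the temporal-space feature extraction: adjacent binary frames
# are differenced, each difference image is projected onto the Y axis
# (row sums), and the projections are stacked into one feature matrix.
# Two walks are then compared by Euclidean distance. The 2DPCA step of
# the paper is omitted, and the random "silhouettes" are stand-ins.

rng = np.random.default_rng(0)

def diff_sequence(frames):
    """Absolute difference of adjacent binary frames."""
    return [np.abs(frames[t + 1] - frames[t]) for t in range(len(frames) - 1)]

def y_projection_matrix(frames):
    """Stack per-frame row-sum projections into one feature matrix."""
    return np.stack([d.sum(axis=1) for d in diff_sequence(frames)])

walk_a = [rng.integers(0, 2, (8, 6)) for _ in range(5)]   # 5 frames, 8x6
walk_b = [rng.integers(0, 2, (8, 6)) for _ in range(5)]

fa, fb = y_projection_matrix(walk_a), y_projection_matrix(walk_b)
print(fa.shape)                           # (4, 8): 4 diffs, 8 row sums each
print(float(np.linalg.norm(fa - fb)))     # Euclidean gait distance
```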

  13. The Evolution of Interfacial Sliding Stresses During Cyclic Push-in Testing of C- and BN-Coated Hi-Nicalon Fiber-Reinforced CMCs

    NASA Technical Reports Server (NTRS)

    Eldridge, J. I.; Bansal, N. P.; Bhatt, R. T.

    1998-01-01

    Interfacial debond cracks and fiber/matrix sliding stresses in ceramic matrix composites (CMCs) can evolve under cyclic fatigue conditions as well as with changes in the environment, strongly affecting the crack growth behavior, and therefore, the useful service lifetime of the composite. In this study, room temperature cyclic fiber push-in testing was applied to monitor the evolution of frictional sliding stresses and fiber sliding distances with continued cycling in both C- and BN-coated Hi-Nicalon SiC fiber-reinforced CMCs. A SiC matrix composite reinforced with C-coated Hi-Nicalon fibers as well as barium strontium aluminosilicate (BSAS) matrix composites reinforced with BN-coated (four different deposition processes compared) Hi-Nicalon fibers were examined. For failure at a C interface, test results indicated progressive increases in fiber sliding distances during cycling in room air but not in nitrogen. These results suggest the presence of moisture will promote crack growth when interfacial failure occurs at a C interface. While short-term testing environmental effects were not apparent for failure at the BN interfaces, long-term exposure of partially debonded BN-coated fibers to humid air resulted in large increases in fiber sliding distances and decreases in interfacial sliding stresses for all the BN coatings, presumably due to moisture attack. A wide variation was observed in debond and frictional sliding stresses among the different BN coatings.

  14. How the distance between regional and human mobility behavior affect the epidemic spreading

    NASA Astrophysics Data System (ADS)

    Wu, Minna; Han, She; Sun, Mei; Han, Dun

    2018-02-01

    The distance between regions strongly affects individuals' mobility behavior, and individuals' mobility in turn greatly affects the way an epidemic propagates. By studying individuals' mobility behavior, we establish a coupled dynamic model of individual mobility and the transmission of infectious disease. The basic reproduction number is obtained theoretically using the next-generation matrix method. The study shows that reaching the stable state of the epidemic system takes longer under a higher commuting level, and that the infection density becomes almost the same across regions over a sufficiently long time. The results show that, due to individual movement, the origin of the virus can only speed up or delay the outbreak of infectious disease; it has little impact on the final infection size.
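    The next-generation matrix calculation can be illustrated on a plain SEIR model rather than the paper's coupled mobility model; the rates below are arbitrary.

```python
import numpy as np

# Basic reproduction number via the next-generation matrix method,
# sketched for a simple SEIR model (not the paper's coupled mobility
# model; rates beta, sigma, gamma are arbitrary). F holds new-infection
# terms and V holds transition terms, both linearized at the
# disease-free equilibrium; R0 is the spectral radius of F V^-1.

beta, sigma, gamma = 0.5, 0.2, 0.1

F = np.array([[0.0, beta],       # new infections enter E via contact with I
              [0.0, 0.0]])
V = np.array([[sigma, 0.0],      # E leaves at rate sigma ...
              [-sigma, gamma]])  # ... into I, which recovers at rate gamma

R0 = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
print(round(float(R0), 3))       # 5.0, i.e. beta/gamma for this SEIR model
```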

  15. Relationships among cloud occurrence frequency, overlap, and effective thickness derived from CALIPSO and CloudSat merged cloud vertical profiles

    NASA Astrophysics Data System (ADS)

    Kato, Seiji; Sun-Mack, Sunny; Miller, Walter F.; Rose, Fred G.; Chen, Yan; Minnis, Patrick; Wielicki, Bruce A.

    2010-01-01

    A cloud frequency of occurrence matrix is generated using merged cloud vertical profiles derived from the satellite-borne Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and cloud profiling radar. The matrix contains vertical profiles of cloud occurrence frequency as a function of the uppermost cloud top. It is shown that the cloud fraction and uppermost cloud top vertical profiles can be related by a cloud overlap matrix when the correlation length of cloud occurrence, which is interpreted as an effective cloud thickness, is introduced. The underlying assumption in establishing the above relation is that cloud overlap approaches random overlap with increasing distance separating cloud layers and that the probability of deviating from random overlap decreases exponentially with distance. One month of Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) and CloudSat data (July 2006) support these assumptions, although the correlation length sometimes increases with separation distance when the cloud top height is large. The data also show that the correlation length depends on cloud top height, and the maximum occurs when the cloud top height is 8 to 10 km. The cloud correlation length is equivalent to the decorrelation distance introduced by Hogan and Illingworth (2000) when cloud fractions of both layers in a two-cloud layer system are the same. The simple relationships derived in this study can be used to estimate the top-of-atmosphere irradiance difference caused by cloud fraction, uppermost cloud top, and cloud thickness vertical profile differences.
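    The exponential decay of overlap toward randomness can be sketched with the Hogan and Illingworth (2000) mixing rule; the layer covers and the 2 km decorrelation length below are illustrative values, not the CALIPSO/CloudSat estimates.

```python
import math

# Exponential-decorrelation overlap sketch: combined cover of two cloud
# layers interpolates between maximum and random overlap with weight
# exp(-separation / decorrelation_length), after Hogan and Illingworth
# (2000). Covers and the 2 km length are illustrative, not the paper's.

def combined_cover(c1, c2, separation_km, decorr_km):
    alpha = math.exp(-separation_km / decorr_km)
    c_max = max(c1, c2)               # maximum overlap limit
    c_rand = c1 + c2 - c1 * c2        # random overlap limit
    return alpha * c_max + (1 - alpha) * c_rand

near = combined_cover(0.4, 0.4, 0.5, 2.0)    # nearby layers: near-maximum
far = combined_cover(0.4, 0.4, 10.0, 2.0)    # distant layers: near-random
print(round(near, 3), round(far, 3))
```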

  16. A hemagglutinating variant of Prevotella melaninogenica isolated from the oral cavity.

    PubMed

    Haraldsson, G; Holbrook, W P

    1998-12-01

    Strains resembling Prevotella melaninogenica were isolated from healthy subjects and patients with periodontal disease and were identified using: a 5-test phenotypic screen; commercial identification kits; and a 16S rRNA-based polymerase chain reaction (PCR) method. Eleven clinical isolates closely resembling P. melaninogenica, all from patients with periodontitis, were able to agglutinate erythrocytes. In the electron microscope, hemagglutinating isolates showed fimbria-like structures that were not seen on non-hemagglutinating isolates. Some strains were further classified with PCR-restriction fragment-length polymorphism (RFLP) of 16S rRNA genes. Amplified 16S rDNA was digested using five different endonucleases, separated with agarose gel electrophoresis, stained and photographed. The photographs were then scanned and digitized, and a distance matrix was calculated using the Dice coefficient, with the presence or absence of a band used as a character. The distance matrix was plotted as a phenogram. At 70% similarity six clusters were seen. Type strains of separate Prevotella species did not fall into any cluster. Hemagglutinating isolates fell into three clusters: four clustered with the type strains of P. melaninogenica and Prevotella veroralis; four with other P. melaninogenica isolates; and two hemagglutinating isolates clustered together with Prevotella loescheii. The PCR-RFLP results showed that the hemagglutinating strains did not form a homogeneous group within the Prevotella genus.
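    The Dice-based distance matrix construction can be sketched by coding each gel lane as a set of band positions; the band sets below are invented, not the study's gels.

```python
# Sketch of the banding-pattern comparison: each RFLP gel lane is coded
# as a presence/absence set of band positions, pairwise similarity is
# the Dice coefficient, and distance = 1 - similarity. Band sets are
# invented for illustration.

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b))

lanes = {
    "isolate1": {100, 250, 400, 700},
    "isolate2": {100, 250, 420, 700},      # shares 3 of 4 bands with isolate1
    "type_strain": {150, 300, 500},        # no bands in common
}

names = list(lanes)
dist = [[round(1 - dice(lanes[x], lanes[y]), 2) for y in names] for x in names]
for name, row in zip(names, dist):
    print(name, row)
```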

  17. A study of acoustic-to-articulatory inversion of speech by analysis-by-synthesis using chain matrices and the Maeda articulatory model

    PubMed Central

    Panchapagesan, Sankaran; Alwan, Abeer

    2011-01-01

    In this paper, a quantitative study of acoustic-to-articulatory inversion for vowel speech sounds by analysis-by-synthesis using the Maeda articulatory model is performed. For chain matrix calculation of vocal tract (VT) acoustics, the chain matrix derivatives with respect to area function are calculated and used in a quasi-Newton method for optimizing articulatory trajectories. The cost function includes a distance measure between natural and synthesized first three formants, and parameter regularization and continuity terms. Calibration of the Maeda model to two speakers, one male and one female, from the University of Wisconsin x-ray microbeam (XRMB) database, using a cost function, is discussed. Model adaptation includes scaling the overall VT and the pharyngeal region and modifying the outer VT outline using measured palate and pharyngeal traces. The inversion optimization is initialized by a fast search of an articulatory codebook, which was pruned using XRMB data to improve inversion results. Good agreement between estimated midsagittal VT outlines and measured XRMB tongue pellet positions was achieved for several vowels and diphthongs for the male speaker, with average pellet-VT outline distances around 0.15 cm, smooth articulatory trajectories, and less than 1% average error in the first three formants. PMID:21476670
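    A cost function of the kind described, combining a distance between natural and synthesized formants with parameter regularization and continuity terms, might look like the following sketch; the squared-relative-distance form and the weights `lam_reg` and `lam_cont` are illustrative assumptions, not the paper's exact definition.

    ```python
    def inversion_cost(F_nat, F_syn, p, p_prev, lam_reg=0.01, lam_cont=0.1):
        """Analysis-by-synthesis cost: squared relative distance between the
        first three natural and synthesized formants, plus a regularization
        term on the articulatory parameters p and a continuity term tying p
        to the previous frame's parameters p_prev."""
        formant_term = sum(((fn - fs) / fn) ** 2 for fn, fs in zip(F_nat, F_syn))
        reg_term = lam_reg * sum(x * x for x in p)
        cont_term = lam_cont * sum((x - y) ** 2 for x, y in zip(p, p_prev))
        return formant_term + reg_term + cont_term
    ```

    A quasi-Newton optimizer then minimizes this cost frame by frame over the articulatory parameters.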

  18. Comparative Issues and Methods in Organizational Diagnosis. Report II. The Decision Tree Approach.

    DTIC Science & Technology

    organizational diagnosis. The advantages and disadvantages of the decision-tree approach generally, and in this study specifically, are examined. A pre-test, using a civilian sample of 174 work groups with Survey of Organizations data, was conducted to assess various decision-tree classification criteria, in terms of their similarity to the distance function used by Bowers and Hausser (1977). The results suggested the use of a large developmental sample, which should result in more distinctly defined boundary lines between classification profiles. Also, the decision matrix

  19. A Comparison of Accuracy of Matrix Impression System with Putty Reline Technique and Multiple Mix Technique: An In Vitro Study

    PubMed Central

    Kumar, M Praveen; Patil, Suneel G; Dheeraj, Bhandari; Reddy, Keshav; Goel, Dinker; Krishna, Gopi

    2015-01-01

    Background: The difficulty in obtaining an acceptable impression increases exponentially as the number of abutments increases. Accuracy of the impression material and the use of a suitable impression technique are of utmost importance in the fabrication of a fixed partial denture. This study compared the accuracy of the matrix impression system with conventional putty reline and multiple mix technique for individual dies by comparing the inter-abutment distance in the casts obtained from the impressions. Materials and Methods: Three groups, 10 impressions each with three impression techniques (matrix impression system, putty reline technique and multiple mix technique) were made of a master die. Typodont teeth were embedded in a maxillary frasaco model base. The left first premolar was removed to create a three-unit fixed partial denture situation and the left canine and second premolar were prepared conservatively, and hatch marks were made on the abutment teeth. The final casts obtained from the impressions were examined under a profile projector and the inter-abutment distance was calculated for all the casts and compared. Results: The results from this study showed that in the mesiodistal dimensions the percentage deviation from master model in Group I was 0.1 and 0.2, in Group II was 0.9 and 0.3, and Group III was 1.6 and 1.5, respectively. In the labio-palatal dimensions the percentage deviation from master model in Group I was 0.01 and 0.4, Group II was 1.9 and 1.3, and Group III was 2.2 and 2.0, respectively. In the cervico-incisal dimensions the percentage deviation from the master model in Group I was 1.1 and 0.2, Group II was 3.9 and 1.7, and Group III was 1.9 and 3.0, respectively. In the inter-abutment dimension of dies, percentage deviation from master model in Group I was 0.1, Group II was 0.6, and Group III was 1.0. 
Conclusion: The matrix impression system showed more accuracy of reproduction for individual dies when compared with putty reline technique and multiple mix technique in all the three directions, as well as the inter-abutment distance. PMID:26124599

  20. Prediction of fatigue-related driver performance from EEG data by deep Riemannian model.

    PubMed

    Hajinoroozi, Mehdi; Jianqiu Zhang; Yufei Huang

    2017-07-01

    Prediction of the drivers' drowsy and alert states is important for safety purposes. The prediction of drivers' drowsy and alert states from electroencephalography (EEG) using shallow and deep Riemannian methods is presented. For shallow Riemannian methods, the minimum distance to Riemannian mean (mdm) and the Log-Euclidean metric are investigated, and it is shown that the Log-Euclidean metric outperforms the mdm algorithm. In addition, SPDNet, a deep Riemannian model that takes the EEG covariance matrix as its input, is investigated. It is shown that SPDNet outperforms all tested shallow and deep classification methods. Performance of SPDNet is 6.02% and 2.86% higher than the best performance of the conventional Euclidean classifiers and the shallow Riemannian models, respectively.
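    The Log-Euclidean distance between two symmetric positive-definite (SPD) covariance matrices is the Frobenius norm of the difference of their matrix logarithms. A minimal NumPy sketch, with the eigendecomposition-based `spd_logm` helper as an assumption (valid only for symmetric positive-definite inputs):

    ```python
    import numpy as np

    def spd_logm(A):
        """Matrix logarithm of a symmetric positive-definite matrix via
        eigendecomposition: log(A) = V diag(log w) V^T."""
        w, V = np.linalg.eigh(A)
        return (V * np.log(w)) @ V.T

    def log_euclidean_distance(A, B):
        """Log-Euclidean distance between SPD matrices: ||log A - log B||_F."""
        return np.linalg.norm(spd_logm(A) - spd_logm(B))
    ```

    A nearest-mean classifier under this metric averages the matrix logarithms per class and assigns each trial's covariance matrix to the class with the smallest such distance.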

  1. Semi-automated brain tumor segmentation on multi-parametric MRI using regularized non-negative matrix factorization.

    PubMed

    Sauwen, Nicolas; Acou, Marjan; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Huffel, Sabine Van

    2017-05-04

    Segmentation of gliomas in multi-parametric (MP-)MR images is challenging due to their heterogeneous nature in terms of size, appearance and location. Manual tumor segmentation is a time-consuming task and clinical practice would benefit from (semi-)automated segmentation of the different tumor compartments. We present a semi-automated framework for brain tumor segmentation based on non-negative matrix factorization (NMF) that does not require prior training of the method. L1-regularization is incorporated into the NMF objective function to promote spatial consistency and sparseness of the tissue abundance maps. The pathological sources are initialized through user-defined voxel selection. Knowledge about the spatial location of the selected voxels is combined with tissue adjacency constraints in a post-processing step to enhance segmentation quality. The method is applied to an MP-MRI dataset of 21 high-grade glioma patients, including conventional, perfusion-weighted and diffusion-weighted MRI. To assess the effect of using MP-MRI data and the L1-regularization term, analyses are also run using only conventional MRI and without L1-regularization. Robustness against user input variability is verified by considering the statistical distribution of the segmentation results when repeatedly analyzing each patient's dataset with a different set of random seeding points. Using L1-regularized semi-automated NMF segmentation, mean Dice-scores of 65%, 74% and 80% are found for active tumor, the tumor core and the whole tumor region. Mean Hausdorff distances of 6.1 mm, 7.4 mm and 8.2 mm are found for active tumor, the tumor core and the whole tumor region. Lower Dice-scores and higher Hausdorff distances are found without L1-regularization and when only considering conventional MRI data. Based on the mean Dice-scores and Hausdorff distances, segmentation results are competitive with the state of the art in the literature.
Robust results were found for most patients, although careful voxel selection is mandatory to avoid sub-optimal segmentation.
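    As a generic illustration of L1-regularized NMF (not the paper's exact objective, which additionally uses spatial and adjacency constraints), multiplicative updates with an L1 penalty on the abundance matrix H can be sketched as:

    ```python
    import numpy as np

    def l1_nmf(X, r, lam=0.1, iters=200, seed=0):
        """Multiplicative-update NMF minimizing ||X - WH||_F^2 + lam*||H||_1.

        The L1 penalty on H enters the denominator of the H update,
        shrinking small abundances toward zero (sparseness)."""
        rng = np.random.default_rng(seed)
        m, n = X.shape
        W = rng.random((m, r)) + 1e-4
        H = rng.random((r, n)) + 1e-4
        for _ in range(iters):
            H *= (W.T @ X) / (W.T @ W @ H + lam + 1e-12)
            W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
        return W, H
    ```

    In the segmentation setting, the columns of X hold the multi-parametric voxel features, W the tissue-specific spectral sources, and H the per-voxel tissue abundances that are thresholded into compartments.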

  2. Quantum mechanical/molecular mechanical/continuum style solvation model: second order Møller-Plesset perturbation theory.

    PubMed

    Thellamurege, Nandun M; Si, Dejun; Cui, Fengchao; Li, Hui

    2014-05-07

    A combined quantum mechanical/molecular mechanical/continuum (QM/MM/C) style second order Møller-Plesset perturbation theory (MP2) method that incorporates an induced dipole polarizable force field and an induced surface charge continuum solvation model is established. The Z-vector method is modified to include induced dipoles and induced surface charges to determine the MP2 response density matrix, which can be used to evaluate MP2 properties. In particular, the analytic nuclear gradient is derived and implemented for this method. Using the Assisted Model Building with Energy Refinement induced dipole polarizable protein force field, the QM/MM/C style MP2 method is used to study the hydrogen bonding distances and strengths of the photoactive yellow protein chromophore in the wild type and the Glu46Gln mutant.

  3. A Forward Search Procedure for Identifying Influential Observations in the Estimation of a Covariance Matrix

    ERIC Educational Resources Information Center

    Poon, Wai-Yin; Wong, Yuen-Kwan

    2004-01-01

    This study uses a Cook's distance type diagnostic statistic to identify unusual observations in a data set that unduly influence the estimation of a covariance matrix. Similar to many other deletion-type diagnostic statistics, this proposed measure is susceptible to masking or swamping effect in the presence of several unusual observations. In…
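    A deletion diagnostic in this spirit can be sketched as the Frobenius distance between the covariance matrix estimated with and without each observation. This is an illustrative stand-in, not the paper's exact statistic, and as a single-deletion measure it shares the masking/swamping weakness noted in the abstract.

    ```python
    import numpy as np

    def covariance_influence(X):
        """Leave-one-out influence of each row of X (observations x variables)
        on the sample covariance: Frobenius distance between the full
        covariance and the covariance with that observation deleted."""
        S = np.cov(X, rowvar=False)
        influences = []
        for i in range(X.shape[0]):
            S_i = np.cov(np.delete(X, i, axis=0), rowvar=False)
            influences.append(np.linalg.norm(S - S_i))
        return np.array(influences)
    ```

    Observations with outlying influence values are candidates for closer inspection; a forward search instead grows a clean subset so that several joint outliers cannot mask one another.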

  4. Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.

    PubMed

    Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng

    2013-01-01

    Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) into the product of two lower-rank nonnegative factor matrices, i.e., W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF, which accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory; and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. The preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence based GNMF on two popular face image datasets, ORL and PIE, and two text corpora, Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with representative GNMF solvers.

  5. Calculating Path-Dependent Travel Time Prediction Variance and Covariance for the SALSA3D Global Tomographic P-Velocity Model with a Distributed Parallel Multi-Core Computer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.

    2011-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G which includes Tikhonov regularization terms) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of GTG which is subsequently inverted. Next, we employ OOC matrix multiply methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix we solve for the travel-time covariance associated with arbitrary ray-paths by integrating the model covariance along both ray paths. Setting the paths equal gives variance for that path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  6. Path-Dependent Travel Time Prediction Variance and Covariance for a Global Tomographic P- and S-Velocity Model

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Ballard, S.; Begnaud, M. L.; Encarnacao, A. V.; Young, C. J.; Phillips, W. S.

    2015-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P- and S-velocity model (SALSA3D) that provides superior first P and first S travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from our latest tomographic model. Typical global 3D SALSA3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes a prior model covariance constraint) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of GTG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray-paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel time prediction uncertainty for the single path.
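    Given the model covariance matrix C and the sensitivity (path-weight) vectors of two rays through the model nodes, the travel-time covariance reduces to a quadratic form. A minimal sketch with assumed names (the OOC blocked machinery described above is precisely what makes this product feasible at 1/2-million-node scale):

    ```python
    import numpy as np

    def travel_time_covariance(w1, w2, C):
        """Covariance of two ray-path travel-time predictions, given the model
        covariance matrix C and each ray's sensitivity vector over the model
        nodes: cov = w1^T C w2. With w1 == w2 this is the variance of a single
        path, and its square root is that path's prediction uncertainty."""
        return w1 @ C @ w2
    ```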

  7. Long-distance dispersal of non-native pine bark beetles from host resources

    Treesearch

    Kevin Chase; Dave Kelly; Andrew M. Liebhold; Martin K.-F. Bader; Eckehard G. Brockerhoff

    2017-01-01

    Dispersal and host detection are behaviours promoting the spread of invading populations in a landscape matrix. In fragmented landscapes, the spatial arrangement of habitat structure affects the dispersal success of organisms. The aim of the present study was to determine the long distance dispersal capabilities of two non-native pine bark beetles (Hylurgus...

  8. Radio frequency shielding behaviour of silane treated Fe2O3/E-glass fibre reinforced epoxy hybrid composite

    NASA Astrophysics Data System (ADS)

    Arun prakash, V. R.; Rajadurai, A.

    2016-10-01

    In this work, the radio frequency shielding behaviour of polymer (epoxy) matrices composed of E-glass fibres and Fe2O3 fillers has been studied. The principal aim of this project is to prepare a suitable shielding material for RFID applications. When an RFID unit is pasted on a metal plate without shielding material, the sensing distance is reduced, resulting in a less than useful RFID system. To improve the RF shielding of epoxy, fibres and fillers were utilized. The magnetic behaviour of the epoxy polymer composites was measured by hysteresis (B-H) curves, followed by a radio frequency identifier setup. Fe2O3 particles of sizes 800, 200 and 100 nm and E-glass fibre woven mat of 600 g/m2 were used to make composites. Particles of 800 nm and 200 nm were prepared by high-energy ball milling, whereas particles of 100 nm were prepared by the sol-gel method. To improve dispersion of the particles within the epoxy matrix, a surface modification process was carried out on the fillers with an amino functional coupling agent, 3-Aminopropyltrimethoxysilane (APTMS). Crystalline phases and functional groups of the siliconized Fe2O3 particles were characterized by XRD and FTIR spectroscopy analysis. Variable quantities of E-glass fibre (25, 35, and 45 vol%) were laid down along with 0.5 and 1.0 vol% of 800, 200, and 100 nm Fe2O3 particles in the matrix to fabricate the hybrid composites. Scanning electron microscopy and transmission electron microscopy images reveal the shape and size of the Fe2O3 particles for different milling times and the particle dispersion in the epoxy matrix. Maximum improvements in sensing distance of 45.2, 39.4 and 43.5% were observed for the low-, high-, and ultra-high radio frequency identifier setups with a shielding composite consisting of epoxy, 1 vol% of 200 nm Fe2O3 particles and 45 vol% of E-glass fibre.

  9. Cloud field classification based on textural features

    NASA Technical Reports Server (NTRS)

    Sengupta, Sailes Kumar

    1989-01-01

    An essential component in global climate research is accurate cloud cover and type determination. Of the two approaches to texture-based classification (statistical and structural), only the former is effective in the classification of natural scenes such as land, ocean, and atmosphere. In the statistical approach that was adopted, parameters characterizing the stochastic properties of the spatial distribution of grey levels in an image are estimated and then used as features for cloud classification. Two types of textural measures were used. One is based on the distribution of the grey level difference vector (GLDV), and the other on a set of textural features derived from the MaxMin cooccurrence matrix (MMCM). The GLDV method looks at the difference D of grey levels at pixels separated by a horizontal distance d and computes several statistics based on this distribution. These are then used as features in subsequent classification. The MaxMin textural features, on the other hand, are based on the MMCM, a matrix whose (I,J)th entry gives the relative frequency of occurrences of the grey level pair (I,J) that are consecutive and thresholded local extremes separated by a given pixel distance d. Textural measures are then computed based on this matrix in much the same manner as is done in texture computation using the grey level cooccurrence matrix. The database consists of 37 cloud field scenes from LANDSAT imagery using a near IR visible channel. The classification algorithm used is the well known Stepwise Discriminant Analysis. The overall accuracy was estimated by the percentage of correct classifications in each case. It turns out that both types of classifiers, at their best combination of features, and at any given spatial resolution, give approximately the same classification accuracy. 
A neural network based classifier with a feed forward architecture and a back propagation training algorithm is used to increase the classification accuracy, using these two classes of features. Preliminary results based on the GLDV textural features alone look promising.
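    The GLDV computation described above, histogramming grey-level differences at a horizontal pixel distance d and deriving statistics from the resulting distribution, can be sketched as follows. The specific statistics chosen here (mean, contrast, entropy) are common examples, not necessarily the record's full feature set.

    ```python
    import numpy as np

    def gldv_features(img, d=1):
        """Grey-level difference vector features at horizontal distance d:
        histogram of |I(x, y) - I(x + d, y)| over the image, and simple
        statistics of that distribution used as texture features."""
        diff = np.abs(img[:, d:].astype(int) - img[:, :-d].astype(int))
        counts = np.bincount(diff.ravel())
        p = counts / counts.sum()               # GLDV as a probability mass
        k = np.arange(len(p))
        mean = (k * p).sum()                    # average grey-level difference
        contrast = (k ** 2 * p).sum()           # second moment of differences
        entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
        return mean, contrast, entropy
    ```

    Repeating this for several distances d yields a feature vector per scene for the discriminant analysis.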

  10. Grey situation group decision-making method based on prospect theory.

    PubMed

    Zhang, Na; Fang, Zhigeng; Liu, Xiaqing

    2014-01-01

    This paper puts forward a grey situation group decision-making method based on prospect theory, addressing grey situation group decision-making problems in which decisions are made by multiple decision experts who have risk preferences. The method takes the positive and negative ideal situation distances as reference points, defines positive and negative prospect value functions, and introduces the decision experts' risk preferences into grey situation decision-making so that the final decision better matches the experts' psychological behavior. Based on the TOPSIS method, this paper determines the weight of each decision expert, sets up a comprehensive prospect value matrix for the experts' evaluations, and finally determines the optimal situation. The effectiveness and feasibility of the method are then verified by means of a specific example.
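    The TOPSIS step used to rank alternatives by their closeness to the ideal can be sketched as follows. This is a plain benefit-criteria TOPSIS illustration, not the paper's full prospect-theory procedure, and the names are assumptions.

    ```python
    import numpy as np

    def topsis(decision, weights):
        """Rank alternatives (rows) over benefit criteria (columns) by relative
        closeness to the positive ideal: vector-normalize each column, weight
        it, find the ideal and anti-ideal points, and score each alternative
        by d_neg / (d_pos + d_neg). Higher scores are better."""
        M = decision / np.linalg.norm(decision, axis=0)
        V = M * weights
        pos, neg = V.max(axis=0), V.min(axis=0)      # ideal / anti-ideal
        d_pos = np.linalg.norm(V - pos, axis=1)
        d_neg = np.linalg.norm(V - neg, axis=1)
        return d_neg / (d_pos + d_neg)
    ```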

  11. Grey Situation Group Decision-Making Method Based on Prospect Theory

    PubMed Central

    Zhang, Na; Fang, Zhigeng; Liu, Xiaqing

    2014-01-01

    This paper puts forward a grey situation group decision-making method based on prospect theory, addressing grey situation group decision-making problems in which decisions are made by multiple decision experts who have risk preferences. The method takes the positive and negative ideal situation distances as reference points, defines positive and negative prospect value functions, and introduces the decision experts' risk preferences into grey situation decision-making so that the final decision better matches the experts' psychological behavior. Based on the TOPSIS method, this paper determines the weight of each decision expert, sets up a comprehensive prospect value matrix for the experts' evaluations, and finally determines the optimal situation. The effectiveness and feasibility of the method are then verified by means of a specific example. PMID:25197706

  12. A Poisson nonnegative matrix factorization method with parameter subspace clustering constraint for endmember extraction in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Ma, Jun; Yang, Gang; Du, Bo; Zhang, Liangpei

    2017-06-01

    A new Bayesian method named Poisson Nonnegative Matrix Factorization with Parameter Subspace Clustering Constraint (PNMF-PSCC) has been presented to extract endmembers from Hyperspectral Imagery (HSI). First, the method integrates the linear spectral mixture model with the Bayesian framework and formulates endmember extraction as a Bayesian inference problem. Second, the Parameter Subspace Clustering Constraint (PSCC) is incorporated into the statistical program to consider the clustering of all pixels in the parameter subspace. The PSCC enlarges differences among ground objects and helps find endmembers with smaller spectrum divergences. Meanwhile, the PNMF-PSCC method utilizes the Poisson distribution as the prior knowledge of spectral signals to better explain the quantum nature of light in the imaging spectrometer. Third, the optimization problem of PNMF-PSCC is formulated as maximizing the joint density via the Maximum A Posteriori (MAP) estimator. The program is finally solved by iteratively optimizing two sub-problems via the Alternating Direction Method of Multipliers (ADMM) framework and the FURTHESTSUM initialization scheme. Five state-of-the-art methods are implemented for comparison with the performance of PNMF-PSCC on both synthetic and real HSI datasets. Experimental results show that PNMF-PSCC outperforms all five methods in Spectral Angle Distance (SAD) and Root-Mean-Square Error (RMSE), and in particular it identifies good endmembers for ground objects with smaller spectrum divergences.

  13. Effect of Various Retrogression Regimes on Aging Behavior and Precipitates Characterization of a High Zn-Containing Al-Zn-Mg-Cu Alloy

    NASA Astrophysics Data System (ADS)

    Wen, Kai; Xiong, Baiqing; Zhang, Yongan; Li, Zhihui; Li, Xiwu; Huang, Shuhui; Yan, Lizhen; Yan, Hongwei; Liu, Hongwei

    2018-03-01

    In the present work, the influence of various retrogression treatments on the hardness, electrical conductivity and mechanical properties of a high Zn-containing Al-Zn-Mg-Cu alloy is investigated, and several retrogression regimes yielding the same strength level are proposed. The precipitates are qualitatively investigated by means of transmission electron microscopy (TEM) and high-resolution transmission electron microscopy techniques. Based on the matrix precipitate observations, the distributions of precipitate size and nearest inter-precipitate distance are extracted from bright-field TEM images projected along the <110>Al orientation with the aid of an image analysis and an arithmetic method. The results show that GP zones and η' precipitates are the major precipitates, and that the precipitate size and its distribution range continuously enlarge as the retrogression regime extends to higher temperatures. The ranges of nearest inter-precipitate distance obtained are nearly identical, while the average nearest inter-precipitate distance shows a slight increase. The influence of precipitates on mechanical properties is discussed through the interaction between precipitates and dislocations.

  14. Effect of Various Retrogression Regimes on Aging Behavior and Precipitates Characterization of a High Zn-Containing Al-Zn-Mg-Cu Alloy

    NASA Astrophysics Data System (ADS)

    Wen, Kai; Xiong, Baiqing; Zhang, Yongan; Li, Zhihui; Li, Xiwu; Huang, Shuhui; Yan, Lizhen; Yan, Hongwei; Liu, Hongwei

    2018-05-01

    In the present work, the influence of various retrogression treatments on the hardness, electrical conductivity and mechanical properties of a high Zn-containing Al-Zn-Mg-Cu alloy is investigated, and several retrogression regimes yielding the same strength level are proposed. The precipitates are qualitatively investigated by means of transmission electron microscopy (TEM) and high-resolution transmission electron microscopy techniques. Based on the matrix precipitate observations, the distributions of precipitate size and nearest inter-precipitate distance are extracted from bright-field TEM images projected along the <110>Al orientation with the aid of an image analysis and an arithmetic method. The results show that GP zones and η' precipitates are the major precipitates, and that the precipitate size and its distribution range continuously enlarge as the retrogression regime extends to higher temperatures. The ranges of nearest inter-precipitate distance obtained are nearly identical, while the average nearest inter-precipitate distance shows a slight increase. The influence of precipitates on mechanical properties is discussed through the interaction between precipitates and dislocations.

  15. Real-time inextensible surgical thread simulation.

    PubMed

    Xu, Lang; Liu, Qian

    2018-03-27

    This paper discusses a real-time simulation method for inextensible surgical thread based on the Cosserat rod theory using position-based dynamics (PBD). The method realizes stable twining and knotting of surgical thread while accounting for inextensibility, bending, twisting and coupling effects. The Cosserat rod theory is used to model the nonlinear elastic behavior of surgical thread. The surgical thread model is solved with PBD to achieve a real-time, extremely stable simulation. Owing to the one-dimensional linear structure of surgical thread, a direct solution of the distance constraint based on the tridiagonal matrix algorithm is used to enhance stretching resistance in every constraint projection iteration. In addition, continuous collision detection and collision response guarantee a large time step and high performance. Furthermore, friction is integrated into the constraint projection process to stabilize the twining of multiple threads and complex contact situations. In comparisons with existing methods, the surgical thread maintains constant length under large deformation after applying the direct distance constraint in our method, and the twining and knotting of multiple threads yield stable solutions for contact and friction forces. A surgical suture scene is also modeled to demonstrate the practicality and simplicity of our method. Our method achieves stable and fast simulation of inextensible surgical thread. Benefiting from the unified particle framework, rigid bodies, elastic rods, and soft bodies can be simulated simultaneously. The method is appropriate for applications in virtual surgery that require multiple dynamic bodies.
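    The direct tridiagonal solve mentioned above is typically the Thomas algorithm, an O(n) forward-elimination/back-substitution pass; a generic sketch of that solver (not the paper's exact constraint formulation):

    ```python
    def thomas_solve(a, b, c, d):
        """Thomas algorithm: direct O(n) solve of a tridiagonal system, the
        kind that arises when chained distance constraints along a 1D thread
        are enforced together. a: sub-diagonal (len n-1), b: diagonal (len n),
        c: super-diagonal (len n-1), d: right-hand side (len n)."""
        n = len(b)
        cp = [0.0] * (n - 1)    # modified super-diagonal
        dp = [0.0] * n          # modified right-hand side
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):   # forward elimination
            m = b[i] - a[i - 1] * cp[i - 1]
            if i < n - 1:
                cp[i] = c[i] / m
            dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):   # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x
    ```

    Solving all chained constraints at once, rather than relaxing them one by one, is what keeps the thread length constant within a single projection iteration.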

  16. Detection of Q-Matrix Misspecification Using Two Criteria for Validation of Cognitive Structures under the Least Squares Distance Model

    ERIC Educational Resources Information Center

    Romero, Sonia J.; Ordoñez, Xavier G.; Ponsoda, Vincente; Revuelta, Javier

    2014-01-01

    Cognitive Diagnostic Models (CDMs) aim to provide information about the degree to which individuals have mastered specific attributes that underlie the success of these individuals on test items. The Q-matrix is a key element in the application of CDMs, because it contains the item-attribute links representing the cognitive structure proposed for solving…

  17. System and method for the adaptive mapping of matrix data to sets of polygons

    NASA Technical Reports Server (NTRS)

    Burdon, David (Inventor)

    2003-01-01

    A system and method for converting bitmapped data, for example, weather data or thermal imaging data, to polygons is disclosed. The conversion of the data into polygons creates smaller data files. The invention is adaptive in that it allows for a variable degree of fidelity in the polygons. Matrix data is obtained. A color value is obtained. The color value is a variable used in the creation of the polygons. A list of cells to check is determined based on the color value. The list of cells to check is examined in order to determine a boundary list. The boundary list is then examined to determine vertices. The determination of the vertices is based on a prescribed maximum distance. When drawn, the ordered list of vertices creates polygons that depict the cell data. The data files which include the vertices for the polygons are much smaller than the corresponding cell data files. The fidelity of the polygon representation can be adjusted by repeating the logic with varying fidelity values to achieve a given maximum file size or a maximum number of vertices per polygon.
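    Distance-bounded vertex selection of this kind resembles Douglas-Peucker simplification: a vertex is kept only where the boundary deviates from the current chord by more than the prescribed maximum distance, so a larger threshold yields fewer vertices and smaller files. The following is a generic illustration of that idea, not the patented method's exact logic.

    ```python
    def simplify(points, max_dist):
        """Douglas-Peucker-style simplification of a boundary point list:
        recursively keep the point farthest from the chord between the
        endpoints whenever its perpendicular distance exceeds max_dist."""
        if len(points) < 3:
            return list(points)
        (x1, y1), (x2, y2) = points[0], points[-1]
        dx, dy = x2 - x1, y2 - y1
        norm = (dx * dx + dy * dy) ** 0.5 or 1.0
        # perpendicular distance of each interior point to the chord
        dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm
                 for x, y in points[1:-1]]
        i, dmax = max(enumerate(dists), key=lambda t: t[1])
        if dmax <= max_dist:
            return [points[0], points[-1]]   # all deviations within tolerance
        left = simplify(points[:i + 2], max_dist)
        right = simplify(points[i + 1:], max_dist)
        return left[:-1] + right
    ```

    Re-running with different `max_dist` values trades fidelity against vertex count, mirroring the adaptive behavior described above.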

  18. An Optical Sensor for Measuring the Position and Slanting Direction of Flat Surfaces

    PubMed Central

    Chen, Yu-Ta; Huang, Yen-Sheng; Liu, Chien-Sheng

    2016-01-01

    Automated optical inspection is a very important technique. For this reason, this study proposes an optical non-contact slanting surface measuring system. The essential features of the measurement system are obtained through simulations using the optical design software Zemax. The actual propagation of laser beams within the measurement system is traced by using a homogeneous transformation matrix (HTM), the skew-ray tracing method, and a first-order Taylor series expansion. Additionally, a complete mathematical model that describes the variations in light spots on photoelectric sensors and the corresponding changes in the sample orientation and distance was established. Finally, a laboratory prototype system was constructed on an optical bench to verify experimentally the proposed system. This measurement system can simultaneously detect the slanting angles (x, z) in the x and z directions of the sample and the distance (y) between the biconvex lens and the flat sample surface. PMID:27409619

  19. An Optical Sensor for Measuring the Position and Slanting Direction of Flat Surfaces.

    PubMed

    Chen, Yu-Ta; Huang, Yen-Sheng; Liu, Chien-Sheng

    2016-07-09

    Automated optical inspection is a very important technique. For this reason, this study proposes an optical non-contact slanting surface measuring system. The essential features of the measurement system are obtained through simulations using the optical design software Zemax. The actual propagation of laser beams within the measurement system is traced by using a homogeneous transformation matrix (HTM), the skew-ray tracing method, and a first-order Taylor series expansion. Additionally, a complete mathematical model that describes the variations in light spots on photoelectric sensors and the corresponding changes in the sample orientation and distance was established. Finally, a laboratory prototype system was constructed on an optical bench to verify experimentally the proposed system. This measurement system can simultaneously detect the slanting angles (x, z) in the x and z directions of the sample and the distance (y) between the biconvex lens and the flat sample surface.

  20. Sequence comparison alignment-free approach based on suffix tree and L-words frequency.

    PubMed

    Soares, Inês; Goios, Ana; Amorim, António

    2012-01-01

    The vast majority of methods available for sequence comparison rely on a first sequence alignment step, which requires a number of assumptions on evolutionary history and is sometimes very difficult or impossible to perform due to the abundance of gaps (insertions/deletions). In such cases, an alternative alignment-free method would prove valuable. Our method starts with the computation of a generalized suffix tree of all sequences, which is completed in linear time. Using this tree, the frequency of all possible words of a preset length L (L-words) in each sequence is rapidly calculated. Based on the L-word frequency profile of each sequence, a pairwise standard Euclidean distance is then computed, producing a symmetric genetic distance matrix, which can be used to generate a neighbor-joining dendrogram or a multidimensional scaling graph. We present an improvement to word-counting alignment-free approaches for sequence comparison, by determining a single optimal word length and combining suffix tree structures with the word-counting tasks. Our approach is, thus, a fast and simple application that proved to be efficient and powerful when applied to mitochondrial genomes. The algorithm was implemented in the Python language and is freely available on the web.
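    The L-word counting and distance steps can be sketched in a few lines of Python. This illustration counts words with a dictionary rather than the generalized suffix tree the authors use for linear-time counting:

```python
import math

def lword_profile(seq, L):
    """Relative frequency of each length-L word in seq."""
    counts = {}
    for i in range(len(seq) - L + 1):
        w = seq[i:i + L]
        counts[w] = counts.get(w, 0) + 1
    total = max(len(seq) - L + 1, 1)
    return {w: c / total for w, c in counts.items()}

def euclidean_distance(p, q):
    """Standard Euclidean distance between two L-word frequency profiles."""
    words = set(p) | set(q)
    return math.sqrt(sum((p.get(w, 0.0) - q.get(w, 0.0)) ** 2 for w in words))

def distance_matrix(seqs, L):
    """Symmetric genetic distance matrix from L-word profiles."""
    profiles = [lword_profile(s, L) for s in seqs]
    return [[euclidean_distance(a, b) for b in profiles] for a in profiles]
```

The resulting matrix can be fed directly to a neighbor-joining or multidimensional scaling routine.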

  1. Overall Performance Evaluation of Tubular Scraper Conveyors Using a TOPSIS-Based Multiattribute Decision-Making Method

    PubMed Central

    Yao, Yanping; Kou, Ziming; Meng, Wenjun; Han, Gang

    2014-01-01

    Properly evaluating the overall performance of tubular scraper conveyors (TSCs) can increase their overall efficiency and reduce economic investments, but such methods have rarely been studied. This study evaluated the overall performance of TSCs based on the technique for order of preference by similarity to ideal solution (TOPSIS). Three conveyors of the same type produced in the same factory were investigated. Their scraper space, material filling coefficient, and vibration coefficient of the traction components were evaluated. A mathematical model of the multiattribute decision matrix was constructed; a weighted judgment matrix was obtained using the DELPHI method. The linguistic positive-ideal solution (LPIS), the linguistic negative-ideal solution (LNIS), and the distance from each solution to the LPIS and the LNIS, that is, the approximation degrees, were calculated. The optimal solution was determined by ordering the approximation degrees for each solution. The TOPSIS-based results were compared with the measurement results provided by the manufacturer. The ordering result based on the three evaluated parameters was highly consistent with the result provided by the manufacturer. The TOPSIS-based method serves as a suitable evaluation tool for the overall performance of TSCs. It facilitates the optimal deployment of TSCs for industrial purposes. PMID:24991646
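    The TOPSIS calculation described above (normalize, weight, find the ideal and negative-ideal solutions, rank by approximation degree) can be sketched as follows; the weights here stand in for the DELPHI-derived judgment matrix:

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Rank alternatives with TOPSIS.
    decision: (n_alternatives, n_criteria) matrix; benefit[j] is True when
    a larger value of criterion j is better."""
    X = np.asarray(decision, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    V = X / np.linalg.norm(X, axis=0) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))   # positive-ideal
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))    # negative-ideal
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)   # approximation degree; larger is better
```

Ordering the returned approximation degrees gives the ranking that the paper compares with the manufacturer's measurements.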

  2. Keyhole imaging method for dynamic objects behind the occlusion area

    NASA Astrophysics Data System (ADS)

    Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong

    2018-01-01

    A method of keyhole imaging based on camera array is realized to obtain the video image behind a keyhole in shielded space at a relatively long distance. We get the multi-angle video images by using a 2×2 CCD camera array to take the images behind the keyhole in four directions. The multi-angle video images are saved in the form of frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the canny operator and morphological method to realize the edge detection of images and fill the images. The image stitching of four images is accomplished on the basis of the image stitching algorithm of two images. In the image stitching algorithm of two images, the SIFT method is adopted to accomplish the initial matching of images, and then the RANSAC algorithm is applied to eliminate the wrong matching points and to obtain a homography matrix. A method of optimizing transformation matrix is proposed in this paper. Finally, the video image with larger field of view behind the keyhole can be synthesized with image frame sequence in which every single frame is stitched. The results show that the screen of the video is clear and natural, the brightness transition is smooth. There is no obvious artificial stitching marks in the video, and it can be applied in different engineering environment .

  3. Inverse MDS: Inferring Dissimilarity Structure from Multiple Item Arrangements

    PubMed Central

    Kriegeskorte, Nikolaus; Mur, Marieke

    2012-01-01

    The pairwise dissimilarities of a set of items can be intuitively visualized by a 2D arrangement of the items, in which the distances reflect the dissimilarities. Such an arrangement can be obtained by multidimensional scaling (MDS). We propose a method for the inverse process: inferring the pairwise dissimilarities from multiple 2D arrangements of items. Perceptual dissimilarities are classically measured using pairwise dissimilarity judgments. However, alternative methods including free sorting and 2D arrangements have previously been proposed. The present proposal is novel (a) in that the dissimilarity matrix is estimated by “inverse MDS” based on multiple arrangements of item subsets, and (b) in that the subsets are designed by an adaptive algorithm that aims to provide optimal evidence for the dissimilarity estimates. The subject arranges the items (represented as icons on a computer screen) by means of mouse drag-and-drop operations. The multi-arrangement method can be construed as a generalization of simpler methods: It reduces to pairwise dissimilarity judgments if each arrangement contains only two items, and to free sorting if the items are categorically arranged into discrete piles. Multi-arrangement combines the advantages of these methods. It is efficient (because the subject communicates many dissimilarity judgments with each mouse drag), psychologically attractive (because dissimilarities are judged in context), and can characterize continuous high-dimensional dissimilarity structures. We present two procedures for estimating the dissimilarity matrix: a simple weighted-aligned-average of the partial dissimilarity matrices and a computationally intensive algorithm, which estimates the dissimilarity matrix by iteratively minimizing the error of MDS-predictions of the subject’s arrangements. The Matlab code for interactive arrangement and dissimilarity estimation is available from the authors upon request. PMID:22848204
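    The simpler of the two estimators, the weighted aligned average of partial dissimilarity matrices, can be sketched as below. The alignment step here is a crude unit-RMS rescaling of each arrangement, an assumption standing in for the paper's procedure:

```python
import numpy as np

def average_partial_dissimilarities(partials, n_items):
    """Combine partial dissimilarity matrices from several arrangements.
    partials: list of (indices, D) where D[a, b] is the screen distance
    between items indices[a] and indices[b] in one arrangement."""
    total = np.zeros((n_items, n_items))
    count = np.zeros((n_items, n_items))
    for indices, D in partials:
        D = np.asarray(D, dtype=float)
        nz = np.count_nonzero(D)
        # Rescale each arrangement to unit RMS distance before averaging
        # (stand-in for the paper's alignment of partial matrices).
        scale = np.sqrt((D ** 2).sum() / nz) if nz else 1.0
        D = D / scale
        for a, i in enumerate(indices):
            for b, j in enumerate(indices):
                if a != b:
                    total[i, j] += D[a, b]
                    count[i, j] += 1
    # Pairs never arranged together stay NaN (no evidence for them).
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)
```

The iterative MDS-based estimator in the paper refines exactly this kind of initial estimate.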

  4. Variance-Based Cluster Selection Criteria in a K-Means Framework for One-Mode Dissimilarity Data.

    PubMed

    Vera, J Fernando; Macías, Rodrigo

    2017-06-01

    One of the main problems in cluster analysis is that of determining the number of groups in the data. In general, the approach taken depends on the cluster method used. For K-means, some of the most widely employed criteria are formulated in terms of the decomposition of the total point scatter, regarding a two-mode data set of N points in p dimensions, which are optimally arranged into K classes. This paper addresses the formulation of criteria to determine the number of clusters, in the general situation in which the available information for clustering is a one-mode [Formula: see text] dissimilarity matrix describing the objects. In this framework, p and the coordinates of points are usually unknown, and the application of criteria originally formulated for two-mode data sets is dependent on their possible reformulation in the one-mode situation. The decomposition of the variability of the clustered objects is proposed in terms of the corresponding block-shaped partition of the dissimilarity matrix. Within-block and between-block dispersion values for the partitioned dissimilarity matrix are derived, and variance-based criteria are subsequently formulated in order to determine the number of groups in the data. A Monte Carlo experiment was carried out to study the performance of the proposed criteria. For simulated clustered points in p dimensions, greater efficiency in recovering the number of clusters is obtained when the criteria are calculated from the related Euclidean distances instead of the known two-mode data set, in general, for unequal-sized clusters and for low dimensionality situations. For simulated dissimilarity data sets, the proposed criteria always outperform the results obtained when these criteria are calculated from their original formulation, using dissimilarities instead of distances.

  5. Euclidean commute time distance embedding and its application to spectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Albano, James A.; Messinger, David W.

    2012-06-01

    Spectral image analysis problems often begin by performing a preprocessing step composed of applying a transformation that generates an alternative representation of the spectral data. In this paper, a transformation based on a Markov-chain model of a random walk on a graph is introduced. More precisely, we quantify the random walk using a quantity known as the average commute time distance and find a nonlinear transformation that embeds the nodes of a graph in a Euclidean space where the separation between them is equal to the square root of this quantity. This has been referred to as the Commute Time Distance (CTD) transformation and it has the important characteristic of increasing when the number of paths between two nodes decreases and/or the lengths of those paths increase. Remarkably, a closed form solution exists for computing the average commute time distance that avoids running an iterative process and is found by simply performing an eigendecomposition on the graph Laplacian matrix. Contained in this paper is a discussion of the particular graph constructed on the spectral data from which the commute time distance is then calculated, an introduction of some important properties of the graph Laplacian matrix, and a subspace projection that approximately preserves the maximal variance of the square root commute time distance. Finally, RX anomaly detection and Topological Anomaly Detection (TAD) algorithms will be applied to the CTD subspace followed by a discussion of their results.
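    The closed-form solution mentioned above follows from the Moore-Penrose pseudoinverse of the graph Laplacian: CTD(i,j) = vol(G) * (L+_ii + L+_jj - 2*L+_ij), and an eigendecomposition of L+ yields the Euclidean embedding. A small sketch under that standard formulation:

```python
import numpy as np

def commute_time_embedding(W):
    """Embed graph nodes so Euclidean separations equal the square root of
    the average commute time.  W: symmetric nonnegative affinity matrix."""
    W = np.asarray(W, dtype=float)
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    Lp = np.linalg.pinv(L)                  # Moore-Penrose pseudoinverse
    vol = W.sum()                           # graph volume
    vals, vecs = np.linalg.eigh(Lp)
    vals = np.clip(vals, 0.0, None)         # guard tiny negative round-off
    return np.sqrt(vol) * vecs * np.sqrt(vals)   # rows = embedded nodes

def avg_commute_time(W, i, j):
    """Closed-form average commute time between nodes i and j."""
    W = np.asarray(W, dtype=float)
    L = np.diag(W.sum(axis=1)) - W
    Lp = np.linalg.pinv(L)
    return W.sum() * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j])
```

Truncating the embedding to the leading coordinates gives the variance-preserving subspace projection to which the anomaly detectors are applied.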

  6. Estimation of cardiac motion in cine-MRI sequences by correlation transform optical flow of monogenic features distance

    NASA Astrophysics Data System (ADS)

    Gao, Bin; Liu, Wanyu; Wang, Liang; Liu, Zhengjun; Croisille, Pierre; Delachartre, Philippe; Clarysse, Patrick

    2016-12-01

    Cine-MRI is widely used for the analysis of cardiac function in clinical routine, because of its high soft tissue contrast and relatively short acquisition time in comparison with other cardiac MRI techniques. The gray level distribution in cardiac cine-MRI is relatively homogenous within the myocardium, and can therefore make motion quantification difficult. To ensure that the motion estimation problem is well posed, more image features have to be considered. This work is inspired by a method previously developed for color image processing. The monogenic signal provides a framework to estimate the local phase, orientation, and amplitude of an image, three features which locally characterize the 2D intensity profile. The independent monogenic features are combined into a 3D matrix for motion estimation. To improve motion estimation accuracy, we chose the zero-mean normalized cross-correlation as a matching measure, and implemented a bilateral filter for denoising and edge preservation. The monogenic features distance is used in lieu of the color space distance in the bilateral filter. On four realistic simulated sequences, the proposed method outperformed two other state-of-the-art methods, even in the presence of noise. The motion estimation errors (end point error) using our proposed method were reduced by about 20% in comparison with those obtained by the other tested methods. The new methodology was evaluated on four clinical sequences from patients presenting with cardiac motion dysfunctions and one healthy volunteer. The derived strain fields compared favorably in their ability to identify myocardial regions with impaired motion.

  7. Fault Network Reconstruction using Agglomerative Clustering: Applications to South Californian Seismicity

    NASA Astrophysics Data System (ADS)

    Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen

    2014-05-01

    We present applications of a new clustering method for fault network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large scale and gradually increase the complexity trying to explain the small scales, our method uses a bottom-up approach, by an initial sampling of the small scales and then reducing the complexity. The new approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical cluster (AHC) tree based on Ward's minimum variance linkage. Such a tree starts out with one cluster and progressively branches out into an increasing number of clusters. To atomize the structure into its constitutive protoclusters, we initialize a Gaussian Mixture Modeling (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation Maximization (EM) algorithm. The kernels that become ill defined (less than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (by atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we are able to determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our protoclusters. In this setting, merging of any pair will lessen the likelihood (calculated over the pdf of the kernels) but in turn will reduce the model's complexity. The information loss/gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Similar to an inter-distance matrix, where the matrix element d_ij gives the distance between points i and j, we can construct an MDL gain/loss matrix where m_ij gives the information gain/loss resulting from the merging of kernels i and j. Based on this matrix, merging events resulting in MDL gain are performed in descending order until no gainful merging is possible anymore. We envision that the results of this study could lead to a better understanding of the complex interactions within the Californian fault system and hopefully use the acquired insights for earthquake forecasting.

  8. Pulsed single-blow regenerator testing

    NASA Technical Reports Server (NTRS)

    Oldson, J. C.; Knowles, T. R.; Rauch, J.

    1992-01-01

    A pulsed single-blow method has been developed for testing the performance of Stirling regenerator materials. The method uses a tubular flow arrangement with a steady gas flow passing through a regenerator matrix sample that packs the flow channel for a short distance. A wire grid heater spanning the gas flow channel is used to heat a plug of gas by approximately 2 K for approximately 350 ms. Foil thermocouples monitor the gas temperature entering and leaving the sample. Data analysis based on a 1D incompressible-flow thermal model allows the extraction of the Stanton number. A figure of merit involving heat transfer and pressure drop is used to present results for steel screens and steel felt. The observations show a lower figure of merit for the materials tested than is expected based on correlations obtained by other methods.

  9. Quantum mechanical/molecular mechanical/continuum style solvation model: Second order Møller-Plesset perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thellamurege, Nandun M.; Si, Dejun; Cui, Fengchao

    A combined quantum mechanical/molecular mechanical/continuum (QM/MM/C) style second order Møller-Plesset perturbation theory (MP2) method that incorporates an induced dipole polarizable force field and an induced surface charge continuum solvation model is established. The Z-vector method is modified to include induced dipoles and induced surface charges to determine the MP2 response density matrix, which can be used to evaluate MP2 properties. In particular, the analytic nuclear gradient is derived and implemented for this method. Using the Assisted Model Building with Energy Refinement induced dipole polarizable protein force field, the QM/MM/C style MP2 method is used to study the hydrogen bonding distances and strengths of the photoactive yellow protein chromophore in the wild type and the Glu46Gln mutant.

  10. Resonance Energy Transfer Studies from Derivatives of Thiophene Substituted 1,3,4-Oxadiazoles to Coumarin-334 Dye in Liquid and Dye-Doped Polymer Media

    NASA Astrophysics Data System (ADS)

    Naik, Lohit; Deshapande, Narahari; Khazi, Imtiyaz Ahamed M.; Malimath, G. H.

    2018-02-01

    In the present work, we have carried out energy transfer studies using newly synthesised derivatives of thiophene substituted 1,3,4-oxadiazoles, namely 2-(-4-(thiophene-3-yl)phenyl)-5-(5-(thiophene-3-yl)thiophene-2-yl)-1,3,4-oxadiazole [TTO], 2-(-4-(benzo[b]thiophene-2-yl)phenyl)-5-(5-(benzo[b]thiophene-2-yl)-1,3,4-oxadiazole [TBO] and 2-(4-(4-(trifluoromethyl)phenyl)phenyl)-5-(5-(4-(trifluoromethyl)phenyl)thiophen-2-yl)-1,3,4-oxadiazole [TMO] as donors and the laser dye coumarin-334 as acceptor in ethanol and dye-doped polymer (poly(methyl methacrylate), PMMA) media, following steady-state and time-resolved fluorescence methods. The bimolecular quenching constant (k_q), translational diffusion rate parameter (k_d), diffusion length (D_l), critical transfer distance (R_0), donor-acceptor distance (r) and energy transfer efficiency (E_T) are calculated. It is observed that the critical transfer distance is greater than the diffusion length for all the pairs. Further, the bimolecular quenching constant is also greater than the translational diffusion rate parameter. Hence, our experimental findings suggest that the overall energy transfer is due to Förster resonance energy transfer (FRET) between donor and acceptor in both media and for all the pairs. In addition, a considerable increase in fluorescence intensity and energy transfer efficiency is observed in the dye-doped polymer matrix systems as compared to liquid media. This suggests that these donor-acceptor pairs doped in a PMMA matrix may be used for applications such as energy transfer dye lasers (ETDL) to improve efficiency and photostability, to enhance tunability, and for plastic scintillation detectors.
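    The FRET mechanism invoked above relates transfer efficiency to the critical transfer distance R_0 and the donor-acceptor distance r through the standard Förster relation E_T = R_0^6 / (R_0^6 + r^6), for example:

```python
def fret_efficiency(R0, r):
    """Forster resonance energy transfer efficiency for critical transfer
    distance R0 and donor-acceptor distance r (same units)."""
    return R0 ** 6 / (R0 ** 6 + r ** 6)
```

By definition, E_T = 0.5 exactly when the donor-acceptor distance equals the critical transfer distance, and the sixth-power dependence makes the efficiency fall off steeply beyond R_0.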

  11. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large-scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP they reduce variance by a factor of over 150 compared to MC. For this, we precomputed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
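    The deflated Hutchinson estimator can be sketched for a small symmetric matrix as below. The dense inverse stands in for the linear solves used in practice, and the deflated directions are assumed to be eigenvectors of A so that their exact contribution can be added back:

```python
import numpy as np

def hutchinson_trace_inv(A, n_samples, rng, deflation=None):
    """Estimate tr(A^-1) by Hutchinson's method with optional deflation.
    deflation: (U, s) with orthonormal columns U spanning the deflated
    (here: eigen-) subspace and s the corresponding eigenvalues of A."""
    n = A.shape[0]
    Ainv = np.linalg.inv(A)            # stand-in for one linear solve per probe
    P = np.eye(n)
    correction = 0.0
    if deflation is not None:
        U, s = deflation
        P = P - U @ U.T                # project probes out of the deflated space
        correction = float(np.sum(1.0 / s))   # exact trace over deflated directions
    est = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        zp = P @ z
        est += zp @ Ainv @ zp
    return est / n_samples + correction
```

Removing the smallest singular directions from the stochastic part and handling them exactly is what reduces the variance when the deflated values dominate 1/s.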

  12. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE PAGES

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    2017-04-06

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large-scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP they reduce variance by a factor of over 150 compared to MC. For this, we precomputed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.

  13. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

    Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target: the modulation frequency is swept, the frequency values at which the transmitted and received signals are in phase are measured, and the distance is calculated from these values. This method achieves much higher theoretical measuring accuracy than the phase-difference method because it avoids phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation occurs in the measuring optical path when optical elements are imperfectly processed and installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built based on Jones matrix theory, and the additional phase retardation of the λ/4 wave plate and the PBS, together with their impact on measuring performance, is analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation of ranging accuracy. According to the system design index, the element tolerance and an error-correcting method for the system are proposed, and a ranging system is built and a ranging experiment performed. Experimental results show that, with the proposed tolerance, the system can satisfy the accuracy requirement. The present work provides guidance for further research on system design and error distribution.
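    The in-phase condition underlying the method corresponds to the round trip being an integer number of modulation wavelengths (2L = n*c/f), so consecutive in-phase frequencies differ by c/(2L). A simplified free-space sketch of the distance recovery (ignoring the polarization optics and the retardation errors the paper analyzes):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_inphase_freqs(f1, f2):
    """Distance to the target from two consecutive in-phase modulation
    frequencies.  In-phase condition: 2*L = n*c/f, so consecutive in-phase
    frequencies are spaced by delta_f = c/(2*L), giving L = c/(2*delta_f)."""
    return C / (2.0 * abs(f2 - f1))
```

Because only frequency values are measured, the resolution is set by the frequency counter rather than by a phase meter, which is the accuracy advantage the abstract refers to.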

  14. Planning and Analysis of Fractured Rock Injection Tests in the Cerro Brillador Underground Laboratory, Northern Chile

    NASA Astrophysics Data System (ADS)

    Fairley, J. P., Jr.; Oyarzún L, R.; Villegas, G.

    2015-12-01

    Early theories of fluid migration in unsaturated fractured rock hypothesized that matrix suction would dominate flow up to the point of matrix saturation. However, experiments in underground laboratories such as the ESF (Yucca Mountain, NV) have demonstrated that liquid water can migrate significant distances through fractures in an unsaturated porous medium, suggesting limited interaction between fractures and unsaturated matrix blocks and potentially rapid transmission of recharge to the saturated zone. Determining the conditions under which this rapid recharge may take place is an important factor in understanding deep percolation processes in arid areas with thick unsaturated zones. As part of an on-going, Fondecyt-funded project (award 11150587) to study mountain block hydrological processes in arid regions, we are planning a series of in-situ fracture flow injection tests in the Cerro Brillador/Mina Escuela, an underground laboratory and teaching facility belonging to the Universidad la Serena, Chile. Planning for the tests is based on an analytical model and curve-matching method, originally developed to evaluate data from injection tests at Yucca Mountain (Fairley, J.P., 2010, WRR 46:W08542), that uses a known rate of liquid injection to a fracture (for example, from a packed-off section of borehole) and the observed rate of seepage discharging from the fracture to estimate effective fracture aperture, matrix sorptivity, fracture/matrix flow partitioning, and the wetted fracture/matrix interaction area between the injection and recovery points. We briefly review the analytical approach and its application to test planning and analysis, and describe the proposed tests and their goals.

  15. A satellite relative motion model including J_2 and J_3 via Vinti's intermediary

    NASA Astrophysics Data System (ADS)

    Biria, Ashley D.; Russell, Ryan P.

    2018-03-01

    Vinti's potential is revisited for analytical propagation of the main satellite problem, this time in the context of relative motion. A particular version of Vinti's spheroidal method is chosen that is valid for arbitrary elliptical orbits, encapsulating J_2, J_3, and generally a partial J_4 in an orbit propagation theory without recourse to perturbation methods. As a child of Vinti's solution, the proposed relative motion model inherits these properties. Furthermore, the problem is solved in oblate spheroidal elements, leading to large regions of validity for the linearization approximation. After offering several enhancements to Vinti's solution, including boosts in accuracy and removal of some singularities, the proposed model is derived and subsequently reformulated so that Vinti's solution is piecewise differentiable. While the model is valid for the critical inclination and nonsingular in the element space, singularities remain in the linear transformation from Earth-centered inertial coordinates to spheroidal elements when the eccentricity is zero or for nearly equatorial orbits. The new state transition matrix is evaluated against numerical solutions including the J_2 through J_5 terms for a wide range of chief orbits and separation distances. The solution is also compared with side-by-side simulations of the original Gim-Alfriend state transition matrix, which considers the J_2 perturbation. Code for computing the resulting state transition matrix and associated reference frame and coordinate transformations is provided online as supplementary material.

  16. Robust Averaging of Covariances for EEG Recordings Classification in Motor Imagery Brain-Computer Interfaces.

    PubMed

    Uehara, Takashi; Sartori, Matteo; Tanaka, Toshihisa; Fiori, Simone

    2017-06-01

    The estimation of covariance matrices is of prime importance to analyze the distribution of multivariate signals. In motor imagery-based brain-computer interfaces (MI-BCI), covariance matrices play a central role in the extraction of features from recorded electroencephalograms (EEGs); therefore, correctly estimating covariance is crucial for EEG classification. This letter discusses algorithms to average sample covariance matrices (SCMs) for the selection of the reference matrix in tangent space mapping (TSM)-based MI-BCI. Tangent space mapping is a powerful method of feature extraction and strongly depends on the selection of a reference covariance matrix. In general, the observed signals may include outliers; therefore, taking the geometric mean of SCMs as the reference matrix may not be the best choice. In order to deal with the effects of outliers, robust estimators have to be used. In particular, we discuss and test the use of geometric medians and trimmed averages (defined on the basis of several metrics) as robust estimators. The main idea behind trimmed averages is to eliminate data that exhibit the largest distance from the average covariance calculated on the basis of all available data. The results of the experiments show that while the geometric medians show little differences from conventional methods in terms of classification accuracy in the classification of electroencephalographic recordings, the trimmed averages show significant improvement for all subjects.
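    The trimmed-average idea, dropping the SCMs that exhibit the largest distance from the average before re-averaging, can be sketched as below; the Frobenius metric is used here, whereas the letter studies several metrics:

```python
import numpy as np

def trimmed_average_cov(covs, trim_fraction=0.2):
    """Trimmed average of sample covariance matrices: discard the matrices
    farthest from the plain average (Frobenius metric here; the letter also
    considers Riemannian metrics), then re-average the remainder."""
    covs = np.asarray(covs, dtype=float)        # shape (n_matrices, c, c)
    mean = covs.mean(axis=0)
    dists = np.linalg.norm(covs - mean, axis=(1, 2))   # Frobenius distances
    n_keep = max(1, int(np.ceil(len(covs) * (1 - trim_fraction))))
    keep = np.argsort(dists)[:n_keep]           # indices of the closest SCMs
    return covs[keep].mean(axis=0)
```

The returned matrix would serve as the reference covariance for tangent space mapping, in place of the plain geometric mean.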

  17. WKB solution 4×4 for electromagnetic waves in a planar magnetically anisotropic inhomogeneous layer

    NASA Astrophysics Data System (ADS)

    Moiseeva, Natalya Michailovna; Moiseev, Anton Vladimirovich

    2018-04-01

    In the paper, an oblique incidence of a plane electromagnetic wave on a planar magnetically anisotropic inhomogeneous layer is considered. We consider the case when all the components of the magnetic permeability tensor are nonzero and vary with distance from the interface of the media. The WKB method gives a 4×4 matrix solution for the projections of the electromagnetic wave fields during propagation. The dependence of the cross-polarized components on the orientation of the anisotropic medium relative to the plane of incidence is analyzed.

  18. Graph distance for complex networks

    NASA Astrophysics Data System (ADS)

    Shimada, Yutaka; Hirata, Yoshito; Ikeguchi, Tohru; Aihara, Kazuyuki

    2016-10-01

    Networks are widely used as a tool for describing diverse real complex systems and have been successfully applied to many fields. The distance between networks is one of the most fundamental concepts for properly classifying real networks, detecting temporal changes in network structures, and effectively predicting their temporal evolution. However, this distance has rarely been discussed in the theory of complex networks. Here, we propose a graph distance between networks based on a Laplacian matrix that reflects the structural and dynamical properties of networked dynamical systems. Our results indicate that the Laplacian-based graph distance effectively quantifies the structural difference between complex networks. We further show that our approach successfully elucidates the temporal properties underlying temporal networks observed in the context of face-to-face human interactions.
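One simple Laplacian-based distance of this kind compares the sorted spectra of the graph Laplacians. This is a generic sketch of the idea, not necessarily the authors' exact definition:

```python
import numpy as np

def laplacian_spectrum(adj):
    """Sorted eigenvalues of the combinatorial Laplacian L = D - A."""
    L = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(L))

def spectral_distance(adj1, adj2):
    """Euclidean distance between the sorted Laplacian spectra."""
    return np.linalg.norm(laplacian_spectrum(adj1) - laplacian_spectrum(adj2))

# 3-node path graph vs. triangle.
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
d_same = spectral_distance(path, path)
d_diff = spectral_distance(path, tri)
```

For the 3-node path versus the triangle, the Laplacian spectra are {0, 1, 3} and {0, 3, 3}, giving a distance of 2.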

  19. Proton-Proton Fusion and Tritium β Decay from Lattice Quantum Chromodynamics

    NASA Astrophysics Data System (ADS)

    Savage, Martin J.; Shanahan, Phiala E.; Tiburzi, Brian C.; Wagman, Michael L.; Winter, Frank; Beane, Silas R.; Chang, Emmanuel; Davoudi, Zohreh; Detmold, William; Orginos, Kostas; Nplqcd Collaboration

    2017-08-01

The nuclear matrix element determining the pp → d e⁺ν fusion cross section and the Gamow-Teller matrix element contributing to tritium β decay are calculated with lattice quantum chromodynamics for the first time. Using a new implementation of the background field method, these quantities are calculated at the SU(3) flavor-symmetric value of the quark masses, corresponding to a pion mass of mπ ≈ 806 MeV. The Gamow-Teller matrix element in tritium is found to be 0.979(03)(10) at these quark masses, which is within 2σ of the experimental value. Assuming that the short-distance correlated two-nucleon contributions to the matrix element (meson-exchange currents) depend only mildly on the quark masses, as seen for the analogous magnetic interactions, the calculated pp → d e⁺ν transition matrix element leads to a fusion cross section at the physical quark masses that is consistent with its currently accepted value. Moreover, the leading two-nucleon axial counterterm of pionless effective field theory is determined to be L1,A = 3.9(0.2)(1.0)(0.4)(0.9) fm³ at a renormalization scale set by the physical pion mass, also agreeing within the accepted phenomenological range. This work concretely demonstrates that weak transition amplitudes in few-nucleon systems can be studied directly from the fundamental quark and gluon degrees of freedom and opens the way for subsequent investigations of many important quantities in nuclear physics.

  20. On the rank-distance median of 3 permutations.

    PubMed

    Chindelevitch, Leonid; Pereira Zanetti, João Paulo; Meidanis, João

    2018-05-08

Recently, Pereira Zanetti, Biller and Meidanis have proposed a new definition of a rearrangement distance between genomes. In this formulation, each genome is represented as a matrix, and the distance d is the rank distance between these matrices. Although defined in terms of matrices, the rank distance is equal to the minimum total weight of a series of weighted operations that leads from one genome to the other, including inversions, translocations, transpositions, and others. The computational complexity of the median-of-three problem according to this distance is currently unknown. The genome matrices are a special kind of permutation matrices, which we study in this paper. In their paper, the authors provide an [Formula: see text] algorithm for determining three candidate medians, prove the tight approximation ratio [Formula: see text], and provide a sufficient condition for their candidates to be true medians. They also conduct some experiments that suggest that their method is accurate on simulated and real data. In this paper, we extend their results and provide the following: (i) three invariants characterizing the problem of finding the median of 3 matrices; (ii) a sufficient condition for uniqueness of medians that can be checked in O(n) time; (iii) a faster, [Formula: see text] algorithm for determining the median under this condition; (iv) a new heuristic algorithm for this problem based on compressed sensing; and (v) a [Formula: see text] algorithm that exactly solves the problem when the inputs are orthogonal matrices, a class that includes both permutations and genomes as special cases. Our work provides the first proof that, with respect to the rank distance, the problem of finding the median of 3 genomes, as well as the median of 3 permutations, is exactly solvable in polynomial time, a result which should be contrasted with its NP-hardness for the DCJ (double cut-and-join) distance and most other families of genome rearrangement operations.
This result, backed by our experimental tests, indicates that the rank distance is a viable alternative to the DCJ distance widely used in genome comparisons.
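The rank distance itself is straightforward to compute for permutation matrices: d(A, B) = rank(A − B). A minimal sketch (the 0-based one-line notation below is an illustrative choice):

```python
import numpy as np

def perm_matrix(p):
    """Permutation matrix for a permutation p in 0-based one-line notation."""
    M = np.zeros((len(p), len(p)))
    M[np.arange(len(p)), p] = 1.0
    return M

def rank_distance(A, B):
    """Rank distance between two matrices: rank(A - B)."""
    return np.linalg.matrix_rank(A - B)

identity = perm_matrix([0, 1, 2, 3])
swap = perm_matrix([1, 0, 2, 3])    # a single transposition
cycle = perm_matrix([1, 2, 3, 0])   # a 4-cycle
```

Consistent with rank(I − P) = n − c, where c counts the cycles of P (fixed points included), a transposition sits at distance 1 from the identity and a 4-cycle at distance 3.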

  1. A Routing Protocol for Packet Radio Networks

    DTIC Science & Technology

    1995-01-01

The routing table of node K is a matrix containing, for each destination L and each neighbor M of K, the distance to L and the predecessor of the shortest path toward L. Each entry records the destination identifier, the distance to the destination, the predecessor of the shortest path chosen toward L, and the successor on that path. When an update yields a shorter path through a neighbor, the distance and predecessor entries are revised accordingly. Thus, a node can determine whether or not an update received from M affects its other distances.

  2. Fully automated segmentation of the pectoralis muscle boundary in breast MR images

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Filippatos, Konstantinos; Friman, Ola; Hahn, Horst K.

    2011-03-01

Dynamic Contrast Enhanced MRI (DCE-MRI) of the breast is emerging as a novel tool for early tumor detection and diagnosis. The segmentation of the structures in breast DCE-MR images, such as the nipple, the breast-air boundary and the pectoralis muscle, serves as a fundamental step for further computer assisted diagnosis (CAD) applications, e.g. breast density analysis. Moreover, previous clinical studies have shown that the distance between posterior breast lesions and the pectoralis muscle can be used to assess the extent of the disease. To enable automatic quantification of the distance from a breast tumor to the pectoralis muscle, a precise delineation of the pectoralis muscle boundary is required. We present a fully automatic segmentation method based on the second derivative information represented by the Hessian matrix. The voxels proximal to the pectoralis muscle boundary exhibit roughly the same eigenvalue patterns as a sheet-like object in 3D, which can be enhanced and segmented by a Hessian-based sheetness filter. A vector-based connected component filter is then utilized such that only the pectoralis muscle is preserved by extracting the largest connected component. The proposed method was evaluated quantitatively with a test data set of 30 breast MR images by measuring the average distances between the segmented boundary and the annotated surfaces in two ground truth sets; the mean distance was 1.434 mm with a standard deviation of 0.4661 mm, which shows great potential for integrating the approach into the clinical routine.
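The sheet-like eigenvalue signature (one large-magnitude Hessian eigenvalue, two near zero) can be sketched with a simplified sheetness score. The score formula, scale, and synthetic volume below are illustrative stand-ins for the filter used in the paper:

```python
import numpy as np
from scipy import ndimage

def hessian_eigenvalues(vol, sigma=1.5):
    """Per-voxel eigenvalues of the Gaussian-smoothed Hessian,
    sorted by increasing absolute value (|l1| <= |l2| <= |l3|)."""
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(i, 3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            d = ndimage.gaussian_filter(vol, sigma, order=order)
            H[..., i, j] = H[..., j, i] = d
    ev = np.linalg.eigvalsh(H)                       # ascending by value
    idx = np.argsort(np.abs(ev), axis=-1)
    return np.take_along_axis(ev, idx, axis=-1)

def sheetness(vol, sigma=1.5, eps=1e-12):
    """Toy sheetness score: strong where |l3| is large and |l2| is small,
    the eigenvalue signature of a sheet-like structure."""
    l1, l2, l3 = np.moveaxis(hessian_eigenvalues(vol, sigma), -1, 0)
    return np.abs(l3) * (1.0 - np.abs(l2) / (np.abs(l3) + eps))

# Synthetic volume with a single bright plane.
vol = np.zeros((24, 24, 24))
vol[12, :, :] = 1.0
score = sheetness(vol)
```

A synthetic bright plane scores far higher than background voxels, which is the property the boundary enhancement relies on.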

  3. A study of polaritonic transparency in couplers made from excitonic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Mahi R.; Racknor, Chris

    2015-03-14

We have studied light-matter interaction in quantum dot and exciton-polaritonic coupler hybrid systems. The coupler is made by embedding two slabs of an excitonic material (CdS) into a host excitonic material (ZnO). An ensemble of non-interacting quantum dots is doped into the coupler. The bound exciton-polariton states are calculated in the coupler using the transfer matrix method in the presence of the coupling between the external light (photons) and excitons. These bound exciton-polaritons interact with the excitons present in the quantum dots, and the coupler acts as a reservoir. The Schrödinger equation method has been used to calculate the absorption coefficient in the quantum dots. It is found that when the distance between the two slabs (CdS) is greater than the decay length of evanescent waves, the absorption spectrum has two peaks and one minimum. The minimum corresponds to a transparent state in the system. However, when the distance between the slabs is smaller than the decay length of evanescent waves, the absorption spectrum has three peaks and two transparent states. In other words, one transparent state can be switched to two transparent states when the distance between the two layers is modified. This could be achieved by applying stress and strain fields. It is also found that transparent states can be switched on and off by applying an external control laser field.

  4. Improved iris localization by using wide and narrow field of view cameras for iris recognition

    NASA Astrophysics Data System (ADS)

    Kim, Yeong Gon; Shin, Kwang Yong; Park, Kang Ryoung

    2013-10-01

Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among other biometric identifiers, iris recognition has been widely used for various applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y positions of a user's eye and the Z distance between the user and the camera. Therefore, the search area of the iris detection algorithm is increased, which can inevitably decrease both the detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is novel in the following four ways compared to previous studies. First, the device used in our research acquires three images, one of the face and one of each iris, using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by simple geometric transformation without complex calibration. Second, the Z distance (between a user's eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data on the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple transformation matrices according to the Z distance. Fourth, the search region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the geometric transformation matrix corresponding to the estimated Z distance. Experimental results showed that the performance of the proposed iris localization method is better than that of conventional methods in terms of accuracy and processing time.
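The Z-distance step rests on a pinhole-camera relation between the iris size in the image and an anthropometric prior. A minimal sketch, where the focal length and the 11.7 mm mean iris diameter are illustrative values rather than the paper's calibration:

```python
def estimate_z_distance_mm(iris_diameter_px, focal_length_px, true_iris_mm=11.7):
    """Pinhole relation Z = f * D_real / d_image; 11.7 mm is a commonly
    cited average visible iris diameter (illustrative, not calibrated)."""
    return focal_length_px * true_iris_mm / iris_diameter_px

# Hypothetical WFOV camera: 2000 px focal length, iris imaged at 60 px.
z = estimate_z_distance_mm(iris_diameter_px=60, focal_length_px=2000)
```

Here the eye would be estimated at roughly 390 mm from the camera; a smaller imaged iris implies a larger Z distance.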

  5. Dynamical mechanism in aero-engine gas path system using minimum spanning tree and detrended cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Dong, Keqiang; Zhang, Hong; Gao, You

    2017-01-01

Identifying the mutual interaction in an aero-engine gas path system is a crucial problem that facilitates the understanding of emerging structures in complex systems. By applying the multiscale multifractal detrended cross-correlation analysis method to the aero-engine gas path system, the cross-correlation characteristics between gas path system parameters are established. Further, we apply the multiscale multifractal detrended cross-correlation distance matrix and minimum spanning tree to investigate the mutual interactions of gas path variables. The results indicate that the low-spool rotor speed (N1) and engine pressure ratio (EPR) are the main gas path parameters. The application of the proposed method helps to promote our understanding of the internal mechanisms and structures of aero-engine dynamics.

  6. Machine learning with quantum relative entropy

    NASA Astrophysics Data System (ADS)

    Tsuda, Koji

    2009-12-01

Density matrices are a central tool in quantum physics, but they are also used in machine learning. A positive definite matrix called the kernel matrix is used to represent the similarities between examples. Positive definiteness assures that the examples are embedded in a Euclidean space. When a positive definite matrix is learned from data, one has to design an update rule that maintains the positive definiteness. Our update rule, called the matrix exponentiated gradient update, is motivated by the quantum relative entropy. Notably, the relative entropy is an instance of Bregman divergences, which are asymmetric distance measures specifying theoretical properties of machine learning algorithms. Using the calculus commonly used in quantum physics, we prove an upper bound on the generalization error of online learning.
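The matrix exponentiated gradient update can be sketched for a unit-trace density matrix; the toy gradient and learning rate below are made up:

```python
import numpy as np

def sym_logm(S):
    """Matrix logarithm of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def sym_expm(S):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def meg_update(W, grad, eta=0.1):
    """Matrix exponentiated gradient update: step in the matrix-log
    domain, then renormalize to unit trace (density-matrix constraint)."""
    M = sym_expm(sym_logm(W) - eta * grad)
    return M / np.trace(M)

W0 = np.eye(2) / 2                         # maximally mixed density matrix
G = np.array([[1.0, 0.0], [0.0, -1.0]])    # toy loss gradient
W1 = meg_update(W0, G)
```

Stepping in the matrix-log domain keeps the iterate symmetric positive definite, and the trace renormalization restores the density-matrix constraint, which is exactly the positive-definiteness maintenance the abstract describes.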

  7. Effect of polarity and elongational flow on the morphology and properties of a new nanobiocomposite

    NASA Astrophysics Data System (ADS)

    Paolo, La Mantia Francesco; Manuela, Ceraulo; Chiara, Mistretta Maria; Fiorenza, Sutera; Laura, Ascione

    2015-12-01

Nanobiocomposites are a new class of biodegradable polymer materials that show very interesting properties together with the biodegradability of the matrix. In this work, the effect of the polarity of the organomodified montmorillonite and of the elongational flow on the morphology and the rheological and mechanical properties of a new nanobiocomposite, having as a matrix a biodegradable copolyester-based blend, has been investigated. The mechanical properties increase in the presence of the nanofiller, and this increase grows with increasing orientation. Moreover, a brittle-to-ductile transition is observed in the anisotropic sample, and this effect is again larger for the nanocomposite. The increase of the interlayer distance is larger for the more polar montmorillonite, even if the two nanocomposites show about the same final interlayer distance.

  8. Relationship between second-generation frequency doubling technology and standard automated perimetry in patients with glaucoma.

    PubMed

    Zarkovic, Andrea; Mora, Justin; McKelvie, James; Gamble, Greg

    2007-12-01

The aim of the study was to establish the correlation between visual field loss as shown by second-generation Frequency Doubling Technology (Humphrey Matrix) and Standard Automated Perimetry (Humphrey Field Analyser) in patients with glaucoma. Test duration and reliability were also compared. Forty right eyes of glaucoma patients from a private ophthalmology practice were included in this prospective study. All participants had tests within an 8-month period. Pattern deviation plots and mean deviation were compared to establish the correlation between the two perimetry tests. Overall correlation and correlation between hemifields, quadrants and individual test locations were assessed. Humphrey Field Analyser tests were slightly more reliable (37/40 vs. 34/40 for Matrix) but overall of longer duration. There was good correlation (0.69) between mean deviations. Superior hemifields and superonasal quadrants had the highest correlation (0.88 [95% CI 0.79, 0.94]). Correlation between individual points was independent of distance from the macula. Generally, the Matrix and Humphrey Field Analyser perimetry correlate well; however, each machine utilizes a different method of analysing data and thus direct comparison should be made with caution.

  9. Parity among interpretation methods of MLEE patterns and disparity among clustering methods in epidemiological typing of Candida albicans.

    PubMed

    Boriollo, Marcelo Fabiano Gomes; Rosa, Edvaldo Antonio Ribeiro; Gonçalves, Reginaldo Bruno; Höfling, José Francisco

    2006-03-01

    The typing of C. albicans by MLEE (multilocus enzyme electrophoresis) is dependent on the interpretation of enzyme electrophoretic patterns, and the study of the epidemiological relationships of these yeasts can be conducted by cluster analysis. Therefore, the aims of the present study were to first determine the discriminatory power of genetic interpretation (deduction of the allelic composition of diploid organisms) and numerical interpretation (mere determination of the presence and absence of bands) of MLEE patterns, and then to determine the concordance (Pearson product-moment correlation coefficient) and similarity (Jaccard similarity coefficient) of the groups of strains generated by three cluster analysis models, and the discriminatory power of such models as well [model A: genetic interpretation, genetic distance matrix of Nei (d(ij)) and UPGMA dendrogram; model B: genetic interpretation, Dice similarity matrix (S(D1)) and UPGMA dendrogram; model C: numerical interpretation, Dice similarity matrix (S(D2)) and UPGMA dendrogram]. MLEE was found to be a powerful and reliable tool for the typing of C. albicans due to its high discriminatory power (>0.9). Discriminatory power indicated that numerical interpretation is a method capable of discriminating a greater number of strains (47 versus 43 subtypes), but also pointed to model B as a method capable of providing a greater number of groups, suggesting its use for the typing of C. albicans by MLEE and cluster analysis. Very good agreement was only observed between the elements of the matrices S(D1) and S(D2), but a large majority of the groups generated in the three UPGMA dendrograms showed similarity S(J) between 4.8% and 75%, suggesting disparities in the conclusions obtained by the cluster assays.
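Model C above (numerical interpretation, Dice similarity, UPGMA dendrogram) can be sketched on hypothetical band-presence data; SciPy's 'dice' metric returns a dissimilarity, i.e. 1 − S(D):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

# Hypothetical MLEE band patterns for four strains (1 = band present).
bands = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 1],
], dtype=bool)

# Dice dissimilarity between band patterns, followed by
# UPGMA (average-linkage) hierarchical clustering.
d = pdist(bands, metric='dice')
tree = linkage(d, method='average')
```

The linkage matrix `tree` encodes the UPGMA dendrogram: here strains 1 and 2 (and 3 and 4) merge first at Dice dissimilarity 0.2, i.e. similarity 0.8.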

  10. Typing of Ochrobactrum anthropi clinical isolates using automated repetitive extragenic palindromic-polymerase chain reaction DNA fingerprinting and matrix-assisted laser desorption/ionization-time-of-flight mass spectrometry.

    PubMed

    Quirino, Angela; Pulcrano, Giovanna; Rametti, Linda; Puccio, Rossana; Marascio, Nadia; Catania, Maria Rosaria; Matera, Giovanni; Liberto, Maria Carla; Focà, Alfredo

    2014-03-22

Ochrobactrum anthropi (O. anthropi) is a non-fermenting gram-negative bacillus usually found in the environment. Nevertheless, during the past decade it has been identified as pathogenic to immunocompromised patients. In this study, we assessed the usefulness of the automated repetitive extragenic palindromic-polymerase chain reaction (rep-PCR-based DiversiLab™ system, bioMérieux, France) and of matrix-assisted laser desorption/ionization-time-of-flight mass spectrometry (MALDI-TOF MS) for typing of twenty-three O. anthropi clinical isolates that we found over a four-month period (from April 2011 to August 2011) in bacteremic patients admitted to the same operative unit of our hospital. Pulsed-field gel electrophoresis (PFGE), commonly accepted as the gold standard technique for typing, was also used. Analysis was carried out using the Pearson correlation coefficient to determine the distance matrix and the unweighted pair group method with arithmetic mean (UPGMA) to generate the dendrogram. Rep-PCR analysis identified four different patterns: three that clustered together with 97% or more pattern similarity, and one whose members showed < 95% pattern similarity. Interestingly, strains isolated later (from 11/06/2011 to 24/08/2011) displayed a pattern with 99% similarity. MALDI-TOF MS evaluation clustered the twenty-three strains of O. anthropi into a single group containing four distinct subgroups, each comprising the majority of strains clustering below 5 distance levels, indicating a high similarity between the isolates. Our results indicate that these isolates are clonally related, and the methods used afforded a valuable contribution to the epidemiology, prevention and control of the infections caused by this pathogen.

  11. Problems with small area surveys: lensing covariance of supernova distance measurements.

    PubMed

    Cooray, Asantha; Huterer, Dragan; Holz, Daniel E

    2006-01-20

While luminosity distances from type Ia supernovae (SNe) are a powerful probe of cosmology, the accuracy with which these distances can be measured is limited by cosmic magnification due to gravitational lensing by the intervening large-scale structure. Spatial clustering of foreground mass leads to correlated errors in SNe distances. By including the full covariance matrix of SNe, we show that future wide-field surveys will remain largely unaffected by lensing correlations. However, "pencil beam" surveys, and those with narrow (but possibly long) fields of view, can be strongly affected. For a survey with 30 arcmin mean separation between SNe, lensing covariance leads to an approximately 45% increase in the expected errors in dark energy parameters.

  12. Random Weighting, Strong Tracking, and Unscented Kalman Filter for Soft Tissue Characterization.

    PubMed

    Shin, Jaehyun; Zhong, Yongmin; Oetomo, Denny; Gu, Chengfan

    2018-05-21

This paper presents a new nonlinear filtering method based on the Hunt-Crossley model for online nonlinear soft tissue characterization. This method overcomes the problem of performance degradation in the unscented Kalman filter due to contact model error. It adopts the concept of the Mahalanobis distance to identify contact model error, and further incorporates a scaling factor in the predicted state covariance to compensate for the identified model error. This scaling factor is determined according to the principle of innovation orthogonality to avoid the cumbersome computation of the Jacobian matrix, and the random weighting concept is adopted to improve the estimation accuracy of the innovation covariance. A master-slave robotic indentation system is developed to validate the performance of the proposed method. Simulation and experimental results as well as comparison analyses demonstrate the efficacy of the proposed method for online characterization of soft tissue parameters in the presence of contact model error.
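The Mahalanobis-distance test for contact model error can be sketched as a gate on the filter innovation. The covariance, innovation vectors, and the 95% chi-square threshold for two degrees of freedom below are illustrative, not values from the paper:

```python
import numpy as np

def mahalanobis_sq(innovation, S):
    """Squared Mahalanobis distance of an innovation given its
    predicted covariance S; large values flag contact-model error."""
    return float(innovation @ np.linalg.solve(S, innovation))

CHI2_95_2DOF = 5.99    # 95% chi-square gate for 2 degrees of freedom

S = np.array([[2.0, 0.0], [0.0, 0.5]])
ok = mahalanobis_sq(np.array([0.5, 0.1]), S)    # consistent innovation
bad = mahalanobis_sq(np.array([5.0, 2.0]), S)   # flags model error
```

Innovations below the gate are treated as model-consistent; those above it would trigger the covariance-scaling compensation the abstract describes.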

  13. A region-based segmentation method for ultrasound images in HIFU therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Dong, E-mail: dongz@whu.edu.cn; Liu, Yu; Yang, Yan

Purpose: Precisely and efficiently locating a tumor with less manual intervention in ultrasound-guided high-intensity focused ultrasound (HIFU) therapy is one of the keys to guaranteeing the therapeutic result and improving the efficiency of the treatment. The segmentation of ultrasound images has always been difficult due to the influences of speckle, acoustic shadows, and signal attenuation as well as the variety of tumor appearance. The quality of HIFU guidance images is even poorer than that of conventional diagnostic ultrasound images because the ultrasonic probe used for HIFU guidance usually obtains images without making contact with the patient's body. Therefore, the segmentation becomes more difficult. To solve the segmentation problem of the ultrasound guidance image in the treatment planning procedure for HIFU therapy, a novel region-based segmentation method for uterine fibroids in HIFU guidance images is proposed. Methods: Tumor partitioning in the HIFU guidance image without manual intervention is achieved by a region-based split-and-merge framework. A new iterative multiple region growing algorithm is proposed to first split the image into homogeneous regions (superpixels). The features extracted within these homogeneous regions will be more stable than those extracted within the conventional neighborhood of a pixel. The split regions are then merged by a superpixel-based adaptive spectral clustering algorithm. To ensure that the superpixels belonging to the same tumor can be clustered together in the merging process, a particular construction strategy for the similarity matrix is adopted for the spectral clustering; the similarity matrix is constructed by taking advantage of a combination of specifically selected first-order and second-order texture features computed from the gray levels and the gray level co-occurrence matrices, respectively.
The tumor region is picked out automatically from the background regions by an algorithm according to a priori information about the tumor position, shape, and size. Additionally, an appropriate cluster number for spectral clustering can be determined by the same algorithm; thus, the automatic segmentation of the tumor region is achieved. Results: To evaluate the performance of the proposed method, 50 uterine fibroid ultrasound images from different patients receiving HIFU therapy were segmented, and the obtained tumor contours were compared with those delineated by an experienced radiologist. For area-based evaluation results, the mean values of the true positive ratio, the false positive ratio, and the similarity were 94.42%, 4.71%, and 90.21%, respectively, and the corresponding standard deviations were 2.54%, 3.12%, and 3.50%, respectively. For distance-based evaluation results, the mean values of the normalized Hausdorff distance and the normalized mean absolute distance were 4.93% and 0.90%, respectively, and the corresponding standard deviations were 2.22% and 0.34%, respectively. The running time of the segmentation process was 12.9 s for a 318 × 333 (pixels) image. Conclusions: Experiments show that the proposed method can segment the tumor region accurately and efficiently with less manual intervention, which provides for the possibility of automatic segmentation and real-time guidance in HIFU therapy.
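The area-based figures above can be reproduced in form (not in value) with standard overlap metrics. The mask shapes here are made up, and the exact metric definitions in the paper may differ slightly:

```python
import numpy as np

def area_metrics(seg, gt):
    """Area-based evaluation: true positive ratio, false positive ratio,
    and overlap similarity between a segmentation mask and ground truth.
    (Common choices; not necessarily the paper's exact formulas.)"""
    seg, gt = seg.astype(bool), gt.astype(bool)
    tp = np.logical_and(seg, gt).sum()
    fp = np.logical_and(seg, ~gt).sum()
    tpr = tp / gt.sum()                       # fraction of tumor covered
    fpr = fp / gt.sum()                       # false area relative to tumor
    sim = tp / np.logical_or(seg, gt).sum()   # Jaccard-style overlap
    return tpr, fpr, sim

# Hypothetical masks: segmentation shifted one row from ground truth.
gt = np.zeros((20, 20)); gt[5:15, 5:15] = 1
seg = np.zeros((20, 20)); seg[6:16, 5:15] = 1
tpr, fpr, sim = area_metrics(seg, gt)
```

A one-row shift of a 10 × 10 mask yields a true positive ratio of 0.9, a false positive ratio of 0.1, and an overlap similarity of about 0.82.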

  14. A protein relational database and protein family knowledge bases to facilitate structure-based design analyses.

    PubMed

    Mobilio, Dominick; Walker, Gary; Brooijmans, Natasja; Nilakantan, Ramaswamy; Denny, R Aldrin; Dejoannis, Jason; Feyfant, Eric; Kowticwar, Rupesh K; Mankala, Jyoti; Palli, Satish; Punyamantula, Sairam; Tatipally, Maneesh; John, Reji K; Humblet, Christine

    2010-08-01

    The Protein Data Bank is the most comprehensive source of experimental macromolecular structures. It can, however, be difficult at times to locate relevant structures with the Protein Data Bank search interface. This is particularly true when searching for complexes containing specific interactions between protein and ligand atoms. Moreover, searching within a family of proteins can be tedious. For example, one cannot search for some conserved residue as residue numbers vary across structures. We describe herein three databases, Protein Relational Database, Kinase Knowledge Base, and Matrix Metalloproteinase Knowledge Base, containing protein structures from the Protein Data Bank. In Protein Relational Database, atom-atom distances between protein and ligand have been precalculated allowing for millisecond retrieval based on atom identity and distance constraints. Ring centroids, centroid-centroid and centroid-atom distances and angles have also been included permitting queries for pi-stacking interactions and other structural motifs involving rings. Other geometric features can be searched through the inclusion of residue pair and triplet distances. In Kinase Knowledge Base and Matrix Metalloproteinase Knowledge Base, the catalytic domains have been aligned into common residue numbering schemes. Thus, by searching across Protein Relational Database and Kinase Knowledge Base, one can easily retrieve structures wherein, for example, a ligand of interest is making contact with the gatekeeper residue.

  15. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

This paper proposes an improved image registration method combining Hu invariant moment contour information and feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as a time-consuming feature point extraction process, an overload of redundant invalid information, and inefficiency. First, the neighborhood of pixels is used to extract contour information, employing the Hu invariant moments as a similarity measure to extract SIFT feature points in the similar regions. Then, the Euclidean distance is replaced with the Hellinger kernel function to improve the initial matching efficiency and obtain fewer mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and the improved multiresolution fusion algorithm is used to fuse the mosaic images and realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the method proposed in this paper.
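The Hellinger-kernel substitution for descriptor matching can be sketched on L1-normalized vectors; the descriptor values below are made up:

```python
import numpy as np

def hellinger_kernel(h1, h2):
    """Hellinger (Bhattacharyya) kernel between descriptors:
    L1-normalize, then sum the square roots of elementwise products."""
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return np.sum(np.sqrt(h1 * h2))

a = np.array([4.0, 1.0, 1.0, 2.0])
b = np.array([4.0, 1.0, 1.0, 2.0])
c = np.array([0.0, 6.0, 1.0, 1.0])
k_same = hellinger_kernel(a, b)   # identical descriptors score 1
k_diff = hellinger_kernel(a, c)   # dissimilar descriptors score lower
```

The kernel equals 1 for identical normalized descriptors and decreases toward 0 as they diverge, so ranking candidate matches by it plays the role the Euclidean distance plays in standard SIFT matching.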

  16. Pasture succession in the Neotropics: extending the nucleation hypothesis into a matrix discontinuity hypothesis.

    PubMed

    Peterson, Chris J; Dosch, Jerald J; Carson, Walter P

    2014-08-01

    The nucleation hypothesis appears to explain widespread patterns of succession in tropical pastures, specifically the tendency for isolated trees to promote woody species recruitment. Still, the nucleation hypothesis has usually been tested explicitly for only short durations and in some cases isolated trees fail to promote woody recruitment. Moreover, at times, nucleation occurs in other key habitat patches. Thus, we propose an extension, the matrix discontinuity hypothesis: woody colonization will occur in focal patches that function to mitigate the herbaceous vegetation effects, thus providing safe sites or regeneration niches. We tested predictions of the classical nucleation hypothesis, the matrix discontinuity hypothesis, and a distance from forest edge hypothesis, in five abandoned pastures in Costa Rica, across the first 11 years of succession. Our findings confirmed the matrix discontinuity hypothesis: specifically, rotting logs and steep slopes significantly enhanced woody colonization. Surprisingly, isolated trees did not consistently significantly enhance recruitment; only larger trees did so. Finally, woody recruitment consistently decreased with distance from forest. Our results as well as results from others suggest that the nucleation hypothesis needs to be broadened beyond its historical focus on isolated trees or patches; the matrix discontinuity hypothesis focuses attention on a suite of key patch types or microsites that promote woody species recruitment. We argue that any habitat discontinuities that ameliorate the inhibition by dense graminoid layers will be foci for recruitment. Such patches could easily be manipulated to speed the transition of pastures to closed canopy forests.

  17. Algebraic reconstruction for 3D magnetic resonance-electrical impedance tomography (MREIT) using one component of magnetic flux density.

    PubMed

    Ider, Y Ziya; Onart, Serkan

    2004-02-01

    Magnetic resonance-electrical impedance tomography (MREIT) algorithms fall into two categories: those utilizing internal current density and those utilizing only one component of measured magnetic flux density. The latter group of algorithms have the advantage that the object does not have to be rotated in the magnetic resonance imaging (MRI) system. A new algorithm which uses only one component of measured magnetic flux density is developed. In this method, the imaging problem is formulated as the solution of a non-linear matrix equation which is solved iteratively to reconstruct resistivity. Numerical simulations are performed to test the algorithm both for noise-free and noisy cases. The uniqueness of the solution is monitored by looking at the singular value behavior of the matrix and it is shown that at least two current injection profiles are necessary. The method is also modified to handle region-of-interest reconstructions. In particular it is shown that, if the image of a certain xy-slice is sought for, then it suffices to measure the z-component of magnetic flux density up to a distance above and below that slice. The method is robust and has good convergence behavior for the simulation phantoms used.

  18. Self-organized topology of recurrence-based complex networks

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Liu, Gang

    2013-12-01

    With the rapid technological advancement, network is almost everywhere in our daily life. Network theory leads to a new way to investigate the dynamics of complex systems. As a result, many methods are proposed to construct a network from nonlinear time series, including the partition of state space, visibility graph, nearest neighbors, and recurrence approaches. However, most previous works focus on deriving the adjacency matrix to represent the complex network and extract new network-theoretic measures. Although the adjacency matrix provides connectivity information of nodes and edges, the network geometry can take variable forms. The research objective of this article is to develop a self-organizing approach to derive the steady geometric structure of a network from the adjacency matrix. We simulate the recurrence network as a physical system by treating the edges as springs and the nodes as electrically charged particles. Then, force-directed algorithms are developed to automatically organize the network geometry by minimizing the system energy. Further, a set of experiments were designed to investigate important factors (i.e., dynamical systems, network construction methods, force-model parameter, nonhomogeneous distribution) affecting this self-organizing process. Interestingly, experimental results show that the self-organized geometry recovers the attractor of a dynamical system that produced the adjacency matrix. This research addresses a question, i.e., "what is the self-organizing geometry of a recurrence network?" and provides a new way to reproduce the attractor or time series from the recurrence plot. As a result, novel network-theoretic measures (e.g., average path length and proximity ratio) can be achieved based on actual node-to-node distances in the self-organized network topology. The paper brings the physical models into the recurrence analysis and discloses the spatial geometry of recurrence networks.
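
The spring-and-charge analogy above can be sketched in a few lines of NumPy. This is a generic force-directed layout under assumed constants (`k_spring`, `k_repel`, step size, iteration count), not the authors' exact algorithm: each adjacency-matrix edge acts as a Hooke spring, and every node pair repels like charges.

```python
import numpy as np

def force_directed_layout(A, dim=2, k_spring=0.05, k_repel=0.01,
                          step=0.05, n_iter=2000, seed=0):
    """Organize node positions from an adjacency matrix by treating edges
    as springs and nodes as like-charged particles (assumed constants)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    pos = rng.standard_normal((n, dim))
    for _ in range(n_iter):
        diff = pos[:, None, :] - pos[None, :, :]           # pairwise displacements
        dist = np.linalg.norm(diff, axis=-1) + np.eye(n)   # avoid divide-by-zero
        repel = k_repel * diff / dist[..., None] ** 3      # Coulomb-like repulsion
        attract = -k_spring * A[..., None] * diff          # Hooke springs on edges
        pos += step * (repel + attract).sum(axis=1)
        pos -= pos.mean(axis=0)                            # keep layout centered
    return pos

# A 12-node ring: the relaxed layout tends toward a circle-like geometry,
# illustrating how the self-organized geometry reflects the adjacency structure.
n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
pos = force_directed_layout(A)
```

Energy minimization here is implicit: the update follows the net force, which is the negative gradient of the spring-plus-repulsion energy.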

  19. Self-organized topology of recurrence-based complex networks.

    PubMed

    Yang, Hui; Liu, Gang

    2013-12-01

    With the rapid technological advancement, network is almost everywhere in our daily life. Network theory leads to a new way to investigate the dynamics of complex systems. As a result, many methods are proposed to construct a network from nonlinear time series, including the partition of state space, visibility graph, nearest neighbors, and recurrence approaches. However, most previous works focus on deriving the adjacency matrix to represent the complex network and extract new network-theoretic measures. Although the adjacency matrix provides connectivity information of nodes and edges, the network geometry can take variable forms. The research objective of this article is to develop a self-organizing approach to derive the steady geometric structure of a network from the adjacency matrix. We simulate the recurrence network as a physical system by treating the edges as springs and the nodes as electrically charged particles. Then, force-directed algorithms are developed to automatically organize the network geometry by minimizing the system energy. Further, a set of experiments were designed to investigate important factors (i.e., dynamical systems, network construction methods, force-model parameter, nonhomogeneous distribution) affecting this self-organizing process. Interestingly, experimental results show that the self-organized geometry recovers the attractor of a dynamical system that produced the adjacency matrix. This research addresses a question, i.e., "what is the self-organizing geometry of a recurrence network?" and provides a new way to reproduce the attractor or time series from the recurrence plot. As a result, novel network-theoretic measures (e.g., average path length and proximity ratio) can be achieved based on actual node-to-node distances in the self-organized network topology. The paper brings the physical models into the recurrence analysis and discloses the spatial geometry of recurrence networks.

  20. Self-organized topology of recurrence-based complex networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Hui, E-mail: huiyang@usf.edu; Liu, Gang

    With the rapid technological advancement, network is almost everywhere in our daily life. Network theory leads to a new way to investigate the dynamics of complex systems. As a result, many methods are proposed to construct a network from nonlinear time series, including the partition of state space, visibility graph, nearest neighbors, and recurrence approaches. However, most previous works focus on deriving the adjacency matrix to represent the complex network and extract new network-theoretic measures. Although the adjacency matrix provides connectivity information of nodes and edges, the network geometry can take variable forms. The research objective of this article is to develop a self-organizing approach to derive the steady geometric structure of a network from the adjacency matrix. We simulate the recurrence network as a physical system by treating the edges as springs and the nodes as electrically charged particles. Then, force-directed algorithms are developed to automatically organize the network geometry by minimizing the system energy. Further, a set of experiments were designed to investigate important factors (i.e., dynamical systems, network construction methods, force-model parameter, nonhomogeneous distribution) affecting this self-organizing process. Interestingly, experimental results show that the self-organized geometry recovers the attractor of a dynamical system that produced the adjacency matrix. This research addresses a question, i.e., "what is the self-organizing geometry of a recurrence network?" and provides a new way to reproduce the attractor or time series from the recurrence plot. As a result, novel network-theoretic measures (e.g., average path length and proximity ratio) can be achieved based on actual node-to-node distances in the self-organized network topology. The paper brings the physical models into the recurrence analysis and discloses the spatial geometry of recurrence networks.

  1. Targeting functional motifs of a protein family

    NASA Astrophysics Data System (ADS)

    Bhadola, Pradeep; Deo, Nivedita

    2016-10-01

    The structural organization of a protein family is investigated by devising a method based on random matrix theory (RMT), which uses the physicochemical properties of the amino acids with multiple sequence alignment. A graphical method to represent protein sequences using physicochemical properties is devised that gives a fast, easy, and informative way of comparing the evolutionary distances between protein sequences. A correlation matrix associated with each property is calculated, where noise reduction and information filtering are done using RMT involving an ensemble of Wishart matrices. The analysis of the eigenvalue statistics of the correlation matrix for the β-lactamase family shows the universal features observed in the Gaussian orthogonal ensemble (GOE). The property-based approach captures the short- as well as the long-range correlations (approximately following GOE) between the eigenvalues, whereas the previous approach (treating amino acids as characters) gives the usual short-range correlations, while the long-range correlations are the same as those of an uncorrelated series. The distribution of the eigenvector components for the eigenvalues outside the bulk (RMT bound) deviates significantly from RMT predictions and contains important information about the system. The information content of each eigenvector of the correlation matrix is quantified by introducing an entropic estimate, which shows that for the β-lactamase family the smallest eigenvectors (low eigenmodes) are highly localized as well as informative. These small eigenvectors, when processed, give clusters involving positions that have well-defined biological and structural importance, matching experiments. The approach is crucial for the recognition of structural motifs, as shown for β-lactamase (and other families), and selectively identifies important positions as targets for deactivating (activating) enzymatic action.
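
The Wishart-ensemble filtering step can be illustrated with a minimal sketch: eigenvalues of an empirical correlation matrix are compared against the Marchenko-Pastur bulk edge expected for purely random data, and eigenmodes above that edge are treated as informative. The data matrix, the planted collective mode, and the coupling strength below are all hypothetical, not the paper's β-lactamase data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "property" data: T samples of N variables (e.g. one
# physicochemical property at N aligned positions), with one planted mode.
N, T = 50, 400
X = rng.standard_normal((T, N))
common = rng.standard_normal(T)
X[:, :10] += 0.8 * common[:, None]        # 10 correlated positions

# Empirical correlation matrix of the standardized data
Xs = (X - X.mean(0)) / X.std(0)
C = (Xs.T @ Xs) / T
evals = np.linalg.eigvalsh(C)

# Marchenko-Pastur bulk edges for a random correlation matrix, Q = T/N
Q = T / N
lam_minus = (1 - np.sqrt(1 / Q)) ** 2
lam_plus = (1 + np.sqrt(1 / Q)) ** 2

# Eigenmodes above the bulk edge carry structure beyond random noise
informative = evals[evals > lam_plus]
```

The planted 10-position mode produces one eigenvalue well above `lam_plus`, while the rest of the spectrum stays inside the random bulk.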

  2. Calculation of electronic coupling matrix elements for ground and excited state electron transfer reactions: Comparison of the generalized Mulliken–Hush and block diagonalization methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cave, R.J.; Newton, M.D.

    1997-06-01

    Two independent methods are presented for the nonperturbative calculation of the electronic coupling matrix element (H_ab) for electron transfer reactions using ab initio electronic structure theory. The first is based on the generalized Mulliken–Hush (GMH) model, a multistate generalization of the Mulliken–Hush formalism for the electronic coupling. The second is based on the block diagonalization (BD) approach of Cederbaum, Domcke, and co-workers. Detailed quantitative comparisons of the two methods are carried out based on results for (a) several states of the system Zn2OH2+ and (b) the low-lying states of the benzene–Cl atom complex and its contact ion pair. Generally good agreement between the two methods is obtained over a range of geometries. Either method can be applied at an arbitrary nuclear geometry and, as a result, may be used to test the validity of the Condon approximation. Examples of nonmonotonic behavior of the electronic coupling as a function of nuclear coordinates are observed for Zn2OH2+. Both methods also yield a natural definition of the effective distance (r_DA) between donor (D) and acceptor (A) sites, in contrast to earlier approaches which required independent estimates of r_DA, generally based on molecular structure data. © 1997 American Institute of Physics.

  3. Distribution of organic matrix in calcium oxalate renal calculi.

    PubMed

    Warpehoski, M A; Buscemi, P J; Osborn, D C; Finlayson, B; Goldberg, E P

    1981-01-01

    The quantity of protein and carbohydrate comprising the matrix of calcium oxalate monohydrate (COM) renal stones was found to decrease with distance from the surface of the stone. The average organic concentration of stones 3 to 30 mm in diameter ranged from 5.7% at the surface to 2.7% at the core. This concentration gradient suggests matrix involvement in a "growth front" on stone surfaces with migration of organic material from the "older" interior. The matrix distribution was not readily correlated with density variations or with the presence of hydroxyapatite or calcium oxalate dihydrate. Surface matrix concentrations were greater than amounts predicted by physical adsorption. Electron microscopy confirmed the presence of the organic-rich surface layer and also suggested that increase in stone size occurs predominantly by crystal growth with microcrystal aggregates as growth centers.

  4. A Mixed Methods Approach to Code Stakeholder Beliefs in Urban Water Governance

    NASA Astrophysics Data System (ADS)

    Bell, E. V.; Henry, A.; Pivo, G.

    2017-12-01

    What is a reliable way to code policies to represent belief systems? The Advocacy Coalition Framework posits that public policies may be viewed as manifestations of belief systems. Belief systems include both ontological beliefs about cause-and-effect relationships and policy effectiveness, as well as normative beliefs about appropriate policy instruments and the relative value of different outcomes. The idea that belief systems are embodied in public policy is important for urban water governance because it trains our focus on belief conflict; this can help us understand why many water-scarce cities do not adopt innovative technology despite available scientific information. To date, there has been very little research on systematic, rigorous methods to measure the belief system content of public policies. We address this by testing the relationship between beliefs and policy participation to develop an innovative coding framework. With a focus on urban water governance in Tucson, Arizona, we analyze grey literature on local water management. Mentioned policies are coded into a typology of common approaches identified in the urban water governance literature, which include regulation, education, price and non-price incentives, green infrastructure, and other types of technology. We then survey local water stakeholders about their perceptions of these policies. Urban water governance requires coordination of organizations from multiple sectors, and we cannot assume that belief development and policy participation occur in a vacuum. Thus, we use a generalized exponential random graph model to test the relationship between perceptions and policy participation in the Tucson water governance network. We measure policy perceptions for organizations by averaging across their respective affiliated respondents and generating a belief distance matrix of coordinating network participants. Similarly, we generate a distance matrix of these actors based on the frequency of their participation in each of the aforementioned policy types. By linking these perceptions and policies, we develop a coding frame that can supplement future content analysis when survey methods are not viable.
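
The averaging-then-distance step can be sketched as follows. The organizations, policy types, and ratings are entirely hypothetical, and Euclidean distance is an assumed choice of metric; the paper does not specify these details here.

```python
import numpy as np

# Hypothetical survey: respondents rate 5 policy types on a 1-5 scale,
# and each respondent is affiliated with one organization.
ratings = np.array([
    [5, 4, 2, 1, 3],   # respondent 0 -> org 0
    [4, 4, 3, 1, 2],   # respondent 1 -> org 0
    [1, 2, 5, 4, 4],   # respondent 2 -> org 1
    [2, 1, 4, 5, 5],   # respondent 3 -> org 1
    [3, 3, 3, 3, 3],   # respondent 4 -> org 2
], dtype=float)
org_of = np.array([0, 0, 1, 1, 2])

# Average respondents within each organization to get org-level beliefs
n_orgs = org_of.max() + 1
beliefs = np.vstack([ratings[org_of == k].mean(axis=0) for k in range(n_orgs)])

# Pairwise Euclidean belief distance matrix between organizations
diff = beliefs[:, None, :] - beliefs[None, :, :]
D = np.linalg.norm(diff, axis=-1)
```

In this toy data, organizations 0 and 1 hold opposed ratings while organization 2 is neutral, so `D[0, 1]` comes out larger than `D[0, 2]`.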

  5. A Semi-Analytical Model for Dispersion Modelling Studies in the Atmospheric Boundary Layer

    NASA Astrophysics Data System (ADS)

    Gupta, A.; Sharan, M.

    2017-12-01

    The severe impact of harmful air pollutants has always been a cause of concern for a wide variety of air quality analyses. Analytical models based on the solution of the advection-diffusion equation have been, and remain, a convenient way of modeling air pollutant dispersion, as it is easy to handle the dispersion parameters and related physics in them. A mathematical model describing the crosswind-integrated concentration is presented. The analytical solution to the resulting advection-diffusion equation is limited to constant and simple profiles of eddy diffusivity and wind speed. In practice, the wind speed depends on the vertical height above the ground, and the eddy diffusivity profiles depend on the downwind distance from the source as well as the vertical height. In the present model, a method of eigenfunction expansion is used to solve the resulting partial differential equation with the appropriate boundary conditions. This leads to a system of first-order ordinary differential equations with a coefficient matrix depending on the downwind distance. The solution of this system, in general, can be expressed in terms of the Peano-Baker series, which is not easy to compute, particularly when the coefficient matrix becomes non-commutative (Martin et al., 1967). An approach based on Taylor series expansion is introduced to find the numerical solution of the first-order system. The method is applied to various profiles of wind speed and eddy diffusivities. The solution computed from the proposed methodology is found to be efficient and accurate in comparison to those available in the literature. The performance of the model is evaluated with the diffusion datasets from Copenhagen (Gryning et al., 1987) and Hanford (Doran et al., 1985). In addition, the proposed method is used to deduce three-dimensional concentrations by assuming a Gaussian distribution in the crosswind direction, which is also evaluated with diffusion data corresponding to a continuous point source.
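
The reduction to a first-order ODE system with a distance-dependent coefficient matrix can be illustrated generically. The sketch below is not the authors' Taylor-series scheme; it simply marches c'(x) = A(x) c(x) forward by freezing A at each step midpoint and applying a truncated-Taylor matrix exponential, which sidesteps the Peano-Baker series for small steps.

```python
import numpy as np

def expm_taylor(M, terms=12):
    """Matrix exponential by truncated Taylor series (fine for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def solve_system(A_of_x, c0, x_end, n_steps=200):
    """March c'(x) = A(x) c(x) forward, freezing A at each step midpoint."""
    h = x_end / n_steps
    c = np.array(c0, dtype=float)
    for i in range(n_steps):
        c = expm_taylor(A_of_x((i + 0.5) * h) * h) @ c
    return c

# Scalar sanity check: c' = x c  =>  c(1) = c(0) * exp(1/2)
c = solve_system(lambda x: np.array([[x]]), [1.0], 1.0)
```

For this scalar test the midpoint rule integrates the linear coefficient exactly, so the marched value matches exp(1/2) to numerical precision.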

  6. Screening effect in matrix graphene/SiC planar field emitters

    NASA Astrophysics Data System (ADS)

    Jityaev, I. L.; Svetlichnyi, A. M.; Kolomiytsev, A. S.; Ageev, O. A.

    2017-11-01

    The paper describes the simulation of matrix field emission nanostructures based on graphene on semi-insulating silicon carbide. Planar spike-type field emission cathodes were investigated. The electric field distribution in the interelectrode gap of the emission structure was obtained. The models take into account the distance between cathode tops. A screening-effect condition was identified in the planar field emission structure, and a way of eliminating it was proposed.

  7. Local order structure and surface acidity properties of a Nb2O5/SiO2 mixed oxide prepared by the sol-gel processing method

    NASA Astrophysics Data System (ADS)

    Francisco, Maria Suzana P.; Landers, Richard; Gushikem, Yoshitaka

    2004-07-01

    The sol-gel processing method was used as an alternative route to obtain the Nb2O5 phase homogeneously dispersed in the SiO2 matrix, improving the thermal stability of the Brønsted acid sites, Nb-OH and Nb-OH-Si groups. The local niobium structure and the influence of the amount of niobia on the surface of the Nb2O5/SiO2 system were studied by XAS and XPS, respectively. For the samples calcined at 423 and 873 K, the Nb 3d5/2 BE values are at ca. 208.2 eV, indicating an ionic character for the Nb(V) species in the SiO2 matrix, probably associated with Si-O-Nb linkages. The features of the Nb K-edge XANES spectra of the samples show the absence of NbO species. The Nb K-edge EXAFS oscillations exhibit a shoulder at ca. 5.6 Å⁻¹, which probably arises from Nb-O-Si. This fact corroborates the EXAFS simulation data of the second coordination shell, whose best fitting is achieved with three distances, two Nb-Nb lengths and one Nb-Si.

  8. Cluster structure of EU-15 countries derived from the correlation matrix analysis of macroeconomic index fluctuations

    NASA Astrophysics Data System (ADS)

    Gligor, M.; Ausloos, M.

    2007-05-01

    The statistical distances between countries, calculated for various moving average time windows, are mapped into the ultrametric subdominant space as in classical Minimal Spanning Tree methods. The Moving Average Minimal Length Path (MAMLP) algorithm allows a decoupling of fluctuations with respect to the mass center of the system from the movement of the mass center itself. A Hamiltonian representation given by a factor graph is used and plays the role of cost function. The present analysis pertains to 11 macroeconomic (ME) indicators, namely the GDP (x1), Final Consumption Expenditure (x2), Gross Capital Formation (x3), Net Exports (x4), Consumer Price Index (y1), Rates of Interest of the Central Banks (y2), Labour Force (z1), Unemployment (z2), GDP/hour worked (z3), GDP/capita (w1) and Gini coefficient (w2). The target group of countries is composed of 15 EU countries, data taken between 1995 and 2004. By two different methods (the Bipartite Factor Graph Analysis and the Correlation Matrix Eigensystem Analysis) it is found that the strongly correlated countries with respect to the macroeconomic indicators fluctuations can be partitioned into stable clusters.

  9. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-05-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments assimilating simulated observations into the bivariate Lorenz 95 model.
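
The Schur-product localization described above can be sketched directly. The taper below is a simple linear compactly supported function standing in for e.g. a Gaspari-Cohn correlation function, and the toy covariance model and ensemble size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D state of 40 grid points whose true covariance decays with distance.
n, n_ens = 40, 10
x = np.arange(n)
true_cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 4.0)
Lc = np.linalg.cholesky(true_cov + 1e-10 * np.eye(n))
ens = Lc @ rng.standard_normal((n, n_ens))     # small ensemble of states
P_sample = np.cov(ens)                         # noisy sample covariance

# Distance-dependent taper (a compactly supported stand-in for e.g. a
# Gaspari-Cohn function), cutting covariances beyond 15 grid points.
dist = np.abs(x[:, None] - x[None, :])
taper = np.clip(1.0 - dist / 15.0, 0.0, None)
P_loc = P_sample * taper                       # Schur (entry-wise) product
```

With only 10 members the sample covariance carries spurious long-range correlations; the entry-wise product zeroes them out and brings the estimate closer to the true covariance.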

  10. Target-adaptive polarimetric synthetic aperture radar target discrimination using maximum average correlation height filters.

    PubMed

    Sadjadi, Firooz A; Mahalanobis, Abhijit

    2006-05-01

    We report the development of a technique for adaptive selection of the polarization ellipse tilt and ellipticity angles such that the target separation from clutter is maximized. From the radar scattering matrix [S] and its complex components, in-phase and quadrature-phase, the elements of the Mueller matrix are obtained. Then, by means of polarization synthesis, the radar cross sections of the radar scatterers are obtained at different transmitting and receiving polarization states. By designing a maximum average correlation height filter, we derive a target-versus-clutter distance measure as a function of four transmit and receive polarization state angles. The results of applying this method to real synthetic aperture radar imagery indicate a set of four transmit and receive angles that lead to maximum target-versus-clutter discrimination. These optimum angles are different for different targets. Hence, by adaptive control of the state of polarization of polarimetric radar, one can noticeably improve the discrimination of targets from clutter.

  11. Measuring the Microlensing Parallax from Various Space Observatories

    NASA Astrophysics Data System (ADS)

    Bachelet, E.; Hinse, T. C.; Street, R.

    2018-05-01

    A few observational methods allow the measurement of the mass and distance of the lens star for a microlensing event. A first estimate can be obtained by measuring the microlensing parallax effect produced either by the motion of the Earth (annual parallax) or by the contemporaneous observation of the lensing event from two (or more) observatories (space or terrestrial parallax) sufficiently separated from each other. Further developing ideas originally outlined by Gould as well as Mogavero & Beaulieu, we review the possibility of systematically measuring the microlensing parallax using a telescope on the lunar surface and other space-based observing platforms, including the upcoming WFIRST space telescope. We first generalize the Fisher matrix formulation and present results demonstrating the advantage for each observing scenario. We conclude by outlining the limitations of the Fisher matrix analysis when submitted to a practical data modeling process. By considering a lunar-based parallax observation, we find that parameter correlations introduce a significant loss in detection efficiency of the probed lunar parallax effect.
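
A generic Fisher-matrix sketch (not the authors' microlensing parameterization): for Gaussian noise of standard deviation σ, F_ab = Σ_i σ⁻² (∂m_i/∂θ_a)(∂m_i/∂θ_b), and the inverse of F bounds the parameter covariance (Cramér-Rao). The bell-shaped "light curve" model below is hypothetical.

```python
import numpy as np

def fisher_matrix(model, theta, t, sigma, eps=1e-6):
    """F[a, b] = sum_i dm_i/dtheta_a * dm_i/dtheta_b / sigma^2,
    with derivatives taken by central finite differences."""
    derivs = []
    for a in range(len(theta)):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[a] += eps
        tm[a] -= eps
        derivs.append((model(tp, t) - model(tm, t)) / (2 * eps))
    J = np.array(derivs)                 # shape (n_params, n_points)
    return (J / sigma ** 2) @ J.T

# Hypothetical bell-shaped "light curve": amplitude, peak time, width.
def model(theta, t):
    A, t0, w = theta
    return 1.0 + A * np.exp(-0.5 * ((t - t0) / w) ** 2)

t = np.linspace(-5, 5, 200)
F = fisher_matrix(model, [2.0, 0.0, 1.0], t, sigma=0.01)
cov = np.linalg.inv(F)                   # Cramer-Rao parameter covariance
```

Off-diagonal entries of `cov` are exactly the parameter correlations the abstract warns about: strong correlations inflate the diagonal variances relative to the uncorrelated case.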

  12. Exact solution of a one-dimensional model of strained epitaxy on a periodically modulated substrate

    NASA Astrophysics Data System (ADS)

    Tokar, V. I.; Dreyssé, H.

    2005-03-01

    We consider a one-dimensional lattice gas model of strained epitaxy with the elastic strain accounted for through a finite number of cluster interactions comprising contiguous atomic chains. Interactions of this type arise in the models of strained epitaxy based on the Frenkel-Kontorova model. Furthermore, the deposited atoms interact with the substrate via an arbitrary periodic potential of period L. This model is solved exactly with the use of an appropriately adapted technique developed recently in the theory of protein folding. The advantage of the proposed approach over the standard transfer-matrix method is that it reduces the problem to finding the largest eigenvalue of a matrix of size L instead of 2^(L-1), which is vital in the case of nanostructures where L may measure in hundreds of interatomic distances. Our major conclusion is that the substrate modulation always facilitates the size calibration of self-assembled nanoparticles in one- and two-dimensional systems.
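
Finding the largest eigenvalue of a small L × L matrix, as the proposed reduction requires, is cheap with power iteration. The transfer-like matrix below is filled with hypothetical positive Boltzmann-style weights, so the dominant (Perron) eigenvalue is real and positive and the iteration converges.

```python
import numpy as np

def largest_eigenvalue(M, n_iter=500, seed=0):
    """Power iteration for the dominant eigenvalue of a positive matrix."""
    rng = np.random.default_rng(seed)
    v = rng.random(M.shape[0]) + 0.1     # positive start vector
    lam = 0.0
    for _ in range(n_iter):
        w = M @ v
        lam = np.linalg.norm(w)
        v = w / lam
    return lam, v

# Hypothetical L x L transfer-like matrix of positive weights.
L = 8
rng = np.random.default_rng(4)
T = np.exp(-rng.random((L, L)))
lam, v = largest_eigenvalue(T)           # Perron eigenvalue and eigenvector
```

Each iteration costs O(L²), so even L in the hundreds is trivial, whereas a 2^(L-1)-dimensional transfer matrix would be hopeless at that size.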

  13. Self-assembly of an electronically conductive network through microporous scaffolds.

    PubMed

    Sebastian, H Bri; Bryant, Steven L

    2017-06-15

    Electron transfer spanning significant distances through a microporous structure was established via the self-assembly of an electronically conductive iridium oxide nanowire matrix enveloping the pore walls. Microporous formations were simulated using two scaffold materials of varying physical and chemical properties: paraffin wax beads and agar gel. Following infiltration into the micropores, iridium nanoparticles self-assembled at the pore wall/ethanol interface. Subsequently, cyclic voltammetry was employed to electrochemically crosslink the metal, erecting an interconnected, electronically conductive metal oxide nanowire matrix. Electrochemical and spectral characterization techniques confirmed the formation of oxide nanowire matrices spanning lengths of at least 1.6 mm, 400 times the distances previously achieved using iridium nanoparticles. Nanowire matrices were engaged as biofuel cell anodes, where electrons were donated to the nanowires by a glucose-oxidizing enzyme. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Optical configuration with fixed transverse magnification for self-interference incoherent digital holography.

    PubMed

    Imbe, Masatoshi

    2018-03-20

    The optical configuration proposed in this paper consists of a 4-f optical setup with a wavefront modulation device, such as a concave mirror or a spatial light modulator, on the Fourier plane. The transverse magnification of reconstructed images with the proposed configuration is independent of the locations of the object and the image sensor; therefore, reconstructed images of objects at different distances can be scaled with a fixed transverse magnification. This property is derived from Fourier optics and verified mathematically with the optical matrix method. Numerical simulation results and experimental results are also given to confirm the fixed magnification of the reconstructed images.
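
The fixed magnification can be checked with standard 2 × 2 ray-transfer (ABCD) matrices: two lenses of focal lengths f1 and f2 separated by f1 + f2 form an afocal system whose A element is -f2/f1 regardless of the object and sensor distances. The focal lengths and distances below are arbitrary, not the paper's setup.

```python
import numpy as np

def free(d):        # free-space propagation over distance d
    return np.array([[1.0, d], [0.0, 1.0]])

def lens(f):        # thin lens (or curved mirror) of focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def four_f(f1, f2, d_obj, d_img):
    """Object plane -> lens f1 -> gap f1+f2 -> lens f2 -> sensor plane."""
    return free(d_img) @ lens(f2) @ free(f1 + f2) @ lens(f1) @ free(d_obj)

f1, f2 = 100.0, 200.0                    # arbitrary focal lengths
Ma = four_f(f1, f2, d_obj=50.0, d_img=400.0)
Mb = four_f(f1, f2, d_obj=150.0, d_img=100.0)
# In both cases A = -f2/f1 and C = 0 (afocal), so the transverse
# magnification does not depend on where the object and sensor sit.
```

Multiplying the inner three matrices by hand gives [[-f2/f1, f1+f2], [0, -f1/f2]]; the zero C element is what makes the A element immune to the outer propagation distances.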

  15. Framework for analyzing ecological trait-based models in multidimensional niche spaces

    NASA Astrophysics Data System (ADS)

    Biancalani, Tommaso; DeVille, Lee; Goldenfeld, Nigel

    2015-05-01

    We develop a theoretical framework for analyzing ecological models with a multidimensional niche space. Our approach relies on the fact that ecological niches are described by sequences of symbols, which allows us to include multiple phenotypic traits. Ecological drivers, such as competitive exclusion, are modeled by introducing the Hamming distance between two sequences. We show that a suitable transform diagonalizes the community interaction matrix of these models, making it possible to predict the conditions for niche differentiation and, close to the instability onset, the asymptotically long time population distributions of niches. We exemplify our method using the Lotka-Volterra equations with an exponential competition kernel.
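
The diagonalization step can be illustrated for binary trait sequences: any competition kernel that depends only on Hamming distance is a group circulant over (Z_2)^n, and is therefore diagonalized by the Walsh-Hadamard transform. The sequence length and the exponential decay rate below are assumptions for the sketch.

```python
import numpy as np
from itertools import product

n = 4                                             # traits per niche sequence
seqs = np.array(list(product([0, 1], repeat=n)))  # all 2^n binary niches

# Hamming distances and an exponential competition kernel (assumed decay)
H = (seqs[:, None, :] != seqs[None, :, :]).sum(axis=-1)
K = np.exp(-1.0 * H)

# Orthonormal Walsh-Hadamard transform built by Kronecker products
Wh = np.array([[1.0]])
for _ in range(n):
    Wh = np.kron(Wh, np.array([[1.0, 1.0], [1.0, -1.0]]))
Wh /= np.sqrt(2.0) ** n

D = Wh @ K @ Wh.T                 # diagonal: the kernel's eigenvalues
off = D - np.diag(np.diag(D))
```

The eigenvalues on the diagonal of `D` are what determine the instability onset for niche differentiation in such models.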

  16. Phylogenetic continuum indicates "galaxies" in the protein universe: preliminary results on the natural group structures of proteins.

    PubMed

    Ladunga, I

    1992-04-01

    The markedly nonuniform, even systematic distribution of sequences in the protein "universe" has been analyzed by methods of protein taxonomy. Mapping of the natural hierarchical system of proteins has revealed some dense cores, i.e., well-defined clusterings of proteins that seem to be natural structural groupings, possibly seeds for a future protein taxonomy. The aim was not to force proteins into more or less man-made categories by discriminant analysis, but to find structurally similar groups, possibly of common evolutionary origin. Single-valued distance measures between pairs of superfamilies from the Protein Identification Resource were defined by two χ²-like methods on tripeptide frequencies and the variable-length subsequence identity method derived from dot-matrix comparisons. Distance matrices were processed by several methods of cluster analysis to detect phylogenetic continuum between highly divergent proteins. Only well-defined clusters characterized by relatively unique structural, intracellular environmental, organismal, and functional attribute states were selected as major protein groups, including subsets of viral and Escherichia coli proteins, hormones, inhibitors, plant, ribosomal, serum and structural proteins, amino acid synthases, and clusters dominated by certain oxidoreductases and apolar and DNA-associated enzymes. The limited repertoire of functional patterns due to small genome size, the high rate of recombination, specific features of the bacterial membranes, or of the virus cycle canalize certain proteins of viruses and Gram-negative bacteria, respectively, to organismal groups.
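
A sketch of the tripeptide-frequency distance step. The χ²-like form below is a generic variant, not necessarily either of the paper's two definitions, and the sequences are artificial.

```python
import numpy as np
from itertools import product

def tripeptide_freqs(seq, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Normalized tripeptide counts as a 20^3-entry feature vector."""
    index = {t: i for i, t in enumerate(product(alphabet, repeat=3))}
    v = np.zeros(len(index))
    for i in range(len(seq) - 2):
        v[index[tuple(seq[i:i + 3])]] += 1
    return v / max(v.sum(), 1)

def chi2_distance(p, q, eps=1e-12):
    """A chi-square-like distance between two frequency vectors."""
    s = p + q
    mask = s > eps
    return 0.5 * np.sum((p[mask] - q[mask]) ** 2 / s[mask])

a = tripeptide_freqs("ACDEFGHIKLMNPQRSTVWY" * 3)   # artificial sequences
b = tripeptide_freqs("ACDEFGHIKLMNPQRSTVWY" * 3)
c = tripeptide_freqs("AAAAACCCCCDDDDDEEEEE" * 3)
```

A full distance matrix built this way over many superfamilies is what would then be fed to the clustering methods mentioned above.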

  17. Applying the J-optimal channelized quadratic observer to SPECT myocardial perfusion defect detection

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric; Ghaly, Michael; Frey, Eric C.

    2016-03-01

    To evaluate performance on a perfusion defect detection task from 540 image pairs of myocardial perfusion SPECT image data, we apply the J-optimal channelized quadratic observer (J-CQO). We compare AUC values of the linear Hotelling observer and J-CQO when the defect location is fixed and when it occurs in one of two locations. As expected, when the location is fixed a single channel maximizes the AUC; location variability requires multiple channels to maximize the AUC. The AUC is estimated from both the projection data and reconstructed images. J-CQO is quadratic since it uses the first- and second-order statistics of the image data from both classes. The linear data reduction by the channels is described by an L × M channel matrix, and in prior work we introduced an iterative gradient-based method for calculating the channel matrix. The dimensionality reduction from M measurements to L channels yields better estimates of these sample statistics from smaller sample sizes, and since the channelized covariance matrix is L × L instead of M × M, the matrix inverse is easier to compute. The novelty of our approach is the use of Jeffrey's divergence (J) as the figure of merit (FOM) for optimizing the channel matrix. We previously showed that the J-optimal channels are also the optimum channels for the AUC and the Bhattacharyya distance when the channel outputs are Gaussian distributed with equal means. This work evaluates the use of J as a surrogate FOM (SFOM) for AUC when these statistical conditions are not satisfied.
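
The linear Hotelling observer referenced above has a compact form: the template is w = S⁻¹ Δm for pooled channel covariance S and class mean difference Δm, SNR² = Δmᵀ S⁻¹ Δm, and for Gaussian equal-covariance outputs AUC = Φ(SNR/√2). The channel statistics below are hypothetical toy data, not the SPECT study's.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)

# Hypothetical channelized data: L = 5 channel outputs per image, two
# classes (defect absent / present) differing by a small mean shift.
L, n_samp = 5, 5000
shift = np.array([0.6, 0.3, 0.0, 0.0, 0.0])
g0 = rng.standard_normal((n_samp, L))
g1 = rng.standard_normal((n_samp, L)) + shift

# Linear Hotelling observer from sample statistics
m0, m1 = g0.mean(0), g1.mean(0)
S = 0.5 * (np.cov(g0.T) + np.cov(g1.T))   # pooled L x L channel covariance
w = np.linalg.solve(S, m1 - m0)           # Hotelling template
snr2 = (m1 - m0) @ w                      # observer SNR^2
auc = 0.5 * (1 + erf(sqrt(snr2) / 2))     # AUC for Gaussian, equal covariance
```

With L channels the covariance to invert is only L × L, which is the computational payoff of channelization the abstract describes.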

  18. Investigation on Characteristic Variation of the FBG Spectrum with Crack Propagation in Aluminum Plate Structures

    PubMed Central

    Jin, Bo; Zhang, Weifang; Zhang, Meng; Ren, Feifei; Dai, Wei; Wang, Yanrong

    2017-01-01

    In order to monitor the crack tip propagation of aluminum alloy, this study investigates the variation of the spectrum characteristics of a fiber Bragg grating (FBG), combined with an analysis of the spectrum simulation. The results identify the location of the subordinate peak as significantly associated with the strain distribution along the grating, corresponding to the different plastic zones ahead of the crack tip with various crack lengths. FBG sensors could observe monotonic and cyclic plastic zones ahead of the crack tip, with the quadratic strain distribution along the grating at crack tip-FBG distances of 1.2 and 0.7 mm, respectively. FBG sensors could examine the process zones ahead of the crack tip with the cubic strain distribution along the grating at a crack tip-FBG distance of 0.5 mm. The spectrum oscillation occurs as the crack approaches the FBG, where the highly heterogeneous strain is distributed. Another idea is to use a finite element method (FEM), together with a T-matrix method, to analyze the reflection intensity spectra of FBG sensors for various crack sizes. The described crack propagation detection system may be applied in structural health monitoring. PMID:28772949

  19. Investigation on Characteristic Variation of the FBG Spectrum with Crack Propagation in Aluminum Plate Structures.

    PubMed

    Jin, Bo; Zhang, Weifang; Zhang, Meng; Ren, Feifei; Dai, Wei; Wang, Yanrong

    2017-05-27

    In order to monitor crack tip propagation in aluminum alloy, this study investigates the variation of the spectrum characteristics of a fiber Bragg grating (FBG), combined with an analysis of the spectrum simulation. The results identify the location of the subordinate peak as significantly associated with the strain distribution along the grating, corresponding to the different plastic zones ahead of the crack tip at various crack lengths. FBG sensors could observe the monotonic and cyclic plastic zones ahead of the crack tip, with a quadratic strain distribution along the grating at crack tip-FBG distances of 1.2 and 0.7 mm, respectively. FBG sensors could examine the process zones ahead of the crack tip, with a cubic strain distribution along the grating at a crack tip-FBG distance of 0.5 mm. Spectrum oscillation occurs as the crack approaches the FBG, where a highly heterogeneous strain is distributed. In addition, a finite element method (FEM) is used together with a T-matrix method to analyze the reflection intensity spectra of FBG sensors for various crack sizes. The described crack propagation detection system may be applied in structural health monitoring.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esgin, U.; Özyürek, D.; Kaya, H., E-mail: hasan.kaya@kocaeli.edu.tr

    In the present study, the wear behaviors of Monel 400, Monel 404, Monel R-405 and Monel K-500 alloys produced by the Powder Metallurgy (P/M) method were investigated. These compounds, prepared from elemental powders, were cold-pressed (600 MPa), sintered at 1150°C for 2 hours, and furnace-cooled to room temperature. The Monel alloys produced by the P/M method were characterized through scanning electron microscopy (SEM+EDS), X-ray diffraction (XRD), and hardness and density measurements. In the wear tests, a standard pin-on-disk device was used. Specimens of the four Monel alloys were tested at a sliding speed of 1 m/s, under three different loads (20 N, 30 N and 40 N) and five different sliding distances (400-2000 m). The results show that the Monel alloys have a γ matrix and that an Al0.9Ni4.22 intermetallic phase formed in the structure. Also, the highest hardness value was measured for the Monel K-500 alloy. In the wear tests, the maximum weight loss with sliding distance was observed in the Monel 400 and Monel 404 alloys, while the minimum weight loss was achieved by the Monel K-500 alloy.

  1. Clustering and visualizing similarity networks of membrane proteins.

    PubMed

    Hu, Geng-Ming; Mai, Te-Lun; Chen, Chi-Ming

    2015-08-01

    We proposed a fast and unsupervised clustering method, minimum span clustering (MSC), for analyzing the sequence-structure-function relationship of biological networks, and demonstrated its validity in clustering the sequence/structure similarity networks (SSN) of 682 membrane protein (MP) chains. The MSC clustering of MPs based on their sequence information was found to be consistent with their tertiary structures and functions. For the largest seven clusters predicted by MSC, the consistency in chain function within the same cluster is found to be 100%. From analyzing the edge distribution of SSN for MPs, we found a characteristic threshold distance for the boundary between clusters, over which SSN of MPs could be properly clustered by an unsupervised sparsification of the network distance matrix. The clustering results of MPs from both MSC and the unsupervised sparsification methods are consistent with each other, and have high intracluster similarity and low intercluster similarity in sequence, structure, and function. Our study showed a strong sequence-structure-function relationship of MPs. We discussed evidence of convergent evolution of MPs and suggested applications in finding structural similarities and predicting biological functions of MP chains based on their sequence information. © 2015 Wiley Periodicals, Inc.
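
    The unsupervised sparsification step described above can be illustrated with a small sketch (hypothetical toy distances, not the 682-chain SSN; the full MSC algorithm is not reproduced): drop every edge of the distance matrix longer than the threshold, then read clusters off the connected components of what remains.

```python
import numpy as np

def sparsify_and_cluster(D, threshold):
    """Keep only edges with distance <= threshold, then label the
    connected components of the sparsified graph (simple union-find)."""
    n = len(D)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if D[i, j] <= threshold:       # short-range edge survives
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    remap = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [remap[r] for r in roots]       # labels 0..k-1 in first-seen order

# Toy 1-D "sequences": two tight groups separated by a large gap
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1])
D = np.abs(pts[:, None] - pts[None, :])
labels = sparsify_and_cluster(D, threshold=0.5)
print(labels)   # [0, 0, 0, 1, 1]
```

The characteristic threshold the authors observe in the edge distribution plays the role of `threshold` here: below it, within-cluster edges survive; above it, between-cluster edges are removed.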

  2. Multi-criteria decision making development of ion chromatographic method for determination of inorganic anions in oilfield waters based on artificial neural networks retention model.

    PubMed

    Stefanović, Stefica Cerjan; Bolanča, Tomislav; Luša, Melita; Ukić, Sime; Rogošić, Marko

    2012-02-24

    This paper describes the development of an ad hoc methodology for the determination of inorganic anions in oilfield waters, since their composition often differs significantly from the average (in the concentration of components and/or the matrix). Therefore, fast and reliable method development must be performed in order to ensure the monitoring of the desired properties under new conditions. The method development was based on a computer-assisted multi-criteria decision-making strategy. The criteria used were: maximal value of the objective functions used, maximal robustness of the separation method, minimal analysis time, and maximal retention distance between the two nearest components. Artificial neural networks were used for modeling anion retention. The reliability of the developed method was extensively tested by validation of its performance characteristics. Based on the validation results, the developed method shows satisfactory performance characteristics, proving the successful application of the computer-assisted methodology in the described case study. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Allelic database and accession divergence of a Brazilian mango collection based on microsatellite markers.

    PubMed

    Dos Santos Ribeiro, I C N; Lima Neto, F P; Santos, C A F

    2012-12-19

    Allelic patterns and genetic distances were examined in a collection of 103 foreign and Brazilian mango (Mangifera indica) accessions in order to develop a reference database to support cultivar protection and breeding programs. A UPGMA dendrogram was generated using Jaccard's coefficients from a distance matrix based on 50 alleles of 12 microsatellite loci. The base pair number was estimated by the method of inverse mobility. The cophenetic correlation was 0.8. The accessions had coefficients of similarity from 30 to 100%, which reflects high genetic variability. Three groups were observed in the UPGMA dendrogram; the first group was formed predominantly by foreign accessions, the second group was formed by Brazilian accessions, and the Dashehari accession was isolated from the others. The 50 microsatellite alleles did not separate all 103 accessions, indicating that there are duplicates in this mango collection. These 12 microsatellites need to be validated in order to establish a reliable set to identify mango cultivars.
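
    The distance-matrix-to-dendrogram step above (UPGMA on Jaccard distances) can be sketched with SciPy, where average linkage is UPGMA; the binary allele-presence profiles below are hypothetical, not the 50-allele mango data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical allele-presence profiles: 5 accessions x 8 microsatellite alleles
X = np.array([
    [1, 1, 1, 0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 0],
    [0, 0, 0, 0, 1, 1, 0, 1],
])
D = pdist(X, metric='jaccard')      # condensed matrix of 1 - Jaccard similarity
Z = linkage(D, method='average')    # average linkage = UPGMA
groups = fcluster(Z, t=2, criterion='maxclust')
print(groups)                       # accessions 0-2 group together, 3-4 together
```

Cutting the tree into two groups recovers the two blocks of shared alleles; on real data the cophenetic correlation (scipy.cluster.hierarchy.cophenet) measures how faithfully the dendrogram preserves the original distances, the 0.8 reported above.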

  4. Discriminative least squares regression for multiclass classification and feature selection.

    PubMed

    Xiang, Shiming; Nie, Feiping; Meng, Gaofeng; Pan, Chunhong; Zhang, Changshui

    2012-11-01

    This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes to move along opposite directions such that the distances between classes can be enlarged. Then, the ε-draggings are integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form, where there is no need to train two-class machines that are independent of each other. With its compact form, this model can be naturally extended for feature selection. This goal is achieved in terms of the L2,1 norm of a matrix, generating a sparse learning model for feature selection. The model for multiclass classification and its extension for feature selection are finally solved elegantly and efficiently. Experimental evaluation over a range of benchmark datasets indicates the validity of our method.

  5. The effect of rigid fixation on growth of the neurocranium.

    PubMed

    Wong, L; Dufresne, C R; Richtsmeier, J T; Manson, P N

    1991-09-01

    The effects on skull growth of plating the coronal suture and frontal bone were studied in New Zealand White rabbits. Three-dimensional coordinate landmarks were digitized and analyzed to determine the differences in form between operated and unoperated animals using Euclidean distance matrix analysis. This method compares sets of interlandmark distances in three dimensions and was used to demonstrate changes induced by plating. We interpret these changes in morphology to be the result of differences in growth between the operated and unoperated groups. Periosteal elevation alone (n = 6) resulted in a minimal local growth increase. Coronal suture plating (n = 8) resulted in local growth restriction with contralateral and adjacent size increases. Frontal bone plating (n = 6) without crossing a suture line also resulted in local growth restriction and adjacent bone size increases. The timing of intervention in relation to the completion of bone growth may explain the magnitude of clinically apparent effects. Changes in bones adjacent to those directly manipulated may be an attempt to maintain a normal skull volume.

  6. Linear discriminant analysis based on L1-norm maximization.

    PubMed

    Zhong, Fujin; Zhang, Jiashu

    2013-08-01

    Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique, which is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on the distance criterion using the L2-norm. This paper proposes a simple but effective robust LDA version based on L1-norm maximization, which learns a set of locally optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion and the L1-norm-based within-class dispersion. The proposed method is theoretically proved to be feasible and robust to outliers while overcoming the singularity problem of the within-class scatter matrix in conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.

  7. Plasma and cold sprayed aluminum carbon nanotube composites: Quantification of nanotube distribution and multi-scale mechanical properties

    NASA Astrophysics Data System (ADS)

    Bakshi, Srinivasa Rao

    Carbon nanotubes (CNT) could serve as a potential reinforcement for metal matrix composites with improved mechanical properties. However, dispersion of CNTs in the matrix has been a longstanding problem, since they tend to form clusters to minimize their surface area. The aim of this study was to use plasma and cold spraying techniques to synthesize CNT-reinforced aluminum composites with improved dispersion, and to quantify the degree of CNT dispersion as it influences the mechanical properties. A novel spray-drying method was used to disperse CNTs in Al-12 wt.% Si prealloyed powder, which was used as feedstock for plasma and cold spraying. A new method for quantification of CNT distribution was developed: two parameters, namely the Dispersion Parameter (DP) and the Clustering Parameter (CP), are proposed based on image analysis and the distances between the centers of CNTs. Nanomechanical properties were correlated with the dispersion of CNTs in the microstructure. Coating microstructure evolution is discussed in terms of splat formation, deformation and damage of CNTs, and the CNT/matrix interface. The effect of Si and CNT content on the reaction at the CNT/matrix interface was studied thermodynamically and kinetically. A pseudo phase diagram was computed which predicts the interfacial carbide for the reaction between CNTs and the Al-Si alloy at the processing temperature. Kinetic aspects showed that Al4C3 forms with the Al-12 wt.% Si alloy while SiC forms with the Al-23 wt.% Si alloy. Mechanical properties at the nano, micro and macro scales were evaluated using nanoindentation and nanoscratch, microindentation, and bulk tensile testing, respectively. Nano- and micro-scale mechanical properties (elastic modulus, hardness and yield strength) showed improvement, whereas macro-scale mechanical properties were poor. The inversion of the mechanical properties at different length scales was attributed to porosity, CNT clustering, CNT-splat adhesion and Al4C3 formation at the CNT/matrix interface. The Dispersion Parameter (DP) was more sensitive than the Clustering Parameter (CP) in measuring the degree of CNT distribution in the matrix.

  8. Distance-constrained orthogonal Latin squares for brain-computer interface.

    PubMed

    Luo, Gang; Min, Wanli

    2012-02-01

    The P300 brain-computer interface (BCI) using electroencephalogram (EEG) signals can allow amyotrophic lateral sclerosis (ALS) patients to instruct computers to perform tasks. To strengthen the P300 response and increase classification accuracy, we proposed an experimental design where characters are intensified according to orthogonal Latin square pairs. These orthogonal Latin square pairs satisfy a certain distance constraint so that neighboring characters are not intensified simultaneously. However, it was unknown whether such distance-constrained, orthogonal Latin square pairs actually exist. In this paper, we show that for every matrix size commonly used in P300 BCI, thousands to millions of such distance-constrained, orthogonal Latin square pairs can be systematically and efficiently constructed and are sufficient for the purpose of being used in P300 BCI.
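
    Orthogonality of a Latin square pair is easy to verify computationally. A minimal sketch using the classical construction L_a(i, j) = (a·i + j) mod n for prime n, which illustrates only the orthogonality check, not the paper's additional distance constraint (n = 5 here is an arbitrary toy size, not a claimed P300 matrix size):

```python
def latin_square(n, a):
    """L_a(i, j) = (a*i + j) mod n; a Latin square for prime n, 1 <= a < n."""
    return [[(a * i + j) % n for j in range(n)] for i in range(n)]

def orthogonal(A, B):
    """A and B are orthogonal iff superimposing them yields all n^2
    distinct ordered pairs of symbols."""
    n = len(A)
    pairs = {(A[i][j], B[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

n = 5
A, B = latin_square(n, 1), latin_square(n, 2)
print(orthogonal(A, B))   # True: distinct multipliers give orthogonal squares
```

For prime n, squares built with distinct multipliers a are pairwise orthogonal, which is one systematic way such pairs "can be constructed"; the paper's contribution is showing that enough of them also satisfy the neighbor-distance constraint.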

  9. Effects of English admixture and geographic distance on anthropometric variation and genetic structure in 19th-century Ireland.

    PubMed

    Relethford, J H

    1988-05-01

    The analysis of anthropometric data often allows investigation of patterns of genetic structure in historical populations. This paper focuses on interpopulational anthropometric variation in seven populations in Ireland using data collected in the 1890s. The seven populations were located within a 120-km range along the west coast of Ireland and include islands and mainland isolates. Two of the populations (the Aran Islands and Inishbofin) have a known history of English admixture in earlier centuries. Ten anthropometric measures (head length, breadth, and height; nose length and breadth; bizygomatic and bigonial breadth; stature; hand length; and forearm length) on 259 adult Irish males were analyzed following age adjustment. Discriminant and canonical variates analysis were used to determine the degree and pattern of among-group variation. Mahalanobis' distance measure, D2, was computed between each pair of populations and compared to distance measures based on geographic distance and English admixture (a binary measure indicating whether either of a pair of populations had historical indications of admixture). In addition, surname frequencies were used to construct distance measures based on random isonymy. Correlations were computed between distance measures, and their probabilities were derived using the Mantel matrix permutation method. English admixture has the greatest effect on anthropometric variation among these populations, followed by geographic distance. The correlation between anthropometric distance and geographic distance is not significant (r = -0.081, P = .590), but the correlation of admixture and anthropometric distance is significant (r = 0.829, P = .047). When the two admixed populations are removed from the analysis the correlation between geographic and anthropometric distance becomes significant (r = 0.718, P = .025). 
Isonymy distance shows a significant correlation with geographic distance (r = 0.425, P = .046) but not with admixture distance (r = -0.052, P = .524). The fact that anthropometrics show past patterns of gene flow and surnames do not reflects the greater impact of stochastic processes on surnames, along with the continued extinction of surnames. This study shows that 1) anthropometrics can be extremely useful in assessing population structure and history, 2) differential gene flow into populations can have a major impact on local genetic structure, and 3) microevolutionary processes can have different effects on biological characters and surnames.
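
    The Mantel matrix permutation method used throughout this study can be sketched in a few lines (toy matrices, not the Irish anthropometric data): correlate the upper triangles of two distance matrices, then permute the rows and columns of one matrix jointly to build the null distribution for the p-value.

```python
import numpy as np

def mantel(D1, D2, n_perm=999, seed=0):
    """One-sided Mantel test: observed correlation of the upper triangles,
    with a p-value from joint row/column permutations of D2."""
    rng = np.random.default_rng(seed)
    n = len(D1)
    iu = np.triu_indices(n, k=1)
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        r = np.corrcoef(D1[iu], D2[p][:, p][iu])[0, 1]  # permuted correlation
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Hypothetical example: D2 is a noisy copy of D1, so r is high and p is small
rng = np.random.default_rng(1)
x = rng.random((6, 2))
D1 = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
noise = rng.normal(0, 0.01, D1.shape)
D2 = (D1 + noise + (D1 + noise).T) / 2   # keep the matrix symmetric
r, p = mantel(D1, D2)
print(r, p)
```

Permuting rows and columns together (rather than shuffling entries independently) preserves the dependence structure among distances sharing a population, which is why the Mantel test is appropriate for distance matrices.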

  10. Comparison of clast and matrix dispersal in till: Charlo-Atholville area, north-central New Brunswick

    USGS Publications Warehouse

    Dickson, M.L.; Broster, B.E.; Parkhill, M.A.

    2004-01-01

    Striations and dispersal patterns for till clasts and matrix geochemistry are used to define flow directions of glacial transport across an area of about 800 km2 in the Charlo-Atholville area of north-central New Brunswick. A total of 170 clast samples and 328 till matrix samples collected for geochemical analysis across the region were analyzed for a total of 39 elements. The major lithologic contacts used here to delineate till clast provenance were based on recent bedrock mapping. Eleven known mineral occurrences and a gossan are used to define point-source targets for matrix geochemical dispersal trains and to estimate the probable distance and direction of transport from unknown sources. Clast trains are traceable for distances of approximately 10 km, whereas till geochemical dispersal patterns are commonly lost within 5 km of transport. Most dispersal patterns reflect more than a single direction of glacial transport. These data indicate that a single till sheet, 1-4 m thick, was deposited as the dominant ice-flow direction fluctuated between southeastward, eastward, and northward over the study area. Directions of early flow represent changes in ice sheet dominance, first from the northwest and then from the west. Locally, eastward and northward flow represent the maximum erosive phases. The last directions of flow are likely due to late glacial ice sheet drawdown towards the valley outlet at Baie des Chaleurs.

  11. Does silvoagropecuary landscape fragmentation affect the genetic diversity of the sigmodontine rodent Oligoryzomys longicaudatus?

    PubMed

    Lazo-Cancino, Daniela; Musleh, Selim S; Hernandez, Cristian E; Palma, Eduardo; Rodriguez-Serrano, Enrique

    2017-01-01

    Fragmentation of native forests is a highly visible result of human land-use throughout the world. In this study, we evaluated the effects of landscape fragmentation and matrix features on the genetic diversity and structure of Oligoryzomys longicaudatus, the natural reservoir of Hantavirus in southern South America. We focused our work on the Valdivian Rainforest, where human activities have produced strong changes in natural habitats, with an important number of human cases of Hantavirus. We sampled specimens of O. longicaudatus from five native forest patches surrounded by silvoagropecuary matrix from Panguipulli, Los Rios Region, Chile. Using the hypervariable domain I (mtDNA), we characterized the genetic diversity and evaluated the effect of fragmentation and landscape matrix on the genetic structure of O. longicaudatus. For the latter, we used three approaches: (i) Isolation by Distance (IBD) as the null model, (ii) Least-cost Path (LCP), where genetic distances between patch pairs increase with cost-weighted distances, and (iii) Isolation by Resistance (IBR), where the resistance distance is the average number of steps needed to commute between the patches during a random walk. We found low values of nucleotide diversity (π) for the five patches surveyed, ranging from 0.012 to 0.015, revealing that the 73 sampled specimens of this study belong to two populations, but with low values of genetic distance (γST) ranging from 0.022 to 0.099. Likewise, we found no significant associations between genetic distance and geographic distance for IBD and IBR. However, for the LCP approach we found a significant positive relationship (r = 0.737, p = 0.05), with the shortest least-cost paths traced through native forest and arborescent shrublands. In this work we found that, at this reduced geographical scale, Oligoryzomys longicaudatus shows genetic signs of fragmentation.
    In addition, we found that connectivity between full-growth native forest remnants is mediated by the presence of dense shrublands and native forest corridors. In this sense, our results are important because they show how native forest patches and associated routes act as a source of vector species in the silvoagropecuary landscape, increasing the infection risk for the human population. This study is a first approach to understanding the epidemiological spatial context of the silvoagropecuary risk of Hantavirus emergence. Further studies are needed to elucidate the effects of landscape fragmentation in order to generate new predictive models based on vector intrinsic attributes and landscape features.

  12. A comparison of phenotypic variation and covariation patterns and the role of phylogeny, ecology, and ontogeny during cranial evolution of new world monkeys.

    PubMed

    Marroig, G; Cheverud, J M

    2001-12-01

    Similarity of genetic and phenotypic variation patterns among populations is important for making quantitative inferences about past evolutionary forces acting to differentiate populations and for evaluating the evolution of relationships among traits in response to new functional and developmental relationships. Here, phenotypic covariance and correlation structure is compared among Platyrrhine Neotropical primates. Comparisons range from among species within a genus to the superfamily level. Matrix correlation followed by Mantel's test and vector correlation among responses to random natural selection vectors (random skewers) were used to compare correlation and variance/covariance matrices of 39 skull traits. Sampling errors involved in matrix estimates were taken into account in comparisons using matrix repeatability to set upper limits for each pairwise comparison. Results indicate that covariance structure is not strictly constant but that the amount of variance pattern divergence observed among taxa is generally low and not associated with taxonomic distance. Specific instances of divergence are identified. There is no correlation between the amount of divergence in covariance patterns among the 16 genera and their phylogenetic distance derived from a conjoint analysis of four already published nuclear gene datasets. In contrast, there is a significant correlation between phylogenetic distance and morphological distance (Mahalanobis distance among genus centroids). This result indicates that while the phenotypic means were evolving during the last 30 million years of New World monkey evolution, phenotypic covariance structures of Neotropical primate skulls have remained relatively consistent. Neotropical primates can be divided into four major groups based on their feeding habits (fruit-leaves, seed-fruits, insect-fruits, and gum-insect-fruits).
Differences in phenotypic covariance structure are correlated with differences in feeding habits, indicating that to some extent changes in interrelationships among skull traits are associated with changes in feeding habits. Finally, common patterns and levels of morphological integration are found among Platyrrhine primates, suggesting that functional/developmental integration could be one major factor keeping covariance structure relatively stable during evolutionary diversification of South American monkeys.
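
    The random-skewers comparison mentioned above can be sketched as follows (hypothetical 2x2 covariance matrices, not the 39-trait skull matrices): apply the same random unit selection vectors to both matrices and average the vector correlation of their predicted responses.

```python
import numpy as np

def random_skewers(G1, G2, n_vec=1000, seed=0):
    """Mean vector correlation of the responses dz = G s of two covariance
    matrices to shared random unit selection vectors s."""
    rng = np.random.default_rng(seed)
    k = len(G1)
    total = 0.0
    for _ in range(n_vec):
        s = rng.normal(size=k)
        s /= np.linalg.norm(s)            # random unit "selection skewer"
        r1, r2 = G1 @ s, G2 @ s           # response vectors
        total += (r1 @ r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return total / n_vec

G = np.array([[1.0, 0.5], [0.5, 1.0]])
print(random_skewers(G, G))               # identical matrices respond identically
print(random_skewers(G, np.eye(2)))       # different structure: value below 1
```

Values near 1 indicate that the two matrices deflect selection in the same directions, which is the sense in which the study finds covariance structure "relatively consistent" across taxa.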

  13. First and second order stereology of hyaline cartilage: Application on mice femoral cartilage.

    PubMed

    Noorafshan, Ali; Niazi, Behnam; Mohamadpour, Masoomeh; Hoseini, Leila; Hoseini, Najmeh; Owji, Ali Akbar; Rafati, Ali; Sadeghi, Yasaman; Karbalay-Doust, Saied

    2016-11-01

    Stereological techniques could be considered in research on cartilage to obtain quantitative data. The present study aimed to explain the application of first- and second-order stereological methods to the articular cartilage of mice, applying the methods to mice exposed to cadmium (Cd). The distal femoral articular cartilage of BALB/c mice (control and Cd-treated) was removed. Then, the volume and surface area of the cartilage and the number of chondrocytes were estimated using Cavalieri and optical disector techniques on isotropic uniform random sections. The pair-correlation function [g(r)] and cross-correlation function were calculated to express the spatial arrangement of chondrocytes-chondrocytes and chondrocytes-matrix (chondrocyte clustering/dispersing), respectively. The mean±standard deviation of the cartilage volume, surface area, and thickness were 1.4±0.1 mm3, 26.2±5.4 mm2, and 52.8±6.7 μm, respectively. Besides, the mean number of chondrocytes was 680±200 (×103). The cartilage volume, cartilage surface area, and number of chondrocytes were reduced by 25%, 27%, and 27%, respectively, in the Cd-treated mice in comparison to the control animals (p<0.03). Estimates of g(r) for the cells and matrix were plotted against the dipole distance, r. This plot showed that the chondrocytes and the matrix were neither dispersed nor clustered in the two study groups. Application of design-based stereological methods, together with evaluation of the spatial arrangement of the cartilage components, carries potential advantages for investigating the cartilage in different joint conditions. Chondrocyte clustering/dispersing and cellularity can be evaluated in cartilage assessment in normal or abnormal situations. Copyright © 2016 Elsevier GmbH. All rights reserved.

  14. Production and characterization of hyaluronic acid microparticles for the controlled delivery of growth factors using a spray/dehydration method.

    PubMed

    Babo, Pedro S; Reis, Rui L; Gomes, Manuela E

    2016-11-01

    Hyaluronic acid is the main polysaccharide present in the connective tissue. Besides its structural function as a backbone of the extracellular matrix, hyaluronic acid plays key roles in several biological processes, including the modulation of inflammation and wound healing. The application of hyaluronic acid in regenerative medicine, either as a cell and/or drug/growth factor delivery vehicle, relies on its ability to be cross-linked using a plethora of reactions, producing stable hydrogels. In this work, we propose a novel method for the production of hyaluronic acid microparticles that presents several advantages over others that have been used. Basically, droplets of hyaluronic acid solution produced with a nozzle are collected in an isopropanol dehydration bath and stabilized after crosslinking with adipic acid dihydrazide, using a carbodiimide-based chemistry. The size and morphology of the hyaluronic acid microparticles produced by this method varied with the molecular weight and concentration of the hyaluronic acid solution, the nozzle chamber pressure, the distance between the nozzle and the crosslinking solution, and the number of crosslinking steps. The degree of crosslinking of the hyaluronic acid microparticles produced was tunable and allowed control of the rate of degradation promoted by hyaluronidase. Moreover, the particles were loaded with platelet lysate, a hemoderivative rich in cytokines of interest for regenerative medicine applications. The hyaluronic acid microparticles showed potential to bind selectively to positively charged molecules, such as the factors present in the platelet lysate. It is envisioned that these can be further released in a sustained manner by ion exchange or by the degradation of the hyaluronic acid microparticle matrix promoted by extracellular matrix remodeling. © The Author(s) 2016.

  15. UID...Leaving Its Mark on the Universe

    NASA Technical Reports Server (NTRS)

    Schramm, Harry F., Jr.

    2008-01-01

    Since 1975 bar codes on products at the retail counter have been accepted as the standard for entering product identity for price determination. Since the beginning of the 21st century, the Data Matrix symbol has become accepted as the bar code format that is marked directly on a part, assembly or product that is durable enough to identify that item for its lifetime. NASA began the studies for direct part marking Data Matrix symbols on parts during the Return to Flight activities after the Challenger Accident. Over the 20 year period that has elapsed since Challenger, a mountain of studies, analyses and focused problem solutions developed by and for NASA have brought about world changing results. NASA Technical Standard 6002 and NASA Handbook 6003 for Direct Part Marking Data Matrix Symbols on Aerospace Parts have formed the basis for most other standards on part marking internationally. NASA and its commercial partners have developed numerous products and methods that addressed the difficulties of collecting part identification in aerospace operations. These products enabled the marking of Data Matrix symbols in virtually every situation and the reading of symbols at great distances, severe angles, under paint and in the dark without a light. Even unmarkable delicate parts now have a process to apply a chemical mixture, recently trademarked as Nanocodes, that can be converted to Data Matrix information through software. The accompanying intellectual property is protected by ten patents, several of which are licensed. Direct marking Data Matrix on NASA parts dramatically decreases data entry errors and the number of parts that go through their life cycle unmarked, two major threats to sound configuration management and flight safety. NASA is said to only have people and stuff with information connecting them. Data Matrix is one of the most significant improvements since Challenger to the safety and reliability of that connection.

  16. UID...Now That's Gonna Leave A Mark

    NASA Technical Reports Server (NTRS)

    Schramm, Harry F., Jr.

    2007-01-01

    Since 1975 bar codes on products at the retail counter have been accepted as the standard for entering product identity for price determination. Since the beginning of the 21st century, the Data Matrix symbol has become accepted as the bar code format that is marked directly on a part, assembly or product that is durable enough to identify that item for its lifetime. NASA began the studies for direct part marking Data Matrix symbols on parts during the Return to Flight activities after the Challenger Accident. Over the 20 year period that has elapsed since Challenger, a mountain of studies, analyses and focused problem solutions developed by and for NASA have brought about world changing results. NASA Technical Standard 6002 and NASA Handbook 6003 for Direct Part Marking Data Matrix Symbols on Aerospace Parts have formed the basis for most other standards on part marking internationally. NASA and its commercial partners have developed numerous products and methods that addressed the difficulties of collecting part identification in aerospace operations. These products enabled the marking of Data Matrix symbols in virtually every situation and the reading of symbols at great distances, severe angles, under paint and in the dark without a light. Even unmarkable delicate parts now have a process to apply a chemical mixture, recently trademarked as Nanocodes, that can be converted to Data Matrix information through software. The accompanying intellectual property is protected by ten patents, several of which are licensed. Direct marking Data Matrix on NASA parts dramatically decreases data entry errors and the number of parts that go through their life cycle unmarked, two major threats to sound configuration management and flight safety. NASA is said to only have people and stuff with information connecting them. Data Matrix is one of the most significant improvements since Challenger to the safety and reliability of that connection.

  20. NASA Technologies for Product Identification

    NASA Technical Reports Server (NTRS)

    Schramm, Fred, Jr.

    2006-01-01

Since 1975 bar codes on products at the retail counter have been accepted as the standard for entering product identity for price determination. Since the beginning of the 21st century, the Data Matrix symbol has become accepted as the bar code format that is marked directly on a part, assembly or product that is durable enough to identify that item for its lifetime. NASA began the studies for direct part marking Data Matrix symbols on parts during the Return to Flight activities after the Challenger Accident. Over the 20 year period that has elapsed since Challenger, a mountain of studies, analyses and focused problem solutions developed by and for NASA have brought about world changing results. NASA Technical Standard 6002 and NASA Handbook 6003 for Direct Part Marking Data Matrix Symbols on Aerospace Parts have formed the basis for most other standards on part marking internationally. NASA and its commercial partners have developed numerous products and methods that addressed the difficulties of collecting part identification in aerospace operations. These products enabled the marking of Data Matrix symbols in virtually every situation and the reading of symbols at great distances, severe angles, under paint and in the dark without a light. Even unmarkable delicate parts now have a process to apply a chemical mixture called Nanocodes™ that can be converted to a Data Matrix. The accompanying intellectual property is protected by 10 patents, several of which are licensed. Direct marking Data Matrix on NASA parts virtually eliminates data entry errors and parts that go through their life cycle unmarked, two major threats to sound configuration management and flight safety. NASA is said to only have people and stuff with information connecting them. Data Matrix is one of the most significant improvements since Challenger to the safety and reliability of that connection.
This presentation highlights the accomplishments of NASA in its efforts to develop technologies for automatic identification, its efforts to implement them and its vision on their role in space.

  1. Almost all quantum channels are equidistant

    NASA Astrophysics Data System (ADS)

    Nechita, Ion; Puchała, Zbigniew; Pawela, Łukasz; Życzkowski, Karol

    2018-05-01

In this work, we analyze properties of generic quantum channels in the case of large system size. We use random matrix theory and free probability to show that the distance between two independent random channels converges to a constant value as the dimension of the system grows larger. As a measure of the distance we use the diamond norm. In the case of a flat Hilbert-Schmidt distribution on quantum channels, we obtain that the distance converges to 1/2 + 2/π, giving also an estimate for the maximum success probability for distinguishing the channels. We also consider the problem of distinguishing two random unitary rotations.
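The success-probability estimate referred to above follows the Helstrom pattern p = 1/2 + distance/2. As a loose, hedged illustration (for random states rather than channels, with the trace distance standing in for the diamond norm, and with an arbitrary dimension and seed), one can sample Hilbert-Schmidt-random density matrices and compute their distinguishability bound:

```python
import numpy as np

rng = np.random.default_rng(0)

def hs_random_state(d):
    """Sample a density matrix from the Hilbert-Schmidt ensemble:
    rho = G G^dagger / tr(G G^dagger), G a complex Ginibre matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def trace_distance(rho, sigma):
    """T(rho, sigma) = (1/2) * sum of singular values of (rho - sigma)."""
    return 0.5 * np.sum(np.linalg.svd(rho - sigma, compute_uv=False))

d = 64
t = trace_distance(hs_random_state(d), hs_random_state(d))
# Helstrom bound: best success probability for distinguishing the two
# states with equal priors is 1/2 + T/2.
p_success = 0.5 + t / 2
```

For channels the analogous bound uses the diamond norm, which is what makes the constant 1/2 + 2/π above an estimate of the maximal distinguishing probability.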

  2. Density-matrix simulation of small surface codes under current and projected experimental noise

    NASA Astrophysics Data System (ADS)

    O'Brien, T. E.; Tarasinski, B.; DiCarlo, L.

    2017-09-01

    We present a density-matrix simulation of the quantum memory and computing performance of the distance-3 logical qubit Surface-17, following a recently proposed quantum circuit and using experimental error parameters for transmon qubits in a planar circuit QED architecture. We use this simulation to optimize components of the QEC scheme (e.g., trading off stabilizer measurement infidelity for reduced cycle time) and to investigate the benefits of feedback harnessing the fundamental asymmetry of relaxation-dominated error in the constituent transmons. A lower-order approximate calculation extends these predictions to the distance-5 Surface-49. These results clearly indicate error rates below the fault-tolerance threshold of the surface code, and the potential for Surface-17 to perform beyond the break-even point of quantum memory. However, Surface-49 is required to surpass the break-even point of computation at state-of-the-art qubit relaxation times and readout speeds.

  3. Low energy sign illumination system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minogue, R.W.

A low energy sign construction is illustrated for illumination of signs of the type having translucent illuminated faces. An opaque sign border is bridged by a reflector extending generally parallel to the illuminated face and having a truncated sawtooth profile. For single sided signs, one set of sawtooth points is truncated; for dual sided signs, both sets of sawtooth points are truncated. Bayonet mounted lighting sockets are mounted at apertures in the respective truncations and utilize the metallic reflective surface as one side of a low voltage (10.5-volt) ac circuit. The reflector forms a cooled heat sink mounting the bulbs as well as a supporting matrix. The lamps, as mounted to this supporting matrix, are typically spaced at distances which do not exceed twice the distance of the lamp filament to the translucent face. By the expedient of using 14-V lamps, prolonged lamp life with low energy illumination results.

  4. Oscillation properties of active and sterile neutrinos and neutrino anomalies at short distances

    NASA Astrophysics Data System (ADS)

    Khruschov, V. V.; Fomichev, S. V.; Titov, O. A.

    2016-09-01

A generalized phenomenological (3 + 2 + 1) model featuring three active and three sterile neutrinos is considered; it is intended for calculating oscillation properties of neutrinos in the case of a normal active-neutrino mass hierarchy and a large splitting between the mass of one sterile neutrino and the masses of the other two sterile neutrinos. A new parametrization and a specific form of the general mixing matrix are proposed for active and sterile neutrinos with allowance for possible CP violation in the lepton sector, and test values are chosen for the neutrino masses and mixing parameters. The probabilities for transitions between different neutrino flavors are calculated. The probabilities for the disappearance of muon neutrinos/antineutrinos and the appearance of electron neutrinos/antineutrinos in a beam of muon neutrinos/antineutrinos are plotted versus the distance from the neutrino source, and versus the ratio of this distance to the neutrino energy, for various values of admissible model parameters at neutrino energies not higher than 50 MeV. It is shown that the short-distance accelerator anomaly in neutrino data (the LSND anomaly) can be explained in the case of a specific mixing matrix for active and sterile neutrinos (which belongs to the a2 type) at the chosen parameter values. The same applies to the short-distance reactor and gallium anomalies. The theoretical results obtained in the present study can be used to interpret and predict the results of ground-based neutrino experiments aimed at searches for sterile neutrinos, as well as to analyze some astrophysical observational data.
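For orientation, the distance-versus-probability curves described above reduce, in the two-flavor limit, to a one-line formula. The sketch below uses that textbook two-flavor expression; the mixing and mass-splitting values are purely illustrative placeholders, not the model's fitted parameters:

```python
import math

def p_appearance(L_m, E_MeV, sin2_2theta, dm2_eV2):
    """Two-flavor appearance probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm^2 [eV^2] * L [m] / E [MeV])."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

# Illustrative short-baseline numbers (hypothetical, chosen only to show
# the oscillatory dependence on L/E discussed in the abstract):
P = p_appearance(L_m=30.0, E_MeV=40.0, sin2_2theta=0.003, dm2_eV2=1.0)
```

The full (3 + 2 + 1) model replaces sin²(2θ) by elements of the 6×6 mixing matrix and sums over several mass splittings, but the L/E dependence per term has this same form.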

  5. Synergistic interactions between edge and area effects in a heavily fragmented landscape.

    PubMed

    Ewers, Robert M; Thorpe, Stephen; Didham, Raphael K

    2007-01-01

Both area and edge effects have a strong influence on ecological processes in fragmented landscapes, but there is little understanding of how these two factors might interact to exacerbate local species declines. To test for synergistic interactions between area and edge effects, we sampled a diverse beetle community in a heavily fragmented landscape in New Zealand. More than 35,000 beetles of approximately 900 species were sampled over large gradients in habitat area (10⁻² to 10⁶ ha) and distance from patch edge (2⁰ to 2¹⁰ m from the forest edge into both the forest and adjacent matrix). Using a new approach to partition variance following an ordination analysis, we found that a synergistic interaction between habitat area and distance to edge was a more important determinant of patterns in beetle community composition than direct edge or area effects alone. The strength of edge effects in beetle-species composition increased nonlinearly with increasing fragment area. One important consequence of the synergy is that the slopes of species–area (SA) curves constructed from habitat islands depend sensitively on the distance from edge at which sampling is conducted. Surprisingly, we found negative SA curves for communities sampled at intermediate distances from habitat edges, caused by differential edge responses of matrix- vs. forest-specialist species in fragments of increasing area. Our data indicate that distance to habitat edge has a consistently greater impact on beetle community composition than habitat area and that variation in the strength of edge effects may underlie many patterns that are superficially related to habitat area.
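The species–area (SA) slopes discussed above come from the conventional power-law model S = c·A^z, fit on log–log axes. A minimal sketch with made-up fragment data (the areas and species counts below are hypothetical):

```python
import math

# Hypothetical fragment areas (ha) and species counts roughly obeying
# the power law S = c * A^z.
areas   = [0.1, 1.0, 10.0, 100.0, 1000.0]
species = [12, 19, 30, 48, 76]

# Fit log S = log c + z * log A by ordinary least squares; the slope z
# is the SA-curve exponent whose sign/magnitude the study examines.
xs = [math.log(a) for a in areas]
ys = [math.log(s) for s in species]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
z = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
c = math.exp(ybar - z * xbar)
```

In the study's terms, a "negative SA curve" at intermediate edge distances corresponds to a fitted z below zero.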

  6. Survey of Quantification and Distance Functions Used for Internet-based Weak-link Sociological Phenomena

    DTIC Science & Technology

    2016-03-01

The PI studied all the mathematical literature he could find related to the Google search engine, the Google matrix, and PageRank, as well as the Yahoo search engine and a classic SearchKing HIST algorithm. The co-PI immersed herself in the sociology literature for the relevant sociological phenomena.
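The Google-matrix/PageRank computation mentioned above can be sketched as a small power iteration. The four-page link graph and damping factor below are a toy example, not data from the survey:

```python
# Minimal PageRank power iteration on a toy 4-page link graph.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}  # page -> pages it links to
n, damping = 4, 0.85

rank = [1.0 / n] * n
for _ in range(100):
    # Each page keeps the teleportation share, plus damped shares
    # received from pages that link to it.
    new = [(1.0 - damping) / n] * n
    for page, outs in links.items():
        share = damping * rank[page] / len(outs)
        for dest in outs:
            new[dest] += share
    rank = new
```

Page 3, which nothing links to, ends with only the teleportation mass, while the mutually linked pages 0 and 2 accumulate most of the rank.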

  7. Geometrical eigen-subspace framework based molecular conformation representation for efficient structure recognition and comparison

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Tian; Yang, Xiao-Bao; Zhao, Yu-Jun

    2017-04-01

    We have developed an extended distance matrix approach to study the molecular geometric configuration through spectral decomposition. It is shown that the positions of all atoms in the eigen-space can be specified precisely by their eigen-coordinates, while the refined atomic eigen-subspace projection array adopted in our approach is demonstrated to be a competent invariant in structure comparison. Furthermore, a visual eigen-subspace projection function (EPF) is derived to characterize the surrounding configuration of an atom naturally. A complete set of atomic EPFs constitute an intrinsic representation of molecular conformation, based on which the interatomic EPF distance and intermolecular EPF distance can be reasonably defined. Exemplified with a few cases, the intermolecular EPF distance shows exceptional rationality and efficiency in structure recognition and comparison.
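The notion of recovering atomic "eigen-coordinates" from a distance matrix by spectral decomposition can be illustrated with the classical multidimensional-scaling construction, shown here as a simplified stand-in for the paper's extended-distance-matrix approach (the four test points are arbitrary):

```python
import numpy as np

def eigen_coordinates(D):
    """Recover point coordinates (up to rotation/translation) from a
    Euclidean distance matrix D via double centering and spectral
    decomposition -- the classical MDS construction."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    w = np.clip(w, 0.0, None)            # drop tiny negative eigenvalues
    return V * np.sqrt(w)                # rows are eigen-coordinates

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.0, 2.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = eigen_coordinates(D)
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

Because the eigen-coordinates reproduce all pairwise distances, invariants built from them (like the projection arrays described above) are insensitive to rigid motions of the molecule.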

  8. Kinematic Distances: A Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Wenger, Trey V.; Balser, Dana S.; Anderson, L. D.; Bania, T. M.

    2018-03-01

    Distances to high-mass star-forming regions (HMSFRs) in the Milky Way are a crucial constraint on the structure of the Galaxy. Only kinematic distances are available for a majority of the HMSFRs in the Milky Way. Here, we compare the kinematic and parallax distances of 75 Galactic HMSFRs to assess the accuracy of kinematic distances. We derive the kinematic distances using three different methods: the traditional method using the Brand & Blitz rotation curve (Method A), the traditional method using the Reid et al. rotation curve and updated solar motion parameters (Method B), and a Monte Carlo technique (Method C). Methods B and C produce kinematic distances closest to the parallax distances, with median differences of 13% (0.43 {kpc}) and 17% (0.42 {kpc}), respectively. Except in the vicinity of the tangent point, the kinematic distance uncertainties derived by Method C are smaller than those of Methods A and B. In a large region of the Galaxy, the Method C kinematic distances constrain both the distances and the Galactocentric positions of HMSFRs more accurately than parallax distances. Beyond the tangent point along ℓ = 30°, for example, the Method C kinematic distance uncertainties reach a minimum of 10% of the parallax distance uncertainty at a distance of 14 {kpc}. We develop a prescription for deriving and applying the Method C kinematic distances and distance uncertainties. The code to generate the Method C kinematic distances is publicly available and may be utilized through an online tool.
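The near/far kinematic-distance ambiguity and the Monte Carlo idea can be sketched with a flat rotation curve. This is not the authors' code; the Galactic constants, velocity, and noise level below are illustrative assumptions:

```python
import math, random

R0, V0 = 8.34, 240.0   # kpc, km/s -- illustrative Galactic constants

def kinematic_distances(glong_deg, vlsr):
    """Near/far kinematic distances (kpc) for a flat rotation curve.
    Solves R from vlsr, then d from R^2 = R0^2 + d^2 - 2*R0*d*cos(l)."""
    l = math.radians(glong_deg)
    R = R0 * V0 * math.sin(l) / (vlsr + V0 * math.sin(l))
    disc = R * R - (R0 * math.sin(l)) ** 2
    if disc < 0:
        return None            # velocity beyond the tangent point
    root = math.sqrt(disc)
    return R0 * math.cos(l) - root, R0 * math.cos(l) + root

# Monte Carlo flavor: resample vlsr with 5 km/s noise and summarize.
random.seed(1)
samples = [kinematic_distances(30.0, random.gauss(100.0, 5.0))
           for _ in range(2000)]
near = sorted(s[0] for s in samples if s)
median_near = near[len(near) // 2]
```

The full Method C additionally samples the rotation-curve and solar-motion parameters themselves, which is what shrinks the quoted uncertainties away from the tangent point.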

  9. Unsupervised Learning for Monaural Source Separation Using Maximization–Minimization Algorithm with Time–Frequency Deconvolution †

    PubMed Central

    Bouridane, Ahmed; Ling, Bingo Wing-Kuen

    2018-01-01

    This paper presents an unsupervised learning algorithm for sparse nonnegative matrix factor time–frequency deconvolution with optimized fractional β-divergence. The β-divergence is a group of cost functions parametrized by a single parameter β. The Itakura–Saito divergence, Kullback–Leibler divergence and Least Square distance are special cases that correspond to β=0, 1, 2, respectively. This paper presents a generalized algorithm that uses a flexible range of β that includes fractional values. It describes a maximization–minimization (MM) algorithm leading to the development of a fast convergence multiplicative update algorithm with guaranteed convergence. The proposed model operates in the time–frequency domain and decomposes an information-bearing matrix into two-dimensional deconvolution of factor matrices that represent the spectral dictionary and temporal codes. The deconvolution process has been optimized to yield sparse temporal codes through maximizing the likelihood of the observations. The paper also presents a method to estimate the fractional β value. The method is demonstrated on separating audio mixtures recorded from a single channel. The paper shows that the extraction of the spectral dictionary and temporal codes is significantly more efficient by using the proposed algorithm and subsequently leads to better source separation performance. Experimental tests and comparisons with other factorization methods have been conducted to verify its efficacy. PMID:29702629
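The multiplicative-update style described above can be shown in its simplest setting: plain (non-convolutive) NMF under the KL divergence, i.e. the β = 1 special case. This is a hedged sketch of the update rules only, not the paper's time–frequency deconvolution model; the matrix sizes and rank are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf_kl(V, rank, n_iter=200, eps=1e-9):
    """Multiplicative-update NMF minimizing the KL divergence (beta = 1).
    Updates keep W, H nonnegative and monotonically improve the fit."""
    F, T = V.shape
    W = rng.uniform(0.1, 1.0, (F, rank))
    H = rng.uniform(0.1, 1.0, (rank, T))
    ones = np.ones((F, T))
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (ones @ H.T + eps)
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ ones + eps)
    return W, H

# Exactly rank-3 nonnegative data, so a rank-3 fit can get very close.
V = rng.uniform(0.0, 1.0, (12, 3)) @ rng.uniform(0.0, 1.0, (3, 20))
W, H = nmf_kl(V, rank=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The convolutive, fractional-β version in the paper generalizes these ratios but keeps the same MM-derived multiplicative structure.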

  10. Membership determination of open clusters based on a spectral clustering method

    NASA Astrophysics Data System (ADS)

    Gao, Xin-Hua

    2018-06-01

    We present a spectral clustering (SC) method aimed at segregating reliable members of open clusters in multi-dimensional space. The SC method is a non-parametric clustering technique that performs cluster division using eigenvectors of the similarity matrix; no prior knowledge of the clusters is required. This method is more flexible in dealing with multi-dimensional data compared to other methods of membership determination. We use this method to segregate the cluster members of five open clusters (Hyades, Coma Ber, Pleiades, Praesepe, and NGC 188) in five-dimensional space; fairly clean cluster members are obtained. We find that the SC method can capture a small number of cluster members (weak signal) from a large number of field stars (heavy noise). Based on these cluster members, we compute the mean proper motions and distances for the Hyades, Coma Ber, Pleiades, and Praesepe clusters, and our results are in general quite consistent with the results derived by other authors. The test results indicate that the SC method is highly suitable for segregating cluster members of open clusters based on high-precision multi-dimensional astrometric data such as Gaia data.
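The core SC step (eigenvectors of a similarity matrix, no prior cluster model) can be sketched for the two-cluster case. The mock 2-D data below loosely stand in for "cluster members vs. field stars" and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_bipartition(X, sigma=1.0):
    """Two-way spectral clustering: Gaussian similarity matrix,
    symmetric normalized Laplacian, split on the sign of the
    second-smallest eigenvector (a minimal sketch of the SC idea)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.exp(-d2 / (2 * sigma ** 2))
    deg = S.sum(1)
    L = np.eye(len(X)) - S / np.sqrt(deg[:, None] * deg[None, :])
    w, V = np.linalg.eigh(L)
    return (V[:, 1] > 0).astype(int)

# Two well-separated mock groups of 2-D points.
A = rng.normal([0, 0], 0.3, (30, 2))
B = rng.normal([5, 5], 0.3, (30, 2))
labels = spectral_bipartition(np.vstack([A, B]))
```

The paper's setting is the same idea in five-dimensional astrometric space, with k clusters extracted from the first k eigenvectors rather than a single sign split.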

  11. Decreased Sensitivity to Long-Distance Dependencies in Children with a History of Specific Language Impairment: Electrophysiological Evidence

    PubMed Central

    Purdy, J. D.; Leonard, Laurence B.; Weber-Fox, Christine; Kaganovich, Natalya

    2015-01-01

    Purpose One possible source of tense and agreement limitations in children with SLI is a weakness in appreciating structural dependencies that occur in many sentences in the input. We tested this possibility in the present study. Method Children with a history of SLI (H-SLI; N = 12; M age 9;7) and typically developing same-age peers (TD; N = 12; M age 9;7) listened to and made grammaticality judgments about grammatical and ungrammatical sentences involving either a local agreement error (e.g., Every night they talks on the phone) or a long-distance finiteness error (e.g., He makes the quiet boy talks a little louder). Electrophysiological (ERP) and behavioral (accuracy) measures were obtained. Results Local agreement errors elicited the expected anterior negativity and P600 components in both groups of children. However, relative to the TD group, the P600 effect for the long-distance finiteness errors was delayed, reduced in amplitude, and shorter in duration for the H-SLI group. The children's grammaticality judgments were consistent with the ERP findings. Conclusions Children with H-SLI seem to be relatively insensitive to the finiteness constraints that matrix verbs place on subject-verb clauses that appear later in the sentence. PMID:24686983

  12. Dimensionality reduction based on distance preservation to local mean for symmetric positive definite matrices and its application in brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Davoudi, Alireza; Shiry Ghidary, Saeed; Sadatnejad, Khadijeh

    2017-06-01

Objective. In this paper, we propose a nonlinear dimensionality reduction algorithm for the manifold of symmetric positive definite (SPD) matrices that considers the geometry of SPD matrices and provides a low-dimensional representation of the manifold with high class discrimination in a supervised or unsupervised manner. Approach. The proposed algorithm tries to preserve the local structure of the data by preserving distances to local means (DPLM) and also provides an implicit projection matrix. DPLM is linear in terms of the number of training samples. Main results. We performed several experiments on the multi-class dataset IIa from BCI competition IV and two other datasets from BCI competition III, including datasets IIIa and IVa. The results show that our approach as a dimensionality reduction technique leads to superior results in comparison with other competitors in the related literature because of its robustness against outliers and the way it preserves the local geometry of the data. Significance. The experiments confirm that the combination of DPLM with filter geodesic minimum distance to mean as the classifier leads to superior performance compared with the state of the art on brain-computer interface competition IV dataset IIa. Also the statistical analysis shows that our dimensionality reduction method performs significantly better than its competitors.
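The SPD-manifold geometry that DPLM respects is usually the affine-invariant Riemannian metric. As a hedged sketch (this is the standard metric computation, not the paper's DPLM algorithm; the two covariance matrices are made up):

```python
import numpy as np

def _logm_spd(A):
    """Matrix logarithm of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def _invsqrt_spd(A):
    """Inverse matrix square root of an SPD matrix."""
    w, V = np.linalg.eigh(A)
    return (V / np.sqrt(w)) @ V.T

def airm_distance(A, B):
    """Affine-invariant Riemannian distance on the SPD manifold:
    d(A, B) = || logm(A^{-1/2} B A^{-1/2}) ||_F."""
    C = _invsqrt_spd(A)
    return np.linalg.norm(_logm_spd(C @ B @ C))

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
```

Distances to local means computed with this metric, rather than the Euclidean one, are what keeps the low-dimensional embedding faithful to the covariance structure of EEG trials.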

  13. Primary Energy Reconstruction from the Charged Particle Densities Recorded with the KASCADE-Grande Detector at 500 m Distance from Shower Core

    NASA Astrophysics Data System (ADS)

    Toma, G.; Apel, W. D.; Arteaga, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Buchholz, P.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Finger, M.; Fuhrmann, D.; Ghia, P. L.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Kickelbick, D.; Klages, H. O.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Mayer, H. J.; Melissas, M.; Milke, J.; Mitrica, B.; Morello, C.; Navarra, G.; Nehls, S.; Oehlschläger, J.; Ostapchenko, S.; Over, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schröder, F.; Sima, O.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.

    2010-11-01

Previous EAS investigations have shown that for a fixed primary energy the charged particle density becomes independent of the primary mass at certain (fixed) distances from the shower core. This feature can be used as an estimator for the primary energy. We present results on the reconstruction of the primary energy spectrum of cosmic rays from the experimentally recorded S(500) observable (the density of charged particles at 500 m distance to the shower core) using the KASCADE-Grande detector array. The KASCADE-Grande experiment is hosted by the Karlsruhe Institute for Technology-Campus North, Karlsruhe, Germany, and operated by an international collaboration. The constant intensity cut (CIC) method is applied to evaluate, and correct for, the attenuation of the S(500) observable with zenith angle. A calibration of S(500) values with the primary energy has been worked out by simulations and was applied to the data to obtain the primary energy spectrum (in the energy range log10[E0/GeV]∈[7.5,9]). The systematic uncertainties induced by different sources are considered. In addition, a correction based on a response matrix is applied to account for the effects of shower-to-shower fluctuations on the spectral index of the reconstructed energy spectrum.

  14. Potential of nisin-incorporated sodium caseinate films to control Listeria in artificially contaminated cheese.

    PubMed

    Cao-Hoang, Lan; Chaine, Aline; Grégoire, Lydie; Waché, Yves

    2010-10-01

A sodium caseinate film containing nisin (1000 IU/cm²) was produced and used to control Listeria innocua in an artificially contaminated cheese. Mini red Babybel cheese was chosen as a model semi-soft cheese. L. innocua was both surface- and in-depth inoculated to investigate the effectiveness of the antimicrobial film as a function of the distance from the surface in contact with the film. The presence of the active film resulted in a 1.1 log CFU/g reduction in L. innocua counts in surface-inoculated cheese samples after one week of storage at 4 °C as compared to control samples. With regard to in-depth inoculated cheese samples, antimicrobial efficiency was found to be dependent on the distance from the surface in contact with the active films to the cheese matrix. The inactivation rates obtained were 1.1, 0.9 and 0.25 log CFU/g for distances from the contact surface of 1 mm, 2 mm and 3 mm, respectively. Our study demonstrates the potential application of sodium caseinate films containing nisin as a promising method to overcome problems associated with post-process contamination, thereby extending the shelf life and possibly enhancing the microbial safety of cheeses.

  15. Fluid-driven Fractures and Backflow in a Multilayered Elastic Matrix

    NASA Astrophysics Data System (ADS)

    Smiddy, Samuel; Lai, Ching-Yao; Stone, Howard

    2016-11-01

    We study the dynamics when pressurized fluid is injected at a constant flow rate into a multi-layered elastic matrix. In particular, we report experiments of such crack propagation as a function of orientation and distance from the contact of the layers. Subsequently we study the shape and propagation of the fluid along the contact of layers as well as volume of fluid remaining in the matrix once the injection pressure is released and "flowback" occurs. The experiments presented here may mimic the interaction between hydraulic fractures and pre-existing fractures and the dynamics of flowback in hydraulic fracturing. Study made possible by the Andlinger Center for Energy and the Environment and the Fred Fox Fund.

  16. Deuterium REDOR: Principles and Applications for Distance Measurements

    NASA Astrophysics Data System (ADS)

    Sack, I.; Goldbourt, A.; Vega, S.; Buntkowsky, G.

    1999-05-01

The application of short composite pulse schemes ([figure] and [figure]) to the rotational echo double-resonance (REDOR) spectroscopy of X–²H (X: spin-1/2, observed) systems with large deuterium quadrupolar interactions has been studied experimentally and theoretically and compared with simple 180° pulse schemes. The basic properties of the composite pulses on the deuterium nuclei have been elucidated, using average Hamiltonian theory, and exact simulations of the experiments have been achieved by stepwise integration of the equation of motion of the density matrix. REDOR experiments were performed on ¹⁵N–²H in doubly labeled acetanilide and on ¹³C–²H in singly ²H-labeled acetanilide. The most efficient REDOR dephasing was observed when [figure] composite pulses were used. It is found that the dephasing due to simple 180° deuterium pulses is about a factor of 2 less efficient than the dephasing due to the composite pulse sequences, and thus the range of couplings observable by X–²H REDOR is enlarged toward weaker couplings, i.e., larger distances. From these experiments the ²H–¹⁵N dipolar coupling between the amino deuteron and the amino nitrogen and the ²H–¹³C dipolar couplings between the amino deuteron and the α and β carbons have been elucidated, and the corresponding distances have been determined. The distance data from REDOR are in good agreement with data from X-ray and neutron diffraction, showing the power of the method.
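The conversion from a measured dipolar coupling to an internuclear distance uses the standard point-dipole relation. A hedged sketch (the gyromagnetic ratios are standard tabulated values; the 1.05 Å ²H–¹⁵N separation is an illustrative input, not a value from this paper):

```python
import math

MU0 = 4e-7 * math.pi        # vacuum permeability, T*m/A
HBAR = 1.0545718e-34        # J*s
GAMMA = {"1H": 2.6752e8, "2H": 4.1066e7,
         "13C": 6.7283e7, "15N": -2.7126e7}   # rad/(s*T)

def dipolar_coupling_hz(iso1, iso2, r_m):
    """|d| = (mu0/4pi) * |gamma1*gamma2| * hbar / (2*pi*r^3), in Hz."""
    return (MU0 / (4 * math.pi)) * abs(GAMMA[iso1] * GAMMA[iso2]) \
        * HBAR / (2 * math.pi * r_m ** 3)

def distance_m(iso1, iso2, d_hz):
    """Invert the relation above: r = (K / d)^(1/3)."""
    k = (MU0 / (4 * math.pi)) * abs(GAMMA[iso1] * GAMMA[iso2]) \
        * HBAR / (2 * math.pi)
    return (k / d_hz) ** (1.0 / 3.0)

# A 2H-15N pair at ~1.05 angstrom gives a coupling in the low-kHz range,
# which is why extending REDOR toward weaker couplings extends its reach
# to longer distances (d falls off as 1/r^3).
d = dipolar_coupling_hz("2H", "15N", 1.05e-10)
```

The 1/r³ dependence is the key point: halving the measurable coupling extends the accessible distance by only about 26%.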

  17. Phylogenetic position of the genus Perkinsus (Protista, Apicomplexa) based on small subunit ribosomal RNA.

    PubMed

    Goggin, C L; Barker, S C

    1993-07-01

    Parasites of the genus Perkinsus destroy marine molluscs worldwide. Their phylogenetic position within the kingdom Protista is controversial. Nucleotide sequence data (1792 bp) from the small subunit rRNA gene of Perkinsus sp. from Anadara trapezia (Mollusca: Bivalvia) from Moreton Bay, Queensland, was used to examine the phylogenetic affinities of this enigmatic genus. These data were aligned with nucleotide sequences from 6 apicomplexans, 3 ciliates, 3 flagellates, a dinoflagellate, 3 fungi, maize and human. Phylogenetic trees were constructed after analysis with maximum parsimony and distance matrix methods. Our analyses indicate that Perkinsus is phylogenetically closer to dinoflagellates and to coccidean and piroplasm apicomplexans than to fungi or flagellates.
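Tree construction with distance-matrix methods can be sketched with UPGMA, one classic such method (not necessarily the one used in this paper; the taxa labels echo the abstract but the distances are entirely made up):

```python
# A minimal UPGMA sketch: repeatedly merge the closest pair of clusters,
# averaging distances weighted by cluster size; returns a nested-tuple tree.
def upgma(names, D):
    clusters = {i: (names[i], 1) for i in range(len(names))}
    dist = {(i, j): D[i][j] for i in clusters for j in clusters if i < j}
    nxt = len(names)
    while len(clusters) > 1:
        (a, b) = min(dist, key=dist.get)          # closest pair
        (ta, na), (tb, nb) = clusters.pop(a), clusters.pop(b)
        for c in clusters:
            dac = dist.pop((min(a, c), max(a, c)))
            dbc = dist.pop((min(b, c), max(b, c)))
            dist[(c, nxt)] = (na * dac + nb * dbc) / (na + nb)
        del dist[(a, b)]
        clusters[nxt] = ((ta, tb), na + nb)
        nxt += 1
    return next(iter(clusters.values()))[0]

names = ["Perkinsus", "dinoflagellate", "apicomplexan", "ciliate"]
D = [[0, 2, 4, 8],
     [2, 0, 4, 8],
     [4, 4, 0, 8],
     [8, 8, 8, 0]]
tree = upgma(names, D)
```

With these toy distances the method groups Perkinsus with the dinoflagellate first, mirroring the qualitative conclusion of the abstract.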

  18. Transfer-Matrix Method for Solving the Spin 1/2 Antiferromagnetic Heisenberg Chain

    NASA Astrophysics Data System (ADS)

    Garcia-Bach, M. A.; Klein, D. J.; Valenti, R.

    Following the discovery of high Tc superconductivity in the copper oxides, there has been a great deal of interest in the RVB wave function proposed by Anderson [1]. As a warm-up exercise we have considered a valence-bond wave function for the one-dimensional spin-1/2 Heisenberg chain. The main virtue of our work is to propose a new variational singlet wavefunction which is almost analytically tractable by a transfer-matrix technique. We have obtained the ground state energy for finite as well as infinite chains, in good agreement with exact results. Correlation functions, excited states, and the effects of other interactions (e.g., spin-Peierls) are also accessible within this scheme [2]. Since the ground state of the chain is known to be a singlet (Lieb & Mattis [3]), we write the appropriate wave function as a superposition of valence-bond singlets, |ψ⟩ = Σ_k c_k |k⟩, where |k⟩ is a spin configuration obtained by pairing all spins into singlet pairs, in a way which is common in valence-bond calculations of large molecules. As in that case, each configuration |k⟩ can be represented by a Rumer diagram, with directed bonds connecting each pair of spins on the chain. The c_k's are variational coefficients, the form of which is determined as follows: each singlet configuration (Rumer diagram) is divided into "zones", a "zone" corresponding to the region between two consecutive sites. Each zone is indexed by its distance from the end of the chain and by the number of bonds crossing it. Our procedure assigns a variational parameter, x_ij, to the jth zone when it is crossed by i bonds. The resulting wavefunction for an N-site chain is written as |ψ⟩ = Σ_k Π_{i=1}^{M} Π_{j=1}^{N-1} x_ij^{m_ij(k)} |k⟩, where m_ij(k) equals 1 when zone j is crossed by i bonds and zero otherwise.
To make the calculation tractable we reduce the number of variational parameters by disallowing configurations with bonds connecting any two sites separated by more than 2M lattice points. (For simplicity, we have limited ourselves to M=3, but the scheme can be used for any M.) With this simple ansatz, matrix elements can be calculated by a transfer-matrix method. To understand the transfer-matrix method, note that since only local zone parameters appear in the description of each state |k⟩, matrix elements and overlaps, ⟨k| S_q · S_{q+1} |k'⟩ and ⟨k|k'⟩, are completely specified by a small number of "local states" associated with each zone. Within a given zone a local state is defined by (i) the number of bonds crossing the zone and (ii) whether the bonds originate from the initial (|k⟩) or final (|k'⟩) state. It is then easy to see that "local states" of consecutive zones are connected by a 15 × 15 transfer matrix (for the case M=3). Furthermore, the overlap matrix element can be written as a product of transfer matrices associated with each zone of the chain. When calculating matrix elements of the Hamiltonian, an additional matrix, U, must be defined to represent the particular zone involving the two spins connected by the Heisenberg interaction. The relevant details as well as the comparison with exact results will be given elsewhere. We are planning to ultimately apply this method to the two-dimensional case, and hope to include the effects of holes.
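    The product structure of a transfer-matrix calculation can be illustrated on a simpler model; the sketch below uses the 1-D Ising chain as a stand-in (not the authors' 15 × 15 valence-bond matrices), where the free-end partition function is a product of identical 2 × 2 transfer matrices sandwiched between boundary vectors.

```python
import numpy as np

# Transfer-matrix evaluation of a chain quantity: for a 1-D Ising chain
# with free ends, Z = v^T T^(N-1) v, with T the 2x2 transfer matrix.
beta_J = 0.5
T = np.array([[np.exp(beta_J), np.exp(-beta_J)],
              [np.exp(-beta_J), np.exp(beta_J)]])
v = np.ones(2)                     # free boundary vector
N = 10
Z = v @ np.linalg.matrix_power(T, N - 1) @ v

# Cross-check by brute-force enumeration over all 2^N spin configurations.
Z_brute = 0.0
for state in range(2 ** N):
    s = 2 * ((state >> np.arange(N)) & 1) - 1          # spins +/- 1
    Z_brute += np.exp(beta_J * np.sum(s[:-1] * s[1:]))
```

    The matrix power replaces an exponential-size sum, which is exactly what makes the 15 × 15 valence-bond transfer matrix tractable for long chains.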

  19. Evaluation of genetic divergence among clones of conilon coffee after scheduled cycle pruning.

    PubMed

    Dalcomo, J M; Vieira, H D; Ferreira, A; Lima, W L; Ferrão, R G; Fonseca, A F A; Ferrão, M A G; Partelli, F L

    2015-11-30

    Coffea canephora genotypes from the breeding program of Instituto Capixaba de Pesquisa e Extensão Rural were evaluated, and genetic diversity was estimated with the aim of guiding future improvement strategies. From an initial group of 55 genotypes, 18 from the region of Castelo, ES, were selected, together with three clones of the cultivars "Vitória" and "robusta tropical." Upon completion of the scheduled cycle pruning, 17 morphoagronomic traits were measured in the 22 genotypes selected. The principal components method was used to evaluate the relative contributions of the traits. The genetic dissimilarity matrix was obtained through the Mahalanobis generalized distance, and genotypes were grouped using the hierarchical method based on the mean of the distances. The most promising clones of Avaliação Castelo were AC02, AC03, AC12, AC13, AC22, AC24, AC26, AC27, AC28, AC29, AC30, AC35, AC36, AC37, AC39, AC40, AC43, and AC46. These methods detected high genetic variability, grouping the genotypes by similarity into five groups. The trait that contributed the least to genetic divergence was the number of leaves on plagiotropic branches; it was nevertheless not eliminated, because discarding it altered the groups. There are superior genotypes with potential for use in the next stages of the breeding program, aimed both at composing a clonal variety and at hybridizations.
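    The Mahalanobis-distance step can be sketched as follows; the 6 × 3 trait matrix is randomly generated for illustration, not the measured morphoagronomic data.

```python
import numpy as np

# Pairwise squared Mahalanobis (generalized) distances among genotypes.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))                     # genotypes x traits (synthetic)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))  # inverse trait covariance

n = len(X)
D2 = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        d = X[i] - X[j]
        D2[i, j] = d @ S_inv @ d                # squared Mahalanobis distance
```

    A dissimilarity matrix of this form is what the hierarchical grouping step then consumes; unlike Euclidean distance, it discounts correlated traits.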

  20. A Random Walk Approach to Query Informative Constraints for Clustering.

    PubMed

    Abin, Ahmad Ali

    2017-08-09

    This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk on the adjacency graph of the data to travel between two nodes and return. Commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method uses the commute time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stopping condition is met. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
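    Commute times can be computed from the Moore-Penrose pseudoinverse of the graph Laplacian; a minimal sketch on a hypothetical 4-node graph (not the paper's recursive bipartitioning pipeline):

```python
import numpy as np

# Commute time from the Laplacian pseudoinverse:
# C(u, v) = vol(G) * (L+[u,u] + L+[v,v] - 2 L+[u,v]).
# Toy graph: a triangle (0, 1, 2) with node 3 pendant on node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
L_pinv = np.linalg.pinv(L)
vol = A.sum()                       # twice the number of edges

def commute_time(u, v):
    return vol * (L_pinv[u, u] + L_pinv[v, v] - 2.0 * L_pinv[u, v])
```

    Nodes 0 and 1 are joined by two short paths, so their commute time (16/3) is smaller than that across the single bridge edge to the pendant node 3 (exactly vol × 1 = 8), matching the "more short paths, more similar" property.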

  1. Decisive role of magnetism in the interaction of chromium and nickel solute atoms with 1/2⟨111⟩ screw dislocation core in body-centered cubic iron

    DOE PAGES

    Odbadrakh, Kh.; Samolyuk, G.; Nicholson, D.; ...

    2016-09-13

    Resistance to swelling under irradiation and a low rate of corrosion in high temperature environments make Fe-Cr and Fe-Cr-Ni alloys promising structural materials for energy technologies. In this paper we report the results obtained using a combination of density functional theory (DFT) techniques: plane wave basis set solutions for pseudo-potentials and multiple scattering solutions for all-electron potentials. We have found a very strong role of magnetism in the stability of screw dislocation cores in pure Fe and in their interaction with Cr and Ni magnetic impurities. In particular, the screw dislocation quadrupole in Fe is stabilized only in the presence of ferromagnetism. Moreover, Ni atoms, whose magnetic moment is oriented along the magnetization direction of the Fe matrix, prefer to occupy positions in the core, whereas Cr atoms, which couple anti-ferromagnetically with the Fe matrix, prefer positions outside the dislocation core. In effect, Ni impurities are attracted to, while Cr impurities are repelled by, the dislocation core. We demonstrate that this contrasting behavior can be explained only by the nature of the magnetic coupling of the impurities to the Fe matrix: the Cr interaction with the dislocation core mirrors that of Ni if the Cr magnetic moment is constrained to lie along the direction of the Fe matrix magnetization. We have also shown that the magnetic contribution can affect the impurity-impurity interaction at distances up to a few Burgers vectors. In particular, the distance between Cr atoms in the Fe matrix should be at least 3-4 lattice parameters in order to eliminate finite size effects.

  2. Reconstruction of structural damage based on reflection intensity spectra of fiber Bragg gratings

    NASA Astrophysics Data System (ADS)

    Huang, Guojun; Wei, Changben; Chen, Shiyuan; Yang, Guowei

    2014-12-01

    We present an approach for structural damage reconstruction based on the reflection intensity spectra of fiber Bragg gratings (FBGs). Our approach incorporates the finite element method, transfer matrix (T-matrix), and genetic algorithm to solve the inverse photo-elastic problem of damage reconstruction, i.e. to identify the location, size, and shape of a defect. By introducing a parameterized characterization of the damage information, the inverse photo-elastic problem is reduced to an optimization problem, and a relevant computational scheme was developed. The scheme iteratively searches for the solution to the corresponding direct photo-elastic problem until the simulated and measured (or target) reflection intensity spectra of the FBGs near the defect coincide within a prescribed error. Proof-of-concept validations of our approach were performed numerically and experimentally using both holed and cracked plate samples as typical cases of plane-stress problems. The damage identifiability was simulated by changing the deployment of the FBG sensors, including the total number of sensors and their distance to the defect. Both the numerical and experimental results demonstrate that our approach is effective and promising. It provides us with a photo-elastic method for developing a remote, automatic damage-imaging technique that substantially improves damage identification for structural health monitoring.

  3. Geometrical analysis of Cys-Cys bridges in proteins and their prediction from incomplete structural information

    NASA Technical Reports Server (NTRS)

    Goldblum, A.; Rein, R.

    1987-01-01

    Analysis of C-alpha atom positions from cysteines involved in disulphide bridges in protein crystals shows that their geometric characteristics are unique with respect to other, non-bridging Cys-Cys pairs. They may be used for predicting disulphide connections in incompletely determined protein structures, such as low-resolution crystallography or theoretical folding experiments. The basic unit for analysis and prediction is the 3 x 3 distance matrix for the C-alpha positions of residues (i - 1), Cys(i), (i + 1) with (j - 1), Cys(j), (j + 1). In each of its rows, columns, and diagonal vectors, the outer distances are larger than the central distance. This analysis is compared with some analytical models.
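    The 3 x 3 distance-matrix criterion can be sketched with hypothetical C-alpha coordinates: two locally straight strands, one along x and one along z, bridged at their central cysteines about 5 Å apart.

```python
import numpy as np

# Hypothetical C-alpha triplets around a bridge: residues (i-1, i, i+1)
# along x, residues (j-1, j, j+1) along z, 3.8 A between neighbors.
tri_i = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]])
tri_j = np.array([[3.8, 5.0, -3.8], [3.8, 5.0, 0.0], [3.8, 5.0, 3.8]])

# D[a, b] = distance from residue (i-1+a) to residue (j-1+b).
D = np.linalg.norm(tri_i[:, None, :] - tri_j[None, :, :], axis=2)

def center_smallest(v):
    # The outer distances should exceed the central one.
    return v[1] < v[0] and v[1] < v[2]

checks = [center_smallest(D[1, :]),                  # central row
          center_smallest(D[:, 1]),                  # central column
          center_smallest(np.diag(D)),               # main diagonal
          center_smallest(np.diag(np.fliplr(D)))]    # anti-diagonal
```

    For this geometry all four checks pass: the Cys(i)-Cys(j) distance (5.0 Å) is the smallest entry in its row, column, and both diagonals, which is the signature the prediction method looks for.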

  4. Long-range comparison between genes and languages based on syntactic distances.

    PubMed

    Colonna, Vincenza; Boattini, Alessio; Guardiano, Cristina; Dall'ara, Irene; Pettener, Davide; Longobardi, Giuseppe; Barbujani, Guido

    2010-01-01

    Our aim is to propose a new approach for comparing genetic and linguistic diversity in populations belonging to distantly related groups. Comparisons of linguistic and genetic differences have proved powerful tools for reconstructing human demographic history. Current models assume on both sides that similarities reflect either descent from common ancestry or the balance between isolation and contact. Most linguistic phylogenies are ultimately based on lexical evidence (roughly, words and morphemes with their sounds and meanings). However, measures of lexical divergence are reliable only for closely related languages, so large-scale comparisons of genetic and linguistic diversity have appeared problematic so far. Syntax (the abstract rules that combine words into sentences) appears more measurable, universally comparable, and stable than the lexicon, and hence certain syntactic similarities might reflect deeper linguistic relationships, such as those between distant language families. In this study, we compared, for the first time, genetic data to a matrix of syntactic differences among selected populations of three continents. Comparing two databases of microsatellite (Short Tandem Repeat) markers and Single Nucleotide Polymorphisms (SNPs) with a linguistic matrix based on the values of 62 grammatical parameters, we show that there is indeed a correlation of syntactic and genetic distances. We also identified a few outliers and suggest a possible interpretation of the overall pattern. These results strongly support the possibility of better investigating population history by combining genetic data with linguistic information of a new type, provided by a theoretically more sophisticated method of assessing the relationships between distantly related languages and language families. Copyright © 2010 S. Karger AG, Basel.
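    Correlating two distance matrices of this kind is commonly done with a Mantel-style permutation test; below is a sketch on synthetic stand-in matrices, not the study's genetic or 62-parameter syntactic data.

```python
import numpy as np

# Mantel-style test: correlate the upper triangles of two distance
# matrices and build a null by jointly permuting rows/columns of one.
rng = np.random.default_rng(1)
n = 8
pts = rng.normal(size=(n, 2))
G = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)   # "genetic"
S = G + rng.normal(scale=0.1, size=(n, n))                # correlated copy
S = (S + S.T) / 2.0
np.fill_diagonal(S, 0.0)                                  # "syntactic"

iu = np.triu_indices(n, k=1)
r_obs = np.corrcoef(G[iu], S[iu])[0, 1]

perm_r = []
for _ in range(500):
    p = rng.permutation(n)
    perm_r.append(np.corrcoef(G[np.ix_(p, p)][iu], S[iu])[0, 1])
p_value = (np.sum(np.array(perm_r) >= r_obs) + 1) / (500 + 1)
```

    Permuting rows and columns together preserves each matrix's internal structure while breaking the population correspondence, which is the appropriate null for matrix-to-matrix correlation.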

  5. Analysis of Fundus Fluorescein Angiogram Based on the Hessian Matrix of Directional Curvelet Sub-bands and Distance Regularized Level Set Evolution.

    PubMed

    Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2015-01-01

    This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). In order to extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained iteratively using a simple region growing algorithm, which merges centerline images with the contents of images resulting from a modified top-hat transform followed by bit-plane slicing. After extracting blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image, using multi-structure-element morphology and modification of the FDCT coefficients. Then, the Canny edge detector and Hough transform are applied to the reconstructed image to extract the boundary of the candidate regions. In the next step, information from the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, containing 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction.

  6. Wear Behaviour of Al-6061/SiC Metal Matrix Composites

    NASA Astrophysics Data System (ADS)

    Mishra, Ashok Kumar; Srivastava, Rajesh Kumar

    2017-04-01

    Aluminium Al-6061 matrix composites reinforced with SiC particles of 150 and 600 mesh size were fabricated by the stir casting method, and their wear resistance and coefficient of friction were investigated in the present study as a function of applied load and SiC weight fraction (5, 10, 15, 20, 25, 30, 35 and 40%). The dry sliding wear properties of the composites were investigated using a pin-on-disk testing machine at a sliding velocity of 2 m/s and a sliding distance of 2000 m under loads of 10, 20 and 30 N. The results show that reinforcing the metal matrix with SiC particulates up to a weight percentage of 35% reduces the wear rate. The results also show that the wear of the test specimens increases with increasing load and sliding distance. The coefficient of friction decreases slightly with increasing weight percentage of reinforcement. The worn surfaces were examined by optical microscopy, which shows large grooved regions and cavities with ceramic particles on the worn surface of the composite alloy. This indicates an abrasive wear mechanism, essentially a result of hard ceramic particles exposed on the worn surfaces. Further, it was found from the experiments that the wear rate decreases linearly with increasing weight fraction of SiC, and the average coefficient of friction decreases linearly with increasing applied load, weight fraction of SiC and mesh size of SiC. The best result was obtained at 35% weight fraction and 600 mesh size of SiC.

  7. Analysis of Fundus Fluorescein Angiogram Based on the Hessian Matrix of Directional Curvelet Sub-bands and Distance Regularized Level Set Evolution

    PubMed Central

    Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2015-01-01

    This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). In order to extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained iteratively using a simple region growing algorithm, which merges centerline images with the contents of images resulting from a modified top-hat transform followed by bit-plane slicing. After extracting blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image, using multi-structure-element morphology and modification of the FDCT coefficients. Then, the Canny edge detector and Hough transform are applied to the reconstructed image to extract the boundary of the candidate regions. In the next step, information from the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, containing 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction. PMID:26284170

  8. Cyclic Fiber Push-In Test Monitors Evolution of Interfacial Behavior in Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Eldridge, Jeffrey I.

    1998-01-01

    SiC fiber-reinforced ceramic matrix composites are being developed for high-temperature advanced jet engine applications. Obtaining a strong, tough composite material depends critically on optimizing the mechanical coupling between the reinforcing fibers and the surrounding matrix material. This has usually been accomplished by applying a thin C or BN coating onto the surface of the reinforcing fibers. The performance of these fiber coatings, however, may degrade under cyclic loading conditions or exposure to different environments. Degradation of the coating-controlled interfacial behavior will strongly affect the useful service lifetime of the composite material. Cyclic fiber push-in testing was applied to monitor the evolution of fiber sliding behavior in both C- and BN-coated small-diameter (15-μm) SiC-fiber-reinforced ceramic matrix composites. The cyclic fiber push-in tests were performed using a desktop fiber push-out apparatus. At the beginning of each test, the fiber to be tested was aligned underneath a 10-μm-diameter diamond punch; then, the applied load was cycled between selected maximum and minimum loads. From the measured response, the fiber sliding distance and frictional sliding stresses were determined for each cycle. Tests were performed in both room air and nitrogen. Cyclic fiber push-in tests of C-coated, SiC-fiber-reinforced SiC showed progressive increases in fiber sliding distances along with decreases in frictional sliding stresses for continued cycling in room air. This rapid degradation in interfacial response was not observed for cycling in nitrogen, indicating that moisture exposure had a large effect in immediately lowering the frictional sliding stresses of C-coated fibers. These results indicate that matrix cracks bridged by C-coated fibers will not be stable, but will rapidly grow in moisture-containing environments.
In contrast, cyclic fiber push-in tests of both BN-coated, SiC-fiber-reinforced SiC and BN-coated, SiC-fiber-reinforced barium strontium aluminosilicate showed no significant changes in fiber sliding behavior with continued short-term cycling in either room air or nitrogen. Although the composites with BN-coated fibers showed stable short-term cycling behavior in both environments, long-term (several-week) exposure of debonded fibers to room air resulted in dramatically increased fiber sliding distances and decreased frictional sliding stresses. These results indicate that although matrix cracks bridged by BN-coated fibers will show short-term stability, such cracks will show substantial growth with long-term exposure to moisture-containing environments. Newly formulated BN coatings, with higher moisture resistance, will be tested in the near future.

  9. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via the Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, one first needs to build the local linear embedding in the original feature space, and then quantize this embedding to binary codes. Such two-step coding is suboptimal. Moreover, the off-line learning is extremely time- and memory-consuming, as it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationship of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, show the superior performance of the proposed DLLH over state-of-the-art approaches.
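    The Hamming-distance approximation at the heart of hashing can be sketched directly; the 8-bit codes below are arbitrary examples, unrelated to DLLH's learned codes.

```python
import numpy as np

# Pairwise Hamming distances between short binary codes: XOR marks the
# differing bits, and summing them counts the distance.
codes = np.array([[0, 1, 1, 0, 1, 0, 0, 1],
                  [0, 1, 0, 0, 1, 0, 1, 1],
                  [1, 0, 1, 1, 0, 1, 0, 0]], dtype=np.uint8)

H = (codes[:, None, :] ^ codes[None, :, :]).sum(axis=2)
```

    On packed 64-bit words this reduces to XOR plus popcount, which is why Hamming ranking over millions of codes is fast enough for large-scale search.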

  10. Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms

    NASA Astrophysics Data System (ADS)

    Kwok, Kwan S.; Driessen, Brian J.; Phillips, Cynthia A.; Tovey, Craig A.

    1997-09-01

    This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently; solutions for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system, with the origin fixed at one of the robots and the orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target-multiple-agent scenario using a matching algorithm. Two separate cases, each with one hundred agents, were analyzed using this method. We have found these mobile robot problems to be a very interesting application of network optimization methods, and we expect this to be a fruitful area for future research.
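    Minimizing the maximum distance traveled is a bottleneck assignment; a sketch using a threshold search over a standard sum-minimizing matcher (scipy's `linear_sum_assignment` as the feasibility oracle), with hypothetical robot and slot positions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Bottleneck assignment: find the smallest t such that a perfect matching
# exists using only robot-slot edges of length <= t.
rng = np.random.default_rng(2)
robots = rng.uniform(size=(5, 2))                       # random start points
slots = np.column_stack([np.linspace(0.0, 1.0, 5),
                         np.full(5, 0.5)])              # desired grid line
D = np.linalg.norm(robots[:, None] - slots[None, :], axis=2)

def feasible(t):
    cost = (D > t).astype(float)        # 1 for forbidden (too-long) edges
    r, c = linear_sum_assignment(cost)
    return cost[r, c].sum() == 0.0      # matching uses no forbidden edge

best_t = next(t for t in np.unique(D) if feasible(t))   # minimal feasible max

# For comparison: the assignment minimizing the *sum* of distances.
r, c = linear_sum_assignment(D)
```

    The sum-optimal assignment's longest leg can only be as good as `best_t`, which is why the min-max objective needs the threshold search rather than a plain matching.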

  11. Guidance and Control System for a Satellite Constellation

    NASA Technical Reports Server (NTRS)

    Bryson, Jonathan Lamar; Cox, James; Mays, Paul Richard; Neidhoefer, James Christian; Ephrain, Richard

    2010-01-01

    A distributed guidance and control algorithm was developed for a constellation of satellites. The system repositions satellites as required, regulates satellites to desired orbits, and prevents collisions. 1. Optimal methods are used to compute nominal transfers from orbit to orbit. 2. Satellites are regulated to maintain the desired orbits once the transfers are complete. 3. A simulator is used to predict potential collisions or near-misses. 4. Each satellite computes perturbations to its controls so as to increase any unacceptable distances of nearest approach to other objects. a. The avoidance problem is recast in a distributed and locally-linear form to arrive at a tractable solution. b. Plant matrix values are approximated via simulation at each time step. c. The Linear Quadratic Gaussian (LQG) method is used to compute perturbations to the controls that will result in increased miss distances. 5. Once all danger is passed, the satellites return to their original orbits, all the while avoiding each other as above. 6. The delta-Vs are reasonable. The controller begins maneuvers as soon as practical to minimize delta-V. 7. Despite the inclusion of trajectory simulations within the control loop, the algorithm is sufficiently fast for available satellite computer hardware. 8. The required measurement accuracies are within the capabilities of modern inertial measurement devices and modern positioning devices.

  12. A LiDAR data-based camera self-calibration method

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun

    2018-07-01

    To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. The parameters are estimated using particle swarm optimization (PSO) to search for the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts: extraction and fine matching of interest points in the images, establishment of a cost function based on the Kruppa equations, and PSO optimization using LiDAR data as the initialization input. To improve the precision of the matching pairs, a new method using the maximal information coefficient (MIC) and maximum asymmetry score (MAS) was combined with the RANSAC algorithm to remove false matching pairs. Highly precise matching pairs were used to calculate the fundamental matrix, so that the new cost function (deduced from the Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving the four intrinsic parameters was minimized by PSO for the optimal solution. To overcome the issue of the optimization being pushed to a local optimum, LiDAR data were used to determine the scope of initialization, based on the solution to the P4P problem for the camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors of less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was smaller than 1.0 cm. The experimental and simulated results demonstrate that the proposed method is highly accurate and robust.
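    A minimal PSO loop of the kind used here can be sketched as follows; the quadratic cost and the 4-D target values are illustrative stand-ins for the paper's Kruppa-equation cost and LiDAR-informed initialization scope.

```python
import numpy as np

# Generic particle swarm optimization on a 4-D "intrinsic parameter"
# vector (hypothetical fx, fy, cx, cy target; not the paper's cost).
rng = np.random.default_rng(4)
target = np.array([800.0, 820.0, 320.0, 240.0])
f = lambda x: np.sum((x - target) ** 2, axis=-1)

n_particles, dim = 30, 4
# Initialization restricted to a scope around a rough guess -- the role
# the LiDAR-derived focal length plays in the paper.
pos = target + rng.uniform(-100.0, 100.0, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), f(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(200):
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = f(pos)
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)]
```

    Bounding the initialization region is exactly what keeps the swarm from settling into a distant local optimum of a less well-behaved cost surface.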

  13. Oscillation properties of active and sterile neutrinos and neutrino anomalies at short distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khruschov, V. V., E-mail: khruschov-vv@nrcki.ru; Fomichev, S. V., E-mail: fomichev-sv@nrcki.ru; Titov, O. A., E-mail: titov-oa@nrcki.ru

    2016-09-15

    A generalized phenomenological (3 + 2 + 1) model featuring three active and three sterile neutrinos is considered; it is intended for calculating the oscillation properties of neutrinos in the case of a normal active-neutrino mass hierarchy and a large splitting between the mass of one sterile neutrino and the masses of the other two sterile neutrinos. A new parametrization and a specific form of the general mixing matrix are proposed for active and sterile neutrinos with allowance for possible CP violation in the lepton sector, and test values are chosen for the neutrino masses and mixing parameters. The probabilities for transitions between different neutrino flavors are calculated, and graphs are plotted of the probabilities for the disappearance of muon neutrinos/antineutrinos and the appearance of electron neutrinos/antineutrinos in a beam of muon neutrinos/antineutrinos versus the distance from the neutrino source, for various values of admissible model parameters at neutrino energies not higher than 50 MeV, as well as versus the ratio of this distance to the neutrino energy. It is shown that the short-distance accelerator anomaly in neutrino data (the LSND anomaly) can be explained in the case of a specific mixing matrix for active and sterile neutrinos (which belongs to the a_2 type) at the chosen parameter values. The same applies to the short-distance reactor and gallium anomalies. The theoretical results obtained in the present study can be used to interpret and predict the results of ground-based neutrino experiments aimed at searches for sterile neutrinos, as well as to analyze some astrophysical observational data.
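    For orientation, the standard two-flavor appearance probability (a textbook formula, not the paper's full (3 + 2 + 1) mixing matrix) can be evaluated directly:

```python
import numpy as np

# Two-flavor short-baseline appearance probability:
# P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[m] / E[MeV]).
def p_appear(sin2_2theta, dm2_eV2, L_m, E_MeV):
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

# Illustrative LSND-like values: dm2 ~ 1 eV^2, L = 30 m, E = 40 MeV.
P = p_appear(sin2_2theta=0.003, dm2_eV2=1.0, L_m=30.0, E_MeV=40.0)
```

    The oscillation phase depends on L/E, which is why the paper plots probabilities both versus distance and versus the distance-to-energy ratio.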

  14. Universal relations for spin-orbit-coupled Fermi gas near an s -wave resonance

    NASA Astrophysics Data System (ADS)

    Zhang, Pengfei; Sun, Ning

    2018-04-01

    Synthetic spin-orbit-coupled quantum gases have been widely studied both experimentally and theoretically in the past decade. As shown in previous studies, this modification of the single-body dispersion will in general couple different partial waves of the two-body scattering and thus distort the wave function of few-body bound states, which determines the short-distance behavior of the many-body wave function. In this work, we focus on the two-component Fermi gas with one-dimensional or three-dimensional spin-orbit coupling (SOC) near an s-wave resonance. Using the method of effective field theory and the operator product expansion, we derive universal relations for both systems, including the adiabatic theorem, the virial theorem, and the pressure relation, and obtain the momentum distribution matrix ⟨ψ_a†(q) ψ_b(q)⟩ at large q (a, b are spin indices). The momentum distribution matrix shows both spin-dependent and spatially anisotropic features, and the large-momentum tail is modified at subleading order due to the SOC. We also discuss the experimental implications of these results depending on the realization of the SOC.

  15. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-12-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
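    The Schur-product localization described above can be sketched as follows, using the widely used Gaspari–Cohn compactly supported correlation function as the distance-dependent localization function (one common univariate choice; the paper's contribution is the multivariate generalization, which this sketch does not cover):

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn 5th-order piecewise polynomial correlation function,
    with compact support on [0, 2] (in units of the length scale)."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    x = r[m1]
    out[m1] = -0.25*x**5 + 0.5*x**4 + 0.625*x**3 - (5/3)*x**2 + 1.0
    x = r[m2]
    out[m2] = (1/12)*x**5 - 0.5*x**4 + 0.625*x**3 + (5/3)*x**2 - 5.0*x + 4.0 - (2/3)/x
    return out

def localize(sample_cov, dist, length_scale):
    """Schur (element-wise) product of the ensemble sample covariance
    with a distance-dependent correlation (localization) matrix."""
    rho = gaspari_cohn(dist / length_scale)
    return sample_cov * rho
```

    Because the localization matrix has ones on its diagonal (zero distance), the ensemble variances are left untouched while spurious long-range covariances are tapered to zero.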

  16. [Continuum, the continuing education platform based on a competency matrix].

    PubMed

    Ochoa Sangrador, C; Villaizán Pérez, C; González de Dios, J; Hijano Bandera, F; Málaga Guerrero, S

    2016-04-01

    Competency-Based Education is a learning method that has changed the traditional teaching-based focus to a learning-based one. Students are the centre of the process, in which they must learn to learn, solve problems, and adapt to changes in their environment. The goal is to provide learning based on knowledge, skills (know-how), attitude and behaviour. These sets of knowledge are called competencies. It is essential to have a reference of the required competencies in order to identify the need for them. Their acquisition is approached through teaching modules, in which one or more skills can be acquired. This teaching strategy has been adopted by Continuum, the distance learning platform of the Spanish Paediatric Association, which has developed a competency matrix based on the Global Paediatric Education Consortium training program. In this article, a review will be presented on the basics of Competency-Based Education and how it is applied in Continuum. Copyright © 2015 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.

  17. Creating Perfused Functional Vascular Channels Using 3D Bio-Printing Technology

    PubMed Central

    Lee, Vivian K.; Kim, Diana Y.; Ngo, Haygan; Lee, Young; Seo, Lan; Yoo, Seung-Schik; Vincent, Peter A.; Dai, Guohao

    2014-01-01

    We developed a methodology using 3D bio-printing technology to create a functional in vitro vascular channel with a perfused open lumen using only cells and biological matrices. The fabricated vasculature has a tight, confluent endothelium lining, presenting barrier function for both plasma protein and a high-molecular-weight dextran molecule. The fluidic vascular channel is capable of supporting the viability of tissue up to 5 mm in distance at 5 million cells/mL density under physiological flow conditions. In statically cultured vascular channels, active angiogenic sprouting from the vessel surface was observed, whereas physiological flow strongly suppressed this process. Gene expression analyses reported in this study show the potential of this vessel model in vascular biology research. The methods have great potential in vascularized tissue fabrication using 3D bio-printing technology, as the vascular channel is created simultaneously while cells and matrix are printed around the channel in desired 3D patterns. It can also serve as a unique experimental tool for investigating fundamental mechanisms of vascular remodeling with extracellular matrix and the maturation process under 3D flow conditions. PMID:24965886

  18. X-ray absorption spectroscopic studies on gold nanoparticles in mesoporous and microporous materials.

    PubMed

    Akolekar, Deepak B; Foran, Garry; Bhargava, Suresh K

    2004-05-01

    Au L(3)-edge X-ray absorption spectroscopic measurements were carried out over a series of mesoporous and microporous materials containing gold nanoparticles to investigate the effects of the host matrix and preparation methods on the properties of gold nanoparticles. The materials of structure type MCM-41, ZSM-5, SAPO-18 and LSX with varying framework composition containing low concentrations of gold nanoparticles were prepared and characterized. In these materials the size of the gold nanoparticles varied in the range approximately 1 to 4 nm. A series of gold nanoparticles within different mesoporous and microporous materials have been investigated using X-ray absorption fine structure (XANES, EXAFS) and other techniques. Information such as atomic distances, bonding and neighbouring environment obtained from XAFS measurements was useful in elucidating the nature and structure of gold nanoparticles on these catalytic materials. The influence of the high-temperature (823, 1113, 1273 K) treatment on gold nanoparticles inside the mesoporous matrix was investigated using the XAFS technique. The XAFS and XANES results confirm various characteristics of gold nanoparticles in these materials suitable for catalysis, fabrication of nanodevices and other applications.

  19. Cooperative Activated Transport of Dilute Penetrants in Viscous Molecular and Polymer Liquids

    NASA Astrophysics Data System (ADS)

    Schweizer, Kenneth; Zhang, Rui

    We generalize the force-level Elastically Collective Nonlinear Langevin Equation theory of activated relaxation in one-component supercooled liquids to treat the hopping transport of a dilute penetrant in a dense hard sphere fluid. The new idea is to explicitly account for the coupling between penetrant displacement and a local matrix cage rearrangement which facilitates its hopping. A temporal causality condition is employed to self-consistently determine a dimensionless degree of matrix distortion relative to the penetrant jump distance using the dynamic free energy concept. Penetrant diffusion becomes increasingly coupled to the correlated matrix displacements for larger penetrant-to-matrix particle size ratio (R) and/or attraction strength (physical bonds), but depends weakly on matrix packing fraction. In the absence of attractions, a nearly exponential dependence of penetrant diffusivity on R is predicted in the intermediate range of 0.2

  20. Wear study of Al-SiC metal matrix composites processed through microwave energy

    NASA Astrophysics Data System (ADS)

    Honnaiah, C.; Srinath, M. S.; Prasad, S. L. Ajit

    2018-04-01

    Particulate-reinforced metal matrix composites are finding wider acceptance in many industrial applications due to their isotropic properties and ease of manufacture. Uniform distribution of the reinforcement particulates and good bonding between matrix and reinforcement phases are essential in order to obtain metal matrix composites with improved properties. The conventional powder metallurgy technique can successfully overcome the limitations of stir casting, but it is time consuming and not cost effective. The use of microwave technology for processing particulate-reinforced metal matrix composites through the powder metallurgy route is being increasingly explored because of its cost effectiveness and speed of processing. The present work processes Al-SiC metal matrix composites using microwaves irradiated at 2.45 GHz frequency and 900 W power for 10 minutes. Further, dry sliding wear studies were conducted at different loads at a constant velocity of 2 m/s for various sliding distances using pin-on-disc equipment. Analysis of the results shows that the microwave-processed Al-SiC composite exhibits around 34% higher wear resistance than the aluminium alloy.

  1. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    NASA Astrophysics Data System (ADS)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.

  2. Analyzing Impact Area of Osym Offices in Istanbul by Idw Method

    NASA Astrophysics Data System (ADS)

    Kalkan, Y.; Ozturk, O.; Gülnerman, A. G.; Bilgi, S.

    2016-12-01

    OSYM is the main institute for organizing national-level large-scale exams in Turkey. According to the Ministry of National Education of Turkey, there are 17,588,958 students in the country; OSYM therefore has a significant role for everyone at every level of education. More than 15% of the total students study in Istanbul. These students take various exams throughout the year, each of which brings procedures to be applied. OSYM Coordination Offices were founded to meet the demands and procedures of these exams and applicants; there are 9 Coordination Offices in Istanbul. In addition, OSYM Application Centers were founded as support units for the Coordination Offices. These units are hosted at high schools, and there are 67 OSYM Application Centers in Istanbul. In this study, the spatial distribution of OSYM Coordination Offices and Application Centers in Istanbul was studied in relation to the transportation network of each district of the city. The Origin Destination Cost Matrix (ODCM) and Inverse Distance Weighting (IDW) methods were used to visualize the distribution of office and center accessibility. ODCM measures the nearest paths along the transportation network from origins to destinations; IDW is one of several interpolation methods that allocate values to unknown points. The ODCM method was used to calculate distances over the transportation network, and the results were used in the IDW method to interpolate the weightings of the OSYM offices and centers. Accessibility of the Coordination Offices and Application Centers was then assessed according to the surrounding transportation network, and the spatial distribution of existing offices and application centers was evaluated by district of Istanbul city.
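    The IDW step can be sketched as follows: the value at an unknown point is the average of known values weighted by inverse distance. A minimal version in which the network distances produced by an ODCM-style step would be supplied as inputs (the power parameter of 2 is a conventional default, not a value from the study):

```python
import numpy as np

def idw(known_xy, known_vals, query_xy, power=2.0):
    """Inverse Distance Weighting: interpolate the value at query_xy as the
    distance-weighted average of known_vals, with weights 1 / d**power."""
    d = np.linalg.norm(known_xy - query_xy, axis=1)
    if np.any(d == 0):                # query coincides with a data point
        return float(known_vals[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * known_vals) / np.sum(w))
```

    Points closer to the query dominate the estimate; with two equidistant neighbors the result is their simple mean.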

  3. Machine-learned cluster identification in high-dimensional data.

    PubMed

    Ultsch, Alfred; Lötsch, Jörn

    2017-02-01

    High-dimensional biomedical data are frequently clustered to identify subgroup structures pointing at distinct disease subtypes. It is crucial that the cluster algorithm used works correctly. However, by imposing a predefined shape on the clusters, classical algorithms occasionally suggest a cluster structure in homogeneously distributed data or assign data points to incorrect clusters. We analyzed whether this can be avoided by using emergent self-organizing feature maps (ESOM). Data sets with different degrees of complexity were submitted to ESOM analysis with large numbers of neurons, using an interactive R-based bioinformatics tool. On top of the trained ESOM, the distance structure in the high-dimensional feature space was visualized in the form of a so-called U-matrix. Clustering results were compared with those provided by classical common cluster algorithms including single linkage, Ward and k-means. Ward clustering imposed cluster structures on cluster-less "golf ball", "cuboid" and "S-shaped" data sets that contained no structure at all (random data), and also imposed structures on permuted real-world data sets. By contrast, the ESOM/U-matrix approach correctly found that these data contain no cluster structure, while correctly identifying clusters in biomedical data truly containing subgroups; it was always correct in cluster structure identification on further canonical artificial data. Using intentionally simple data sets, it is shown that popular clustering algorithms typically used for biomedical data sets may fail to cluster data correctly, suggesting that they are also likely to perform erroneously on high-dimensional biomedical data. The present analyses emphasize that generally established classical hierarchical clustering algorithms carry a considerable tendency to produce erroneous results. By contrast, unsupervised machine-learned analysis of cluster structures, applied using the ESOM/U-matrix method, is a viable, unbiased method to identify true clusters in the high-dimensional space of complex data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Process Optimization and Microstructure Characterization of Ti6Al4V Manufactured by Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    junfeng, Li; zhengying, Wei

    2017-11-01

    Process optimization and microstructure characterization of Ti6Al4V manufactured by selective laser melting (SLM) were investigated in this article. The relative density of samples fabricated by SLM is influenced by the main process parameters: laser power, scan speed and hatch distance. The volume energy density (VED) was defined to account for the combined effect of these parameters on the relative density. The results showed that the relative density changed with VED, and the optimized process window is 55–60 J/mm³. Furthermore, when laser power, scan speed and hatch distance were compared using the Taguchi method, the scan speed was found to have the greatest effect on the relative density. Comparing the microstructure of the cross-section of specimens at different scanning speeds showed that the microstructures had similar characteristics, all consisting of needle-like martensite distributed in the β matrix; with increasing scanning speed the microstructure becomes finer, while a lower scan speed leads to coarsening of the microstructure.
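    A volume energy density of this kind is commonly defined as VED = P / (v · h · t). A minimal sketch (the layer thickness t is an assumed fourth parameter not listed in the abstract, and the example values are illustrative, chosen only to fall inside the reported 55–60 J/mm³ window):

```python
def volume_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """VED in J/mm^3 = laser power / (scan speed * hatch distance * layer
    thickness) -- a commonly used definition for SLM process windows."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

# Illustrative parameters: 175 W, 1000 mm/s, 0.1 mm hatch, 0.03 mm layer.
ved = volume_energy_density(175.0, 1000.0, 0.1, 0.03)
```

    Any combination of power, speed and hatch distance giving the same VED is treated as equivalent under this lumped measure, which is why it is convenient for mapping a process window.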

  5. Genetic structure of cougar populations across the Wyoming basin: Metapopulation or megapopulation

    USGS Publications Warehouse

    Anderson, C.R.; Lindzey, F.G.; McDonald, D.B.

    2004-01-01

    We examined the genetic structure of 5 Wyoming cougar (Puma concolor) populations surrounding the Wyoming Basin, as well as a population from southwestern Colorado. Using 9 microsatellite DNA loci, observed heterozygosity was similar among populations (HO = 0.49-0.59) and intermediate to that of other large carnivores. Estimates of genetic structure (FST = 0.028, RST = 0.029) and number of migrants per generation (Nm) suggested high gene flow. Nm was lowest between distant populations and highest among adjacent populations. Examination of these data, plus Mantel test results of genetic versus geographic distance (P ≤ 0.01), suggested both isolation by distance and an effect of habitat matrix. Bayesian assignment to population based on individual genotypes showed that cougars in this region were best described as a single panmictic population. Total effective population size for cougars in this region ranged from 1,797 to 4,532 depending on the mutation model and analytical method used. Based on measures of gene flow, extinction risk in the near future appears low. We found no support for the existence of metapopulation structure among cougars in this region.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glicken, H.

    Large volcanic debris avalanches are among the world's largest mass movements. The rockslide-debris avalanche of the May 18, 1980, eruption of Mount St. Helens produced a 2.8 km³ deposit and is the largest historic mass movement. A Pleistocene debris avalanche at Mount Shasta produced a 26 km³ deposit that may be the largest Quaternary mass movement. The hummocky deposits at both volcanoes consist of rubble divided into (1) block facies, comprising unconsolidated pieces of the old edifice transported relatively intact, and (2) matrix facies, comprising a mixture of rocks from the old mountain and material picked up from the surrounding terrain. At Mount St. Helens, the juvenile dacite is found in the matrix facies, indicating that matrix facies formed from explosions of the erupting magma as well as from disaggregation and mixing of blocks. The block facies forms both hummocks and interhummock areas in the proximal part of the St. Helens avalanche deposit. At Mount St. Helens, the density of the old cone is 21% greater than the density of the avalanche deposit. Block size decreases with distance. Clast size, measured in the field and by sieving, converges about a mean with distance, which suggests that blocks disaggregated and mixed together during transport.

  7. Interpreting Gas Production Decline Curves By Combining Geometry and Topology

    NASA Astrophysics Data System (ADS)

    Ewing, R. P.; Hu, Q.

    2014-12-01

    Shale gas production forms an increasing fraction of domestic US energy supplies, but individual gas production wells show steep production declines. Better understanding of this production decline would allow better economic forecasting; better understanding of the reasons behind the decline would allow better production management. Yet despite these incentives, production decline curves remain poorly understood, and current analyses range from Arps' purely empirical equation to sophisticated new approaches requiring multiple unavailable parameters. Models often fail to capture salient features: for example, in log-log space many wells decline with an exponent markedly different from the -0.5 expected from diffusion, and often show a transition from one decline mode to another. We propose a new approach based on the assumption that the rate-limiting step is gas movement from the matrix to the induced fracture network. The matrix is represented as an assemblage of equivalent spheres (geometry), with low matrix pore connectivity (topology) that results in a distance-dependent accessible porosity profile given by percolation theory. The basic theory has just 2 parameters: the sphere size distribution (geometry), and the crossover distance (topology) that characterizes the porosity distribution. The theory is readily extended to include, e.g., alternative geometries and bi-modal size distributions. Comparisons with historical data are promising.
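    For reference, the Arps equation mentioned above is purely empirical; a minimal sketch of its hyperbolic form (the parameter values below are illustrative, not fitted to any well):

```python
import math

def arps_rate(t, qi, Di, b):
    """Arps hyperbolic decline: q(t) = qi / (1 + b*Di*t)**(1/b).
    b = 0 recovers exponential decline; b = 1 gives harmonic decline."""
    if b == 0:
        return qi * math.exp(-Di * t)
    return qi / (1.0 + b * Di * t) ** (1.0 / b)
```

    The exponent b controls how quickly the decline flattens, which is exactly the kind of shape flexibility that purely empirical fits provide without explaining the matrix-to-fracture transport physics the paper targets.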

  8. Matrix theory for baryons: an overview of holographic QCD for nuclear physics.

    PubMed

    Aoki, Sinya; Hashimoto, Koji; Iizuka, Norihiro

    2013-10-01

    We provide, for non-experts, a brief overview of holographic QCD (quantum chromodynamics) and a review of the recent proposal (Hashimoto et al 2010 (arXiv:1003.4988[hep-th])) of a matrix-like description of multi-baryon systems in holographic QCD. Based on the matrix model, we derive the baryon interaction at short distances in multi-flavor holographic QCD. We show that there is a very universal repulsive core of inter-baryon forces for a generic number of flavors. This is consistent with a recent lattice QCD analysis for Nf = 2, 3 where the repulsive core looks universal. We also provide a comparison of our results with the lattice QCD and the operator product expansion analysis.

  9. A model for compression-weakening materials and the elastic fields due to contractile cells

    NASA Astrophysics Data System (ADS)

    Rosakis, Phoebus; Notbohm, Jacob; Ravichandran, Guruswami

    2015-12-01

    We construct a homogeneous, nonlinear elastic constitutive law that models aspects of the mechanical behavior of inhomogeneous fibrin networks. Fibers in such networks buckle when in compression. We model this as a loss of stiffness in compression in the stress-strain relations of the homogeneous constitutive model. Problems that model a contracting biological cell in a finite matrix are solved. It is found that matrix displacements and stresses induced by cell contraction decay slower (with distance from the cell) in a compression weakening material than linear elasticity would predict. This points toward a mechanism for long-range cell mechanosensing. In contrast, an expanding cell would induce displacements that decay faster than in a linear elastic matrix.

  10. Regulation of Hematopoietic Stem Cell Behavior by the Nanostructured Presentation of Extracellular Matrix Components

    PubMed Central

    Muth, Christine Anna; Steinl, Carolin; Klein, Gerd; Lee-Thedieck, Cornelia

    2013-01-01

    Hematopoietic stem cells (HSCs) are maintained in stem cell niches, which regulate stem cell fate. Extracellular matrix (ECM) molecules, which are an essential part of these niches, can actively modulate cell functions. However, only little is known on the impact of ECM ligands on HSCs in a biomimetic environment defined on the nanometer-scale level. Here, we show that human hematopoietic stem and progenitor cell (HSPC) adhesion depends on the type of ligand, i.e., the type of ECM molecule, and the lateral, nanometer-scaled distance between the ligands (while the ligand type influenced the dependency on the latter). For small fibronectin (FN)–derived peptide ligands such as RGD and LDV the critical adhesive interligand distance for HSPCs was below 45 nm. FN-derived (FN type III 7–10) and osteopontin-derived protein domains also supported cell adhesion at greater distances. We found that the expression of the ECM protein thrombospondin-2 (THBS2) in HSPCs depends on the presence of the ligand type and its nanostructured presentation. Functionally, THBS2 proved to mediate adhesion of HSPCs. In conclusion, the present study shows that HSPCs are sensitive to the nanostructure of their microenvironment and that they are able to actively modulate their environment by secreting ECM factors. PMID:23405094

  11. Kirchhoff index of linear hexagonal chains

    NASA Astrophysics Data System (ADS)

    Yang, Yujun; Zhang, Heping

    The resistance distance rij between vertices i and j of a connected (molecular) graph G is computed as the effective resistance between nodes i and j in the corresponding network constructed from G by replacing each edge of G with a unit resistor. The Kirchhoff index Kf(G) is the sum of resistance distances between all pairs of vertices. In this work, according to the decomposition theorem of the Laplacian polynomial, we obtain that the Laplacian spectrum of the linear hexagonal chain Ln consists of the Laplacian spectrum of the path P2n+1 and the eigenvalues of a symmetric tridiagonal matrix of order 2n + 1. By applying the relationship between the roots and coefficients of the characteristic polynomial of this matrix, an explicit closed-form formula for the Kirchhoff index of Ln is derived in terms of the Laplacian spectrum. To our surprise, the Kirchhoff index of Ln is approximately one half of its Wiener index. Finally, we show that this relation holds for all graphs G in a class of graphs including Ln.
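    The spectral formula underlying such computations, Kf(G) = n · Σ 1/μi taken over the n−1 nonzero Laplacian eigenvalues, can be sketched directly:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index from the Laplacian spectrum:
    Kf(G) = n * sum(1/mu_i) over the n-1 nonzero eigenvalues of L = D - A.
    Assumes a connected graph (exactly one zero eigenvalue)."""
    n = adj.shape[0]
    lap = np.diag(adj.sum(axis=1)) - adj
    mu = np.linalg.eigvalsh(lap)[1:]   # ascending order; drop the zero
    return n * float(np.sum(1.0 / mu))

# Path P3: resistance distances are r12 = r23 = 1 and r13 = 2, so Kf = 4.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
```

    For P3 the Laplacian spectrum is {0, 1, 3}, giving Kf = 3·(1 + 1/3) = 4, matching the pairwise effective resistances.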

  12. Distance-dependent magnetic resonance tuning as a versatile MRI sensing platform for biological targets

    NASA Astrophysics Data System (ADS)

    Choi, Jin-Sil; Kim, Soojin; Yoo, Dongwon; Shin, Tae-Hyun; Kim, Hoyoung; Gomes, Muller D.; Kim, Sun Hee; Pines, Alexander; Cheon, Jinwoo

    2017-05-01

    Nanoscale distance-dependent phenomena, such as Förster resonance energy transfer, are important interactions for use in sensing and imaging, but their versatility for bioimaging can be limited by undesirable photon interactions with the surrounding biological matrix, especially in in vivo systems. Here, we report a new type of magnetism-based nanoscale distance-dependent phenomenon that can quantitatively and reversibly sense and image intra-/intermolecular interactions of biologically important targets. We introduce distance-dependent magnetic resonance tuning (MRET), which occurs between a paramagnetic `enhancer' and a superparamagnetic `quencher', where the T1 magnetic resonance imaging (MRI) signal is tuned ON or OFF depending on the separation distance between the quencher and the enhancer. With MRET, we demonstrate the principle of an MRI-based ruler for nanometre-scale distance measurement and the successful detection of both molecular interactions (for example, cleavage, binding, folding and unfolding) and biological targets in in vitro and in vivo systems. MRET can serve as a novel sensing principle to augment the exploration of a wide range of biological systems.

  13. Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred

    Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity patterns and MC-GDL can provide a discriminative basis for attack classification.

  14. Steganography in arrhythmic electrocardiogram signal.

    PubMed

    Edward Jero, S; Ramu, Palaniappan; Ramakrishnan, S

    2015-08-01

    Security and privacy of patient data is a vital requirement during exchange/storage of medical information over communication network. Steganography method hides patient data into a cover signal to prevent unauthenticated accesses during data transfer. This study evaluates the performance of ECG steganography to ensure secured transmission of patient data where an abnormal ECG signal is used as cover signal. The novelty of this work is to hide patient data into two dimensional matrix of an abnormal ECG signal using Discrete Wavelet Transform and Singular Value Decomposition based steganography method. A 2D ECG is constructed according to Tompkins QRS detection algorithm. The missed R peaks are computed using RR interval during 2D conversion. The abnormal ECG signals are obtained from the MIT-BIH arrhythmia database. Metrics such as Peak Signal to Noise Ratio, Percentage Residual Difference, Kullback-Leibler distance and Bit Error Rate are used to evaluate the performance of the proposed approach.
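    Two of the fidelity metrics named above are straightforward to compute; a minimal sketch of PSNR and Percentage Residual Difference between a cover and a stego signal (definitions as commonly used in the ECG-steganography literature; the paper's exact normalizations may differ):

```python
import numpy as np

def psnr(cover, stego, peak=None):
    """Peak Signal to Noise Ratio in dB. If no peak value is given,
    the maximum absolute amplitude of the cover signal is used."""
    cover = np.asarray(cover, dtype=float)
    stego = np.asarray(stego, dtype=float)
    mse = np.mean((cover - stego) ** 2)
    if peak is None:
        peak = np.max(np.abs(cover))
    return 10.0 * np.log10(peak ** 2 / mse)

def prd(cover, stego):
    """Percentage Residual Difference between cover and stego signals."""
    cover = np.asarray(cover, dtype=float)
    stego = np.asarray(stego, dtype=float)
    return 100.0 * np.sqrt(np.sum((cover - stego) ** 2) / np.sum(cover ** 2))
```

    Higher PSNR and lower PRD both indicate that hiding the patient data has distorted the cover ECG less.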

  15. Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator

    PubMed Central

    Mohamd Shoukry, Alaa; Gani, Showkat

    2017-01-01

    Genetic algorithms are evolutionary techniques used for optimization purposes according to survival of the fittest idea. These methods do not ensure optimal solutions; however, they give good approximation usually in time. The genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. The genetic algorithm depends on selection criteria, crossover, and mutation operators. To tackle the traveling salesman problem using genetic algorithms, there are various representations such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for traveling salesman problem to minimize the total distance. This approach has been linked with path representation, which is the most natural way to represent a legal tour. Computational results are also reported with some traditional path representation methods like partially mapped and order crossovers along with new cycle crossover operator for some benchmark TSPLIB instances and found improvements. PMID:29209364

  16. Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator.

    PubMed

    Hussain, Abid; Muhammad, Yousaf Shad; Nauman Sajid, M; Hussain, Ijaz; Mohamd Shoukry, Alaa; Gani, Showkat

    2017-01-01

    Genetic algorithms are evolutionary techniques used for optimization purposes according to survival of the fittest idea. These methods do not ensure optimal solutions; however, they give good approximation usually in time. The genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. The genetic algorithm depends on selection criteria, crossover, and mutation operators. To tackle the traveling salesman problem using genetic algorithms, there are various representations such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for traveling salesman problem to minimize the total distance. This approach has been linked with path representation, which is the most natural way to represent a legal tour. Computational results are also reported with some traditional path representation methods like partially mapped and order crossovers along with new cycle crossover operator for some benchmark TSPLIB instances and found improvements.
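    The classic cycle crossover that the proposed operator modifies can be sketched as follows (this is the textbook CX on path-represented tours, not the modified operator introduced in the article):

```python
def cycle_crossover(p1, p2):
    """Classic cycle crossover (CX): copy alternating cycles from the two
    parent tours, so every city keeps a position it held in one parent.
    Both parents must be permutations of the same city set."""
    n = len(p1)
    child = [None] * n
    pos_in_p1 = {city: i for i, city in enumerate(p1)}
    take_from_p1 = True
    while None in child:
        j = child.index(None)          # start of the next cycle
        while child[j] is None:        # trace the cycle via p1 <-> p2
            child[j] = p1[j] if take_from_p1 else p2[j]
            j = pos_in_p1[p2[j]]
        take_from_p1 = not take_from_p1
    return child

# Textbook example: alternating cycles from the two parents.
child = cycle_crossover([1, 2, 3, 4, 5, 6, 7, 8],
                        [2, 4, 6, 8, 7, 5, 3, 1])
```

    Because each position is filled from exactly one parent within a cycle, the offspring is always a legal tour, which is the property path representation demands.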

  17. Application of the dual-kinetic-balance sets in the relativistic many-body problem of atomic structure

    NASA Astrophysics Data System (ADS)

    Beloy, Kyle; Derevianko, Andrei

    2008-05-01

    The dual-kinetic-balance (DKB) finite basis set method for solving the Dirac equation for hydrogen-like ions [V. M. Shabaev et al., Phys. Rev. Lett. 93, 130405 (2004)] is extended to problems with a non-local spherically-symmetric Dirac-Hartree-Fock potential. We implement the DKB method using B-spline basis sets and compare its performance with the widely-employed approach of the Notre Dame (ND) group [W. R. Johnson, S. A. Blundell, J. Sapirstein, Phys. Rev. A 37, 307-15 (1988)]. We compare the performance of the ND and DKB methods by computing various properties of the Cs atom: energies, hyperfine integrals, the parity-non-conserving amplitude of the 6s1/2-7s1/2 transition, and the second-order many-body correction to the removal energy of the valence electrons. We find that for a comparable size of the basis set the accuracy of both methods is similar for matrix elements accumulated far from the nuclear region. However, for atomic properties determined by small distances, the DKB method outperforms the ND approach.

  18. Toward the optimization of normalized graph Laplacian.

    PubMed

    Xie, Bo; Wang, Meng; Tao, Dacheng

    2011-04-01

    Normalized graph Laplacian has been widely used in many practical machine learning algorithms, e.g., spectral clustering and semisupervised learning. However, all of them use the Euclidean distance to construct the graph Laplacian, which does not necessarily reflect the inherent distribution of the data. In this brief, we propose a method to directly optimize the normalized graph Laplacian by using pairwise constraints. The learned graph is consistent with equivalence and nonequivalence pairwise relationships, and thus it can better represent similarity between samples. Meanwhile, our approach, unlike metric learning, automatically determines the scale factor during the optimization. The learned normalized Laplacian matrix can be directly applied in spectral clustering and semisupervised learning algorithms. Comprehensive experiments demonstrate the effectiveness of the proposed approach.
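As background for the construction this brief argues against, a minimal numpy sketch of the standard symmetric normalized graph Laplacian built from Euclidean distances with a Gaussian affinity (the bandwidth sigma is a free parameter, which is precisely the kind of hand-tuning the proposed optimization avoids):

```python
import numpy as np

def normalized_laplacian(X, sigma=1.0):
    """Symmetric normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2}
    from a Gaussian affinity on pairwise Euclidean distances."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                              # no self-loops
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt
```

The eigenvalues of this matrix lie in [0, 2], a property that spectral clustering and label-propagation algorithms rely on.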

  19. Diode Laser Assisted Filament Winding of Thermoplastic Matrix Composites

    PubMed Central

    Quadrini, Fabrizio; Squeo, Erica Anna; Prosperi, Claudia

    2010-01-01

    A new consolidation method for the laser-assisted filament winding of thermoplastic prepregs is discussed: for the first time a diode laser is used, together with long glass fiber reinforced polypropylene prepregs. A consolidation apparatus was built from a CNC motion table, a stepper motor and a simple tensioner. Preliminary tests were performed in a hoop winding configuration: only the winding speed was changed, while all the other process parameters (laser power, distance from the laser focus, consolidation force) were kept constant. Small wound rings with an internal diameter of 25 mm were produced, and compression tests were carried out to evaluate composite consolidation as a function of winding speed. At lower winding speeds, a strong interpenetration of adjacent layers was observed.

  20. Computer-assisted bladder cancer grading: α-shapes for color space decomposition

    NASA Astrophysics Data System (ADS)

    Niazi, M. K. K.; Parwani, Anil V.; Gurcan, Metin N.

    2016-03-01

    According to the American Cancer Society, around 74,000 new cases of bladder cancer were expected in the US during 2015. To facilitate bladder cancer diagnosis, we present an automatic method to differentiate carcinoma in situ (CIS) from normal/reactive cases, operating on hematoxylin and eosin (H and E) stained images of bladder tissue. The method automatically determines the color deconvolution matrix by utilizing the α-shapes of the color distribution in the RGB color space. Then, variations in the boundary of transitional epithelium are quantified, and sizes of nuclei in the transitional epithelium are measured. We also approximate the "nuclear to cytoplasmic ratio" by computing the ratio of the average shortest distance between transitional epithelium and nuclei to average nuclei size. Nuclei homogeneity is measured by computing the kurtosis of the nuclei size histogram. The results show that 30 out of 34 (88.2%) images were correctly classified by the proposed method, indicating that these novel features are viable markers to differentiate CIS from normal/reactive bladder.

  1. Combining point context and dynamic time warping for online gesture recognition

    NASA Astrophysics Data System (ADS)

    Mao, Xia; Li, Chen

    2017-05-01

    Previous gesture recognition methods usually focused on recognizing gestures after the entire gesture sequences were obtained. However, in many practical applications, a system has to identify gestures before they end in order to give instant feedback. We present an online gesture recognition approach that can realize early recognition of unfinished gestures with low latency. First, a curvature buffer-based point context (CBPC) descriptor is proposed to extract the shape feature of a gesture trajectory. The CBPC descriptor is a complete descriptor with a simple computation, and is thus well suited to online scenarios. Then, we introduce an online windowed dynamic time warping algorithm to realize online matching between the ongoing gesture and the template gestures. In the algorithm, computational complexity is effectively decreased by adding a sliding window to the accumulative distance matrix. Lastly, experiments are conducted on the Australian sign language data set and the Kinect hand gesture (KHG) data set. Results show that the proposed method outperforms other state-of-the-art methods, especially when gesture information is incomplete.
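The paper's online variant is not reproduced here; as a minimal offline sketch, a Sakoe-Chiba-style band is the standard way a window bounds the accumulative distance matrix, shown below for 1-D sequences (assumptions: absolute-difference local cost, band half-width w):

```python
import numpy as np

def windowed_dtw(a, b, w):
    """DTW with a Sakoe-Chiba band of half-width w; cells outside the
    band are never filled, which bounds the work done per sample."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # accumulative distance matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - w), min(m, i + w)
        for j in range(lo, hi + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

In the online setting the same recurrence is advanced one column at a time as new trajectory points arrive, so only the banded slice of D has to be kept in memory.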

  2. A review of the matrix-exponential formalism in radiative transfer

    NASA Astrophysics Data System (ADS)

    Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian

    2017-07-01

    This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
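For the optically thin regime, the truncated Taylor series is the simplest of the approximations mentioned; a numpy sketch is below (scipy.linalg.expm would be the Padé-based reference implementation to compare against):

```python
import numpy as np

def expm_taylor(A, order=12):
    """Truncated Taylor series exp(A) ~ sum_{k<=order} A^k / k!,
    adequate only when ||A|| is small (optically thin layers)."""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, order + 1):
        term = term @ A / k    # builds A^k / k! incrementally
        E = E + term
    return E
```

For thicker layers the series loses accuracy, which is why the paper switches to asymptotic theory rather than raising the truncation order.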

  3. Lattice Boltzmann methods applied to large-scale three-dimensional virtual cores constructed from digital optical borehole images of the karst carbonate Biscayne aquifer in southeastern Florida

    USGS Publications Warehouse

    Michael Sukop,; Cunningham, Kevin J.

    2014-01-01

    Digital optical borehole images at approximately 2 mm vertical resolution and borehole caliper data were used to create three-dimensional renderings of the distribution of (1) matrix porosity and (2) vuggy megaporosity for the karst carbonate Biscayne aquifer in southeastern Florida. The renderings based on the borehole data were used as input into Lattice Boltzmann methods to obtain intrinsic permeability estimates for this extremely transmissive aquifer, where traditional aquifer test methods may fail due to very small drawdowns and non-Darcian flow that can reduce apparent hydraulic conductivity. Variogram analysis of the borehole data suggests a nearly isotropic rock structure at lag lengths up to the nominal borehole diameter. A strong correlation between the diameter of the borehole and the presence of vuggy megaporosity in the data set led to a bias in the variogram where the computed horizontal spatial autocorrelation is strong at lag distances greater than the nominal borehole size. Lattice Boltzmann simulation of flow across a 0.4 × 0.4 × 17 m (2.72 m3 volume) parallel-walled column of rendered matrix and vuggy megaporosity indicates a high hydraulic conductivity of 53 m s−1. This value is similar to previous Lattice Boltzmann calculations of hydraulic conductivity in smaller limestone samples of the Biscayne aquifer. The development of simulation methods that reproduce dual-porosity systems with higher resolution and fidelity and that consider flow through horizontally longer renderings could provide improved estimates of the hydraulic conductivity and help to address questions about the importance of scale.

  5. Landscape resistance and habitat combine to provide an optimal model of genetic structure and connectivity at the range margin of a small mammal.

    PubMed

    Marrotte, R R; Gonzalez, A; Millien, V

    2014-08-01

    We evaluated the effect of habitat and landscape characteristics on the population genetic structure of the white-footed mouse. We develop a new approach that uses numerical optimization to define a model that combines site differences and landscape resistance to explain the genetic differentiation between mouse populations inhabiting forest patches in southern Québec. We used ecological distance computed from resistance surfaces with Circuitscape to infer the effect of the landscape matrix on gene flow. We calculated site differences using a site index of habitat characteristics. A model that combined site differences and resistance distances explained a high proportion of the variance in genetic differentiation and outperformed models that used geographical distance alone. Urban and agriculture-related land uses were, respectively, the most and the least resistant landscape features influencing gene flow. Our method detected the effect of rivers and highways as highly resistant linear barriers. The density of grass and shrubs on the ground best explained the variation in the site index of habitat characteristics. Our model indicates that movement of white-footed mouse in this region is constrained along routes of low resistance. Our approach can generate models that may improve predictions of future northward range expansion of this small mammal. © 2014 John Wiley & Sons Ltd.

  6. A Comparison of Accuracy of Matrix Impression System with Putty Reline Technique and Multiple Mix Technique: An In Vitro Study.

    PubMed

    Kumar, M Praveen; Patil, Suneel G; Dheeraj, Bhandari; Reddy, Keshav; Goel, Dinker; Krishna, Gopi

    2015-06-01

    The difficulty in obtaining an acceptable impression increases exponentially as the number of abutments increases. Accuracy of the impression material and the use of a suitable impression technique are of utmost importance in the fabrication of a fixed partial denture. This study compared the accuracy of the matrix impression system with conventional putty reline and multiple mix technique for individual dies by comparing the inter-abutment distance in the casts obtained from the impressions. Three groups, 10 impressions each with three impression techniques (matrix impression system, putty reline technique and multiple mix technique) were made of a master die. Typodont teeth were embedded in a maxillary frasaco model base. The left first premolar was removed to create a three-unit fixed partial denture situation and the left canine and second premolar were prepared conservatively, and hatch marks were made on the abutment teeth. The final casts obtained from the impressions were examined under a profile projector and the inter-abutment distance was calculated for all the casts and compared. The results from this study showed that in the mesiodistal dimensions the percentage deviation from master model in Group I was 0.1 and 0.2, in Group II was 0.9 and 0.3, and Group III was 1.6 and 1.5, respectively. In the labio-palatal dimensions the percentage deviation from master model in Group I was 0.01 and 0.4, Group II was 1.9 and 1.3, and Group III was 2.2 and 2.0, respectively. In the cervico-incisal dimensions the percentage deviation from the master model in Group I was 1.1 and 0.2, Group II was 3.9 and 1.7, and Group III was 1.9 and 3.0, respectively. In the inter-abutment dimension of dies, percentage deviation from master model in Group I was 0.1, Group II was 0.6, and Group III was 1.0. 
The matrix impression system showed more accuracy of reproduction for individual dies when compared with putty reline technique and multiple mix technique in all the three directions, as well as the inter-abutment distance.

  7. Size-dependent characterization of embedded Ge nanocrystals: Structural and thermal properties

    NASA Astrophysics Data System (ADS)

    Araujo, L. L.; Giulian, R.; Sprouster, D. J.; Schnohr, C. S.; Llewellyn, D. J.; Kluth, P.; Cookson, D. J.; Foran, G. J.; Ridgway, M. C.

    2008-09-01

    A combination of conventional and synchrotron-based techniques has been used to characterize the size-dependent structural and thermal properties of Ge nanocrystals (NCs) embedded in a silica (a-SiO2) matrix. Ge NC size distributions with four different diameters ranging from 4.0 to 9.0 nm were produced by ion implantation and thermal annealing as characterized with small-angle x-ray scattering and transmission electron microscopy. The NCs were well represented by the superposition of bulklike crystalline and amorphous environments, suggesting the formation of an amorphous layer separating the crystalline NC core and the a-SiO2 matrix. The amorphous fraction was quantified with x-ray-absorption near-edge spectroscopy and increased as the NC diameter decreased, consistent with the increase in surface-to-volume ratio. The structural parameters of the first three nearest-neighbor shells were determined with extended x-ray-absorption fine-structure (EXAFS) spectroscopy and evolved linearly with inverse NC diameter. Specifically, increases in total disorder, interatomic distance, and the asymmetry in the distribution of distances were observed as the NC size decreased, demonstrating that finite-size effects govern the structural properties of embedded Ge NCs. Temperature-dependent EXAFS measurements in the range of 15-300 K were employed to probe the mean vibrational frequency and the variation of the interatomic distance distribution (mean value, variance, and asymmetry) with temperature for all NC distributions. A clear trend of increased stiffness (higher vibrational frequency) and decreased thermal expansion with decreasing NC size was evident, confirming the close relationship between the variation of structural and thermal/vibrational properties with size for embedded Ge NCs. 
The increase in surface-to-volume ratio and the presence of an amorphous Ge layer separating the matrix and crystalline NC core are identified as the main factors responsible for the observed behavior, with the surrounding a-SiO2 matrix also contributing to a lesser extent. Such results are compared to previous reports and discussed in terms of the influence of the surface-to-volume ratio in objects of nanometer dimensions.

  8. Biodistance analysis of the Moche sacrificial victims from Huaca de la Luna plaza 3C: Matrix method test of their origins.

    PubMed

    Sutter, Richard C; Verano, John W

    2007-02-01

    The purpose of this study is to test two competing models regarding the origins of Early Intermediate Period (AD 200-750) sacrificial victims from the Huacas de Moche site using the matrix correlation method. The first model posits the sacrificial victims represent local elites who lost competitions in ritual battles with one another, while the other model suggests the victims were nonlocal warriors captured during warfare with nearby polities. We estimate biodistances for sacrificial victims from Huaca de la Luna Plaza 3C (AD 300-550) with eight previously reported samples from the north coast of Peru using both the mean measure of divergence (MMD) and Mahalanobis' distance (d2). Hypothetical matrices are developed based upon the assumptions of each of the two competing models regarding the origins of Moche sacrificial victims. When the MMD matrix is compared to the two hypothetical matrices using a partial-Mantel test (Smouse et al.: Syst Zool 35 (1986) 627-632), the ritual combat model (i.e. local origins) has a low and nonsignificant correlation (r = 0.134, P = 0.163), while the nonlocal origins model is highly correlated and significant (r = 0.688, P = 0.001). Comparisons of the d2 results and the two hypothetical matrices also produced low and nonsignificant correlation for the ritual combat model (r = 0.210, P = 0.212), while producing a higher and statistically significant result with the nonlocal origins model (r = 0.676, P = 0.002). We suggest that the Moche sacrificial victims represent nonlocal warriors captured in territorial combat with nearby competing polities. Copyright 2006 Wiley-Liss, Inc.

  9. [Study on preparation of laser micropore porcine acellular dermal matrix combined with split-thickness autograft and its application in wound transplantation].

    PubMed

    Liang, Li-Ming; Chai, Ji-Ke; Yang, Hong-Ming; Feng, Rui; Yin, Hui-Nan; Li, Feng-Yu; Sun, Qiang

    2007-04-01

    To prepare a porcine acellular dermal matrix (PADM), and to optimize the interpore distance for PADM co-grafted with split-thickness autologous skin. Porcine skin was treated with trypsin/Triton X-100 to prepare an acellular dermal matrix. Micropores were produced on the PADM with a laser punch, with inter-micropore distances of 0.8 mm, 1.0 mm, 1.2 mm and 1.5 mm. Full-thickness defect wounds were created on the backs of 144 SD rats. The rats were randomly divided into 6 groups of 24 rats each, as follows. Micropore groups I-IV: the wounds were grafted with PADM carrying micropores at the four different intervals, respectively, and covered with a split-thickness autologous skin graft. Mesh group: the wounds were grafted with meshed PADM and a split-thickness autograft. Control group: the wounds were treated with simple split-thickness autografting. Gross observation of wound healing and histological observation were performed 2, 4 and 6 weeks after surgery, and the wound healing rate and contraction rate were calculated. Two and four weeks after surgery, the wound healing rate in micropore groups I and II was lower than that in the control group (P < 0.05), but there was no obvious difference between micropore groups I, II and the mesh group (P > 0.05) until 6 weeks after grafting (P < 0.05). The wound contraction rate in micropore groups I and II [(16.0 +/- 2.6)% and (15.1 +/- 2.4)%] was remarkably lower than that in the control group 4 and 6 weeks after grafting (P < 0.05), and significantly lower than that in the mesh group [(19.3 +/- 2.4)%] 6 weeks after surgery (P < 0.05). Histological examination showed good epithelization, regularly arranged collagenous fibers, and an intact basement membrane structure. Laser micropore PADM (0.8 mm or 1.0 mm spacing) grafted in combination with split-thickness autografts can improve the quality of wound healing; PADM with laser micropores at 1.0 mm spacing is the best choice among them.

  10. Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method

    NASA Astrophysics Data System (ADS)

    Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing

    2017-05-01

    Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach's feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method.

  11. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    PubMed

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
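For illustration, a minimal numpy sketch of the positive-semidefiniteness requirement stated above, using the common Gaussian (RBF) kernel (function names are illustrative, not from the review):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel: similarity decays with squared Euclidean
    distance, so larger values mean more similar subjects."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def is_psd(K, tol=1e-8):
    """A valid kernel matrix must be symmetric positive semidefinite."""
    return np.allclose(K, K.T) and np.linalg.eigvalsh(K).min() >= -tol
```

Only PSD matrices can back the mixed-model and support-vector machinery the review surveys, which is why ad hoc similarity scores must be checked (or repaired) before use.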

  12. DISTANCES TO DARK CLOUDS: COMPARING EXTINCTION DISTANCES TO MASER PARALLAX DISTANCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, Jonathan B.; Jackson, James M.; Stead, Joseph J.

    We test two different methods of using near-infrared extinction to estimate distances to dark clouds in the first quadrant of the Galaxy using large near-infrared (Two Micron All Sky Survey and UKIRT Infrared Deep Sky Survey) surveys. Very long baseline interferometry parallax measurements of masers around massive young stars provide the most direct and bias-free measurement of the distance to these dark clouds. We compare the extinction distance estimates to these maser parallax distances. We also compare these distances to kinematic distances, including recent re-calibrations of the Galactic rotation curve. The extinction distance methods agree with the maser parallax distances (within the errors) between 66% and 100% of the time (depending on method and input survey) and between 85% and 100% of the time outside of the crowded Galactic center. Although the sample size is small, extinction distance methods reproduce maser parallax distances better than kinematic distances; furthermore, extinction distance methods do not suffer from the kinematic distance ambiguity. This validation gives us confidence that these extinction methods may be extended to additional dark clouds where maser parallaxes are not available.

  13. Efficient sparse matrix-matrix multiplication for computing periodic responses by shooting method on Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Stoykov, S.; Atanassov, E.; Margenov, S.

    2016-10-01

    Many scientific applications involve sparse or dense matrix operations, such as solving linear systems, matrix-matrix products, eigensolvers, etc. In structural nonlinear dynamics, the computation of periodic responses and the determination of the stability of the solution are of primary interest. The shooting method is widely used for obtaining periodic responses of nonlinear systems. The method involves simultaneous operations with sparse and dense matrices, and one of its computationally expensive operations is the multiplication of sparse by dense matrices. In the current work, a new algorithm for sparse matrix by dense matrix products is presented. The algorithm takes into account the structure of the sparse matrix, which is obtained by space discretization of the nonlinear Mindlin plate equation of motion by the finite element method. The algorithm is developed to use the vector engine of Intel Xeon Phi coprocessors. It is compared with the standard sparse matrix by dense matrix algorithm and the one developed in Intel MKL, and it is shown that better algorithms can be developed by considering the properties of the sparse matrix.
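The Xeon Phi kernel itself is hardware-specific; for orientation, the generic CSR-by-dense baseline it competes with can be sketched in a few lines (a pure-Python reference, not the paper's vectorized implementation):

```python
import numpy as np

def csr_matmul_dense(data, indices, indptr, B):
    """Multiply a CSR sparse matrix, given by its (data, indices, indptr)
    arrays, by a dense matrix B, touching only the stored nonzeros."""
    n = len(indptr) - 1
    C = np.zeros((n, B.shape[1]))
    for i in range(n):                          # one sparse row at a time
        for k in range(indptr[i], indptr[i + 1]):
            C[i] += data[k] * B[indices[k]]     # scale row indices[k] of B
    return C
```

FEM discretizations produce sparsity patterns with clustered nonzeros per row, and exploiting that clustering (rather than treating each nonzero independently, as above) is the kind of structural knowledge the paper's algorithm uses.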

  14. An Information-Theoretic-Cluster Visualization for Self-Organizing Maps.

    PubMed

    Brito da Silva, Leonardo Enzo; Wunsch, Donald C

    2018-06-01

    Improved data visualization will be a significant tool to enhance cluster analysis. In this paper, an information-theoretic-based method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set whose cardinality controls the granularity of the IT-vis and with which the first- and second-order statistics are computed and used to estimate their probability density functions. These are used to calculate the similarity measure, based on Renyi's quadratic cross entropy and cross information potential (CIP). The introduced visualizations combine the low computational cost and kernel estimation properties of the representative CIP and the data structure representation of a single-linkage-based grouping algorithm to generate an enhanced SOM-based visualization. The visual quality of the IT-vis is assessed by comparing it with other visualization methods for several real-world and synthetic benchmark data sets. Thus, this paper also contains a significant literature survey. The experiments demonstrate the IT-vis cluster revealing capabilities, in which cluster boundaries are sharply captured. Additionally, the information-theoretic visualizations are used to perform clustering of the SOM. Compared with other methods, IT-vis of large SOMs yielded the best results in this paper, for which the quality of the final partitions was evaluated using external validity indices.

  15. Method of forming a ceramic matrix composite and a ceramic matrix component

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Diego, Peter; Zhang, James

    A method of forming a ceramic matrix composite component includes providing a formed ceramic member having a cavity, filling at least a portion of the cavity with a ceramic foam. The ceramic foam is deposited on a barrier layer covering at least one internal passage of the cavity. The method includes processing the formed ceramic member and ceramic foam to obtain a ceramic matrix composite component. Also provided is a method of forming a ceramic matrix composite blade and a ceramic matrix composite component.

  16. Across language families: Genome diversity mirrors linguistic variation within Europe

    PubMed Central

    Longobardi, Giuseppe; Ghirotto, Silvia; Guardiano, Cristina; Tassi, Francesca; Benazzo, Andrea; Ceolin, Andrea

    2015-01-01

    ABSTRACT Objectives: The notion that patterns of linguistic and biological variation may cast light on each other and on population histories dates back to Darwin's times; yet, turning this intuition into a proper research program has met with serious methodological difficulties, especially affecting language comparisons. This article takes advantage of two new tools of comparative linguistics: a refined list of Indo‐European cognate words, and a novel method of language comparison estimating linguistic diversity from a universal inventory of grammatical polymorphisms, and hence enabling comparison even across different families. We corroborated the method and used it to compare patterns of linguistic and genomic variation in Europe. Materials and Methods: Two sets of linguistic distances, lexical and syntactic, were inferred from these data and compared with measures of geographic and genomic distance through a series of matrix correlation tests. Linguistic and genomic trees were also estimated and compared. A method (Treemix) was used to infer migration episodes after the main population splits. Results: We observed significant correlations between genomic and linguistic diversity, the latter inferred from data on both Indo‐European and non‐Indo‐European languages. Contrary to previous observations, on the European scale, language proved a better predictor of genomic differences than geography. Inferred episodes of genetic admixture following the main population splits found convincing correlates also in the linguistic realm. Discussion: These results pave the ground for previously unfeasible cross‐disciplinary analyses at the worldwide scale, encompassing populations of distant language families. Am J Phys Anthropol 157:630–640, 2015. © 2015 Wiley Periodicals, Inc. PMID:26059462
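The matrix correlation tests mentioned can be sketched as a plain Mantel test in numpy (the study also uses partial-Mantel variants to control for geography; the function name and permutation count are illustrative):

```python
import numpy as np

def mantel(D1, D2, n_perm=999, rng=None):
    """Mantel matrix-correlation test: Pearson r between the upper
    triangles of two distance matrices, with a p-value from jointly
    permuting the rows and columns of one matrix."""
    if rng is None:
        rng = np.random.default_rng(0)
    iu = np.triu_indices_from(D1, k=1)
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(len(D1))
        r = np.corrcoef(D1[iu], D2[p][:, p][iu])[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```

Permuting whole rows and columns together (rather than individual entries) is what keeps the distance-matrix dependence structure intact under the null.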

  17. A preliminary characterization of the tensile and fatigue behavior of tungsten-fiber/Waspaloy-matrix composite

    NASA Technical Reports Server (NTRS)

    Corner, Ralph E.; Lerch, Brad A.

    1992-01-01

    A microstructural study and a preliminary characterization of the room temperature tensile and fatigue behavior of a continuous, tungsten fiber, Waspaloy-matrix composite was conducted. A heat treatment was chosen that would allow visibility of planar slip if it occurred during deformation, but would not allow growth of the reaction zone. Tensile and fatigue tests showed that the failed specimens contained transverse cracks in the fibers. The cracks that occurred in the tensile specimen were observed at the fracture surface and up to approximately 4.0 mm below the fracture surface. The crack spacing remained constant along the entire length of the cracked fibers. Conversely, the cracks that occurred in the fatigue specimen were only observed in the vicinity of the fracture surface. In instances where two fiber cracks occurred in the same plane, the matrix often necked between the two cracked fibers. Large groups of slip bands were generated in the matrix near the fiber cracks. Slip bands in the matrix of the tensile specimen were also observed in areas where there were no fiber cracks, at distances greater than 4 mm from the fracture surface. This suggests that the matrix plastically flows before fiber cracking occurs.

  18. Method paper--distance and travel time to casualty clinics in Norway based on crowdsourced postcode coordinates: a comparison with other methods.

    PubMed

    Raknes, Guttorm; Hunskaar, Steinar

    2014-01-01

    We describe a method that uses crowdsourced postcode coordinates and Google maps to estimate average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode based distances similar to previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.

  19. Near-Sun and 1 AU magnetic field of coronal mass ejections: a parametric study

    NASA Astrophysics Data System (ADS)

    Patsourakos, S.; Georgoulis, M. K.

    2016-11-01

    Aims: The magnetic field of coronal mass ejections (CMEs) determines their structure, evolution, and energetics, as well as their geoeffectiveness. However, we currently lack routine diagnostics of the near-Sun CME magnetic field, which is crucial for determining the subsequent evolution of CMEs. Methods: We recently presented a method to infer the near-Sun magnetic field magnitude of CMEs and then extrapolate it to 1 AU. This method uses relatively easy to deduce observational estimates of the magnetic helicity in CME-source regions along with geometrical CME fits enabled by coronagraph observations. We hereby perform a parametric study of this method aiming to assess its robustness. We use statistics of active region (AR) helicities and CME geometrical parameters to determine a matrix of plausible near-Sun CME magnetic field magnitudes. In addition, we extrapolate this matrix to 1 AU and determine the anticipated range of CME magnetic fields at 1 AU representing the radial falloff of the magnetic field in the CME out to interplanetary (IP) space by a power law with index αB. Results: The resulting distribution of the near-Sun (at 10 R⊙) CME magnetic fields varies in the range [0.004, 0.02] G, comparable to, or higher than, a few existing observational inferences of the magnetic field in the quiescent corona at the same distance. We also find that a theoretically and observationally motivated range exists around αB = -1.6 ± 0.2, thereby leading to a ballpark agreement between our estimates and observationally inferred field magnitudes of magnetic clouds (MCs) at L1. Conclusions: In a statistical sense, our method provides results that are consistent with observations.
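The extrapolation step can be sketched as follows, assuming the power-law form B(r) = B0·(r/r0)^αB implied by the abstract, with αB = -1.6 and the quoted near-Sun field range; the conversion 1 AU ≈ 215 R⊙ is standard.

```python
import math

def extrapolate_b(b0_gauss, r0_rsun, r_rsun, alpha_b=-1.6):
    """Radial power-law falloff of the CME magnetic field: B0 * (r/r0)**alpha_b."""
    return b0_gauss * (r_rsun / r0_rsun) ** alpha_b

AU_IN_RSUN = 215.0                        # 1 AU ~ 215 solar radii
for b10 in (0.004, 0.02):                 # near-Sun field range at 10 Rsun [G]
    b_1au_nt = extrapolate_b(b10, 10.0, AU_IN_RSUN) * 1e5   # 1 G = 1e5 nT
    print(f"B(10 Rsun) = {b10} G  ->  B(1 AU) ~ {b_1au_nt:.1f} nT")
```

With these inputs the 1 AU values land in roughly the 3-15 nT range, the order of magnitude typically reported for magnetic clouds at L1, consistent with the "ballpark agreement" noted in the abstract.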

  20. Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Yang, Gang; Wu, Ke; Li, Weiyue; Zhang, Dianfa

    2017-09-01

    A robust kernel archetypoid analysis (RKADA) method is proposed to extract pure endmembers from hyperspectral imagery (HSI). The RKADA assumes that each pixel is a sparse linear mixture of all endmembers and that each endmember corresponds to a real pixel in the image scene. First, it improves regular archetypal analysis with a new binary sparse constraint, and the adoption of a kernel function constructs the principal convex hull in an infinite Hilbert space and enlarges the divergences between pairwise pixels. Second, the RKADA transfers the pure endmember extraction problem into an optimization problem by minimizing residual errors with the Huber loss function. The Huber loss function reduces the effects of large noise and outliers in the convergence procedure of RKADA and enhances the robustness of the optimization function. Third, a randomized scheme for fast kernel matrix approximation and a two-stage algorithm for optimizing the initial pure endmembers are utilized to improve the computational efficiency of RKADA in realistic implementations. The optimization equation of RKADA is solved using a block coordinate descent scheme, and the desired pure endmembers are finally obtained. Six state-of-the-art pure endmember extraction methods are employed for comparison with the RKADA on both synthetic and real Cuprite HSI datasets: three geometrical algorithms, vertex component analysis (VCA), alternative volume maximization (AVMAX) and orthogonal subspace projection (OSP); and three matrix factorization algorithms, the preconditioning for successive projection algorithm (PreSPA), hierarchical clustering based on rank-two nonnegative matrix factorization (H2NMF) and self-dictionary multiple measurement vector (SDMMV). Experimental results show that the RKADA outperforms all six methods in terms of spectral angle distance (SAD) and root-mean-square error (RMSE). Moreover, the RKADA has short computational times in offline operations and shows significant improvement in identifying pure endmembers for ground objects with smaller spectral differences. Therefore, the RKADA could be an alternative for pure endmember extraction from hyperspectral images.

  1. Intensity-distance attenuation law in the continental Portugal using intensity data points

    NASA Astrophysics Data System (ADS)

    Le Goff, Boris; Bezzeghoud, Mourad; Borges, José Fernando

    2013-04-01

    Several attempts have been made to evaluate intensity attenuation with epicentral distance in the Iberian Peninsula [1, 2]. So far, the results have either not been satisfactory or have not used the intensity data points of the available events. We developed a new intensity attenuation law for continental Portugal, using the macroseismic reports that provide intensity data points, instrumental magnitudes and instrumental locations. We collected 31 events from the Instituto Portugues do Mar e da Atmosfera (IPMA, Portugal; ex-IM), covering the period between 1909 and 1997, with a largest magnitude of 8.2, close to the African-Eurasian plate boundary. For each event, the intensity data points are plotted versus distance and different trend lines are fitted (linear, exponential and logarithmic). The best fits are obtained with the logarithmic trend lines. We evaluate a form of the attenuation equation as follows: I = c0(M) + c1(M)·ln(R), (1) where I, M and R are, respectively, the intensity, the magnitude and the epicentral distance. To solve this equation, we investigate two methods. The first consists in plotting the slopes of the different logarithmic trends versus magnitude, to estimate the parameter c1(M) and to evaluate how the intensity behaves as a function of magnitude. Another plot, representing the intercepts versus magnitude, allows us to determine the second parameter, c0(M). The second method consists in using inverse theory: from the data, we recover the parameters of the model using a linear inverse matrix. Both parameters, c0(M) and c1(M), are provided with their associated errors. A sensitivity test will be performed, using the macroseismic data, to estimate the resolution power of both methods. This new attenuation law will be used with the Bakun and Wentworth method [3] in order to re-estimate the epicentral region and the magnitude of the 1909 Benavente event. This attenuation law may also be adapted for use in Probabilistic Seismic Hazard Analysis. [1] Lopez Casado, C., Molina Palacios, S., Delgado, J., and Pelaez, J. A., 2000, BSSA, 90, 1, pp. 34-47. [2] Sousa, M. L., and Oliveira, C. S., 1997, Natural Hazards, 14: 207-225. [3] Bakun, W. H., and Wentworth, C. M., 1997, BSSA, vol. 87, no. 6, pp. 1502-1521.
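For a single event, the first (trend-fitting) method reduces to an ordinary least-squares fit of I against ln(R). A self-contained sketch on synthetic intensity data (the coefficients below are illustrative, not the paper's values):

```python
import math, random

def fit_attenuation(r_km, intensities):
    """Least-squares fit of I = c0 + c1*ln(R) via the 2x2 normal equations."""
    x = [math.log(r) for r in r_km]
    n = len(x)
    sx, sy = sum(x), sum(intensities)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, intensities))
    c1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c0 = (sy - c1 * sx) / n
    return c0, c1

# synthetic intensity data points for one event, with observation noise
rng = random.Random(1)
r_km = [5, 10, 20, 40, 80, 160, 320]
true_c0, true_c1 = 9.0, -1.2              # intensity decays with ln(distance)
obs = [true_c0 + true_c1 * math.log(r) + rng.gauss(0, 0.2) for r in r_km]
c0, c1 = fit_attenuation(r_km, obs)
print(c0, c1)
```

Repeating such fits over events of different magnitudes gives the slope and intercept curves from which c1(M) and c0(M) are read off, as described above.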

  2. Design of character-based DNA barcode motif for species identification: A computational approach and its validation in fishes.

    PubMed

    Chakraborty, Mohua; Dhar, Bishal; Ghosh, Sankar Kumar

    2017-11-01

    The DNA barcodes are generally interpreted using distance-based and character-based methods. The former uses clustering of comparable groups, based on the relative genetic distance, while the latter is based on the presence or absence of discrete nucleotide substitutions. The distance-based approach has a limitation in defining a universal species boundary across the taxa as the rate of mtDNA evolution is not constant throughout the taxa. However, character-based approach more accurately defines this using a unique set of nucleotide characters. The character-based analysis of full-length barcode has some inherent limitations, like sequencing of the full-length barcode, use of a sparse-data matrix and lack of a uniform diagnostic position for each group. A short continuous stretch of a fragment can be used to resolve the limitations. Here, we observe that a 154-bp fragment, from the transversion-rich domain of 1367 COI barcode sequences can successfully delimit species in the three most diverse orders of freshwater fishes. This fragment is used to design species-specific barcode motifs for 109 species by the character-based method, which successfully identifies the correct species using a pattern-matching program. The motifs also correctly identify geographically isolated population of the Cypriniformes species. Further, this region is validated as a species-specific mini-barcode for freshwater fishes by successful PCR amplification and sequencing of the motif (154 bp) using the designed primers. We anticipate that use of such motifs will enhance the diagnostic power of DNA barcode, and the mini-barcode approach will greatly benefit the field-based system of rapid species identification. © 2017 John Wiley & Sons Ltd.
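A character-based motif lookup of the kind described can be sketched as a check of diagnostic (position, nucleotide) characters inside the 154-bp region. The motifs, positions, and species names below are hypothetical placeholders, not the paper's motifs.

```python
# Each species "motif" is a set of diagnostic position -> nucleotide characters
# within the 154-bp fragment (0-based positions; all values hypothetical).
MOTIFS = {
    "species_A": {12: "A", 47: "T", 101: "G"},
    "species_B": {12: "G", 47: "T", 130: "C"},
}

def identify(fragment, motifs=MOTIFS):
    """Return every species whose diagnostic characters all match the fragment."""
    return [sp for sp, chars in motifs.items()
            if all(len(fragment) > pos and fragment[pos] == nt
                   for pos, nt in chars.items())]

# build a toy 154-bp fragment carrying species_A's diagnostic characters
seq = ["N"] * 154
for pos, nt in MOTIFS["species_A"].items():
    seq[pos] = nt
print(identify("".join(seq)))
```

In practice the diagnostic positions come from the character-based analysis of aligned sequences; the lookup itself is just this presence/absence pattern match.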

  3. Electromagnetic Scattering by an Exponentially Distributed Rough Surface with the Introduction of a Rough Surface Generation Technique

    DTIC Science & Technology

    1987-12-01

    Fortran excerpt (reconstructed from the scanned original) building the lower triangle of the symmetric correlation matrix, with coeff(i,j) = exp(-d**2) for the distance d between grid points i and j:
          integer corrow, corcol, refrow, refcol
    C Create lower triangle of corr. matrix (symmetric matrix)
          do 33 i = 1, n2
    C       calculate the row point (i) is in (reference Fig. (21))
            corrow = ((i-1)/n) + 1
    C       calculate the column point (i) is in
            corcol = i - (corrow-1)*n
            write(6,*) i
            do 31 j = 1, i
    C         calculate the row and column point (j) is in
              refrow = ((j-1)/n) + 1
              refcol = j - (refrow-1)*n
    C         the vertical distance (a) and the horizontal distance (b)
              a = (corrow - refrow)*space
              b = (corcol - refcol)*space
              d = sqrt(a**2 + b**2)
              coeff(i,j) = exp(-d**2)
    31      continue
    33    continue

  4. Matrix completion by deep matrix factorization.

    PubMed

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data with nonlinear structures. Recently, a few researchers have attempted to incorporate nonlinear techniques into matrix completion, but considerable limitations still exist. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods and that DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
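A minimal sketch of the DMF idea, not the authors' implementation: latent inputs and a small one-hidden-layer decoder are optimized jointly by SGD on the observed entries only, so missing entries can later be filled in by a forward pass. Dimensions, learning rate, and data are all illustrative.

```python
import math, random

random.seed(0)
n, m, d, h = 8, 5, 2, 4                      # rows, cols, latent dim, hidden units

# nonlinear ground-truth data and a random 70% observation mask
Z_true = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n)]
X = [[math.tanh(z[0] + 2 * z[1]) + 0.5 * z[0] * j for j in range(m)]
     for z in Z_true]
mask = [[random.random() < 0.7 for _ in range(m)] for _ in range(n)]

def randmat(r, c, s=0.1):
    return [[random.gauss(0, s) for _ in range(c)] for _ in range(r)]

Z, W1, W2 = randmat(n, d), randmat(d, h), randmat(h, m)
b1, b2 = [0.0] * h, [0.0] * m

def masked_mse():
    tot, cnt = 0.0, 0
    for i in range(n):
        hid = [math.tanh(sum(Z[i][k] * W1[k][a] for k in range(d)) + b1[a])
               for a in range(h)]
        for j in range(m):
            if mask[i][j]:
                e = sum(hid[a] * W2[a][j] for a in range(h)) + b2[j] - X[i][j]
                tot, cnt = tot + e * e, cnt + 1
    return tot / cnt

loss_start, lr = masked_mse(), 0.05
for _ in range(300):                          # SGD over observed entries only
    for i in range(n):
        # hid is recomputed once per row for simplicity (a coarse SGD sketch)
        hid = [math.tanh(sum(Z[i][k] * W1[k][a] for k in range(d)) + b1[a])
               for a in range(h)]
        for j in range(m):
            if not mask[i][j]:
                continue
            e = sum(hid[a] * W2[a][j] for a in range(h)) + b2[j] - X[i][j]
            dh = [e * W2[a][j] * (1 - hid[a] ** 2) for a in range(h)]
            dz = [sum(dh[a] * W1[k][a] for a in range(h)) for k in range(d)]
            for a in range(h):                # decoder layer
                W2[a][j] -= lr * e * hid[a]
            b2[j] -= lr * e
            for a in range(h):                # hidden layer
                b1[a] -= lr * dh[a]
                for k in range(d):
                    W1[k][a] -= lr * dh[a] * Z[i][k]
            for k in range(d):                # latent inputs are trained too
                Z[i][k] -= lr * dz[k]
print(masked_mse() < loss_start)
```

The key DMF-like ingredients are visible even at this toy scale: the loss touches only observed entries, and the latent variables Z are optimized alongside the network weights.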

  5. Context-aware and locality-constrained coding for image categorization.

    PubMed

    Xiao, Wenhua; Wang, Bin; Liu, Yu; Bao, Weidong; Zhang, Maojun

    2014-01-01

    Improving the coding strategy for BOF (Bag-of-Features) based feature design has drawn increasing attention in recent image categorization works. However, the ambiguity in the coding procedure still impedes its further development. In this paper, we introduce a context-aware and locality-constrained coding (CALC) approach that uses context information for describing objects in a discriminative way. This is generally achieved by learning a word-to-word cooccurrence prior and imposing context information over locality-constrained coding. Firstly, the local context of each category is evaluated by learning a word-to-word cooccurrence matrix representing the spatial distribution of local features in a neighbor region. Then, the learned cooccurrence matrix is used for measuring the context distance between local features and code words. Finally, a coding strategy that simultaneously considers locality in feature space and in context space, while weighting each feature, is proposed. This novel coding strategy not only semantically preserves the information in coding, but also has the ability to alleviate the noise distortion of each class. Extensive experiments on several available datasets (Scene-15, Caltech101, and Caltech256) are conducted to validate the superiority of our algorithm by comparing it with baselines and recently published methods. Experimental results show that our method significantly improves the performance of the baselines and achieves comparable or even better performance than the state of the art.

  6. Modeling the spatio-temporal dynamics of porcine reproductive & respiratory syndrome cases at farm level using geographical distance and pig trade network matrices.

    PubMed

    Amirpour Haredasht, Sara; Polson, Dale; Main, Rodger; Lee, Kyuyoung; Holtkamp, Derald; Martínez-López, Beatriz

    2017-06-07

    Porcine reproductive and respiratory syndrome (PRRS) is one of the most economically devastating infectious diseases for the swine industry. A better understanding of the disease dynamics and the transmission pathways under diverse epidemiological scenarios is key to successful PRRS control and elimination in endemic settings. In this paper we used a two-step parameter-driven (PD) Bayesian approach to model the spatio-temporal dynamics of PRRS and predict the PRRS status of farms in subsequent time periods in an endemic setting in the US. For that purpose we used information from a production system with 124 pig sites that reported 237 PRRS cases from 2012 to 2015 and for which the pig trade network and the geographical locations of farms (i.e., distance was used as a proxy of airborne transmission) were available. We estimated five PD models with different weights, namely: (i) a geographical distance weight containing the inverse distance between each pair of farms in kilometers, (ii) a pig trade weight (PTji) containing the absolute number of pig movements between each pair of farms, (iii) the product of the distance weight and the standardized relative pig trade weight, (iv) the product of the standardized distance weight and the standardized relative pig trade weight, and (v) the product of the distance weight and the pig trade weight. The model that included the pig trade weight matrix provided the best fit to the dynamics of PRRS cases on a 6-month basis from 2012 to 2015 and was able to predict PRRS outbreaks in the subsequent time period with an area under the ROC curve (AUC) of 0.88 and an accuracy of 85% (105/124). The results of this study reinforce the importance of pig trade in PRRS transmission in the US. The methods and results of this study may be easily adapted to any production system to characterize the PRRS dynamics under diverse epidemic settings and to more timely support decision-making.
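Two of the weight matrices above, the inverse-distance weight and an element-wise (Hadamard) product combination with the pig-trade weight, can be sketched as follows; coordinates and movement counts are made up for illustration.

```python
# Hypothetical farm coordinates (km) and directed pig-movement counts.
farms = {"A": (0.0, 0.0), "B": (3.0, 4.0), "C": (6.0, 8.0)}
moves = {("A", "B"): 12, ("B", "A"): 3, ("B", "C"): 7}

names = sorted(farms)

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# (i) inverse geographical distance weight (zero on the diagonal)
W_dist = [[0.0 if i == j else 1.0 / dist(farms[a], farms[b])
           for j, b in enumerate(names)]
          for i, a in enumerate(names)]
# (ii) pig trade weight: absolute number of movements between farm pairs
W_trade = [[float(moves.get((a, b), 0)) for b in names] for a in names]
# (v) combined weight: element-wise product of the two matrices
W_combo = [[W_dist[i][j] * W_trade[i][j] for j in range(len(names))]
           for i in range(len(names))]
print(W_dist[0][1], W_trade[0][1], W_combo[0][1])
```

The standardized variants in (iii)-(iv) would rescale these matrices (e.g. row-normalize) before taking the product; the construction is otherwise the same.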

  7. Key-Generation Algorithms for Linear Piece In Hand Matrix Method

    NASA Astrophysics Data System (ADS)

    Tadaki, Kohtaro; Tsujii, Shigeo

    The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which is applicable to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in general in an illustrative manner, and not for practical use in enhancing the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms to do so. In particular, the second one has a concise form and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.

  8. Sparse subspace clustering for data with missing entries and high-rank matrix completion.

    PubMed

    Fan, Jicong; Chow, Tommy W S

    2017-09-01

    Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. [A New Distance Metric between Different Stellar Spectra: the Residual Distribution Distance].

    PubMed

    Liu, Jie; Pan, Jing-chang; Luo, A-li; Wei, Peng; Liu, Meng

    2015-12-01

    Distance metric is an important issue in spectroscopic survey data processing: it defines how the distance between two different spectra is calculated. On this basis, classification, clustering, parameter measurement and outlier mining of spectral data can be carried out, so the distance measure affects the performance of all of these tasks. With the development of large-scale stellar spectral sky surveys, how to define a more efficient distance metric on stellar spectra has become a very important issue in spectral data processing. To address this problem, and taking full account of the characteristics and data features of stellar spectra, a new distance measure for stellar spectra, named the Residual Distribution Distance, is proposed. Different from traditional distance metrics for stellar spectra, this method first normalizes the two spectra to the same scale, then calculates the residual at each common wavelength, and uses the standard deviation of the residual spectrum as the distance measure. The method can be used for stellar classification, clustering, measurement of stellar atmospheric physical parameters, and so on. This paper takes stellar subclass classification as an example to test the distance measure. The results show that the distance defined by the proposed method describes the differences between spectral types in classification more effectively than other methods, and it can be applied well in other related applications. This paper also studies the effect of the signal-to-noise ratio (SNR) on the performance of the proposed method. The results show that the distance is affected by the SNR: the smaller the SNR, the greater its impact on the distance; when the SNR is larger than 10, it has little effect on classification performance.
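The distance computation described above can be sketched as below; the exact normalization used by the authors is not specified here, so a least-squares scale factor stands in as one plausible way of bringing both spectra to "the same scale".

```python
import math

def rdd(flux_a, flux_b):
    """Residual Distribution Distance sketch: scale flux_b onto flux_a by a
    least-squares factor, then return the std. deviation of the residual."""
    s = sum(a * b for a, b in zip(flux_a, flux_b)) / sum(b * b for b in flux_b)
    resid = [a - s * b for a, b in zip(flux_a, flux_b)]
    mean = sum(resid) / len(resid)
    return math.sqrt(sum((r - mean) ** 2 for r in resid) / len(resid))

# toy "spectra" sampled on a common wavelength grid
spec1 = [1.0, 2.0, 3.0, 2.5, 1.5]
spec2 = [2.0, 4.0, 6.0, 5.0, 3.0]         # spec1 rescaled by 2: distance ~ 0
spec3 = [1.0, 3.5, 2.0, 4.0, 1.0]         # genuinely different shape
print(rdd(spec1, spec2), rdd(spec1, spec3))
```

Note the desirable property this illustrates: a pure flux rescaling (spec2) yields a near-zero distance, while a spectrum with a different shape (spec3) does not.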

  10. An accurate and linear-scaling method for calculating charge-transfer excitation energies and diabatic couplings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavanello, Michele; Van Voorhis, Troy; Visscher, Lucas

    2013-02-07

    Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Angstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.

  11. An accurate and linear-scaling method for calculating charge-transfer excitation energies and diabatic couplings.

    PubMed

    Pavanello, Michele; Van Voorhis, Troy; Visscher, Lucas; Neugebauer, Johannes

    2013-02-07

    Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Ångstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.

  12. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.

  13. A possible biochemical missing link among archaebacteria

    NASA Technical Reports Server (NTRS)

    Achenbach-Richter, Laurie; Woese, Carl R.; Stetter, Karl O.

    1987-01-01

    The characteristics of the newly discovered strain of archaebacteria, VC-16, the only archaebacterium known to reduce sulfate, suggest that VC-16 might represent a transitional form between an anaerobic thermophilic sulfur-based type of metabolism and methanogenesis. It is shown here, using a matrix of evolutionary distances derived from an alignment of various archaebacterial 16S rRNAs and the phylogenetic tree derived from these evolutionary distances, that the lineage represented by strain VC-16 arises from the archaebacterial tree precisely where such an interpretation would predict that it would, between the Methanococcus lineage and that of Thermococcus.
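Deriving a tree from a matrix of evolutionary distances, as done above, is classically handled by average-linkage (UPGMA) or neighbor-joining clustering. Below is a compact UPGMA sketch on a toy 4-taxon matrix; the labels and distance values are hypothetical stand-ins, not the study's 16S rRNA distances.

```python
def upgma(names, matrix):
    """Average-linkage (UPGMA) clustering of a symmetric distance matrix,
    returning a nested Newick-like string for the resulting tree."""
    d = {frozenset((a, b)): matrix[i][j]
         for i, a in enumerate(names) for j, b in enumerate(names) if i < j}
    sizes = {a: 1 for a in names}
    clusters = list(names)
    while len(clusters) > 1:
        # merge the closest pair of clusters
        a, b = min(((x, y) for i, x in enumerate(clusters)
                    for y in clusters[i + 1:]),
                   key=lambda p: d[frozenset(p)])
        merged = f"({a},{b})"
        for c in clusters:
            if c not in (a, b):   # size-weighted average linkage
                d[frozenset((merged, c))] = (
                    sizes[a] * d[frozenset((a, c))]
                    + sizes[b] * d[frozenset((b, c))]
                ) / (sizes[a] + sizes[b])
        sizes[merged] = sizes[a] + sizes[b]
        clusters = [c for c in clusters if c not in (a, b)] + [merged]
    return clusters[0]

names = ["Mcoccus", "VC16", "Tcoccus", "Halobact"]
D = [[0, 2, 4, 8],
     [2, 0, 4, 8],
     [4, 4, 0, 8],
     [8, 8, 8, 0]]
tree = upgma(names, D)
print(tree)
```

With these toy distances the two closest taxa join first and the most distant one attaches last, mirroring how a lineage's branching position is read off such a tree.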

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theiler, James; Grosklos, Guen

    We examine the properties and performance of kernelized anomaly detectors, with an emphasis on the Mahalanobis-distance-based kernel RX (KRX) algorithm. Although the detector generally performs well for high-bandwidth Gaussian kernels, it exhibits problematic (in some cases, catastrophic) performance for distances that are large compared to the bandwidth. By comparing KRX to two other anomaly detectors, we can trace the problem to a projection in feature space, which arises when a pseudoinverse is used on the covariance matrix in that feature space. Here, we show that a regularized variant of KRX overcomes this difficulty and achieves superior performance over a wide range of bandwidths.

  15. Short-distance matrix elements for D0-meson mixing from Nf = 2+1 lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bazavov, A.; Bernard, C.; Bouchard, C. M.

    We calculate in three-flavor lattice QCD the short-distance hadronic matrix elements of all five ΔC=2 four-fermion operators that contribute to neutral D-meson mixing both in and beyond the Standard Model. We use the MILC Collaboration’s Nf = 2+1 lattice gauge-field configurations generated with asqtad-improved staggered sea quarks. We also employ the asqtad action for the valence light quarks and use the clover action with the Fermilab interpretation for the charm quark. We analyze a large set of ensembles with pions as light as Mπ ≈ 180 MeV and lattice spacings as fine as a ≈ 0.045 fm, thereby enabling good control over the extrapolation to the physical pion mass and continuum limit. For the matrix elements in the MS-bar-NDR scheme, using the choice of evanescent operators proposed by Beneke et al. and evaluated at 3 GeV, we obtain ⟨D0|Oi|D̄0⟩ = {0.0805(55)(16), −0.1561(70)(31), 0.0464(31)(9), 0.2747(129)(55), 0.1035(71)(21)} GeV⁴ (i = 1–5). The errors shown are from statistics and lattice systematics, and the omission of charmed sea quarks, respectively. To illustrate the utility of our matrix-element results, we place bounds on the scale of CP-violating new physics in D0 mixing, finding lower limits of about 10–50×10³ TeV for couplings of O(1). To enable our results to be employed in more sophisticated or model-specific phenomenological studies, we provide the correlations among our matrix-element results. For convenience, we also present numerical results in the other commonly used scheme of Buras, Misiak, and Urban.

  16. Cytochrome C in a dry trehalose matrix: structural and dynamical effects probed by x-ray absorption spectroscopy.

    PubMed

    Giachini, Lisa; Francia, Francesco; Cordone, Lorenzo; Boscherini, Federico; Venturoli, Giovanni

    2007-02-15

    We report on the structure and dynamics of the Fe ligand cluster of reduced horse heart cytochrome c in solution, in a dried polyvinyl alcohol (PVA) film, and in two trehalose matrices characterized by different contents of residual water. The effect of the solvent/matrix environment was studied at room temperature using Fe K-edge x-ray absorption fine structure (XAFS) spectroscopy. XAFS data were analyzed by combining ab initio simulations and multi-parameter fitting in an attempt to disentangle structural from disorder parameters. Essentially the same structural and disorder parameters account adequately for the XAFS spectra measured in solution, both in the absence and in the presence of glycerol, and in the PVA film, showing that this polymer interacts weakly with the embedded protein. Instead, incorporation in trehalose leads to severe structural changes, more prominent in the drier matrix, consisting of (1) an increase of up to 0.2 Å in the distance between Fe and the imidazole N atom of the coordinating histidine residue and (2) an elongation of up to 0.16 Å in the distance between Fe and the fourth-shell C atoms of the heme pyrrolic units. These structural distortions are accompanied by a substantial decrease of the relative mean-square displacements of the first ligands. In the extensively dried trehalose matrix, extremely low values of the Debye-Waller factors are obtained for the pyrrolic and imidazole N atoms. This finding is interpreted as reflecting a drastic hindering of the relative motions of the Fe ligand cluster atoms and an impressive decrease in the static disorder of the local Fe structure. It appears, therefore, that the dried trehalose matrix dramatically perturbs the energy landscape of cytochrome c, giving rise, at the level of local structure, to well-resolved structural distortions and restricting the ensemble of accessible conformational substates.

  17. Short-distance matrix elements for D0-meson mixing from Nf = 2+1 lattice QCD

    DOE PAGES

    Bazavov, A.; Bernard, C.; Bouchard, C. M.; ...

    2018-02-28

    We calculate in three-flavor lattice QCD the short-distance hadronic matrix elements of all five ΔC=2 four-fermion operators that contribute to neutral D-meson mixing both in and beyond the Standard Model. We use the MILC Collaboration’s Nf = 2+1 lattice gauge-field configurations generated with asqtad-improved staggered sea quarks. We also employ the asqtad action for the valence light quarks and use the clover action with the Fermilab interpretation for the charm quark. We analyze a large set of ensembles with pions as light as Mπ ≈ 180 MeV and lattice spacings as fine as a ≈ 0.045 fm, thereby enabling good control over the extrapolation to the physical pion mass and continuum limit. For the matrix elements in the MS-bar-NDR scheme, using the choice of evanescent operators proposed by Beneke et al. and evaluated at 3 GeV, we obtain ⟨D0|Oi|D̄0⟩ = {0.0805(55)(16), −0.1561(70)(31), 0.0464(31)(9), 0.2747(129)(55), 0.1035(71)(21)} GeV⁴ (i = 1–5). The errors shown are from statistics and lattice systematics, and the omission of charmed sea quarks, respectively. To illustrate the utility of our matrix-element results, we place bounds on the scale of CP-violating new physics in D0 mixing, finding lower limits of about 10–50×10³ TeV for couplings of O(1). To enable our results to be employed in more sophisticated or model-specific phenomenological studies, we provide the correlations among our matrix-element results. For convenience, we also present numerical results in the other commonly used scheme of Buras, Misiak, and Urban.

  18. Relative risk for HIV in India - An estimate using conditional auto-regressive models with Bayesian approach.

    PubMed

    Kandhasamy, Chandrasekaran; Ghosh, Kaushik

    2017-02-01

    Indian states are currently classified into HIV-risk categories based on the observed prevalence counts, percentage of infected attendees in antenatal clinics, and percentage of infected high-risk individuals. This method, however, does not account for the spatial dependence among the states, nor does it provide any measure of statistical uncertainty. We provide an alternative model-based approach to address these issues. Our method uses Poisson log-normal models having various conditional autoregressive structures with neighborhood-based and distance-based weight matrices and incorporates all available covariate information. We use R and WinBUGS software to fit these models to the 2011 HIV data. Based on the Deviance Information Criterion, the convolution model using the distance-based weight matrix and covariate information on female sex workers, literacy rate and intravenous drug users is found to have the best fit. The relative risk of HIV for the various states is estimated using the best model and the states are then classified into the risk categories based on these estimated values. An HIV risk map of India is constructed based on these results. The choice of the final model suggests that an HIV control strategy which focuses on the female sex workers, intravenous drug users and literacy rate would be most effective. Copyright © 2017 Elsevier Ltd. All rights reserved.
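
    The distance-based weight matrix used in the conditional autoregressive models above can be sketched as follows. This is an illustrative construction only: the centroid coordinates are invented, and the inverse-distance weighting with row standardization is a common CAR convention, not necessarily the authors' exact specification.

```python
import numpy as np

# Hypothetical region centroids (coordinates are invented for illustration).
coords = np.array([[77.2, 28.6],
                   [72.9, 19.1],
                   [80.3, 13.1],
                   [88.4, 22.6]])

# Pairwise Euclidean distances between centroids.
diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

# Inverse-distance weights, zero on the diagonal (a region is not its own neighbour).
w = np.zeros_like(dist)
off = ~np.eye(len(coords), dtype=bool)
w[off] = 1.0 / dist[off]

# Row-standardize so each row sums to 1, as many CAR specifications require.
w_std = w / w.sum(axis=1, keepdims=True)
print(np.round(w_std, 3))
```

    The resulting matrix would be passed to the CAR prior in place of a binary neighborhood matrix.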

  19. Determination of the matrix element V(ub) from inclusive B meson decays

    NASA Astrophysics Data System (ADS)

    Low, Ian

    For years the extraction of |Vub| was tainted by large errors due to theoretical uncertainties. Because of our inability to calculate hadronic dynamics, we are forced to resort to ad hoc models when making theoretical predictions, thereby introducing errors which are very hard to quantify. However, an accurate measurement of |Vub| is very important for testing the Cabibbo-Kobayashi-Maskawa picture of CP violation in the minimal standard model. It is highly desirable to be able to extract |Vub| with well-defined and reasonable theoretical uncertainties. In this dissertation, a strategy to extract |Vub| from the electron energy spectrum of inclusive semi-leptonic B decays is proposed, without having to model the hadronic dynamics. It is based on the observation that the long-distance physics involving hadronization, of which we are ignorant, is insensitive to the short-distance interactions. Therefore, the uncalculable part in B → Xu ℓν is the same as that in the radiative B decays B → Xs γ. We are able to write down an analytic expression for |Vub|²/|V*ts Vtb|² in terms of known functions. The theoretical uncertainty in this method is well-defined and estimated to be less than 10% in |Vub|. We also apply our method to the hadronic mass spectrum of the inclusive semi-leptonic decays, which has the virtue that quark-hadron duality is expected to work better.

  20. Talker Localization Based on Interference between Transmitted and Reflected Audible Sound

    NASA Astrophysics Data System (ADS)

    Nakayama, Masato; Nakasako, Noboru; Shinohara, Toshihiro; Uebo, Tetsuji

    In many engineering fields, the distance to targets is very important. General distance-measurement methods use the time delay between transmitted and reflected waves, but it is difficult to estimate short distances this way. On the other hand, a method using phase interference to measure short distances has been known in the field of microwave radar. We have therefore proposed a distance estimation method based on interference between transmitted and reflected audible sound, which can measure the distance between microphone and target with one microphone and one loudspeaker. In this paper, we propose a talker localization method based on distance estimation using phase interference. We extend the distance estimation method using phase interference to two microphones (a microphone array) in order to estimate the talker position. The proposed method estimates the talker position by measuring the distance and direction between the target and the microphone array. In addition, the talker's speech is regarded as noise in the proposed method. Therefore, we also propose combining the proposed method with the CSP (Cross-power Spectrum Phase analysis) method, which is one of the DOA (Direction Of Arrival) estimation methods. We evaluated the performance of talker localization in real environments. The experimental results show the effectiveness of the proposed method.
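
    The CSP step mentioned above can be sketched with the standard phase-transform cross-correlation between two microphone signals. The signals, sampling rate, and delay below are synthetic; this is an illustrative sketch of the CSP idea, not the authors' implementation.

```python
import numpy as np

def csp_delay(x1, x2, fs):
    """Estimate the arrival-time delay of x2 relative to x1 with the CSP
    (phase-transform) method: whiten the cross-power spectrum so only the
    phase remains, then locate the peak of its inverse FFT."""
    n = len(x1) + len(x2)                      # zero-pad to avoid wrap-around
    X1 = np.fft.rfft(x1, n)
    X2 = np.fft.rfft(x2, n)
    cross = np.conj(X1) * X2
    csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)
    max_shift = n // 2
    csp = np.concatenate((csp[-max_shift:], csp[:max_shift + 1]))
    return (np.argmax(np.abs(csp)) - max_shift) / fs

# Synthetic check: the same noise burst reaches microphone 2 a known
# 25 samples later (signals and sampling rate are invented).
fs = 16000
rng = np.random.default_rng(0)
s = rng.standard_normal(1024)
delay = 25
x1 = np.concatenate((s, np.zeros(delay)))
x2 = np.concatenate((np.zeros(delay), s))
print(csp_delay(x1, x2, fs) * fs)  # ≈ 25 samples
```

    With two microphones, the estimated delay maps to a direction of arrival via the known microphone spacing and speed of sound.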

  1. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial-frequency-related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
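
    The quantization step described above can be illustrated on a single 8x8 block. The block contents and the toy quantization matrix below are made up for illustration; a perceptually optimized matrix would come from the visual model in the abstract, not from this simple frequency ramp.

```python
import numpy as np
from scipy.fft import dctn, idctn

# A made-up 8x8 pixel block and a toy quantization matrix that coarsens
# with spatial frequency, in the spirit of JPEG-style matrices.
rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
Q = 8.0 + 4.0 * (u + v)

coeffs = dctn(block - 128.0, norm="ortho")   # level shift, then 2-D DCT
quantized = np.round(coeffs / Q)             # the lossy step controlled by Q
recon = idctn(quantized * Q, norm="ortho") + 128.0

# Pixel-domain quantization error introduced by Q.
print(f"max abs error: {np.abs(recon - block).max():.1f}")
```

    Scaling each coefficient error by a visual threshold, as the model does, would turn the raw errors above into perceptual errors before pooling.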

  2. Movement behaviour within and beyond perceptual ranges in three small mammals: effects of matrix type and body mass.

    PubMed

    Prevedello, Jayme Augusto; Forero-Medina, Germán; Vieira, Marcus Vinícius

    2010-11-01

    1. For animal species inhabiting heterogeneous landscapes, the tortuosity of the dispersal path is a key determinant of the success in locating habitat patches. Path tortuosity within and beyond perceptual range must differ, and may be differently affected by intrinsic attributes of individuals and extrinsic environmental factors. Understanding how these factors interact to determine path tortuosity allows more accurate inference of successful movements between habitat patches. 2. We experimentally determined the effects of intrinsic (body mass and species identity) and extrinsic factors (distance to nearest forest fragment and matrix type) on the tortuosity of movements of three forest-dwelling didelphid marsupials, in a fragmented landscape of the Atlantic Forest, Brazil. 3. A total of 202 individuals were captured in forest fragments and released in three unsuitable matrix types (mowed pasture, abandoned pasture and manioc plantation), carrying spool-and-line devices. 4. Twenty-four models were formulated representing a priori hypotheses of major determinants of path tortuosity, grouped in three scenarios (only intrinsic factors, only extrinsic factors and models with combinations of both), and compared using a model selection approach. Models were tested separately for individuals released within the perceptual range of the species, and for individuals released beyond the perceptual range. 5. Matrix type strongly affected path tortuosity, with more obstructed matrix types hampering displacement of animals. Body mass was more important than species identity in determining path tortuosity, with larger animals moving more linearly. Increased distance to the fragment resulted in more tortuous paths, but actually reflects a threshold in perceptual range: linear paths within perceptual range, tortuous paths beyond. 6. The variables tested successfully explained path tortuosity, but only for animals released within the perceptual range. 
Other factors, such as wind intensity and direction of plantation rows, may be more important for individuals beyond their perceptual range. 7. Simplistic scenarios considering only intrinsic or extrinsic factors are inadequate to predict path tortuosity, and to infer dispersal success in heterogeneous landscapes. Perceptual range represents a fundamental threshold where the effects of matrix type, body mass and individual behaviour change drastically. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.
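
    A common way to quantify the path tortuosity studied above is the ratio of travelled path length to straight-line displacement. The study may use a different index (e.g., fractal dimension or turning-angle statistics), so the sketch below is only an illustration of the general idea, with invented coordinates.

```python
import numpy as np

def tortuosity(path):
    """Ratio of travelled path length to straight-line displacement:
    1.0 is a perfectly linear path; larger values mean more tortuous paths."""
    path = np.asarray(path, dtype=float)
    steps = np.linalg.norm(np.diff(path, axis=0), axis=1)
    displacement = np.linalg.norm(path[-1] - path[0])
    return steps.sum() / displacement

# A straight 3 m path versus an L-shaped detour ending 3 m from the start.
straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
detour = [(0, 0), (0, 3), (3, 3), (3, 0)]  # walks 9 m to cover 3 m
print(tortuosity(straight))  # 1.0
print(tortuosity(detour))    # 3.0
```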

  3. Inference and Analysis of Population Structure Using Genetic Data and Network Theory

    PubMed Central

    Greenbaum, Gili; Templeton, Alan R.; Bar-David, Shirli

    2016-01-01

    Clustering individuals to subpopulations based on genetic data has become commonplace in many genetic studies. Inference about population structure is most often done by applying model-based approaches, aided by visualization using distance-based approaches such as multidimensional scaling. While existing distance-based approaches suffer from a lack of statistical rigor, model-based approaches entail assumptions of prior conditions such as that the subpopulations are at Hardy-Weinberg equilibria. Here we present a distance-based approach for inference about population structure using genetic data by defining population structure using network theory terminology and methods. A network is constructed from a pairwise genetic-similarity matrix of all sampled individuals. The community partition, a partition of a network to dense subgraphs, is equated with population structure, a partition of the population to genetically related groups. Community-detection algorithms are used to partition the network into communities, interpreted as a partition of the population to subpopulations. The statistical significance of the structure can be estimated by using permutation tests to evaluate the significance of the partition’s modularity, a network theory measure indicating the quality of community partitions. To further characterize population structure, a new measure of the strength of association (SA) for an individual to its assigned community is presented. The strength of association distribution (SAD) of the communities is analyzed to provide additional population structure characteristics, such as the relative amount of gene flow experienced by the different subpopulations and identification of hybrid individuals. Human genetic data and simulations are used to demonstrate the applicability of the analyses. 
The approach presented here provides a novel, computationally efficient model-free method for inference about population structure that does not entail assumption of prior conditions. The method is implemented in the software NetStruct (available at https://giligreenbaum.wordpress.com/software/). PMID:26888080
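
    A minimal sketch of the workflow described above (similarity matrix, community partition, modularity permutation test), using networkx on a synthetic matrix with two planted subpopulations rather than real genetic data. The permutation scheme (shuffling off-diagonal similarities) is one plausible choice; the NetStruct software itself is not reproduced here.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(0)

# Hypothetical pairwise genetic-similarity matrix for 30 individuals:
# two planted subpopulations with higher within-group similarity.
n = 30
sim = rng.uniform(0.0, 0.3, size=(n, n))
sim[:15, :15] += 0.4
sim[15:, 15:] += 0.4
sim = (sim + sim.T) / 2
np.fill_diagonal(sim, 0.0)

G = nx.from_numpy_array(sim)
parts = greedy_modularity_communities(G, weight="weight")
q_obs = modularity(G, parts, weight="weight")

def shuffled_modularity(sim, rng):
    """Modularity of the best partition after shuffling the off-diagonal
    similarities, used to build a null distribution."""
    iu = np.triu_indices_from(sim, k=1)
    m = np.zeros_like(sim)
    m[iu] = rng.permutation(sim[iu])
    m += m.T
    Gp = nx.from_numpy_array(m)
    return modularity(Gp, greedy_modularity_communities(Gp, weight="weight"),
                      weight="weight")

null = [shuffled_modularity(sim, rng) for _ in range(20)]
p = (1 + sum(q >= q_obs for q in null)) / (1 + len(null))
print(len(parts), round(q_obs, 3), p)
```

    With the planted structure, the detected partition recovers the two groups and its modularity exceeds the shuffled null, giving a small permutation p-value.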

  4. Inference and Analysis of Population Structure Using Genetic Data and Network Theory.

    PubMed

    Greenbaum, Gili; Templeton, Alan R; Bar-David, Shirli

    2016-04-01

    Clustering individuals to subpopulations based on genetic data has become commonplace in many genetic studies. Inference about population structure is most often done by applying model-based approaches, aided by visualization using distance-based approaches such as multidimensional scaling. While existing distance-based approaches suffer from a lack of statistical rigor, model-based approaches entail assumptions of prior conditions such as that the subpopulations are at Hardy-Weinberg equilibria. Here we present a distance-based approach for inference about population structure using genetic data by defining population structure using network theory terminology and methods. A network is constructed from a pairwise genetic-similarity matrix of all sampled individuals. The community partition, a partition of a network to dense subgraphs, is equated with population structure, a partition of the population to genetically related groups. Community-detection algorithms are used to partition the network into communities, interpreted as a partition of the population to subpopulations. The statistical significance of the structure can be estimated by using permutation tests to evaluate the significance of the partition's modularity, a network theory measure indicating the quality of community partitions. To further characterize population structure, a new measure of the strength of association (SA) for an individual to its assigned community is presented. The strength of association distribution (SAD) of the communities is analyzed to provide additional population structure characteristics, such as the relative amount of gene flow experienced by the different subpopulations and identification of hybrid individuals. Human genetic data and simulations are used to demonstrate the applicability of the analyses. 
The approach presented here provides a novel, computationally efficient model-free method for inference about population structure that does not entail assumption of prior conditions. The method is implemented in the software NetStruct (available at https://giligreenbaum.wordpress.com/software/). Copyright © 2016 by the Genetics Society of America.

  5. Generation of gas-phase ions from charged clusters: an important ionization step causing suppression of matrix and analyte ions in matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Lou, Xianwen; van Dongen, Joost L J; Milroy, Lech-Gustav; Meijer, E W

    2016-12-30

    Ionization in matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is a very complicated process. It has been reported that quaternary ammonium salts show extremely strong matrix and analyte suppression effects which cannot satisfactorily be explained by charge transfer reactions. Further investigation of the reasons causing these effects can be useful to improve our understanding of the MALDI process. The dried-droplet and modified thin-layer methods were used as sample preparation methods. In the dried-droplet method, analytes were co-crystallized with matrix, whereas in the modified thin-layer method analytes were deposited on the surface of matrix crystals. Model compounds, tetrabutylammonium iodide ([N(Bu) 4 ]I), cesium iodide (CsI), trihexylamine (THA) and polyethylene glycol 600 (PEG 600), were selected as the test analytes given their ability to generate exclusively pre-formed ions, protonated ions and metal ion adducts respectively in MALDI. The strong matrix suppression effect (MSE) observed using the dried-droplet method might disappear using the modified thin-layer method, which suggests that the incorporation of analytes in matrix crystals contributes to the MSE. By depositing analytes on the matrix surface instead of incorporating in the matrix crystals, the competition for evaporation/ionization from charged matrix/analyte clusters could be weakened resulting in reduced MSE. Further supporting evidence for this inference was found by studying the analyte suppression effect using the same two sample deposition methods. By comparing differences between the mass spectra obtained via the two sample preparation methods, we present evidence suggesting that the generation of gas-phase ions from charged matrix/analyte clusters may induce significant suppression of matrix and analyte ions. The results suggest that the generation of gas-phase ions from charged matrix/analyte clusters is an important ionization step in MALDI-MS. 
    Copyright © 2016 John Wiley & Sons, Ltd.

  6. A robust method of computing finite difference coefficients based on Vandermonde matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Yijie; Gao, Jinghuai; Peng, Jigen; Han, Weimin

    2018-05-01

    When the finite difference (FD) method is employed to simulate wave propagation, a high-order FD method is preferred in order to achieve better accuracy. However, if the order of the FD scheme is high enough, the coefficient matrix of the formula for calculating finite difference coefficients is close to singular. In this case, when the FD coefficients are computed by the matrix inverse operator of MATLAB, inaccurate results can be produced. In order to overcome this problem, we suggest an algorithm based on the Vandermonde matrix in this paper. After a specified mathematical transformation, the coefficient matrix is transformed into a Vandermonde matrix. Then the FD coefficients of the high-order FD method can be computed by the algorithm for Vandermonde matrices, which avoids inverting the near-singular matrix. The dispersion analysis and numerical results of a homogeneous elastic model and a geophysical model of an oil and gas reservoir demonstrate that the algorithm based on the Vandermonde matrix has better accuracy compared with the matrix inverse operator of MATLAB.
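
    The Vandermonde structure of the FD-coefficient system can be illustrated as below. Note that this sketch solves the Vandermonde system with a generic dense solver for a small stencil; the specialized Vandermonde algorithm the paper advocates is what provides robustness when the order is high and the system becomes near-singular.

```python
import numpy as np
from math import factorial

def fd_coefficients(offsets, order):
    """Finite-difference weights for the order-th derivative on the given
    stencil offsets, obtained from the Vandermonde system
    sum_j c_j * x_j**k = k! * delta(k, order) for k = 0..len(offsets)-1."""
    x = np.asarray(offsets, dtype=float)
    n = len(x)
    V = np.vander(x, n, increasing=True).T   # row k holds x_j**k
    b = np.zeros(n)
    b[order] = factorial(order)
    return np.linalg.solve(V, b)

# Classic check: 4th-order-accurate second derivative on a 5-point stencil.
c = fd_coefficients([-2, -1, 0, 1, 2], order=2)
print(c)  # [-1/12, 4/3, -5/2, 4/3, -1/12]
```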

  7. Occupancy dynamics in human-modified landscapes in a tropical island: implications for conservation design

    USGS Publications Warehouse

    Irizarry, Julissa I.; Collazo, Jaime A.; Dinsmore, Stephen J.

    2016-01-01

    Aim: Avian communities in human-modified landscapes exhibit varying patterns of local colonization and extinction rates, determinants of species occurrence. Our objective was to model these processes to identify habitat features that might enable movements and account for occupancy patterns in habitat matrices between the Guanica and Susua forest reserves. This knowledge is central to conservation design, particularly in ever-changing insular landscapes. Location: South-western Puerto Rico. Methods: We used a multiseason occupancy modelling approach to quantify seasonal estimates of occupancy, and colonization and extinction rates of seven resident avian species surveyed over five seasons from January 2010 to June 2011. We modelled parameters by matrix type, expressions of survey station isolation, quality, amount of forest cover and context (embedded in forest patch). Results: Seasonal occupancy remained stable throughout the study for all species, consistent with seasonally constant colonization and extinction probabilities. Occupancy was mediated by matrix type, higher in reserves and forested matrix than in the urban and agricultural matrices. This pattern is in accord with the forest affinities of all but an open-habitat specialist. Puerto Rican Spindalis (Spindalis portoricensis) exhibited high occupancy in the urban matrix, highlighting the adaptability of some insular species to novel environments. Highest colonization rates occurred when perching structures were at ≤ 500 m. Survey stations with at least three fruiting tree species and 61% forest cover exhibited the lowest seasonal extinction rates. Main conclusions: Our work identified habitat features that influenced seasonal probabilities of colonization and extinction in a human-modified landscape. 
Conservation design decisions are better informed with increased knowledge about interpatch distances to improve matrix permeability, and habitat features that increase persistence or continued use of habitat stepping stones. A focus on dynamic processes is valuable because conservation actions directly influence colonization and extinction rates, and thus, a quantitative means to gauge their benefit.

  8. Quantum confinement of nanocrystals within amorphous matrices

    NASA Astrophysics Data System (ADS)

    Lusk, Mark T.; Collins, Reuben T.; Nourbakhsh, Zahra; Akbarzadeh, Hadi

    2014-02-01

    Nanocrystals encapsulated within an amorphous matrix are computationally analyzed to quantify the degree to which the matrix modifies the nature of their quantum-confinement power—i.e., the relationship between nanocrystal size and the gap between valence- and conduction-band edges. A special geometry allows exactly the same amorphous matrix to be applied to nanocrystals of increasing size to precisely quantify changes in confinement without the noise typically associated with encapsulating structures that are different for each nanocrystal. The results both explain and quantify the degree to which amorphous matrices redshift the character of quantum confinement. The character of this confinement depends on both the type of encapsulating material and the separation distance between the nanocrystals within it. Surprisingly, the analysis also identifies a critical nanocrystal threshold below which quantum confinement is not possible—a feature unique to amorphous encapsulation. Although applied to silicon nanocrystals within an amorphous silicon matrix, the methodology can be used to accurately analyze the confinement softening of other amorphous systems as well.

  9. Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1987-01-01

    This paper applies linear filtering and smoothing theory to solve recursively the inverse kinematics problem for serial multilink manipulators. This problem is to find a set of joint angles that achieve a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest descent method is computed recursively using an outward/inward procedure similar to those used typically for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve by means of spatial filtering and smoothing the forward dynamics problem for serial manipulators.
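
    The second-order search described above, with the Hessian approximated by products of first-order derivatives (a Gauss-Newton step), can be illustrated on a planar two-link arm. This toy arm and its parameters are stand-ins for the serial manipulators in the paper, and the recursive filtering/smoothing machinery used to invert the Hessian is replaced here by a direct solve.

```python
import numpy as np

L1, L2 = 1.0, 0.8  # link lengths of a hypothetical planar two-link arm

def tip(q):
    """Forward kinematics: tip position for joint angles q = (q1, q2)."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_gauss_newton(target, q0, iters=100):
    """Minimize ||tip(q) - target||^2 with Gauss-Newton steps: the Hessian
    is approximated by J^T J, i.e., products of first-order derivatives."""
    q = np.asarray(q0, dtype=float)
    for _ in range(iters):
        r = tip(q) - target                     # residual: distance to goal
        J = jacobian(q)
        # Small damping keeps the approximate Hessian invertible.
        step = np.linalg.solve(J.T @ J + 1e-9 * np.eye(2), J.T @ r)
        q -= step
    return q

target = np.array([1.2, 0.9])
q = ik_gauss_newton(target, q0=[0.3, 0.5])
print(np.round(tip(q), 6))  # ≈ target
```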

  10. Correlation between the length reduction of carbon nanotubes and the electrical percolation threshold of melt compounded polyolefin composites.

    PubMed

    Vasileiou, Alexandros A; Kontopoulou, Marianna; Gui, Hua; Docoslis, Aristides

    2015-01-28

    The objectives of this work are to quantify the degree of multiwalled carbon nanotube (MWCNT) length reduction upon melt compounding and to demonstrate unambiguously that the length reduction is mainly responsible for the increase in the electrical percolation threshold of the resulting composites. Polyolefin matrices of varying viscosities and different functional groups are melt compounded with MWCNTs. A simple method is developed to solubilize the polymer matrix and isolate the MWCNTs, enabling detailed imaging analysis. In spite of the perceived strength of the MWCNTs, the results demonstrate that the shear forces developed during melt mixing are sufficient to cause significant nanotube breakage and length reduction. Breakage is promoted when higher MWCNT contents are used, due to increased probability of particle collisions. Furthermore, the higher shear forces transmitted to the nanotubes in the presence of higher matrix viscosities and functional groups that promote interfacial interactions shift the nanotube distribution toward smaller sizes. The length reduction of the MWCNTs causes significant increases in the percolation threshold, due to the loss of interconnectivity, which results in fewer conductive pathways. These findings are validated by comparing the experimental percolation threshold values with those predicted by the improved interparticle distance theoretical model.

  11. Comparison of two Galerkin quadrature methods

    DOE PAGES

    Morel, Jim E.; Warsa, James; Franke, Brian C.; ...

    2017-02-21

    Here, we compare two methods for generating Galerkin quadratures. In method 1, the standard S_N method is used to generate the moment-to-discrete matrix and the discrete-to-moment matrix is generated by inverting the moment-to-discrete matrix. This is a particular form of the original Galerkin quadrature method. In method 2, which we introduce here, the standard S_N method is used to generate the discrete-to-moment matrix and the moment-to-discrete matrix is generated by inverting the discrete-to-moment matrix. With an N-point quadrature, method 1 has the advantage that it preserves N eigenvalues and N eigenvectors of the scattering operator in a pointwise sense. With an N-point quadrature, method 2 has the advantage that it generates consistent angular moment equations from the corresponding S_N equations while preserving N eigenvalues of the scattering operator. Our computational results indicate that these two methods are quite comparable for the test problem considered.
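
    The two constructions can be sketched in 1-D slab geometry with Gauss-Legendre ordinates (an illustrative assumption; the paper's S_N setting is more general). With Gauss-Legendre quadrature the standard discrete-to-moment and moment-to-discrete matrices are already inverses of each other, so the two methods coincide here; they differ for other quadrature sets.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

N = 8
mu, w = leggauss(N)  # Gauss-Legendre ordinates and weights (standard S_N)

P = np.array([Legendre.basis(l)(mu) for l in range(N)])  # P[l, n] = P_l(mu_n)
M = ((2 * np.arange(N)[:, None] + 1) / 2 * P).T          # moment-to-discrete
D = P * w                                                # discrete-to-moment

D1 = np.linalg.inv(M)   # method 1: invert the moment-to-discrete matrix
M2 = np.linalg.inv(D)   # method 2: invert the discrete-to-moment matrix

# With Gauss-Legendre ordinates, D @ M = I (the quadrature integrates the
# Legendre orthogonality relations exactly), so both methods recover the
# standard companion operator.
print(np.allclose(D1, D), np.allclose(M2, M))
```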

  12. Comparison of two Galerkin quadrature methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, Jim E.; Warsa, James; Franke, Brian C.

    Here, we compare two methods for generating Galerkin quadratures. In method 1, the standard S_N method is used to generate the moment-to-discrete matrix and the discrete-to-moment matrix is generated by inverting the moment-to-discrete matrix. This is a particular form of the original Galerkin quadrature method. In method 2, which we introduce here, the standard S_N method is used to generate the discrete-to-moment matrix and the moment-to-discrete matrix is generated by inverting the discrete-to-moment matrix. With an N-point quadrature, method 1 has the advantage that it preserves N eigenvalues and N eigenvectors of the scattering operator in a pointwise sense. With an N-point quadrature, method 2 has the advantage that it generates consistent angular moment equations from the corresponding S_N equations while preserving N eigenvalues of the scattering operator. Our computational results indicate that these two methods are quite comparable for the test problem considered.

  13. Theory of activated penetrant diffusion in viscous fluids and colloidal suspensions

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Schweizer, Kenneth S.

    2015-10-01

    We heuristically formulate a microscopic, force level, self-consistent nonlinear Langevin equation theory for activated barrier hopping and non-hydrodynamic diffusion of a hard sphere penetrant in very dense hard sphere fluid matrices. Penetrant dynamics is controlled by a rich competition between force relaxation due to penetrant self-motion and collective matrix structural (alpha) relaxation. In the absence of penetrant-matrix attraction, three activated dynamical regimes are predicted as a function of penetrant-matrix size ratio which are physically distinguished by penetrant jump distance and the nature of matrix motion required to facilitate its hopping. The penetrant diffusion constant decreases the fastest with size ratio for relatively small penetrants where the matrix effectively acts as a vibrating amorphous solid. Increasing penetrant-matrix attraction strength reduces penetrant diffusivity due to physical bonding. For size ratios approaching unity, a distinct dynamical regime emerges associated with strong slaving of penetrant hopping to matrix structural relaxation. A crossover regime at intermediate penetrant-matrix size ratio connects the two limiting behaviors for hard penetrants, but essentially disappears if there are strong attractions with the matrix. Activated penetrant diffusivity decreases strongly with matrix volume fraction in a manner that intensifies as the size ratio increases. We propose and implement a quasi-universal approach for activated diffusion of a rigid atomic/molecular penetrant in a supercooled liquid based on a mapping between the hard sphere system and thermal liquids. Calculations for specific systems agree reasonably well with experiments over a wide range of temperature, covering more than 10 orders of magnitude of variation of the penetrant diffusion constant.

  14. Theory of activated penetrant diffusion in viscous fluids and colloidal suspensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Rui; Schweizer, Kenneth S., E-mail: kschweiz@illinois.edu

    2015-10-14

    We heuristically formulate a microscopic, force level, self-consistent nonlinear Langevin equation theory for activated barrier hopping and non-hydrodynamic diffusion of a hard sphere penetrant in very dense hard sphere fluid matrices. Penetrant dynamics is controlled by a rich competition between force relaxation due to penetrant self-motion and collective matrix structural (alpha) relaxation. In the absence of penetrant-matrix attraction, three activated dynamical regimes are predicted as a function of penetrant-matrix size ratio which are physically distinguished by penetrant jump distance and the nature of matrix motion required to facilitate its hopping. The penetrant diffusion constant decreases the fastest with size ratio for relatively small penetrants where the matrix effectively acts as a vibrating amorphous solid. Increasing penetrant-matrix attraction strength reduces penetrant diffusivity due to physical bonding. For size ratios approaching unity, a distinct dynamical regime emerges associated with strong slaving of penetrant hopping to matrix structural relaxation. A crossover regime at intermediate penetrant-matrix size ratio connects the two limiting behaviors for hard penetrants, but essentially disappears if there are strong attractions with the matrix. Activated penetrant diffusivity decreases strongly with matrix volume fraction in a manner that intensifies as the size ratio increases. We propose and implement a quasi-universal approach for activated diffusion of a rigid atomic/molecular penetrant in a supercooled liquid based on a mapping between the hard sphere system and thermal liquids. Calculations for specific systems agree reasonably well with experiments over a wide range of temperature, covering more than 10 orders of magnitude of variation of the penetrant diffusion constant.

  15. A comparison of linear approaches to filter out environmental effects in structural health monitoring

    NASA Astrophysics Data System (ADS)

    Deraemaeker, A.; Worden, K.

    2018-05-01

    This paper discusses the possibility of using the Mahalanobis squared-distance to perform robust novelty detection in the presence of important environmental variability in a multivariate feature vector. By performing an eigenvalue decomposition of the covariance matrix used to compute that distance, it is shown that the Mahalanobis squared-distance can be written as the sum of independent terms which result from a transformation from the feature vector space to a space of independent variables. In general, especially when the size of the feature vector is large, there are dominant eigenvalues and eigenvectors associated with the covariance matrix, so that a set of principal components can be defined. Because the associated eigenvalues are high, their contribution to the Mahalanobis squared-distance is low, while the contribution of the other components is high due to the low value of the associated eigenvalues. This analysis shows that the Mahalanobis distance naturally filters out the variability in the training data. This property can be used to remove the effect of the environment in damage detection, in much the same way as two other established techniques, principal component analysis and factor analysis. The three techniques are compared here using real experimental data from a wooden bridge for which the feature vector consists of eigenfrequencies and mode shapes collected under changing environmental conditions, as well as damaged conditions simulated with an added mass. The results confirm the similarity between the three techniques and the ability to filter out environmental effects, while keeping a high sensitivity to structural changes. The results also show that even after filtering out the environmental effects, the normality assumption cannot be made for the residual feature vector. 
An alternative is demonstrated here based on extreme value statistics which results in a much better threshold which avoids false positives in the training data, while allowing detection of all damaged cases.
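The eigenvalue decomposition described above can be sketched numerically: the Mahalanobis squared-distance computed from the inverse covariance matrix equals the sum, over covariance eigenvectors, of squared projections divided by the corresponding eigenvalues. The feature data below are synthetic and purely illustrative, not the paper's bridge data.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic training features with correlated components (purely illustrative)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))
mean = X.mean(axis=0)
cov = np.cov(X, rowvar=False)

d = X[0] - mean

# direct definition: d^T C^{-1} d
msd_direct = float(d @ np.linalg.solve(cov, d))

# eigen-decomposition form: sum over components of (v_i . d)^2 / lambda_i
lam, V = np.linalg.eigh(cov)
msd_eig = float(np.sum((V.T @ d) ** 2 / lam))
```

Components with large eigenvalues (the dominant variability in the training data, e.g. environmental effects) are down-weighted by 1/λ, which is the filtering property the paper exploits.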

  16. Construction of phylogenetic trees by kernel-based comparative analysis of metabolic networks.

    PubMed

    Oh, S June; Joung, Je-Gun; Chang, Jeong-Ho; Zhang, Byoung-Tak

    2006-06-06

Inferring the tree of life requires knowledge of the common characteristics of each species descended from a common ancestor as the measuring criteria, and a method to calculate the distance between the resulting values of each measure. Conventional phylogenetic analysis based on genomic sequences provides information about the genetic relationships between different organisms. In contrast, comparative analysis of metabolic pathways in different organisms can yield insights into their functional relationships under different physiological conditions. However, evaluating the similarities or differences between metabolic networks is a computationally challenging problem, and systematic methods of doing this are desirable. Here we introduce a graph-kernel method for computing the similarity between metabolic networks in polynomial time, and use it to profile metabolic pathways and to construct phylogenetic trees. To compare the structures of metabolic networks in organisms, we adopted the exponential graph kernel, a kernel-based approach for labeled graphs that includes a label matrix and an adjacency matrix. To construct the phylogenetic trees, we used the unweighted pair-group method with arithmetic mean (UPGMA), i.e., a hierarchical clustering algorithm. We applied the kernel-based network profiling method in a comparative analysis of nine carbohydrate metabolic networks from 81 biological species encompassing Archaea, Eukaryota, and Eubacteria. The resulting phylogenetic hierarchies generally support the tripartite scheme of three domains rather than the two domains of prokaryotes and eukaryotes. By combining kernel machines with metabolic information, the method infers the context of biosphere development covering the physiological events required for adaptation by genetic reconstruction. The results show that one may obtain a global view of the tree of life by comparing metabolic pathway structures using meta-level information rather than sequence information. This method may yield further information about biological evolution, such as the history of horizontal transfer of each gene, by studying the detailed structure of the phylogenetic tree constructed by the kernel-based method.
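The unweighted pair-group method with arithmetic mean (UPGMA) used above to build the trees can be sketched in a few lines of pure Python; the distance matrix here is hypothetical rather than derived from a graph kernel.

```python
def upgma(dist, names):
    """Minimal UPGMA (average-linkage) hierarchical clustering.

    Returns a nested-tuple tree built from a symmetric distance matrix.
    """
    active = {i: (names[i], 1) for i in range(len(names))}   # id -> (subtree, size)
    d = {(i, j): dist[i][j]
         for i in range(len(names)) for j in range(len(names)) if i < j}
    nid = len(names)
    while len(active) > 1:
        a, b = min(d, key=d.get)                 # closest pair of clusters
        ta, na = active.pop(a)
        tb, nb = active.pop(b)
        for k in list(active):                   # size-weighted average linkage
            dak = d.pop((min(a, k), max(a, k)))
            dbk = d.pop((min(b, k), max(b, k)))
            d[(min(nid, k), max(nid, k))] = (na * dak + nb * dbk) / (na + nb)
        del d[(a, b)]
        active[nid] = ((ta, tb), na + nb)
        nid += 1
    return next(iter(active.values()))[0]

# hypothetical pairwise distances among four organisms
D = [[0.0, 0.2, 0.7, 0.8],
     [0.2, 0.0, 0.6, 0.9],
     [0.7, 0.6, 0.0, 0.3],
     [0.8, 0.9, 0.3, 0.0]]
tree = upgma(D, ["A", "B", "C", "D"])
```

With these distances, A/B and C/D merge first, so the returned tree groups them as two sister pairs.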

  17. Method of producing a hybrid matrix fiber composite

    DOEpatents

Deteresa, Steven J [Livermore, CA]; Lyon, Richard E [Absecon, NJ]; Groves, Scott E [Brentwood, CA]

    2006-03-28

Hybrid matrix fiber composites having enhanced compressive performance as well as enhanced stiffness, toughness and durability suitable for compression-critical applications, and methods for producing such composites using matrix hybridization. The hybrid matrix fiber composites comprise two chemically or physically bonded matrix materials: the first matrix material is used to impregnate multi-filament fibers formed into ribbons, and the second matrix material is placed around and between the fiber ribbons impregnated with the first matrix material; both matrix materials are then cured and solidified.

  18. A new statistical distance scale for planetary nebulae

    NASA Astrophysics Data System (ADS)

    Ali, Alaa; Ismail, H. A.; Alsolami, Z.

    2015-05-01

In the first part of the present article we discuss the consistency among different individual distance methods for Galactic planetary nebulae, while in the second part we develop a new statistical distance scale based on a calibrating sample of well-determined distances. A set of 315 planetary nebulae with individual distances was extracted from the literature. Inspection of the data set indicates that the accuracy of distances varies among different individual methods and also among different sources where the same individual method was applied. Therefore, we derive a reliable weighted mean distance for each object by considering the influence of the distance error and the weight of each individual method. The results reveal that the discussed individual methods are consistent with each other, except the gravity method, which produces larger distances than the other individual methods. From the initial data set, we construct a standard calibrating sample consisting of 82 objects. This sample is restricted to objects with distances determined from at least two different individual methods, except for a few objects with trusted distances determined from the trigonometric, spectroscopic, and cluster-membership methods. In addition to its well-determined distances, this sample shows many advantages over those used in prior distance scales. The sample is used to recalibrate the mass-radius and radio surface brightness temperature-radius relationships. An average error of ˜30% is estimated for the new distance scale. The new distance scale is compared with the most widely used statistical scales in the literature; the results show that it is roughly similar to the majority of them within a ˜±20% difference. Furthermore, the new scale yields a weighted mean distance to the Galactic center of 7.6±1.35 kpc, which is in good agreement with the very recent measurement of Malkin 2013.
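A minimal sketch of the inverse-variance weighted mean commonly used to combine independent distance estimates with quoted errors; the numbers below are hypothetical, not from the paper's catalogue, and the paper's actual weighting scheme (by method and by error) may differ in detail.

```python
def weighted_mean_distance(distances, errors):
    """Inverse-variance weighted mean of independent distance estimates (kpc)."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * d for w, d in zip(weights, distances)) / sum(weights)
    err = (1.0 / sum(weights)) ** 0.5   # error of the weighted mean
    return mean, err

# hypothetical nebula with three individual estimates and their quoted errors
mean, err = weighted_mean_distance([1.2, 1.5, 1.1], [0.1, 0.4, 0.2])
```

The combined error is smaller than the best individual error, and poorly determined estimates contribute little to the mean.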

  19. Approximate method of variational Bayesian matrix factorization/completion with sparse prior

    NASA Astrophysics Data System (ADS)

    Kawasumi, Ryota; Takeda, Koujin

    2018-05-01

We derive the analytical expression of a matrix factorization/completion solution by the variational Bayes method, under the assumption that the observed matrix is originally the product of low-rank, dense and sparse matrices with additive noise. Taking matrix sparsity into consideration, we assume a Laplace distribution as the prior of the sparse matrix. We then use several approximations to derive the matrix factorization/completion solution. Using our solution, we also numerically evaluate the performance of sparse matrix reconstruction in matrix factorization, and of the completion of missing matrix elements in matrix completion.

  20. Geochemistry of Enceladus and the Galilean Moons from in situ Analysis of Ejecta

    NASA Astrophysics Data System (ADS)

    Postberg, F.; Schmidt, J.; Hillier, J. K.; Kempf, S.; Srama, R.

    2012-09-01

The contribution of Cassini's dust detector CDA in revealing subsurface liquid water on Enceladus has demonstrated how questions in planetary science can be addressed by in situ analyses of icy dust particles. As the measurements are particularly sensitive to non-ice compounds embedded in an ice matrix, concentrations of various salts and organic compounds can be identified in different dust populations. This has been successfully demonstrated at Enceladus, giving insights into the moon's subsurface geochemistry. This method can be applied to any planetary body that ejects particles to distances suitable for spacecraft sensing. The Galilean moons are of particular relevance since they are believed to steadily emit grains from their surfaces, either by active volcanism (Io) or stimulated by micrometeoroid bombardment (Europa, Ganymede, Callisto).

  1. Proposal of the genus Sphingomonas sensu stricto and three new genera, Sphingobium, Novosphingobium and Sphingopyxis, on the basis of phylogenetic and chemotaxonomic analyses.

    PubMed

    Takeuchi, M; Hamana, K; Hiraishi, A

    2001-07-01

    Phylogenetic analyses of 16S rRNA gene sequences by distance matrix and parsimony methods indicated that the currently known species of the genus Sphingomonas can be divided into four clusters. Some chemotaxonomic and phenotypic differences were noted among these clusters. Three new genera, Sphingobium, Novosphingobium and Sphingopyxis, are proposed in addition to the genus Sphingomonas sensu stricto. The genus Sphingobium is proposed to accommodate Sphingomonas chlorophenolica, Sphingomonas herbicidovorans and Sphingomonas yanoikuyae. The genus Novosphingobium is proposed for Sphingomonas aromaticivorans, Sphingomonas capsulata, Sphingomonas rosa, Sphingomonas stygia, Sphingomonas subarctica and Sphingomonas subterranea. Sphingomonas macrogoltabidus and Sphingomonas terrae are reclassified in the genus Sphingopyxis. The type species of Sphingobium, Novosphingobium and Sphingopyxis are Sphingobium yanoikuyae, Novosphingobium capsulatum and Sphingopyxis macrogoltabida, respectively.

  2. The fast algorithm of spark in compressive sensing

    NASA Astrophysics Data System (ADS)

    Xie, Meihua; Yan, Fengxia

    2017-01-01

Compressed Sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the reconstruction condition of a signal is an important theoretical problem, and spark is a good index for studying it. But the computation of spark is NP-hard. In this paper, we study the problem of computing spark. For some special matrices, for example, the Gaussian random matrix and the 0-1 random matrix, we obtain some conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For a general matrix, two methods are given to compute its spark. One is a direct-search method and the other is a dual-tree search method. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we tested the computation time of these two methods. Numerical results showed that the dual-tree search method had higher efficiency than direct search, especially for matrices with as many rows as columns.
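A brute-force version of direct search can be sketched as follows: the spark is the size of the smallest linearly dependent subset of columns, so we check subsets of increasing size. This is feasible only for small matrices (the dual-tree search in the paper is the faster alternative), and the rank tolerance is an assumption of this sketch.

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Exhaustive spark: size of the smallest linearly dependent column subset."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                return k
    return n + 1   # all columns independent; spark is conventionally infinite

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 8))
s = spark(A)   # for a Gaussian matrix with m < n, spark = m + 1 almost surely
```

This reproduces the abstract's claim on a small example: any 4 Gaussian columns in 4 dimensions are independent with probability 1, while any 5 must be dependent.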

  3. A sparse matrix-vector multiplication based algorithm for accurate density matrix computations on systems of millions of atoms

    NASA Astrophysics Data System (ADS)

    Ghale, Purnima; Johnson, Harley T.

    2018-06-01

    We present an efficient sparse matrix-vector (SpMV) based method to compute the density matrix P from a given Hamiltonian in electronic structure computations. Our method is a hybrid approach based on Chebyshev-Jackson approximation theory and matrix purification methods like the second order spectral projection purification (SP2). Recent methods to compute the density matrix scale as O(N) in the number of floating point operations but are accompanied by large memory and communication overhead, and they are based on iterative use of the sparse matrix-matrix multiplication kernel (SpGEMM), which is known to be computationally irregular. In addition to irregularity in the sparse Hamiltonian H, the nonzero structure of intermediate estimates of P depends on products of H and evolves over the course of computation. On the other hand, an expansion of the density matrix P in terms of Chebyshev polynomials is straightforward and SpMV based; however, the resulting density matrix may not satisfy the required constraints exactly. In this paper, we analyze the strengths and weaknesses of the Chebyshev-Jackson polynomials and the second order spectral projection purification (SP2) method, and propose to combine them so that the accurate density matrix can be computed using the SpMV computational kernel only, and without having to store the density matrix P. Our method accomplishes these objectives by using the Chebyshev polynomial estimate as the initial guess for SP2, which is followed by using sparse matrix-vector multiplications (SpMVs) to replicate the behavior of the SP2 algorithm for purification. We demonstrate the method on a tight-binding model system of an oxide material containing more than 3 million atoms. In addition, we also present the predicted behavior of our method when applied to near-metallic Hamiltonians with a wide energy spectrum.
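The SP2 purification step that the method above replicates can be illustrated with a dense, toy-scale sketch. A production implementation would work matrix-free via SpMV and use Gershgorin-type spectral bounds instead of diagonalization; the diagonal Hamiltonian and occupation number below are made-up illustration values.

```python
import numpy as np

def sp2_density(H, n_occ, iters=100):
    """SP2 purification (dense toy version): density matrix of n_occ lowest states."""
    eig = np.linalg.eigvalsh(H)          # a real code would use Gershgorin bounds
    e_min, e_max = eig[0], eig[-1]
    X = (e_max * np.eye(len(H)) - H) / (e_max - e_min)  # spectrum mapped into [0, 1]
    for _ in range(iters):
        if np.trace(X) > n_occ:
            X = X @ X                    # lowers all eigenvalues, reduces trace
        else:
            X = 2 * X - X @ X            # raises all eigenvalues, increases trace
    return X

H = np.diag([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])   # toy Hamiltonian
P = sp2_density(H, n_occ=2)
```

The iteration drives the mapped eigenvalues to 0 or 1, so P converges to the idempotent projector onto the two lowest-energy states with trace equal to the occupation number.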

  4. Gene Expression Data to Mouse Atlas Registration Using a Nonlinear Elasticity Smoother and Landmark Points Constraints

    PubMed Central

    Lin, Tungyou; Guyader, Carole Le; Dinov, Ivo; Thompson, Paul; Toga, Arthur; Vese, Luminita

    2013-01-01

    This paper proposes a numerical algorithm for image registration using energy minimization and nonlinear elasticity regularization. Application to the registration of gene expression data to a neuroanatomical mouse atlas in two dimensions is shown. We apply a nonlinear elasticity regularization to allow larger and smoother deformations, and further enforce optimality constraints on the landmark points distance for better feature matching. To overcome the difficulty of minimizing the nonlinear elasticity functional due to the nonlinearity in the derivatives of the displacement vector field, we introduce a matrix variable to approximate the Jacobian matrix and solve for the simplified Euler-Lagrange equations. By comparison with image registration using linear regularization, experimental results show that the proposed nonlinear elasticity model also needs fewer numerical corrections such as regridding steps for binary image registration, it renders better ground truth, and produces larger mutual information; most importantly, the landmark points distance and L2 dissimilarity measure between the gene expression data and corresponding mouse atlas are smaller compared with the registration model with biharmonic regularization. PMID:24273381

  5. Optical drift effects in general relativity

    NASA Astrophysics Data System (ADS)

    Korzyński, Mikołaj; Kopiński, Jarosław

    2018-03-01

We consider the question of determining the optical drift effects in general relativity, i.e. the rate of change of the apparent position, redshift, Jacobi matrix, angular distance and luminosity distance of a distant object as registered by an observer in an arbitrary spacetime. We present a fully relativistic and covariant approach, in which the problem is reduced to a hierarchy of ODEs solved along the line of sight. The 4-velocities and 4-accelerations of the observer and the emitter and the geometry of the spacetime along the line of sight constitute the input data. We build on the standard relativistic geometric optics formalism and extend it to include the time derivatives of the observables. In the process we obtain two general, non-perturbative relations: the first one between the gravitational lensing, represented by the Jacobi matrix, and the apparent position drift, also called the cosmic parallax, and the second one between the apparent position drift and the redshift drift. The applications of the results include the theoretical study of the drift effects of cosmological origin (so-called real-time cosmology) in numerical or exact Universe models.

  6. Morphological and Wear behaviour of new Al-SiCmicro-SiCnano hybrid nanocomposites fabricated through powder metallurgy

    NASA Astrophysics Data System (ADS)

    Arif, Sajjad; Tanwir Alam, Md; Aziz, Tariq; Ansari, Akhter H.

    2018-04-01

In the present work, aluminium matrix composites reinforced with 10 wt% SiC micro particles along with x% SiC nano particles (x = 0, 1, 3, 5 and 7 wt%) were fabricated through powder metallurgy. The fabricated hybrid composites were characterized by x-ray diffractometer (XRD), scanning electron microscope (SEM), energy dispersive spectrum (EDS) and elemental mapping. The relative density, hardness and wear behaviour of all hybrid nanocomposites were studied. The influence of various control factors like SiC reinforcement, sliding distance (300, 600, 900 and 1200 m) and applied load (20, 30 and 40 N) was explored using a pin-on-disc wear apparatus. The uniform distribution of micro and nano SiC particles in the aluminium matrix is confirmed by elemental maps. The hardness and wear test results showed that the properties of the hybrid composite containing 5 wt% nano SiC were better than those of the other hybrid composites. Additionally, the wear loss of all hybrid nanocomposites increases with increasing sliding distance and applied load. The wear mechanisms were identified from SEM images of the worn surfaces.

  7. Quarkonium polarization and the long distance matrix elements hierarchies using jet substructure

    NASA Astrophysics Data System (ADS)

    Dai, Lin; Shrivastava, Prashant

    2017-08-01

We investigate the quarkonium production mechanisms in jets at the LHC, using the fragmenting jet functions (FJF) approach. Specifically, we discuss the jet energy dependence of the J/ψ production cross section at the LHC. By comparing the cross sections for the different NRQCD production channels (1S0[8], 3S1[8], 3PJ[8], and 3S1[1]), we find that at fixed values of the energy fraction z carried by the J/ψ, if the normalized cross section is a decreasing function of the jet energy, in particular for z > 0.5, then the depolarizing 1S0[8] must be the dominant channel. This makes the prediction made in [Baumgart et al., J. High Energy Phys. 11 (2014) 003, 10.1007/JHEP11(2014)003] for the FJFs also true for the cross section. We also make comparisons between the long distance matrix elements extracted by various groups. This analysis could potentially shed light on the polarization properties of J/ψ production in the high-pT region.

  8. Optimized Projection Matrix for Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Xu, Jianping; Pi, Yiming; Cao, Zongjie

    2010-12-01

Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. Until now, papers on CS have typically assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. This method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. It is impossible to solve the problem exactly because of its complexity; therefore, an alternating-minimization-type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the necessary number of samples for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
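The quantity being minimized, mutual coherence, is simple to compute for a given projection/sparsifying pair; the Gaussian projection matrix below is the conventional random baseline that the optimized design improves upon, with made-up dimensions.

```python
import numpy as np

def mutual_coherence(M):
    """Maximum absolute inner product between distinct normalized columns."""
    Mn = M / np.linalg.norm(M, axis=0)
    G = np.abs(Mn.T @ Mn)                 # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)              # ignore self-correlations
    return float(G.max())

rng = np.random.default_rng(0)
Phi = rng.normal(size=(20, 50))                      # random projection matrix
Psi = np.linalg.qr(rng.normal(size=(50, 50)))[0]     # orthonormal sparsifying basis
mu = mutual_coherence(Phi @ Psi)
```

The Welch bound sqrt((n-m)/(m(n-1))) is the floor that ETF-based designs approach; a random Gaussian projection sits well above it.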

  9. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    PubMed

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.

  10. The applicability of ordinary least squares to consistently short distances between taxa in phylogenetic tree construction and the normal distribution test consequences.

    PubMed

    Roux, C Z

    2009-05-01

    Short phylogenetic distances between taxa occur, for example, in studies on ribosomal RNA-genes with slow substitution rates. For consistently short distances, it is proved that in the completely singular limit of the covariance matrix ordinary least squares (OLS) estimates are minimum variance or best linear unbiased (BLU) estimates of phylogenetic tree branch lengths. Although OLS estimates are in this situation equal to generalized least squares (GLS) estimates, the GLS chi-square likelihood ratio test will be inapplicable as it is associated with zero degrees of freedom. Consequently, an OLS normal distribution test or an analogous bootstrap approach will provide optimal branch length tests of significance for consistently short phylogenetic distances. As the asymptotic covariances between branch lengths will be equal to zero, it follows that the product rule can be used in tree evaluation to calculate an approximate simultaneous confidence probability that all interior branches are positive.
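For the smallest case, three taxa joined at a single interior node, the OLS branch-length estimates have a closed form, since each pairwise distance is exactly the sum of the two branch lengths involved; the distances below are hypothetical.

```python
def star_branch_lengths(d12, d13, d23):
    """Exact (and OLS) branch lengths for a three-taxon star tree.

    Solves d_ij = b_i + b_j for the three branch lengths.
    """
    b1 = (d12 + d13 - d23) / 2.0
    b2 = (d12 + d23 - d13) / 2.0
    b3 = (d13 + d23 - d12) / 2.0
    return b1, b2, b3

b1, b2, b3 = star_branch_lengths(0.30, 0.40, 0.50)
```

Each reconstructed pairwise distance (e.g. b1 + b2) reproduces the input exactly, which is the sense in which OLS is unbiased here.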

  11. The ability of individuals to assess population density influences the evolution of emigration propensity and dispersal distance.

    PubMed

    Poethke, Hans Joachim; Gros, Andreas; Hovestadt, Thomas

    2011-08-07

    We analyze the simultaneous evolution of emigration and settlement decisions for actively dispersing species differing in their ability to assess population density. Using an individual-based model we simulate dispersal as a multi-step (patch to patch) movement in a world consisting of habitat patches surrounded by a hostile matrix. Each such step is associated with the same mortality risk. Our simulations show that individuals following an informed strategy, where emigration (and settlement) probability depends on local population density, evolve a lower (natal) emigration propensity but disperse over significantly larger distances - i.e. postpone settlement longer - than individuals performing density-independent emigration. This holds especially when variation in environmental conditions is spatially correlated. Both effects can be traced to the informed individuals' ability to better exploit existing heterogeneity in reproductive chances. Yet, already moderate distance-dependent dispersal costs prevent the evolution of multi-step (long-distance) dispersal, irrespective of the dispersal strategy. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Continuous fiber ceramic matrix composites for heat engine components

    NASA Technical Reports Server (NTRS)

    Tripp, David E.

    1988-01-01

    High strength at elevated temperatures, low density, resistance to wear, and abundance of nonstrategic raw materials make structural ceramics attractive for advanced heat engine applications. Unfortunately, ceramics have a low fracture toughness and fail catastrophically because of overload, impact, and contact stresses. Ceramic matrix composites provide the means to achieve improved fracture toughness while retaining desirable characteristics, such as high strength and low density. Materials scientists and engineers are trying to develop the ideal fibers and matrices to achieve the optimum ceramic matrix composite properties. A need exists for the development of failure models for the design of ceramic matrix composite heat engine components. Phenomenological failure models are currently the most frequently used in industry, but they are deterministic and do not adequately describe ceramic matrix composite behavior. Semi-empirical models were proposed, which relate the failure of notched composite laminates to the stress a characteristic distance away from the notch. Shear lag models describe composite failure modes at the micromechanics level. The enhanced matrix cracking stress occurs at the same applied stress level predicted by the two models of steady state cracking. Finally, statistical models take into consideration the distribution in composite failure strength. The intent is to develop these models into computer algorithms for the failure analysis of ceramic matrix composites under monotonically increasing loads. The algorithms will be included in a postprocessor to general purpose finite element programs.

  13. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

Background Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher quality gene clustering patterns than most other clustering methods. However, few gene order computing methods are available, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Further, their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods Using different distance formulas (Pearson distance, Euclidean distance, and the squared Euclidean distance) under various conditions, gene orders were calculated by the ACO and GA (standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula for AD microarray data with either the GA or the ACO method. Conclusion Compared with the Pearson distance and the Euclidean distance, the squared Euclidean distance generated the best quality gene order computed by the GA and ACO methods. PMID:23369541
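The three distance formulas compared above can be sketched directly; the toy expression profiles are invented to show why Pearson distance (shape only) and the Euclidean distances (magnitude-sensitive) can rank gene neighbors differently.

```python
import numpy as np

def pearson_distance(x, y):
    """1 minus the Pearson correlation: sensitive to profile shape, not scale."""
    return 1.0 - float(np.corrcoef(x, y)[0, 1])

def euclidean_distance(x, y):
    return float(np.linalg.norm(np.asarray(x, float) - np.asarray(y, float)))

def squared_euclidean_distance(x, y):
    """The formula reported above as giving the best gene orders."""
    return euclidean_distance(x, y) ** 2

# two toy expression profiles: same shape, different magnitude
a, b = [1.0, 2.0, 3.0], [4.0, 6.0, 8.0]
```

Here the Pearson distance is essentially zero (the profiles are perfectly correlated) while the Euclidean distances are large, so the two families of formulas induce genuinely different gene orders.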

  14. Comparison of matrix effects in HPLC-MS/MS and UPLC-MS/MS analysis of nine basic pharmaceuticals in surface waters.

    PubMed

    Van De Steene, Jet C; Lambert, Willy E

    2008-05-01

When developing an LC-MS/MS method, matrix effects are a major issue. The effect of co-eluting compounds arising from the matrix can result in signal enhancement or suppression. During method development, much attention should be paid to diminishing matrix effects as much as possible. The present work evaluates matrix effects from aqueous environmental samples in the simultaneous analysis of a group of 9 specific pharmaceuticals with HPLC-ESI/MS/MS and UPLC-ESI/MS/MS: flubendazole, propiconazole, pipamperone, cinnarizine, ketoconazole, miconazole, rabeprazole, itraconazole and domperidone. When HPLC-MS/MS is used, matrix effects are substantial and cannot be compensated for with analogue internal standards. For different surface water samples, different matrix effects are found; for accurate quantification the standard addition approach is necessary. Due to the better resolution and narrower peaks in UPLC, analytes co-elute less with interferences during ionisation, so matrix effects could be lower, or even eliminated. If matrix effects are eliminated with this technique, the standard addition method for quantification can be omitted and the overall method simplified. Results show that matrix effects are almost eliminated if internal standards (structural analogues) are used. Instead of the time-consuming and labour-intensive standard addition method, with UPLC internal standardization can be used for quantification, and the overall method is substantially simplified.
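One common way to quantify matrix effects (e.g. in post-extraction spike comparisons) is the ratio of analyte peak areas measured in matrix versus in pure solvent; this sketch uses that conventional definition with hypothetical peak areas, not the paper's measurements.

```python
def matrix_effect_percent(area_in_matrix, area_in_solvent):
    """Matrix effect as a percentage: 100 = none, <100 suppression, >100 enhancement."""
    return 100.0 * area_in_matrix / area_in_solvent

# hypothetical peak areas for one analyte spiked post-extraction
me = matrix_effect_percent(7.2e5, 9.0e5)   # 80% -> 20% ionisation suppression
```

An analogue internal standard compensates only insofar as it experiences the same suppression or enhancement as the analyte, which is why it works better once UPLC reduces co-elution.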

  15. Palmprint verification using Lagrangian decomposition and invariant interest points

    NASA Astrophysics Data System (ADS)

    Gupta, P.; Rattani, A.; Kisku, D. R.; Hwang, C. J.; Sing, J. K.

    2011-06-01

This paper presents a palmprint-based verification system using SIFT features and a Lagrangian network graph technique. We employ SIFT for feature extraction from palmprint images, where the region of interest (ROI), extracted from the wide palm texture at the preprocessing stage, is used for invariant point extraction. Finally, identity is established by finding the permutation matrix for a pair of reference and probe palm graphs drawn on the extracted SIFT features; the permutation matrix is used to minimize the distance between the two graphs. The proposed system has been tested on the CASIA and IITK palmprint databases, and experimental results reveal the effectiveness and robustness of the system.

  16. Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models

    NASA Astrophysics Data System (ADS)

    Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing

    2018-06-01

The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.

  17. Molecular phylogeny of the hominoid primates as indicated by two-dimensional protein electrophoresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldman, D.; Giri, P.R.; O'Brien, J.O.

    1987-05-01

A molecular phylogeny for the hominoid primates was constructed by using genetic distances from a survey of 383 radiolabeled fibroblast polypeptides resolved by two-dimensional electrophoresis (2DE). An internally consistent matrix of Nei genetic distances was generated on the basis of variants in electrophoretic position. The derived phylogenetic tree indicated a branching sequence, from oldest to most recent, of cercopithecoids (Macaca fascicularis), gibbon-siamang, orangutan, gorilla, and human-chimpanzee. A cladistic analysis of 240 electrophoretic characters that varied between ape species produced an identical tree. Genetic distance measures obtained by 2DE are largely consistent with those generated by other molecular procedures. In addition, the 2DE data set appears to resolve the human-chimpanzee-gorilla trichotomy in favor of a more recent association of chimpanzees and humans.

  18. Genetic code, hamming distance and stochastic matrices.

    PubMed

    He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E

    2004-09-01

    In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
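The construction can be sketched directly from the Gray encoding given above: encode every k-mer as a bit string, take pairwise Hamming distances, and verify the symmetry and constant row/column sums (doubly stochastic after dividing by the common row sum).

```python
import numpy as np
from itertools import product

GRAY = {"C": "00", "U": "10", "G": "11", "A": "01"}   # the paper's encoding

def hamming_matrix(k):
    """Matrix of Hamming distances between the Gray codes of all k-mers."""
    kmers = ["".join(p) for p in product("CUGA", repeat=k)]
    codes = ["".join(GRAY[base] for base in s) for s in kmers]
    return np.array([[sum(a != b for a, b in zip(x, y)) for y in codes]
                     for x in codes])

H2 = hamming_matrix(2)   # 16 x 16 matrix for dinucleotides
```

Because the Hamming distance adds per position and each position contributes the same distance multiset to every row, all rows (and by symmetry all columns) sum to the same constant.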

  19. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    NASA Astrophysics Data System (ADS)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

Stocks, as a concrete manifestation of financial time series containing plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through dissimilarity matrices based on modified cross-sample entropy; three-dimensional perceptual maps of the results are then provided through multidimensional scaling. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. This implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by differing financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can time series generated by different models be distinguished, but series generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups corresponding to five regions: Europe, North America, South America, the Asia-Pacific (with the exception of mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in these experiments than MDSC.
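
    The final embedding step the abstract describes is classical MDS applied to a dissimilarity matrix. A minimal sketch of that step (standard classical scaling, not the authors' code; in their setting the entropy-based dissimilarities would replace the toy distance matrix D):

```python
import numpy as np

def classical_mds(D, k=3):
    """Classical MDS: embed n points in k dimensions from an n x n
    dissimilarity matrix D via double centring and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
    w, V = np.linalg.eigh(B)                 # ascending eigenvalues
    idx = np.argsort(w)[::-1][:k]            # keep the k largest
    scale = np.sqrt(np.clip(w[idx], 0.0, None))
    return V[:, idx] * scale                 # one row of coordinates per series

# Toy check: distances between 4 points on a line are recovered exactly
pts = np.array([[0.0], [1.0], [3.0], [6.0]])
D = np.abs(pts - pts.T)
X = classical_mds(D, k=1)
D_hat = np.abs(X - X.T)
print(np.allclose(D, D_hat))  # True
```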

  20. A Truncated Nuclear Norm Regularization Method Based on Weighted Residual Error for Matrix Completion.

    PubMed

    Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin

    2016-01-01

Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. The truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results on both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and iteratively reweighted nuclear norm (IRNN) methods.
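
    The truncated nuclear norm the abstract refers to is the sum of all singular values except the r largest, which vanishes exactly when the matrix has rank at most r. A small illustration (a generic sketch of the norm itself, not the TNNR-WRE algorithm):

```python
import numpy as np

def nuclear_norm(X):
    return float(np.linalg.svd(X, compute_uv=False).sum())

def truncated_nuclear_norm(X, r):
    """Sum of every singular value except the r largest: sum_{i > r} sigma_i."""
    s = np.linalg.svd(X, compute_uv=False)       # returned in descending order
    return float(s[r:].sum())

# For a rank-2 matrix, dropping the top 2 singular values leaves (numerically)
# zero, so the truncated norm tracks rank more tightly than the nuclear norm
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))
print(round(truncated_nuclear_norm(X, 2), 8))  # 0.0
print(nuclear_norm(X) > 0)                     # True
```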

  1. Matrix method for acoustic levitation simulation.

    PubMed

    Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C

    2011-08-01

    A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.

  2. Optical matrix-matrix multiplication method demonstrated by the use of a multifocus hololens

    NASA Technical Reports Server (NTRS)

    Liu, H. K.; Liang, Y.-Z.

    1984-01-01

A method of optical matrix-matrix multiplication is presented. The feasibility of the method is experimentally demonstrated using a dichromated-gelatin multifocus holographic lens (hololens). For the specific pairs of 3 x 3 matrices chosen, the average percentage error between the theoretical and experimental values of the output matrix elements is 0.4 percent, which corresponds to 8-bit accuracy.

  3. Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method

    PubMed Central

    Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing

    2017-01-01

    Abstract. Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach’s feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method. PMID:28464120

  4. Entanglement Entropy in Two-Dimensional String Theory.

    PubMed

    Hartnoll, Sean A; Mazenc, Edward A

    2015-09-18

    To understand an emergent spacetime is to understand the emergence of locality. Entanglement entropy is a powerful diagnostic of locality, because locality leads to a large amount of short distance entanglement. Two-dimensional string theory is among the very simplest instances of an emergent spatial dimension. We compute the entanglement entropy in the large-N matrix quantum mechanics dual to two-dimensional string theory in the semiclassical limit of weak string coupling. We isolate a logarithmically large, but finite, contribution that corresponds to the short distance entanglement of the tachyon field in the emergent spacetime. From the spacetime point of view, the entanglement is regulated by a nonperturbative "graininess" of space.

  5. Trailing Vortex-Induced Loads During Close Encounters in Cruise

    NASA Technical Reports Server (NTRS)

    Mendenhall, Michael R.; Lesieutre, Daniel J; Kelly, Michael J.

    2015-01-01

    The trailing vortex induced aerodynamic loads on a Falcon 20G business jet flying in the wake of a DC-8 are predicted to provide a preflight estimate of safe trail distances during flight test measurements in the wake. Static and dynamic loads on the airframe flying in the near wake are shown at a matrix of locations, and the dynamic motion of the Falcon 20G during traverses of the DC-8 primary trailing vortex is simulated. Safe trailing distances for the test flights are determined, and optimum vortex traverse schemes are identified to moderate the motion of the trailing aircraft during close encounters with the vortex wake.

  6. (Non-)Arguments in Long-Distance Extractions.

    PubMed

    Nyvad, Anne Mette; Kizach, Johannes; Christensen, Ken Ramshøj

    2015-10-01

    Previous research has shown that in fully grammatical sentences, response time increases and acceptability decreases when the filler in a long-distance extraction is incompatible with the matrix verb. This effect could potentially be due to a difference between argument and adjunct extraction. In this paper we investigate the effect of long extraction of arguments and adjuncts where incompatibility is kept constant. Based on the results from two offline surveys and an online experiment, we argue that the argument/adjunct asymmetry in terms of acceptability is due to differences in processing difficulty, but that both types of extraction involve the same intermediate attachment sites in the online processing.

  7. [Comparison of film-screen combinations in contrast-detail diagram and with interactive image analysis. 3: Trimodal histograms of gray scale distribution in bar groups of lead pattern images].

    PubMed

    Hagemann, G; Eichbaum, G; Stamm, G

    1998-05-01

The following four screen-film combinations were compared: a) a combination of anticrossover film and UV-light-emitting screens, b) a combination of blue-light-emitting screens and film, and c) two conventional green-fluorescing screen-film combinations. Radiographs of a specially designed plexiglass phantom (0.2 x 0.2 x 0.12 m3) with bar patterns of lead, plaster, and air were obtained using the following parameters: 12-pulse generator, 0.6 mm focus size, 4.7 mm aluminum prefilter, a grid with 40 lines/cm (12:1), and a focus-detector distance of 1.15 m. Image analysis was performed using an Ibas system and a Zeiss Kontron computer. Display conditions were the following: display distance 0.12 m, a vario film objective 35/70 (Zeiss), a video camera tube with a PbO photocathode, 625 lines (Siemens Heimann), and an Ibas image matrix of 512 x 512 pixels with a spatial resolution of ca. 7 cycles/mm; the projected matrix area was 5000 µm². Maxima in the histograms of a grouped bar pattern were estimated as mean values from the bar and gap regions ("mean value method"). They were used to calculate signal contrast, standard deviations of the means, and scatter fraction. Comparison of the histograms with respect to spatial resolution and kV setting shows a clear advantage for the UVR system. The quantitative analysis yielded a maximum spatial resolution of approx. 3 cycles/mm for the UVR system at 60 kV, which decreased to half of this value at 117 kV owing to the increasing influence of scattered radiation. A ranking of screen-film systems with respect to image quality and dose requirement is presented. For this evaluation, interactive image analysis using the mean value method was found to be superior to signal/noise ratio measurements and visual analysis with respect to diagnostic relevance and time savings.

  8. The Bioactivity of Cartilage Extracellular Matrix in Articular Cartilage Regeneration

    PubMed Central

    Sutherland, Amanda J.; Converse, Gabriel L.; Hopkins, Richard A.; Detamore, Michael S.

    2014-01-01

Cartilage matrix is a particularly promising acellular material for cartilage regeneration, given the evidence supporting its chondroinductive character. The ‘raw materials’ of cartilage matrix can serve as building blocks and signals for enhanced tissue regeneration. These matrices can be created by chemical or physical methods: physical methods disrupt cellular membranes and nuclei but may not fully remove all cell components and DNA, whereas chemical methods combined with physical methods are particularly effective in fully decellularizing such materials. Critical endpoints include no detectable residual DNA or immunogenic antigens. It is important to first distinguish between the sources of the cartilage matrix, i.e., matrix produced by cells in vitro versus native tissue, and then to further characterize the cartilage matrix by processing method, i.e., decellularization or devitalization. With these distinctions, four types of cartilage matrices exist: decellularized native cartilage (DCC), devitalized native cartilage (DVC), decellularized cell-derived matrix (DCCM), and devitalized cell-derived matrix (DVCM). Delivery of cartilage matrix may be a straightforward approach that requires no additional cells or growth factors, and without additional biological additives, cartilage matrix may be attractive from a regulatory and commercialization standpoint. Source and delivery method are important considerations for clinical translation. Only one currently marketed cartilage matrix medical device is decellularized, although trends in filed patents suggest additional decellularized products may become available. To choose the most relevant source and processing for cartilage matrix, qualifying testing needs to include targeting the desired application, optimizing delivery of the material, identifying relevant FDA regulations, assessing the availability of raw materials, and characterizing the immunogenic properties of the product. PMID:25044502

  9. Novel image analysis methods for quantification of in situ 3-D tendon cell and matrix strain.

    PubMed

    Fung, Ashley K; Paredes, J J; Andarawis-Puri, Nelly

    2018-01-23

    Macroscopic tendon loads modulate the cellular microenvironment leading to biological outcomes such as degeneration or repair. Previous studies have shown that damage accumulation and the phases of tendon healing are marked by significant changes in the extracellular matrix, but it remains unknown how mechanical forces of the extracellular matrix are translated to mechanotransduction pathways that ultimately drive the biological response. Our overarching hypothesis is that the unique relationship between extracellular matrix strain and cell deformation will dictate biological outcomes, prompting the need for quantitative methods to characterize the local strain environment. While 2-D methods have successfully calculated matrix strain and cell deformation, 3-D methods are necessary to capture the increased complexity that can arise due to high levels of anisotropy and out-of-plane motion, particularly in the disorganized, highly cellular, injured state. In this study, we validated the use of digital volume correlation methods to quantify 3-D matrix strain using images of naïve tendon cells, the collagen fiber matrix, and injured tendon cells. Additionally, naïve tendon cell images were used to develop novel methods for 3-D cell deformation and 3-D cell-matrix strain, which is defined as a quantitative measure of the relationship between matrix strain and cell deformation. The results support that these methods can be used to detect strains with high accuracy and can be further extended to an in vivo setting for observing temporal changes in cell and matrix mechanics during degeneration and healing. Copyright © 2017. Published by Elsevier Ltd.

  10. Implementation of total focusing method for phased array ultrasonic imaging on FPGA

    NASA Astrophysics Data System (ADS)

    Guo, JianQiang; Li, Xi; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2015-02-01

This paper describes a multi-FPGA imaging system dedicated to real-time imaging using the Total Focusing Method (TFM) and Full Matrix Capture (FMC). The system was entirely described in the Verilog HDL language and implemented on an Altera Stratix IV GX FPGA development board. The algorithm proceeds as follows: establish an image coordinate system and divide it into a grid; for each focus point, calculate the total acoustic path from the transmitting element to the focus point and back to the receiving element, and convert it into a sample index; use that index to read sound pressure values from ROM and superimpose them to obtain the pixel value of one focus point; and repeat for all focus points to form the final image. The imaging results show that the algorithm achieves a high SNR for defect imaging, and the parallel processing capability of the FPGA provides high-speed performance, so the system delivers a complete and well-performing imaging interface.
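
    The delay-and-sum computation at the heart of TFM can be sketched in software as follows (a minimal NumPy illustration of TFM over an FMC data set, not the paper's Verilog implementation; the array geometry, sampling rate, and synthetic scatterer are illustrative assumptions):

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """
    Delay-and-sum TFM. fmc[t, r, s] holds the sample at time index t of the
    A-scan received by element r when element s transmits. Each pixel sums
    the samples found at the transmit-plus-receive time of flight.
    """
    n_el = len(elem_x)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.hypot(elem_x - x, z)          # element-to-pixel distances
            acc = 0.0
            for s in range(n_el):
                for r in range(n_el):
                    t_idx = int(round((d[s] + d[r]) / c * fs))
                    if t_idx < fmc.shape[0]:
                        acc += fmc[t_idx, r, s]
            img[iz, ix] = abs(acc)
    return img

# Synthetic check: a point scatterer at (1.5, 2.0) puts an impulse at the
# correct time of flight for every transmit/receive pair
elem_x = np.array([0.0, 1.0, 2.0, 3.0])
c, fs = 1.0, 1.0                                  # dimensionless toy units
d_true = np.hypot(elem_x - 1.5, 2.0)
fmc = np.zeros((64, 4, 4))
for s in range(4):
    for r in range(4):
        fmc[int(round((d_true[s] + d_true[r]) / c * fs)), r, s] += 1.0

img = tfm_image(fmc, elem_x, [0.5, 1.5, 2.5], [1.0, 2.0, 3.0], c, fs)
iz, ix = np.unravel_index(img.argmax(), img.shape)
print(iz, ix)  # the peak lands on the grid point nearest the scatterer: 1 1
```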

  11. Spatial analysis of extension fracture systems: A process modeling approach

    USGS Publications Warehouse

    Ferguson, C.C.

    1985-01-01

Little consensus exists on how best to analyze natural fracture spacings and their sequences. Field measurements and analyses published in geotechnical literature imply fracture processes radically different from those assumed by theoretical structural geologists. The approach adopted in this paper recognizes that disruption of rock layers by layer-parallel extension results in two spacing distributions, one representing layer-fragment lengths and another separation distances between fragments. These two distributions and their sequences reflect the mechanics and history of fracture and separation. Such distributions and sequences, represented by a 2 × n matrix of lengths L, can be analyzed using a method that is history sensitive and which also yields a scalar estimate of bulk extension, e(L). The method is illustrated by a series of Monte Carlo experiments representing a variety of fracture-and-separation processes, each with distinct implications for extension history. Resulting distributions of e(L) are process-specific, suggesting that the inverse problem of deducing fracture-and-separation history from final structure may be tractable. © 1985 Plenum Publishing Corporation.
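
    The abstract does not give e(L) in closed form; a common bulk-extension estimator for a fragmented layer, used here as an illustrative stand-in, is the ratio of total separation to total fragment length. A Monte Carlo sketch in that spirit (the fracture-and-separation distributions are our assumptions):

```python
import random

def bulk_extension(fragments, gaps):
    """
    Bulk extension of a fragmented layer: (stretched - original) / original
    length, where the original length is the total fragment length and the
    stretch is supplied by the separation gaps between fragments.
    """
    return sum(gaps) / sum(fragments)

# Monte Carlo sketch: fracture a unit layer at uniformly random points, then
# open every gap by an exponentially distributed separation
random.seed(1)

def simulate(n_fractures, mean_gap):
    cuts = sorted(random.random() for _ in range(n_fractures))
    bounds = [0.0] + cuts + [1.0]
    fragments = [b - a for a, b in zip(bounds, bounds[1:])]
    gaps = [random.expovariate(1.0 / mean_gap) for _ in range(n_fractures)]
    return bulk_extension(fragments, gaps)

estimates = [simulate(20, 0.01) for _ in range(500)]
mean_e = sum(estimates) / len(estimates)
print(0.1 < mean_e < 0.3)  # expected bulk extension is 20 * 0.01 = 0.2: True
```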

  12. Optimization of friction and wear behaviour of Al7075-Al2O3-B4C metal matrix composites using Taguchi method

    NASA Astrophysics Data System (ADS)

    Dhanalakshmi, S.; Mohanasundararaju, N.; Venkatakrishnan, P. G.; Karthik, V.

    2018-02-01

The present study deals with investigations of the dry sliding wear behaviour of Al 7075 alloy reinforced with Al2O3 and B4C. The hybrid composites are produced through the liquid metallurgy route (stir casting). The amount of Al2O3 particles is varied as 3, 6, 9, 12 and 15 wt%, and the amount of B4C is kept constant at 3 wt%. Experiments were conducted based on the plan of experiments generated through Taguchi’s technique, with an L27 orthogonal array selected for analysis of the data. The investigation examines the effect of applied load, sliding speed and sliding distance on the wear rate and coefficient of friction (COF) of the hybrid Al7075-Al2O3-B4C composite, and determines the optimal parameters for obtaining minimum wear rate. After wear testing, the samples were examined and analyzed using scanning electron microscopy.
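
    Taguchi analysis of a smaller-the-better response such as wear rate ranks factor levels by the signal-to-noise ratio S/N = -10 log10(mean(y^2)). A minimal sketch (the wear-rate replicates are invented for illustration, not taken from the study):

```python
import math

def sn_smaller_is_better(values):
    """
    Taguchi signal-to-noise ratio for a smaller-the-better response such as
    wear rate: S/N = -10 * log10(mean(y^2)). The factor level with the
    highest S/N is taken as optimal.
    """
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Invented wear-rate replicates at two hypothetical load levels
low_load = [0.012, 0.014, 0.013]
high_load = [0.021, 0.024, 0.022]

# Lower wear gives the higher S/N ratio, so the low-load level wins
print(sn_smaller_is_better(low_load) > sn_smaller_is_better(high_load))  # True
```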

  13. Rock fracture skeleton tracing by image processing and quantitative analysis by geometry features

    NASA Astrophysics Data System (ADS)

    Liang, Yanjie

    2016-06-01

In rock engineering, fracture measurement is important for many applications. This paper proposes a novel method for tracing and analyzing rock fracture skeletons. For skeleton localization, the curvilinear fractures are enhanced at multiple scales based on a Hessian matrix; after image binarization, clutter is removed by image analysis. Subsequently, the fracture skeleton is extracted via ridge detection combined with a distance transform and a thinning algorithm, after which gap sewing and burr removal repair the skeleton. For skeleton analysis, the roughness and distribution of a fracture network are described by the fractal dimensions D_s and D_b, respectively; the intersection and fragmentation of a fracture network are characterized by the average number of ends and junctions per fracture, N_average, and the average length per fracture, L_average. Three rock fracture surfaces are analyzed in experiments, and the results verify that both the fracture tracing accuracy and the analysis feasibility of the new method are satisfactory.
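
    Fractal dimensions such as D_b are commonly estimated by box counting: cover the binary skeleton image with boxes of side s and fit the slope of log N(s) against log s. A generic sketch (standard box counting, not the authors' implementation):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """
    Box-counting estimate of the fractal dimension of a binary image: count
    the occupied s x s boxes N(s) for several box sizes and fit the slope of
    log N(s) against log s (the dimension is the negated slope).
    """
    counts = []
    h, w = mask.shape
    for s in sizes:
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if mask[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity checks: a filled square is 2-dimensional, a straight line 1-dimensional
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
print(round(box_counting_dimension(square)), round(box_counting_dimension(line)))
```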

  14. DEVELOPMENT OF A METHOD FOR THE OBSERVATION OF LIGHTNING IN PROTOPLANETARY DISKS USING ION LINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muranushi, Takayuki; Akiyama, Eiji; Inutsuka, Shu-ichiro

    2015-12-20

In this paper, we propose observational methods for detecting lightning in protoplanetary disks. We do so by calculating the critical electric field strength in the lightning matrix gas (LMG), the parts of the disk where the electric field is strong enough to cause lightning. That electric field accelerates multiple positive ion species to characteristic terminal velocities. In this paper, we present three distinct discharge models with corresponding critical electric fields. We simulate the position–velocity diagrams and the integrated emission maps for the models. We calculate the measure-of-sensitivity values for detection of the models and for distinguishing between the models. At the distance of TW Hya (54 pc), LMG that occupies 2π in azimuth and has 25 AU < r < 50 AU is detectable at 1200σ to 4000σ. The lower limits of the radii of 5σ-detectable LMG clumps are between 1.6 AU and 5.3 AU, depending on the models.

  15. Optimizing the well pumping rate and its distance from a stream

    NASA Astrophysics Data System (ADS)

    Abdel-Hafez, M. H.; Ogden, F. L.

    2008-12-01

Both ground water and surface water are very important components of water resources. Since they are coupled systems in riparian areas, management strategies that neglect interactions between them penalize senior surface water rights to the benefit of junior ground water rights holders in the prior appropriation system. Water rights managers face the problem of deciding which wells need to be shut down, and when, in the case of depleted stream flow. A simulation model representing a combined hypothetical aquifer and stream has been developed using MODFLOW 2000 to capture parameter sensitivity, test management strategies, and guide field data collection campaigns to support modeling. An optimization approach has been applied to determine both the well distance from the stream and the maximum pumping rate that does not affect the stream discharge downstream of the pumping wells. Conjunctive management can be modeled by coupling the numerical simulation model with optimization techniques using the response matrix technique. The response matrix can be obtained by calculating the response coefficient for each well and stream. The main assumption of the response matrix technique is that the amount of water drawn from the stream into the aquifer is linearly proportional to the well pumping rate (Barlow et al. 2003). The results are presented in dimensionless form, which can be used by water managers to resolve conflicts between surface water and ground water rights holders by deciding which well needs to be shut down first.
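
    The linearity assumption at the heart of the response matrix technique means stream depletion superposes across wells. A minimal sketch (the response coefficients and pumping rates are hypothetical; in practice each coefficient would come from a MODFLOW run with a unit pumping rate at one well):

```python
import numpy as np

# Hypothetical response coefficients: the fraction of each well's pumping
# rate that is ultimately drawn from the stream. In practice each value
# comes from one simulation run with a unit pumping rate at that well.
response = np.array([0.80, 0.45, 0.10])   # wells ordered by distance from stream

def stream_depletion(pumping_rates):
    """Linear superposition: total stream depletion = R · Q."""
    return float(response @ np.asarray(pumping_rates, dtype=float))

Q = [100.0, 200.0, 400.0]                 # pumping rates, illustrative units
# 0.80*100 + 0.45*200 + 0.10*400 = 210
print(round(stream_depletion(Q), 6))
```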

  16. Methods and Applications for Advancing Distance Education Technologies: International Issues and Solutions

    ERIC Educational Resources Information Center

    Syed, Mahbubur Rahman, Ed.

    2009-01-01

    The emerging field of advanced distance education delivers academic courses across time and distance, allowing educators and students to participate in a convenient learning method. "Methods and Applications for Advancing Distance Education Technologies: International Issues and Solutions" demonstrates communication technologies, intelligent…

  17. Similarity-balanced discriminant neighbor embedding and its application to cancer classification based on gene expression data.

    PubMed

    Zhang, Li; Qian, Liqiang; Ding, Chuntao; Zhou, Weida; Li, Fanzhang

    2015-09-01

The family of discriminant neighborhood embedding (DNE) methods comprises typical graph-based methods for dimension reduction, and has been successfully applied to face recognition. This paper proposes a new variant of DNE, called similarity-balanced discriminant neighborhood embedding (SBDNE), and applies it to cancer classification using gene expression data. By introducing a novel similarity function, SBDNE treats pairs of data points from the same class and from different classes in different ways. The homogeneous and heterogeneous neighbors are selected according to the new similarity function instead of the Euclidean distance. SBDNE constructs two adjacency graphs, a between-class graph and a within-class graph, using the new similarity function. From these two graphs, the local between-class scatter and the local within-class scatter can be generated, respectively. Thus, SBDNE can maximize the between-class scatter and simultaneously minimize the within-class scatter to find the optimal projection matrix. Experimental results on six microarray datasets show that SBDNE is a promising method for cancer classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
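
    The graph-based scatter construction can be sketched generically as follows (a DNE-style illustration using plain Euclidean distance in place of SBDNE's similarity function, which the abstract does not specify; all names and data here are ours):

```python
import numpy as np

def graph_scatters(X, y, k=2):
    """
    DNE-style local scatters: connect each point to its k nearest
    homogeneous (same-class) and heterogeneous (different-class)
    neighbours, then accumulate the within-class scatter S_w and the
    between-class scatter S_b from the connected pairs.
    """
    n, d = X.shape
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for i in range(n):
        order = np.argsort(dist[i])
        same = [j for j in order if j != i and y[j] == y[i]][:k]
        diff = [j for j in order if y[j] != y[i]][:k]
        for j in same:
            v = (X[i] - X[j])[:, None]
            Sw += v @ v.T
        for j in diff:
            v = (X[i] - X[j])[:, None]
            Sb += v @ v.T
    return Sw, Sb

# Two tight, well-separated classes: the projection maximising S_b while
# minimising S_w solves the generalised eigenproblem S_b w = lambda S_w w
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
Sw, Sb = graph_scatters(X, y)
eigval, _ = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(2), Sb))
print(eigval.real.max() > 1.0)  # between-class scatter dominates: True
```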

  18. Efficient evaluation of nonlocal operators in density functional theory

    NASA Astrophysics Data System (ADS)

    Chen, Ying-Chih; Chen, Jing-Zhe; Michaud-Rioux, Vincent; Shi, Qing; Guo, Hong

    2018-02-01

We present a method which combines plane waves (PW) and numerical atomic orbitals (NAO) to efficiently evaluate nonlocal operators in density functional theory with periodic boundary conditions. Nonlocal operators are first expanded using PW and then transformed to NAO so that the problem of distance truncation is avoided. The general formalism is implemented using the hybrid functional HSE06, where the nonlocal operator is the exact exchange. Comparison of the electronic structures of a wide range of semiconductors to a pure PW scheme validates the accuracy of our method. Due to the locality of NAO, and thus the sparsity of the matrix representations of the operators, the computational complexity of the method is asymptotically quadratic in the number of electrons. Finally, we apply the technique to investigate the electronic structure of the interface between single-layer black phosphorus and the high-κ dielectric material c-HfO2. We predict that the band offsets between the two materials are 1.29 eV and 2.18 eV for the valence and conduction band edges, respectively, and such offsets are suitable for 2D field-effect transistor applications.

  19. Multitask SVM learning for remote sensing data classification

    NASA Astrophysics Data System (ADS)

    Leiva-Murillo, Jose M.; Gómez-Chova, Luis; Camps-Valls, Gustavo

    2010-10-01

Many remote sensing data processing problems are inherently constituted by several tasks that can be solved either individually or jointly. For instance, each image in a multitemporal classification setting could be taken as an individual task, but its relation to previous acquisitions should be properly considered. In such problems, different modalities of the data (temporal, spatial, angular) give rise to changes between the training and test distributions, which constitutes a difficult learning problem known as covariate shift. Multitask learning methods aim at jointly solving a set of prediction problems in an efficient way by sharing information across tasks. This paper presents a novel kernel method for multitask learning in remote sensing data classification. The proposed method alleviates the dataset shift problem by imposing cross-information in the classifiers through matrix regularization. We consider the support vector machine (SVM) as the core learner, and two regularization schemes are introduced: 1) the Euclidean distance of the predictors in the Hilbert space; and 2) the inclusion of relational operators between tasks. Experiments are conducted on the challenging remote sensing problems of cloud screening from multispectral MERIS images and landmine detection.

  20. Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions

    NASA Astrophysics Data System (ADS)

    Kraberger, Gernot J.; Triebl, Robert; Zingl, Manuel; Aichhorn, Markus

    2017-10-01

    We present a generalization of the maximum entropy method to the analytic continuation of matrix-valued Green's functions. To treat off-diagonal elements correctly based on Bayesian probability theory, the entropy term has to be extended for spectral functions that are possibly negative in some frequency ranges. In that way, all matrix elements of the Green's function matrix can be analytically continued; we introduce a computationally cheap element-wise method for this purpose. However, this method cannot ensure important constraints on the mathematical properties of the resulting spectral functions, namely positive semidefiniteness and Hermiticity. To improve on this, we present a full matrix formalism, where all matrix elements are treated simultaneously. We show the capabilities of these methods using insulating and metallic dynamical mean-field theory (DMFT) Green's functions as test cases. Finally, we apply the methods to realistic material calculations for LaTiO3, where off-diagonal matrix elements in the Green's function appear due to the distorted crystal structure.

  1. Reactive solute transport in an asymmetrical fracture-rock matrix system

    NASA Astrophysics Data System (ADS)

    Zhou, Renjie; Zhan, Hongbin

    2018-02-01

The understanding of reactive solute transport in a single fracture-rock matrix system is the foundation for studying transport behavior in complex fractured porous media. When transport properties are asymmetrically distributed in the adjacent rock matrixes, reactive solute transport has to be treated as a coupled three-domain problem, which is more complex than the symmetric case with identical transport properties in the adjacent rock matrixes. This study deals with the transport problem in a single fracture-rock matrix system with an asymmetrical distribution of transport properties in the rock matrixes. Mathematical models are developed for such a problem under the first-type and the third-type boundary conditions to analyze the spatio-temporal concentration and mass distribution in the fracture and rock matrix, with the help of the Laplace transform technique and the de Hoog numerical inverse Laplace algorithm. The newly acquired solutions are tested extensively against previous analytical and numerical solutions and are proven to be robust and accurate. Furthermore, a water flushing phase is imposed on the left boundary of the system after a certain time. The diffusive mass exchange along the fracture/rock matrix interfaces and the relative masses stored in each of the three domains (fracture, upper rock matrix, and lower rock matrix) after the water flushing provide great insight into transport with an asymmetric distribution of transport properties. This study has the following findings: 1) Asymmetric distribution of transport properties imposes greater control on solute transport in the rock matrixes, whereas transport in the fracture is only mildly influenced. 2) The mass stored in the fracture responds quickly to water flushing, while the mass stored in the rock matrix is much less sensitive to it. 3) The diffusive mass exchange during the water flushing phase has similar patterns in the symmetric and asymmetric cases. 4) The characteristic distance, at which the diffusion between the fracture and the rock matrix vanishes during the water flushing phase, is closely associated with the dispersive process in the fracture.
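The Laplace-domain-solution-plus-numerical-inversion workflow described above can be sketched with the simpler Gaver-Stehfest algorithm standing in for the de Hoog method; the transform pair used for validation is a generic textbook check, not the paper's solution.

```python
import math

def stehfest_coefficients(N):
    # Weights V_k of the Gaver-Stehfest inversion formula (N must be even)
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    # Approximate f(t) from its Laplace transform F(p), sampled on the real axis
    a = math.log(2.0) / t
    V = stehfest_coefficients(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Known pair for validation: F(p) = 1/(p + 1)  <->  f(t) = exp(-t)
approx = invert_laplace(lambda p: 1.0 / (p + 1.0), t=1.0)
```

The de Hoog algorithm used in the paper evaluates the transform at complex points and accelerates the Fourier series with Padé approximants, which makes it more robust for oscillatory solutions than this real-axis sketch.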

  2. Theory and implementation of H-matrix based iterative and direct solvers for Helmholtz and elastodynamic oscillatory kernels

    NASA Astrophysics Data System (ADS)

    Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick

    2017-12-01

In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However, the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold. (i) What is the frequency range in which the H-matrix format is an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach for model problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms), an efficient solver can be easily developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.

  3. Analytical quality assurance in veterinary drug residue analysis methods: matrix effects determination and monitoring for sulfonamides analysis.

    PubMed

    Hoff, Rodrigo Barcellos; Rübensam, Gabriel; Jank, Louise; Barreto, Fabiano; Peralba, Maria do Carmo Ruaro; Pizzolato, Tânia Mara; Silvia Díaz-Cruz, M; Barceló, Damià

    2015-01-01

In residue analysis of veterinary drugs in foodstuff, matrix effects are one of the most critical points. This work presents a discussion of approaches used to estimate, minimize and monitor matrix effects in bioanalytical methods. Qualitative and quantitative methods for the estimation of matrix effects, such as post-column infusion, slope-ratio analysis, calibration curves (mathematical and statistical analysis) and control chart monitoring, are discussed using real data. Matrix effects varied over a wide range depending on the analyte and the sample preparation method: pressurized liquid extraction of liver samples showed matrix effects from 15.5 to 59.2%, while ultrasound-assisted extraction gave values from 21.7 to 64.3%. The influence of the matrix itself was also evaluated: for sulfamethazine analysis, signal losses ranged from -37 to -96% for fish and eggs, respectively. Advantages and drawbacks are also discussed in the context of a proposed workflow for matrix effects assessment, applied to real data from sulfonamide residue analysis. Copyright © 2014 Elsevier B.V. All rights reserved.
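The slope-ratio estimate mentioned above compares calibration slopes in neat solvent and in post-extraction spiked matrix; a minimal sketch follows, with all calibration values invented for illustration.

```python
import numpy as np

# Hypothetical calibration data (concentration vs. response) in neat solvent
# and in post-extraction spiked matrix; the numbers are illustrative only
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
resp_solvent = np.array([10.1, 20.3, 50.2, 99.8, 200.5])
resp_matrix = np.array([8.0, 16.1, 40.3, 79.9, 160.2])   # suppressed signal

slope_solvent = np.polyfit(conc, resp_solvent, 1)[0]
slope_matrix = np.polyfit(conc, resp_matrix, 1)[0]

# Slope-ratio estimate of the matrix effect: values below 100% indicate suppression
matrix_effect_pct = 100.0 * slope_matrix / slope_solvent
```

A ratio near 80% here would correspond to roughly 20% ion suppression, in the same spirit as the percentage ranges reported in the abstract.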

  4. Maximum parsimony, substitution model, and probability phylogenetic trees.

    PubMed

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the best-studied and most popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it only counts the substitutions observable at the current time; all the unobservable substitutions that really occurred in the evolutionary history are omitted. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees; the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
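The gap between observed and unobservable substitutions can be made concrete with the classical Jukes-Cantor correction used by distance-matrix methods (a standard substitution model, not the probability representation model proposed by the authors); the sequences are toy examples.

```python
import math

def p_distance(seq_a, seq_b):
    # Fraction of aligned sites that differ: only the observable substitutions
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a)

def jukes_cantor(p):
    # JC69 correction: estimated substitutions per site, including the
    # unobservable multiple hits that the raw p-distance misses
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

s1 = "ACGTACGTACGTACGTACGT"
s2 = "ACGTACGAACGTTCGTACGT"
p = p_distance(s1, s2)   # 2 of 20 sites differ
d = jukes_cantor(p)      # corrected distance, always larger than p
```

The corrected distance d exceeding p is exactly the effect the abstract describes: substitution models inflate observed differences to account for hidden evolutionary events.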

  5. SU-C-BRD-07: Three-Dimensional Dose Reconstruction in the Presence of Inhomogeneities Using Fast EPID-Based Back-Projection Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Q; Cao, R; Pei, X

    2015-06-15

Purpose: Three-dimensional dose verification can detect errors introduced by the treatment planning system (TPS) or differences between the planned and delivered dose distribution during treatment. The aim of the study is to extend a previously in-house developed three-dimensional dose reconstruction model for homogeneous phantoms to situations in which tissue inhomogeneities are present. Methods: The method was based on the portal grey images from an electronic portal imaging device (EPID) and the relationship between beamlets and grey-scoring voxels at the position of the EPID. The relationship was expressed in the form of a grey response matrix that was quantified using thickness-dependent scatter kernels determined by a series of experiments. From the portal grey-value distribution information measured by the EPID, the two-dimensional incident fluence distribution was reconstructed based on the grey response matrix using a fast iterative algorithm. The accuracy of this approach was verified using a four-field intensity-modulated radiotherapy (IMRT) plan for the treatment of lung cancer in an anthropomorphic phantom. Each field had between twenty and twenty-eight segments and was evaluated by comparing the reconstructed dose distribution with the measured dose. Results: The gamma-evaluation method was used with two evaluation criteria of dose difference and distance-to-agreement: 3%/3mm and 2%/2mm. The dose comparison for all irradiated fields showed a pass rate of 100% with the criterion of 3%/3mm, and a pass rate higher than 92% with the criterion of 2%/2mm. Conclusion: Our experimental results demonstrate that our method is capable of accurately reconstructing the three-dimensional dose distribution in the presence of inhomogeneities. Using the method, the combined planning and treatment delivery process is verified, offering an easy-to-use tool for the verification of complex treatments.
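The gamma evaluation cited above (dose-difference plus distance-to-agreement criteria such as 3%/3mm) can be sketched in one dimension; the dose profiles are synthetic and the implementation is a brute-force version of the standard metric, not the study's software.

```python
import numpy as np

def gamma_index(ref_dose, eval_dose, x, dose_crit=0.03, dist_crit=3.0):
    # 1D global gamma: for each reference point, minimise the combined
    # dose-difference / distance-to-agreement metric over all evaluated points
    d_max = ref_dose.max()
    gamma = np.empty_like(ref_dose)
    for i, (xi, di) in enumerate(zip(x, ref_dose)):
        dose_term = (eval_dose - di) / (dose_crit * d_max)
        dist_term = (x - xi) / dist_crit
        gamma[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gamma

x = np.linspace(0.0, 100.0, 201)                 # position in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)          # reference dose profile
ev = np.exp(-((x - 50.5) / 20.0) ** 2)           # delivery shifted by 0.5 mm
g = gamma_index(ref, ev, x)                      # 3%/3mm criterion by default
pass_rate = 100.0 * np.mean(g <= 1.0)
```

A point passes when gamma is at most 1; a 0.5 mm shift is well inside the 3 mm distance criterion, so this toy comparison passes everywhere.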

  6. Fast Minimum Variance Beamforming Based on Legendre Polynomials.

    PubMed

    Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae

    2016-09-01

Currently, minimum variance (MV) beamforming is actively investigated as a method that can improve the performance of an ultrasound beamformer in terms of lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity, since the inverse of the spatial covariance matrix must be calculated. Noteworthy among various attempts to solve this problem are beamspace adaptive beamforming methods and the fast MV method based on principal component analysis. These are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix, and the dimension of the covariance matrix is reduced by approximating the matrix with only its important components, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods, and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those of these methods when the covariance matrices are reduced to the same dimension.
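The dimension-reduction idea (project the element-space data onto a few orthonormalised Legendre polynomials, then invert only the small covariance matrix) can be sketched as follows; the array geometry, signal model and reduced dimension are illustrative assumptions, not the paper's Field II setup.

```python
import numpy as np
from numpy.polynomial import legendre

M = 64                         # array elements
x = np.linspace(-1.0, 1.0, M)  # normalised aperture coordinate
K = 5                          # reduced beamspace dimension

# Basis from the first K Legendre polynomials sampled on the aperture;
# QR re-orthonormalises the sampled columns
B = np.stack([legendre.Legendre.basis(k)(x) for k in range(K)], axis=1)
B, _ = np.linalg.qr(B)

# Toy snapshot covariance: broadside signal plus white noise (illustrative)
rng = np.random.default_rng(1)
a = np.ones(M) / np.sqrt(M)    # broadside steering vector
snapshots = (rng.normal(size=(100, 1)) * a[None, :]
             + 0.1 * rng.normal(size=(100, M)))
R = snapshots.T @ snapshots / 100 + 1e-6 * np.eye(M)

# MV weights computed in the K-dim beamspace instead of the M-dim element space
Rb = B.T @ R @ B               # K x K reduced covariance (cheap to invert)
ab = B.T @ a
wb = np.linalg.solve(Rb, ab) / (ab @ np.linalg.solve(Rb, ab))
w = B @ wb                     # weights mapped back to element space
```

By construction the reduced-dimension weights still satisfy the distortionless constraint toward the steering direction, while only a K x K system is solved.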

  7. Convergence analysis of modulus-based matrix splitting iterative methods for implicit complementarity problems.

    PubMed

    Wang, An; Cao, Yang; Shi, Quan

    2018-01-01

    In this paper, we demonstrate a complete version of the convergence theory of the modulus-based matrix splitting iteration methods for solving a class of implicit complementarity problems proposed by Hong and Li (Numer. Linear Algebra Appl. 23:629-641, 2016). New convergence conditions are presented when the system matrix is a positive-definite matrix and an [Formula: see text]-matrix, respectively.
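A minimal sketch of the basic modulus-based iteration for a standard linear complementarity problem (LCP) follows; this is the simplest member of the family analyzed in the paper (identity parameter matrix, no splitting), not the Hong-Li implicit variant, and the test problem is invented.

```python
import numpy as np

def modulus_lcp(M, q, tol=1e-10, max_iter=500):
    # Basic modulus method: at a fixed point of
    #   (I + M) x = (I - M)|x| - q,
    # z = |x| + x solves the LCP:  z >= 0,  Mz + q >= 0,  z . (Mz + q) = 0.
    n = len(q)
    x = np.zeros(n)
    A = np.eye(n) + M
    for _ in range(max_iter):
        x_new = np.linalg.solve(A, (np.eye(n) - M) @ np.abs(x) - q)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return np.abs(x) + x

# Symmetric positive-definite test matrix (convergence is guaranteed for such M)
M = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
q = np.array([-1.0, 2.0, -3.0])
z = modulus_lcp(M, q)
w = M @ z + q
```

The complementarity conditions hold automatically at the fixed point, since z and w are the positive and negative parts of 2x and therefore have disjoint supports.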

  8. Algorithms for Solvents and Spectral Factors of Matrix Polynomials

    DTIC Science & Technology

    1981-01-01

spectral factors of matrix polynomials. Leang S. Shieh, Yih T. Tsay and Norman P. Coleman. A generalized Newton method, based on the contracted gradient... of a matrix polynomial, is derived for solving the right (left) solvents and spectral factors of matrix polynomials. Two methods of selecting initial... estimates for rapid convergence of the newly developed numerical method are proposed. Also, new algorithms for solving complete sets of the right
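To make the underlying objects concrete: a right solvent of a monic matrix polynomial P(X) = X^2 + A1 X + A0 can also be built from eigenpairs of the block companion matrix. This eigenvalue construction is a stand-in for (not a reproduction of) the report's Newton method, and the coefficient matrices are illustrative.

```python
import numpy as np

# Illustrative coefficients of P(X) = X^2 + A1 X + A0
A0 = np.array([[2.0, 0.0], [0.0, 3.0]])
A1 = np.array([[-3.0, 0.0], [0.0, -4.0]])
n = A0.shape[0]

# Block companion matrix: its eigenpairs (lam, [v; lam v]) satisfy P(lam) v = 0
C = np.block([[np.zeros((n, n)), np.eye(n)],
              [-A0, -A1]])
vals, vecs = np.linalg.eig(C)

# Choose n eigenpairs whose eigenvector top blocks are linearly independent
# (here: the n smallest eigenvalues; the selection matters in general)
idx = np.argsort(vals.real)[:n]
V = vecs[:n, idx]
X = (V @ np.diag(vals[idx]) @ np.linalg.inv(V)).real

# X is a right solvent when the residual P(X) vanishes
residual = X @ X + A1 @ X + A0
```

Each selected eigenvalue of the companion matrix becomes an eigenvalue of the solvent, which is why different selections yield the different solvents and spectral factors the report's algorithms enumerate.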

  9. Kantorovich-Wasserstein Distance for Identifying the Dynamic of Some Compartmental Models in Biology

    NASA Astrophysics Data System (ADS)

    Pousin, Jérôme

    2008-09-01

Determining the influence of a biological species on the evolution of another one strongly depends on the choice of mathematical models in biology. In this work we consider the case of the distribution of lipids (docosahexaenoic acid (DHA)) in two compartments of the plasma, the platelets and the erythrocytes, and we compare three different mathematical approaches. The first one consists of a system of differential equations whose coefficients are identified through a least squares procedure. The second one is made of a system of differential equations on a graph, whose adjacency matrix represents the interplay between the species. The third one consists of mapping the provider curves to the target curves. Thus we have a distance between two families of curves, the curves of providers and the curves of targets, and by comparing the distances we are able to decide which provider delivers preferentially to which target according to cumulative species mass curves. Numerical results are presented, and we show that the ordinary differential least squares model provides qualitatively the same result as the Kantorovich-Wasserstein distance strategy. Finally, we discuss the potential ability of the presented Kantorovich-Wasserstein distance to capture the biological properties of the system.
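The curve-comparison idea can be sketched with the one-dimensional Kantorovich-Wasserstein (W1) distance, which reduces to the integral of the difference between cumulative distribution functions; the provider and target profiles below are invented Gaussians, not the DHA data.

```python
import numpy as np

def trapz(y, x):
    # Simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def wasserstein_1d(t, p, q):
    # W1 distance between two densities on a grid = integral of |CDF_p - CDF_q|
    p = p / trapz(p, t)
    q = q / trapz(q, t)
    Fp = np.concatenate(([0.0], np.cumsum(0.5 * (p[1:] + p[:-1]) * np.diff(t))))
    Fq = np.concatenate(([0.0], np.cumsum(0.5 * (q[1:] + q[:-1]) * np.diff(t))))
    return trapz(np.abs(Fp - Fq), t)

# Illustrative release/uptake profiles over time (not data from the study)
t = np.linspace(0.0, 10.0, 101)
provider = np.exp(-0.5 * (t - 3.0) ** 2)   # provider mass profile
target_a = np.exp(-0.5 * (t - 3.5) ** 2)   # candidate target A
target_b = np.exp(-0.5 * (t - 7.0) ** 2)   # candidate target B

# The smaller distance suggests preferential delivery to that target
d_a = wasserstein_1d(t, provider, target_a)
d_b = wasserstein_1d(t, provider, target_b)
```

For a pure time shift of the same profile, W1 equals the shift itself (0.5 here for target A), which is what makes the metric a natural measure of how well a provider curve maps onto a target curve.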

  10. Single-Image Distance Measurement by a Smart Mobile Device.

    PubMed

    Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling

    2017-12-01

Existing distance measurement methods either require multiple images and special photographing poses or only measure height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device. Consequently, pixels can be back-projected to the ground, thanks to an efficient calibration method using two known distances. The distance in pixels is then transformed to a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.
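The back-projection step can be sketched with a pinhole model: the accelerometer supplies the tilt, and each pixel ray is intersected with the ground plane. All camera parameters below are illustrative assumptions, not the paper's calibration.

```python
import math

def ground_distance(v_pixel, v_center, focal_px, tilt_rad, height_m):
    # Back-project one pixel row to the ground plane under a pinhole model
    ray = math.atan((v_pixel - v_center) / focal_px)   # ray angle within the image
    depression = tilt_rad + ray                        # total angle below horizontal
    return height_m / math.tan(depression)             # ground distance to the point

# Hypothetical device: 1.5 m above ground, pitched 30 degrees down, f = 1000 px
tilt = math.radians(30.0)
d_center = ground_distance(500.0, 500.0, 1000.0, tilt, 1.5)  # principal point
d_near = ground_distance(700.0, 500.0, 1000.0, tilt, 1.5)    # pixel lower in image
```

Pixels lower in the image correspond to steeper rays and hence nearer ground points, which is the geometry the paper's two-known-distance calibration pins down in practice.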

  11. A sample preparation method for recovering suppressed analyte ions in MALDI TOF MS.

    PubMed

    Lou, Xianwen; de Waal, Bas F M; Milroy, Lech-Gustav; van Dongen, Joost L J

    2015-05-01

In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS), analyte signals can be substantially suppressed by other compounds in the sample. In this technical note, we describe a modified thin-layer sample preparation method that significantly reduces the analyte suppression effect (ASE). In our method, analytes are deposited on top of the surface of matrix preloaded on the MALDI plate. To prevent embedding of the analyte into the matrix crystals, the sample solution was prepared without matrix and care was taken not to re-dissolve the preloaded matrix. The results with model mixtures of peptides, synthetic polymers and lipids show that the detection of analyte ions, which were completely suppressed using the conventional dried-droplet method, could be effectively recovered by using our method. Our findings suggest that the incorporation of analytes in the matrix crystals has an important contributory effect on ASE. By reducing ASE, our method should be useful for the direct MALDI MS analysis of multicomponent mixtures. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Quantitative evaluation of the matrix effect in bioanalytical methods based on LC-MS: A comparison of two approaches.

    PubMed

    Rudzki, Piotr J; Gniazdowska, Elżbieta; Buś-Kwaśnik, Katarzyna

    2018-06-05

Liquid chromatography coupled to mass spectrometry (LC-MS) is a powerful tool for studying pharmacokinetics and toxicokinetics. Reliable bioanalysis requires the characterization of the matrix effect, i.e. the influence of endogenous or exogenous compounds on the analyte signal intensity. We have compared two methods for the quantitation of the matrix effect. The CVs(%) of internal standard normalized matrix factors recommended by the European Medicines Agency were evaluated against internal standard normalized relative matrix effects derived from Matuszewski et al. (2003). Both methods use post-extraction spiked samples, but matrix factors also require neat solutions. We have tested both approaches using analytes of diverse chemical structures. The study did not reveal relevant differences in the results obtained with the two calculation methods. After normalization with the internal standard, the CV(%) of the matrix factor was on average 0.5% higher than the corresponding relative matrix effect. The method adopted by the European Medicines Agency seems to be slightly more conservative in the analyzed datasets. Nine analytes of different structures enabled a general overview of the problem; still, further studies are encouraged to confirm our observations. Copyright © 2018 Elsevier B.V. All rights reserved.
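The internal-standard-normalized matrix factor and its CV(%) across matrix lots can be sketched as follows; all peak areas are invented for illustration, and the 15% threshold is the commonly cited EMA acceptance limit.

```python
import numpy as np

# Hypothetical peak areas for analyte and internal standard (IS), measured in
# post-extraction spiked samples from six matrix lots and in neat solution
analyte_matrix = np.array([980.0, 1010.0, 950.0, 1005.0, 990.0, 970.0])
is_matrix = np.array([1990.0, 2050.0, 1940.0, 2030.0, 2000.0, 1960.0])
analyte_neat = 1000.0
is_neat = 2000.0

# Matrix factor = response in matrix / response in neat, per lot
mf_analyte = analyte_matrix / analyte_neat
mf_is = is_matrix / is_neat
mf_normalised = mf_analyte / mf_is            # IS-normalised matrix factor

# Variability across lots; guidelines typically require CV(%) below 15%
cv_pct = 100.0 * mf_normalised.std(ddof=1) / mf_normalised.mean()
```

The relative matrix effect of Matuszewski et al. dispenses with the neat-solution denominator, which is the practical difference between the two approaches the paper compares.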

  13. Eigenvector dynamics: General theory and some applications

    NASA Astrophysics Data System (ADS)

    Allez, Romain; Bouchaud, Jean-Philippe

    2012-10-01

    We propose a general framework to study the stability of the subspace spanned by P consecutive eigenvectors of a generic symmetric matrix H0 when a small perturbation is added. This problem is relevant in various contexts, including quantum dissipation (H0 is then the Hamiltonian) and financial risk control (in which case H0 is the assets' return covariance matrix). We argue that the problem can be formulated in terms of the singular values of an overlap matrix, which allows one to define an overlap distance. We specialize our results for the case of a Gaussian orthogonal H0, for which the full spectrum of singular values can be explicitly computed. We also consider the case when H0 is a covariance matrix and illustrate the usefulness of our results using financial data. The special case where the top eigenvalue is much larger than all the other ones can be investigated in full detail. In particular, the dynamics of the angle made by the top eigenvector and its true direction defines an interesting class of random processes.
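The overlap-matrix construction can be sketched numerically: perturb a symmetric matrix, take the top-P eigenvectors before and after, and read the subspace stability off the singular values of their overlap. The matrix sizes and perturbation strength are illustrative, and the scalar distance below is one simple choice consistent with singular values equal to 1 meaning identical subspaces.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 100, 5

# Symmetric H0 and a small symmetric perturbation
A = rng.normal(size=(N, N)); H0 = (A + A.T) / 2.0
B = rng.normal(size=(N, N)); H1 = H0 + 0.05 * (B + B.T) / 2.0

# Subspaces spanned by the top-P eigenvectors before and after the perturbation
_, V0 = np.linalg.eigh(H0)
_, V1 = np.linalg.eigh(H1)
P0, P1 = V0[:, -P:], V1[:, -P:]

# Singular values of the P x P overlap matrix measure subspace alignment
s = np.linalg.svd(P0.T @ P1, compute_uv=False)
overlap_distance = float(np.sqrt(max(0.0, 1.0 - np.mean(s ** 2))))
```

Rotations of the eigenvectors within the subspace leave the singular values at 1, so the distance isolates genuine leakage out of the subspace, which is the point of the framework.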

  14. Giant magnetoresistive heterogeneous alloys and method of making same

    DOEpatents

    Bernardi, Johannes J.; Thomas, Gareth; Huetten, Andreas R.

    1999-01-01

    The inventive material exhibits giant magnetoresistance upon application of an external magnetic field at room temperature. The hysteresis is minimal. The inventive material has a magnetic phase formed by eutectic decomposition. The bulk material comprises a plurality of regions characterized by a) the presence of magnetic lamellae wherein the lamellae are separated by a distance smaller than the mean free path of the conduction electrons, and b) a matrix composition having nonmagnetic properties that is interposed between the lamellae within the regions. The inventive, rapidly quenched, eutectic alloys form microstructure lamellae having antiparallel antiferromagnetic coupling and give rise to GMR properties. The inventive materials made according to the inventive process yielded commercially acceptable quantities and timeframes. Annealing destroyed the microstructure lamellae and the GMR effect. Noneutectic alloys did not exhibit the antiparallel microstructure lamellae and did not possess GMR properties.

  15. Giant magnetoresistive heterogeneous alloys and method of making same

    DOEpatents

    Bernardi, J.J.; Thomas, G.; Huetten, A.R.

    1999-03-16

    The inventive material exhibits giant magnetoresistance upon application of an external magnetic field at room temperature. The hysteresis is minimal. The inventive material has a magnetic phase formed by eutectic decomposition. The bulk material comprises a plurality of regions characterized by (a) the presence of magnetic lamellae wherein the lamellae are separated by a distance smaller than the mean free path of the conduction electrons, and (b) a matrix composition having nonmagnetic properties that is interposed between the lamellae within the regions. The inventive, rapidly quenched, eutectic alloys form microstructure lamellae having antiparallel antiferromagnetic coupling and give rise to GMR properties. The inventive materials made according to the inventive process yielded commercially acceptable quantities and timeframes. Annealing destroyed the microstructure lamellae and the GMR effect. Noneutectic alloys did not exhibit the antiparallel microstructure lamellae and did not possess GMR properties. 7 figs.

  16. Giant magnetoresistive heterogeneous alloys and method of making same

    DOEpatents

    Bernardi, Johannes J.; Thomas, Gareth; Huetten, Andreas R.

    1998-01-01

    The inventive material exhibits giant magnetoresistance upon application of an external magnetic field at room temperature. The hysteresis is minimal. The inventive material has a magnetic phase formed by eutectic decomposition. The bulk material comprises a plurality of regions characterized by a) the presence of magnetic lamellae wherein the lamellae are separated by a distance smaller than the mean free path of the conduction electrons, and b) a matrix composition having nonmagnetic properties that is interposed between the lamellae within the regions. The inventive, rapidly quenched, eutectic alloys form microstructure lamellae having antiparallel antiferromagnetic coupling and give rise to GMR properties. The inventive materials made according to the inventive process yielded commercially acceptable quantities and timeframes. Annealing destroyed the microstructure lamellae and the GMR effect. Noneutectic alloys did not exhibit the antiparallel microstructure lamellae and did not possess GMR properties.

  17. Giant magnetoresistive heterogeneous alloys and method of making same

    DOEpatents

    Bernardi, J.J.; Thomas, G.; Huetten, A.R.

    1998-10-20

    The inventive material exhibits giant magnetoresistance upon application of an external magnetic field at room temperature. The hysteresis is minimal. The inventive material has a magnetic phase formed by eutectic decomposition. The bulk material comprises a plurality of regions characterized by (a) the presence of magnetic lamellae wherein the lamellae are separated by a distance smaller than the mean free path of the conduction electrons, and (b) a matrix composition having nonmagnetic properties that is interposed between the lamellae within the regions. The inventive, rapidly quenched, eutectic alloys form microstructure lamellae having antiparallel antiferromagnetic coupling and give rise to GMR properties. The inventive materials made according to the inventive process yielded commercially acceptable quantities and timeframes. Annealing destroyed the microstructure lamellae and the GMR effect. Noneutectic alloys did not exhibit the antiparallel microstructure lamellae and did not possess GMR properties. 7 figs.

  18. Design of Multi-Resonant Cavities Based on Metal-Coated Dielectric Nanocylinders

    NASA Astrophysics Data System (ADS)

    Dong, Junyuan; Yu, Guanxia; Fu, Jingjing; Luo, Min; Du, Wenwen

    2018-06-01

In this paper, the light scattering properties of multiple silver-coated dielectric nanocylinders with a symmetrical distribution were investigated. Based on the transfer matrix method, we derive the general transmission and reflection coefficient matrices for multiple dielectric nanocylinders. When the incident light frequencies are less than the plasma frequency, surface plasmons (SPs) appear at the interface between the silver and the dielectrics. Numerical simulations show that there are three peaks of the absorption cross-section (ACS) in the relationship between the ACS and the frequency of the incident light, when the distance between the silver-coated dielectric nanocylinders is chosen properly. These SP resonance peaks are characterised as resonances intrinsic to the cylindrically periodic system corresponding to different inner cavity structures. These multi-resonant cavities may have potential applications in integrated devices, optical sensors and optical storage devices.
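The transfer matrix method can be sketched in its simplest planar form (1D dielectric layers rather than the cylindrical geometry of the paper): multiply the characteristic matrix of each layer and read transmission off the total matrix. The stack below is an illustrative quarter-wave mirror, not the paper's structure.

```python
import numpy as np

def layer_matrix(n, d, lam):
    # Characteristic matrix of one homogeneous layer: index n, thickness d
    k = 2.0 * np.pi * n / lam
    return np.array([[np.cos(k * d), 1j * np.sin(k * d) / n],
                     [1j * n * np.sin(k * d), np.cos(k * d)]])

def transmittance(layers, lam, n_in=1.0, n_out=1.0):
    # Chain the layer matrices, then apply the standard t-coefficient formula
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    (m11, m12), (m21, m22) = M
    t = 2.0 * n_in / (n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22)
    return (n_out / n_in) * abs(t) ** 2

# Quarter-wave high/low-index stack: transmission dips at the design wavelength
lam0 = 600.0
stack = [(2.3, lam0 / (4 * 2.3)), (1.38, lam0 / (4 * 1.38))] * 4
T_center = transmittance(stack, lam0)   # inside the stop band: low transmission
T_off = transmittance(stack, 900.0)     # off the design wavelength
```

The cylindrical version in the paper replaces these 2x2 layer matrices with transmission and reflection coefficient matrices per cylindrical shell, but the chaining logic is the same.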

  19. Enhancement of photoluminescence intensity of erbium doped silica containing Ge nanocrystals: distance dependent interactions

    NASA Astrophysics Data System (ADS)

    Manna, S.; Aluguri, R.; Bar, R.; Das, S.; Prtljaga, N.; Pavesi, L.; Ray, S. K.

    2015-01-01

Photo-physical processes in an Er-doped silica glass matrix containing Ge nanocrystals prepared by the sol-gel method are presented in this article. Strong photoluminescence at 1.54 μm, important for fiber-optic telecommunication systems, is observed from the different sol-gel derived glasses at room temperature. We demonstrate that Ge nanocrystals act as strong sensitizers for Er3+ ion emission and that the effective Er excitation cross section increases by almost four orders of magnitude with respect to the one without Ge nanocrystals. Rate equations are considered to demonstrate the sensitization of erbium luminescence by Ge nanocrystals. By analyzing the erbium effective excitation cross section, extracted from the flux-dependent rise and decay times, a Dexter-type short-range energy transfer from a Ge nanocrystal to an erbium ion is established.
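A minimal two-level rate-equation sketch of nanocrystal-sensitized emission: the excited Ge nanocrystal fraction feeds the Er excited fraction through an energy-transfer term. All rate constants are illustrative round numbers, not the fitted values from the paper.

```python
# Illustrative rates (1/s): pumping, Ge -> Er transfer, intrinsic decays
pump = 1e3
k_transfer = 1e4
k_ge = 1e5
k_er = 1e2   # Er radiative decay at 1.54 um

def derivs(n_ge, n_er):
    # n_ge, n_er are excited-state fractions in [0, 1]
    dn_ge = pump * (1.0 - n_ge) - (k_ge + k_transfer) * n_ge
    dn_er = k_transfer * n_ge * (1.0 - n_er) - k_er * n_er
    return dn_ge, dn_er

# Forward-Euler integration to steady state (dt well below the fastest timescale)
n_ge, n_er = 0.0, 0.0
dt = 1e-6
for _ in range(300_000):
    d_ge, d_er = derivs(n_ge, n_er)
    n_ge += dt * d_ge
    n_er += dt * d_er
n_ge_ss, n_er_ss = n_ge, n_er
```

Even a small steady-state nanocrystal population can sustain a large Er excited fraction when the transfer rate dominates the Er decay, which is the sensitization mechanism the rate-equation analysis in the paper quantifies.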

  20. Marked Object Recognition Multitouch Screen Printed Touchpad for Interactive Applications.

    PubMed

    Nunes, Jivago Serrado; Castro, Nelson; Gonçalves, Sergio; Pereira, Nélson; Correia, Vitor; Lanceros-Mendez, Senentxu

    2017-12-01

The market for interactive platforms is rapidly growing, and touchscreens have been incorporated in an increasing number of devices. The field of smart objects and devices is thus expanding strongly through the addition of interactive touch and multimedia content, leading to new uses and capabilities. In this work, a flexible screen-printed sensor matrix is fabricated based on silver ink on a polyethylene terephthalate (PET) substrate. Diamond-shaped capacitive electrodes coupled with conventional capacitive reading electronics enable the fabrication of a highly functional capacitive touchpad and also allow for the identification of marked objects. For the latter, the capacitive signatures are identified by intersecting points and the distances between them. Thus, this work demonstrates the applicability of a low-cost method using royalty-free geometries and technologies for the development of flexible multitouch touchpads for interactive and object-recognition applications.
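The point-and-distance identification idea can be sketched as matching the sorted pairwise-distance signature of an object's touch points against registered markers; the marker layouts and names below are hypothetical, not the paper's objects.

```python
import numpy as np
from itertools import combinations

def signature(points):
    # Sorted pairwise distances: invariant to translation and rotation of the object
    pts = np.asarray(points, dtype=float)
    d = sorted(np.linalg.norm(pts[i] - pts[j])
               for i, j in combinations(range(len(pts)), 2))
    return np.array(d)

# Hypothetical registered markers: touch-point layouts in mm
markers = {
    "cube": [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0)],
    "disc": [(0.0, 0.0), (40.0, 0.0), (20.0, 35.0)],
}

def identify(points, registry):
    sig = signature(points)
    return min(registry, key=lambda name: np.linalg.norm(signature(registry[name]) - sig))

# A rotated and translated "cube" placement should still match its signature
theta = np.radians(25.0)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
placed = [tuple(R @ np.array(p) + np.array([12.0, 7.0])) for p in markers["cube"]]
match = identify(placed, markers)
```

Because the signature depends only on mutual distances, the object can be placed anywhere on the pad in any orientation, which is what makes distance-based capacitive signatures practical for marker recognition.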
