A New Computational Method to Fit the Weighted Euclidean Distance Model.
ERIC Educational Resources Information Center
De Leeuw, Jan; Pruzansky, Sandra
1978-01-01
A computational method for weighted Euclidean distance scaling (a method of multidimensional scaling) is presented; it combines aspects of an "analytic" solution with an approach using loss functions. (Author/JKS)
Generalising Ward's Method for Use with Manhattan Distances.
Strauss, Trudie; von Maltitz, Michael Johan
2017-01-01
The claim that Ward's linkage algorithm in hierarchical clustering is limited to use with Euclidean distances is investigated. In this paper, Ward's clustering algorithm is generalised for use with l1-norm (Manhattan) distances. We argue that the generalisation of Ward's linkage method to incorporate Manhattan distances is theoretically sound and provide an example where this method outperforms the method using Euclidean distances. As an application, we perform statistical analyses on languages using methods normally applied to biology and genetic classification. We aim to quantify differences in character traits between languages and use a statistical language signature based on relative bi-gram (sequence of two letters) frequencies to calculate a distance matrix between 32 Indo-European languages. We then use Ward's method of hierarchical clustering to classify the languages, using both the Euclidean distance and the Manhattan distance. Results obtained from the two distance metrics are compared to show that Ward's algorithm's characteristic of minimising intra-cluster variation and maximising inter-cluster variation is not violated when the Manhattan metric is used.
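A minimal sketch of the generalisation described above, using SciPy on synthetic two-cluster data (the data and cluster count are hypothetical stand-ins for the language signatures). SciPy applies the Lance-Williams Ward update to whatever condensed distance matrix it is given, which is what makes swapping in Manhattan distances possible:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Two well-separated point clouds as stand-ins for language signatures.
data = np.vstack([rng.normal(0, 0.2, (10, 5)), rng.normal(3, 0.2, (10, 5))])

# Ward linkage on Euclidean vs. Manhattan (l1) pairwise distances.
# SciPy applies the Lance-Williams Ward update to the given condensed
# distance matrix, regardless of which metric produced it.
Z_euc = linkage(pdist(data, metric="euclidean"), method="ward")
Z_man = linkage(pdist(data, metric="cityblock"), method="ward")

labels_euc = fcluster(Z_euc, t=2, criterion="maxclust")
labels_man = fcluster(Z_man, t=2, criterion="maxclust")
```

On well-separated data both metrics recover the same two-cluster partition; the interesting comparisons arise on noisier data, as in the paper.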
Assessment of gene order computing methods for Alzheimer's disease
2013-01-01
Background Computational genomics of Alzheimer's disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher-quality gene clustering patterns than most other clustering methods. However, few gene order computing methods are available, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO), and their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods Using different distance formulas (Pearson distance, Euclidean distance, and squared Euclidean distance) and other conditions, gene orders were calculated by the ACO and GA (both standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula when used with either GA or ACO methods for AD microarray data. Conclusion Compared with the Pearson distance and Euclidean distance, the squared Euclidean distance generated the best-quality gene order computed by the GA and ACO methods. PMID:23369541
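For reference, the three distance formulas compared in the study can be written in a few lines of NumPy (the example vectors are hypothetical; note that perfectly correlated profiles have Pearson distance zero even when their Euclidean distance is large):

```python
import numpy as np

def euclidean(x, y):
    return float(np.sqrt(np.sum((x - y) ** 2)))

def squared_euclidean(x, y):
    return float(np.sum((x - y) ** 2))

def pearson_distance(x, y):
    # 1 - Pearson correlation coefficient: 0 for perfectly correlated profiles.
    return float(1.0 - np.corrcoef(x, y)[0, 1])

# Hypothetical expression profiles: y is a scaled copy of x.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

e = euclidean(x, y)
se = squared_euclidean(x, y)
pd_ = pearson_distance(x, y)
```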
Authenticating concealed private data while maintaining concealment
Thomas, Edward V [Albuquerque, NM; Draelos, Timothy J [Albuquerque, NM
2007-06-26
A method of and system for authenticating concealed and statistically varying multi-dimensional data comprising: acquiring an initial measurement of an item, wherein the initial measurement is subject to measurement error; applying a transformation to the initial measurement to generate reference template data; acquiring a subsequent measurement of an item, wherein the subsequent measurement is subject to measurement error; applying the transformation to the subsequent measurement; and calculating a Euclidean distance metric between the transformed measurements; wherein the calculated Euclidean distance metric is identical to the Euclidean distance metric between the measurements prior to transformation.
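The distance-preserving property described in the claim is the defining property of an isometry. As an illustration only (the patent does not specify the transformation), any orthogonal matrix conceals the raw measurements while leaving Euclidean distances between them exactly unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
# A random orthogonal matrix (via QR decomposition) as a hypothetical
# stand-in for a distance-preserving concealing transformation.
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))

a = rng.normal(size=8)   # initial measurement (with error)
b = rng.normal(size=8)   # subsequent measurement (with error)

d_before = np.linalg.norm(a - b)        # distance between raw measurements
d_after = np.linalg.norm(Q @ a - Q @ b)  # distance after concealment
```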
Ichikawa, Kazuki; Morishita, Shinichi
2014-01-01
K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
Squared Euclidean distance: a statistical test to evaluate plant community change
Raymond D. Ratliff; Sylvia R. Mori
1993-01-01
The concepts and a procedure for evaluating plant community change using the squared Euclidean distance (SED) resemblance function are described. Analyses are based on the concept that Euclidean distances constitute a sample from a population of distances between sampling units (SUs) for a specific number of times and SUs. With different times, the distances will be...
Euclidean sections of protein conformation space and their implications in dimensionality reduction
Duan, Mojie; Li, Minghai; Han, Li; Huo, Shuanghong
2014-01-01
Dimensionality reduction is widely used in searching for the intrinsic reaction coordinates for protein conformational changes. We find that dimensionality-reduction methods using the pairwise root-mean-square deviation as the local distance metric face a challenge. We use Isomap as an example to illustrate the problem. We believe that there is an implied assumption for the dimensionality-reduction approaches that aim to preserve the geometric relations between the objects: both the original space and the reduced space have the same kind of geometry, such as Euclidean geometry vs. Euclidean geometry or spherical geometry vs. spherical geometry. When the protein free energy landscape is mapped onto a 2D plane or 3D space, the reduced space is Euclidean, thus the original space should also be Euclidean. For a protein with N atoms, its conformation space is a subset of the 3N-dimensional Euclidean space R3N. We formally define the protein conformation space as the quotient space of R3N by the equivalence relation of rigid motions. Whether the quotient space is Euclidean or not depends on how it is parameterized. When the pairwise root-mean-square deviation is employed as the local distance metric, implicit representations are used for the protein conformation space, leading to no direct correspondence to a Euclidean set. We have demonstrated that an explicit Euclidean-based representation of protein conformation space and the local distance metric associated with it improve the quality of dimensionality reduction in the tetra-peptide and β-hairpin systems. PMID:24913095
Multi-level bandwidth efficient block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1989-01-01
The multilevel technique is investigated for combining block coding and modulation. There are four parts. In the first part, a formulation is presented for signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C' which has the same rate as C, a minimum squared Euclidean distance not less than that of C, a trellis diagram with the same number of states as that of C, and a smaller number of nearest-neighbor codewords than C. In the last part, error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.
NASA Astrophysics Data System (ADS)
Durato, M. V.; Albano, A. M.; Rapp, P. E.; Nawang, S. A.
2015-06-01
The validity of ERPs as indices of stable neurophysiological traits is partially dependent on their stability over time. Previous studies on ERP stability, however, have reported diverse stability estimates despite using the same component scoring methods. The present study explores a novel approach to investigating the longitudinal stability of average ERPs: treating the ERP waveform as a time series and then applying Euclidean distance and Kolmogorov-Smirnov analyses to evaluate the similarity or dissimilarity between the ERP time series of different sessions or run pairs. Nonlinear dynamical analyses show that in the absence of a change in medical condition, the average ERPs of healthy human adults are highly longitudinally stable, as evaluated by both the Euclidean distance and the Kolmogorov-Smirnov test.
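The two session-comparison statistics used above are straightforward to compute once the waveforms are treated as time series. A sketch with synthetic "session" waveforms (a noisy sine standing in for an averaged ERP; the signal shape and noise level are assumptions):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)
# Synthetic "ERP waveforms" from two sessions: same underlying trait, small noise.
session1 = np.sin(8 * np.pi * t) + 0.05 * rng.normal(size=t.size)
session2 = np.sin(8 * np.pi * t) + 0.05 * rng.normal(size=t.size)

# Euclidean distance between the two waveforms, sample by sample.
euclid = np.linalg.norm(session1 - session2)
# Two-sample Kolmogorov-Smirnov test on the amplitude distributions.
ks_stat, p_value = ks_2samp(session1, session2)
```

A small Euclidean distance together with a non-significant KS statistic is what "longitudinally stable" looks like in this framing.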
Gómez, Daviel; Hernández, L Ázaro; Yabor, Lourdes; Beemster, Gerrit T S; Tebbe, Christoph C; Papenbrock, Jutta; Lorenzo, José Carlos
2018-03-15
Plant scientists usually record several indicators in their abiotic factor experiments. The common statistical management involves univariate analyses. Such analyses generally create a split picture of the effects of experimental treatments since each indicator is addressed independently. The Euclidean distance combined with the information of the control treatment could have potential as an integrating indicator. The Euclidean distance has demonstrated its usefulness in many scientific fields but, as far as we know, it has not yet been employed for plant experimental analyses. To exemplify the use of the Euclidean distance in this field, we performed an experiment focused on the effects of mannitol on sugarcane micropropagation in temporary immersion bioreactors. Five mannitol concentrations were compared: 0, 50, 100, 150 and 200 mM. As dependent variables we recorded shoot multiplication rate, fresh weight, and levels of aldehydes, chlorophylls, carotenoids and phenolics. The statistical protocol which we then carried out integrated all dependent variables to easily identify the mannitol concentration that produced the most remarkable integral effect. Results provided by the Euclidean distance demonstrate a gradually increasing distance from the control as mannitol concentration increases. 200 mM mannitol caused the most significant alteration of sugarcane biochemistry and physiology under the experimental conditions described here. This treatment showed the longest statistically significant Euclidean distance to the control treatment (2.38). In contrast, 50 and 100 mM mannitol showed the lowest Euclidean distances (0.61 and 0.84, respectively) and thus weaker integrated effects of mannitol. The analysis shown here indicates that the use of the Euclidean distance can contribute to establishing a more integrated evaluation of the contrasting mannitol treatments.
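The integrating indicator described above amounts to standardizing each dependent variable and measuring each treatment's Euclidean distance to the control row. A sketch with entirely hypothetical treatment means (the numbers below are invented to mimic a monotone mannitol response, not the paper's data):

```python
import numpy as np

# Hypothetical treatment means (rows) for six indicators (columns):
# shoot multiplication, fresh weight, aldehydes, chlorophylls, carotenoids, phenolics.
treatments = ["0 mM", "50 mM", "100 mM", "150 mM", "200 mM"]
X = np.array([
    [10.0, 5.0, 1.0, 4.0, 2.0, 3.0],   # 0 mM (control)
    [9.0,  4.5, 1.2, 3.8, 1.9, 3.2],   # 50 mM
    [8.0,  4.0, 1.4, 3.5, 1.8, 3.5],   # 100 mM
    [6.0,  3.2, 1.8, 3.0, 1.6, 4.0],   # 150 mM
    [4.0,  2.5, 2.4, 2.4, 1.3, 4.8],   # 200 mM
])

# Standardize each indicator so no single unit of measure dominates,
# then measure each treatment's Euclidean distance to the control row.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
dist_to_control = np.linalg.norm(Z - Z[0], axis=1)
```

With a monotone dose response, the distances increase with concentration, mirroring the pattern the authors report.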
Contracted time and expanded space: The impact of circumnavigation on judgements of space and time.
Brunec, Iva K; Javadi, Amir-Homayoun; Zisch, Fiona E L; Spiers, Hugo J
2017-09-01
The ability to estimate distance and time to spatial goals is fundamental for survival. In cases where a region of space must be navigated around to reach a location (circumnavigation), the distance along the path is greater than the straight-line Euclidean distance. To explore how such circumnavigation impacts on estimates of distance and time, we tested participants on their ability to estimate travel time and Euclidean distance to learned destinations in a virtual town. Estimates for approximately linear routes were compared with estimates for routes requiring circumnavigation. For all routes, travel times were significantly underestimated, and Euclidean distances overestimated. For routes requiring circumnavigation, travel time was further underestimated and the Euclidean distance further overestimated. Thus, circumnavigation appears to enhance existing biases in representations of travel time and distance. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
Establishing an efficient way to utilize the drought resistance germplasm population in wheat.
Wang, Jiancheng; Guan, Yajing; Wang, Yang; Zhu, Liwei; Wang, Qitian; Hu, Qijuan; Hu, Jin
2013-01-01
Drought resistance breeding provides a hopeful way to improve yield and quality of wheat in arid and semiarid regions. Constructing a core collection is an efficient way to evaluate and utilize drought-resistant germplasm resources in wheat. In the present research, 1,683 wheat varieties were divided into five germplasm groups (highly resistant, HR; resistant, R; moderately resistant, MR; susceptible, S; and highly susceptible, HS). The least distance stepwise sampling (LDSS) method was adopted to select core accessions. Six commonly used genetic distances (Euclidean distance, Euclid; standardized Euclidean distance, Seuclid; Mahalanobis distance, Mahal; Manhattan distance, Manhat; cosine distance, Cosine; and correlation distance, Correlation) were used to assess genetic distances among accessions. The unweighted pair-group average (UPGMA) method was used to perform hierarchical cluster analysis. Coincidence rate of range (CR) and variable rate of coefficient of variation (VR) were adopted to evaluate the representativeness of the core collection. A method for selecting the ideal construction strategy was suggested in the present research. A wheat core collection for drought resistance breeding programs was constructed by the strategy selected in the present research. The principal component analysis showed that the genetic diversity was well preserved in the core collection.
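All six genetic distances named above, plus UPGMA clustering, are available directly in SciPy. A sketch on hypothetical accession-by-trait data (the data matrix is invented; SciPy estimates the variance vector for `seuclidean` and the inverse covariance for `mahalanobis` from the data when they are not supplied):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(4)
accessions = rng.normal(size=(30, 6))   # 30 hypothetical accessions, 6 trait scores

# The six distances from the paper, mapped to SciPy metric names.
metrics = {
    "Euclid": "euclidean",
    "Seuclid": "seuclidean",
    "Mahal": "mahalanobis",
    "Manhat": "cityblock",
    "Cosine": "cosine",
    "Correlation": "correlation",
}
distance_matrices = {name: squareform(pdist(accessions, metric=m))
                     for name, m in metrics.items()}

# UPGMA hierarchical clustering = 'average' linkage in SciPy.
Z = linkage(pdist(accessions, metric="euclidean"), method="average")
```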
Nearest Neighbor Classification Using a Density Sensitive Distance Measurement
2009-09-01
The proposed density-sensitive distance measurement and the Euclidean distance are compared on the Wisconsin Diagnostic Breast Cancer (WDBC) dataset and the MNIST dataset.
Why conventional detection methods fail in identifying the existence of contamination events.
Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han
2016-04-15
Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variation. This analysis revealed why the conventional MED and LPF methods failed to identify the existence of contamination events. The analysis also revealed that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.
Williams, C.J.; Heglund, P.J.
2009-01-01
Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to separately fit models to each species and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions to outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
Multivariate Welch t-test on distances
2016-01-01
Motivation: Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that pairwise distance matrix between observations is sufficient to compute within and between group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean, but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. Results: We develop a solution in the form of a distance-based Welch t-test, TW2, for two sample potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology has originated. Availability and Implementation: The source code for methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author. Contact: alekseye@musc.edu PMID:27515741
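The key observation above, that within- and between-group sums of squares can be formed from the pairwise distance matrix alone, is easy to verify in code. The sketch below implements the standard PERMANOVA-style pseudo-F (not the authors' TW2 test) on hypothetical two-group data:

```python
import numpy as np
from scipy.spatial.distance import cdist

def pseudo_f(D, labels):
    """PERMANOVA-style pseudo-F computed directly from a square distance matrix."""
    labels = np.asarray(labels)
    n, groups = len(labels), np.unique(labels)
    iu = np.triu_indices(n, 1)
    ss_total = (D[iu] ** 2).sum() / n               # total sum of squares
    ss_within = 0.0
    for g in groups:
        idx = np.where(labels == g)[0]
        sub = D[np.ix_(idx, idx)]
        ss_within += (sub[np.triu_indices(len(idx), 1)] ** 2).sum() / len(idx)
    ss_between = ss_total - ss_within
    a = len(groups)
    return (ss_between / (a - 1)) / (ss_within / (n - a))

# Two well-separated hypothetical sample groups should give a large pseudo-F.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (15, 3)), rng.normal(5, 0.1, (15, 3))])
F = pseudo_f(cdist(X, X), [0] * 15 + [1] * 15)
```

Because only `D` enters the computation, any distance (not just Euclidean) can be plugged in, which is the flexibility PERMANOVA and TW2 both exploit.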
Automatic lung nodule matching for the follow-up in temporal chest CT scans
NASA Astrophysics Data System (ADS)
Hong, Helen; Lee, Jeongjin; Shin, Yeong Gil
2006-03-01
We propose a fast and robust registration method for matching lung nodules of temporal chest CT scans. Our method is composed of four stages. First, the lungs are extracted from chest CT scans by the automatic segmentation method. Second, the gross translational mismatch is corrected by the optimal cube registration. This initial registration does not require extracting any anatomical landmarks. Third, the initial alignment is refined step by step by the iterative surface registration. To evaluate the distance measure between surface boundary points, a 3D distance map is generated by the narrow-band distance propagation, which drives fast and robust convergence to the optimal location. Fourth, nodule correspondences are established by the pairs with the smallest Euclidean distances. The results of pulmonary nodule alignment for twenty patients are reported on a per center-of-mass point basis using the average Euclidean distance (AED) error between corresponding nodules of initial and follow-up scans. The average AED error of twenty patients is significantly reduced from 30.0 mm to 4.7 mm by our registration. Experimental results show that our registration method aligns the lung nodules much faster than conventional methods using a distance measure. The accurate and fast results of our method would be useful for the radiologist's evaluation of pulmonary nodules on chest CT scans.
Goldstein, R.M.; Meador, M.R.
2005-01-01
We used species traits to examine the variation in fish assemblages for 21 streams in the Northern Lakes and Forests Ecoregion along a gradient of habitat disturbance. Fish species were classified based on five species trait-classes (trophic ecology, substrate preference, geomorphic preference, locomotion morphology, and reproductive strategy) and 29 categories within those classes. We used a habitat quality index to define a reference stream and then calculated Euclidean distances between the reference and each of the other sites for the five traits. Three levels of species trait analyses were conducted: (1) a composite measure (the sum of Euclidean distances across all five species traits), (2) Euclidean distances for the five individual species trait-classes, and (3) frequencies of occurrence of individual trait categories. The composite Euclidean distance was significantly correlated to the habitat index (r = -0.81; P = 0.001), as were the Euclidean distances for four of the five individual species traits (substrate preference: r = -0.70, P = 0.001; geomorphic preference: r = -0.69, P = 0.001; trophic ecology: r = -0.73, P = 0.001; and reproductive strategy: r = -0.64, P = 0.002). Although Euclidean distances for locomotion morphology were not significantly correlated to habitat index scores (r = -0.21; P = 0.368), analysis of variance and principal components analysis indicated that Euclidean distances for locomotion morphology contributed to significant variation in the fish assemblages among sites. Examination of trait categories indicated that low habitat index scores (degraded streams) were associated with changes in frequency of occurrence within the categories of all five of the species traits. Though the objectives and spatial scale of a study will dictate the level of species trait information required, our results suggest that species traits can provide critical information at multiple levels of data analysis. © Copyright by the American Fisheries Society 2005.
Zourmand, Alireza; Ting, Hua-Nong; Mirhassani, Seyed Mostafa
2013-03-01
Speech is one of the prevalent communication mediums for humans. Identifying the gender of a child speaker based on his/her speech is crucial in telecommunication and speech therapy. This article investigates the use of fundamental and formant frequencies from sustained vowel phonation to distinguish the gender of Malay children aged between 7 and 12 years. The Euclidean minimum distance and multilayer perceptron were used to classify the gender of 360 Malay children based on different combinations of fundamental and formant frequencies (F0, F1, F2, and F3). The Euclidean minimum distance with normalized frequency data achieved a classification accuracy of 79.44%, which was higher than that of the nonnormalized frequency data. Age-dependent modeling was used to improve the accuracy of gender classification. The Euclidean distance method obtained 84.17% based on the optimal classification accuracy for all age groups. The accuracy was further increased to 99.81% using multilayer perceptron based on mel-frequency cepstral coefficients. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
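A Euclidean minimum-distance classifier of the kind used above simply assigns each sample to the class whose mean vector is nearest. A sketch with hypothetical normalized (F0, F1, F2, F3) feature vectors (the class means, spreads, and sample counts below are invented for illustration, not the study's data):

```python
import numpy as np

def fit_class_means(X, y):
    # one mean feature vector per class
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_min_distance(X, classes, means):
    # assign each sample to the class whose mean is nearest in Euclidean distance
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Hypothetical normalized (F0, F1, F2, F3) vectors for boys (0) and girls (1).
rng = np.random.default_rng(5)
boys = rng.normal([0.40, 0.45, 0.50, 0.55], 0.02, (20, 4))
girls = rng.normal([0.46, 0.50, 0.56, 0.60], 0.02, (20, 4))
X = np.vstack([boys, girls])
y = np.array([0] * 20 + [1] * 20)

classes, means = fit_class_means(X, y)
pred = predict_min_distance(X, classes, means)
accuracy = (pred == y).mean()
```

Training separate models per age group, as the paper does, amounts to fitting these class means within each age band.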
Approximate geodesic distances reveal biologically relevant structures in microarray data.
Nilsson, Jens; Fioretos, Thoas; Höglund, Mattias; Fontes, Magnus
2004-04-12
Genome-wide gene expression measurements, as currently determined by the microarray technology, can be represented mathematically as points in a high-dimensional gene expression space. Genes interact with each other in regulatory networks, restricting the cellular gene expression profiles to a certain manifold, or surface, in gene expression space. To obtain knowledge about this manifold, various dimensionality reduction methods and distance metrics are used. For data points distributed on curved manifolds, a sensible distance measure would be the geodesic distance along the manifold. In this work, we examine whether an approximate geodesic distance measure captures biological similarities better than the traditionally used Euclidean distance. We computed approximate geodesic distances, determined by the Isomap algorithm, for one set of lymphoma and one set of lung cancer microarray samples. Compared with the ordinary Euclidean distance metric, this distance measure produced more instructive, biologically relevant, visualizations when applying multidimensional scaling. This suggests the Isomap algorithm as a promising tool for the interpretation of microarray data. Furthermore, the results demonstrate the benefit and importance of taking nonlinearities in gene expression data into account.
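The core of the Isomap approach referenced above is to approximate geodesic distances by shortest paths through a k-nearest-neighbour graph. A minimal sketch on a synthetic curved manifold (points on a semicircular arc, standing in for expression profiles on a curved surface; the choice of k is an assumption):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import shortest_path

# Points along a semicircular arc: a curved 1-D manifold embedded in the plane.
theta = np.linspace(0, np.pi, 50)
pts = np.column_stack([np.cos(theta), np.sin(theta)])

D = squareform(pdist(pts))

# Build a k-nearest-neighbour graph (zero entries mean "no edge" for csgraph),
# then approximate geodesics by graph shortest paths, as Isomap does.
k = 3
G = np.zeros_like(D)
for i in range(len(pts)):
    nn = np.argsort(D[i])[1:k + 1]
    G[i, nn] = D[i, nn]
geo = shortest_path(G, directed=False)

straight = D[0, -1]     # straight-line Euclidean distance between endpoints (= 2)
along_arc = geo[0, -1]  # approximate geodesic along the manifold (close to pi)
```

The geodesic estimate tracks the arc length rather than the chord, which is why it can produce more faithful low-dimensional layouts for data on curved manifolds.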
Emotion-independent face recognition
NASA Astrophysics Data System (ADS)
De Silva, Liyanage C.; Esther, Kho G. P.
2000-12-01
Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, back propagation neural network and generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image representing the peak expression for each emotion of each person apart from the neutral expression. The feature vectors used for comparison in the Euclidean distance method and for training the neural network must be all the feature vectors of the training set. These results are obtained for a face database consisting of only four persons.
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2006-01-01
A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
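The three candidate test statistics above have simple discrete forms for normalized histograms over shared bins. The sketch below uses one plausible formulation of each (the Jeffries-Matusita distance is written via the Bhattacharyya coefficient, and the Kuiper statistic as the sum of the maximal positive and negative cumulative differences; the example histograms are hypothetical):

```python
import numpy as np

def euclidean_dist(p, q):
    return float(np.sqrt(np.sum((p - q) ** 2)))

def jeffries_matusita(p, q):
    # JM^2 = 2 (1 - Bhattacharyya coefficient); clip guards against
    # tiny negative values from floating-point rounding.
    return float(np.sqrt(np.clip(2.0 * (1.0 - np.sum(np.sqrt(p * q))), 0.0, None)))

def kuiper(p, q):
    # D+ + D- between the two cumulative histograms.
    diff = np.cumsum(p) - np.cumsum(q)
    return float(diff.max() - diff.min())

# Two normalized summary histograms over the same bins (hypothetical values).
p = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
q = np.array([0.2, 0.3, 0.3, 0.1, 0.1])
```

In the bootstrap procedure, one of these statistics is recomputed on resampled summary histograms to build the null distribution for the significance level.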
Evaluation of Image Segmentation and Object Recognition Algorithms for Image Parsing
2013-09-01
After generation of the features from the key points, the final step is matching. OpenCV uses Euclidean distance to match the key points, with the option to use Manhattan distance; the feature vector includes polarity and intensity information. OpenCV also offers the function radiusMatch, which accepts a pair only if its distance is less than a given maximum distance.
Feature Extraction of High-Dimensional Structures for Exploratory Analytics
2013-04-01
Comparison of Euclidean vs. geodesic distance: linear dimensionality reductions (LDRs), such as classical metric multidimensional scaling, use a metric based on the Euclidean distance between two points, while nonlinear dimensionality reductions (NLDRs) are based on geodesic distance. An NLDR successfully unrolls a curved manifold, whereas an LDR fails. An LDR is based on a linear combination of the original dimensions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyun Jung; McDonnell, Kevin T.; Zelenyuk, Alla
2014-03-01
Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our MDS plots also exhibit similar visual relationships as the method of parallel coordinates which is often used alongside to visualize the high-dimensional data in raw form. We then cast our metric into a bi-scale framework which distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
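The MDS embedding step that the record builds on can be sketched with classical (Torgerson) MDS, which recovers point coordinates from a distance matrix by double-centering; this is the standard algorithm, not the authors' bi-scale structural metric:

```python
import numpy as np

def classical_mds(D, ndim=2):
    """Classical (Torgerson) MDS: embed points given a pairwise distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:ndim]      # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Synthetic points in 5-D; embedding back into 5-D recovers distances exactly.
rng = np.random.default_rng(6)
X = rng.normal(size=(12, 5))
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
Y = classical_mds(D, ndim=5)
D_recovered = np.linalg.norm(Y[:, None] - Y[None, :], axis=2)
```

Projecting to `ndim=2` gives the familiar MDS plot; the layout distortions that motivate the paper arise precisely when the discarded eigenvalues carry inter-cluster structure.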
Using BMDP and SPSS for a Q factor analysis.
Tanner, B A; Koning, S M
1980-12-01
While Euclidean distances and Q factor analysis may sometimes be preferred to correlation coefficients and cluster analysis for developing a typology, commercially available software does not always facilitate their use. Commands are provided for using BMDP and SPSS in a Q factor analysis with Euclidean distances.
Sexual dimorphism in the human face assessed by euclidean distance matrix analysis.
Ferrario, V F; Sforza, C; Pizzini, G; Vogel, G; Miani, A
1993-01-01
The form of any object can be viewed as a combination of size and shape. A recently proposed method (euclidean distance matrix analysis) can differentiate between size and shape differences. It has been applied to analyse the sexual dimorphism in facial form in a sample of 108 healthy young adults (57 men, 51 women). The face was wider and longer in men than in women. A global shape difference was demonstrated, the male face being more rectangular and the female face more square. Gender variations involved especially the lower third of the face and, in particular, the position of the pogonion relative to the other structures. PMID:8300436
A Latent Class Approach to Fitting the Weighted Euclidean Model, CLASCAL.
ERIC Educational Resources Information Center
Winsberg, Suzanne; De Soete, Geert
1993-01-01
A weighted Euclidean distance model is proposed that incorporates a latent class approach (CLASCAL). The contribution to the distance function between two stimuli is per dimension weighted identically by all subjects in the same latent class. A model selection strategy is proposed and illustrated. (SLD)
ERIC Educational Resources Information Center
Bhattacharyya, Pratip; Chakrabarti, Bikas K.
2008-01-01
We study different ways of determining the mean distance r_n between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…
Video-based face recognition via convolutional neural networks
NASA Astrophysics Data System (ADS)
Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming
2017-06-01
Face recognition has been widely studied recently, while video-based face recognition remains a challenging task because of the low quality and large intra-class variation of face images captured from video. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; and 2) Video-to-Still (V2S) face recognition, the reverse of the S2V scenario. A novel method is proposed in this paper to transfer still and video face images to a Euclidean space using a carefully designed convolutional neural network; Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images that group as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed by the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.
Multivariate pattern analysis for MEG: A comparison of dissimilarity measures.
Guggenmos, Matthias; Sterzer, Philipp; Cichy, Radoslaw Martin
2018-06-01
Multivariate pattern analysis (MVPA) methods such as decoding and representational similarity analysis (RSA) are growing rapidly in popularity for the analysis of magnetoencephalography (MEG) data. However, little is known about the relative performance and characteristics of the specific dissimilarity measures used to describe differences between evoked activation patterns. Here we used a multisession MEG data set to qualitatively characterize a range of dissimilarity measures and to quantitatively compare them with respect to decoding accuracy (for decoding) and between-session reliability of representational dissimilarity matrices (for RSA). We tested dissimilarity measures from a range of classifiers (Linear Discriminant Analysis - LDA, Support Vector Machine - SVM, Weighted Robust Distance - WeiRD, Gaussian Naïve Bayes - GNB) and distances (Euclidean distance, Pearson correlation). In addition, we evaluated three key processing choices: 1) preprocessing (noise normalisation, removal of the pattern mean), 2) weighting decoding accuracies by decision values, and 3) computing distances in three different partitioning schemes (non-cross-validated, cross-validated, within-class-corrected). Four main conclusions emerged from our results. First, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures. Second, LDA, SVM and WeiRD yielded high peak decoding accuracies and nearly identical time courses. Third, while using decoding accuracies for RSA was markedly less reliable than continuous distances, this disadvantage was ameliorated by decision-value-weighting of decoding accuracies. Fourth, the cross-validated Euclidean distance provided unbiased distance estimates and highly replicable representational dissimilarity matrices. 
Overall, we strongly advise the use of multivariate noise normalisation as a general preprocessing step, recommend LDA, SVM and WeiRD as classifiers for decoding and highlight the cross-validated Euclidean distance as a reliable and unbiased default choice for RSA. Copyright © 2018 Elsevier Inc. All rights reserved.
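The cross-validated Euclidean distance highlighted above can be sketched in a few lines. This is a hedged illustration, not the authors' code: when pattern differences are estimated from two independent data partitions, the noise terms are uncorrelated, so their cross-product is an unbiased estimate of the true squared distance (and, unlike the plain squared distance, it can legitimately go negative).

```python
def squared_euclidean(a, b):
    # ordinary squared Euclidean distance; systematically inflated by noise
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cv_squared_euclidean(a1, b1, a2, b2):
    """Cross-validated squared Euclidean distance between conditions A and B.

    a1, b1 come from one data partition; a2, b2 from an independent one.
    Because noise is independent across partitions, this cross-product is an
    unbiased estimate of the true squared distance and may go negative.
    """
    return sum((x1 - y1) * (x2 - y2)
               for x1, y1, x2, y2 in zip(a1, b1, a2, b2))

# Toy data: both conditions share the same true pattern; only noise differs.
a1, b1 = [1.2, 2.0], [0.8, 2.1]
a2, b2 = [0.9, 2.1], [1.1, 1.9]
d_plain = squared_euclidean(a1, b1)            # inflated by noise (> 0)
d_cv = cv_squared_euclidean(a1, b1, a2, b2)    # hovers around the true value 0
```

On repeated noisy draws, d_plain stays positive even for identical true patterns, while d_cv averages out to zero, which is the unbiasedness property the abstract refers to.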
Bullock, Joshua Matthew Allen; Schwab, Jannik; Thalassinos, Konstantinos; Topf, Maya
2016-01-01
Crosslinking mass spectrometry (XL-MS) is becoming an increasingly popular technique for modeling protein monomers and complexes. The distance restraints garnered from these experiments can be used alone or as part of an integrative modeling approach, incorporating data from many sources. However, modeling practices are varied and the difference in their usefulness is not clear. Here, we develop a new scoring procedure for models based on crosslink data—Matched and Nonaccessible Crosslink score (MNXL). We compare its performance with that of other commonly-used scoring functions (Number of Violations and Sum of Violation Distances) on a benchmark of 14 protein domains, each with 300 corresponding models (at various levels of quality) and associated, previously published, experimental crosslinks (XLdb). The distances between crosslinked lysines are calculated either as Euclidean distances or Solvent Accessible Surface Distances (SASD) using a newly-developed method (Jwalk). MNXL takes into account whether a crosslink is nonaccessible, i.e. an experimentally observed crosslink has no corresponding SASD in a model due to buried lysines. This metric alone is shown to have a significant impact on modeling performance and is a concept that is not considered at present if only Euclidean distances are used. Additionally, a comparison between modeling with SASD or Euclidean distance shows that SASD is superior, even when factoring out the effect of the nonaccessible crosslinks. Our benchmarking also shows that MNXL outperforms the other tested scoring functions in terms of precision and correlation to Cα-RMSD from the crystal structure. We finally test the MNXL at different levels of crosslink recovery (i.e. the percentage of crosslinks experimentally observed out of all theoretical ones) and set a target recovery of ∼20% after which the performance plateaus. PMID:27150526
Mathematical Formulation of Multivariate Euclidean Models for Discrimination Methods.
ERIC Educational Resources Information Center
Mullen, Kenneth; Ennis, Daniel M.
1987-01-01
Multivariate models for the triangular and duo-trio methods are described, and theoretical methods are compared to a Monte Carlo simulation. Implications are discussed for a new theory of multidimensional scaling which challenges the traditional assumption that proximity measures and perceptual distances are monotonically related. (Author/GDC)
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and nonlinearity between the property and spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method is first used to obtain the net analyte signal of the calibration samples and the unknown samples; the Euclidean distance between the net analyte signal of an unknown sample and those of the calibration samples is then calculated and used as a similarity index. According to this similarity index, a local calibration set is selected individually for each unknown sample. Finally, a local PLS regression model is built on each local calibration set for each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and a conventional local regression algorithm based on spectral Euclidean distance.
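The selection step described above can be sketched as follows. This is a loose illustration under stated assumptions: the NAS computation itself is omitted (the vectors below stand in for precomputed net analyte signals), and an ordinary univariate least-squares fit replaces the local PLS model.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def local_calibration_set(unknown, calib_signals, k):
    # indices of the k calibration samples nearest in Euclidean distance
    order = sorted(range(len(calib_signals)),
                   key=lambda i: euclidean(unknown, calib_signals[i]))
    return order[:k]

def local_predict(unknown, calib_signals, calib_y, k=3):
    # stand-in for local PLS: least-squares line on the mean signal of the
    # k nearest calibration samples
    idx = local_calibration_set(unknown, calib_signals, k)
    xs = [sum(calib_signals[i]) / len(calib_signals[i]) for i in idx]
    ys = [calib_y[i] for i in idx]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    b = my - a * mx
    x0 = sum(unknown) / len(unknown)
    return a * x0 + b
```

The point of the design is that distant, dissimilar calibration samples never enter the local model, which is how the method sidesteps global nonlinearity.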
Using P-Stat, BMDP and SPSS for a cross-products factor analysis.
Tanner, B A; Leiman, J M
1983-06-01
The major disadvantage of the Q factor analysis with Euclidean distances described by Tanner and Koning [Comput. Progr. Biomed. 12 (1980) 201-202] is the considerable editing required. An alternative procedure with commercially distributed software, and with cross-products in place of Euclidean distances is described. This procedure does not require any editing.
On the Partitioning of Squared Euclidean Distance and Its Applications in Cluster Analysis.
ERIC Educational Resources Information Center
Carter, Randy L.; And Others
1989-01-01
The partitioning of the squared Euclidean (E^2) distance between two vectors in M-dimensional space into the sum of squared lengths of vectors in mutually orthogonal subspaces is discussed. Applications to specific cluster analysis problems are provided (i.e., to design Monte Carlo studies for performance comparisons of several clustering methods…
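The identity behind this partitioning is easy to verify numerically. A minimal sketch (my example, not from the article): the squared Euclidean distance decomposes exactly over mutually orthogonal subspaces, shown here both for a coordinate split and for the classic "elevation versus shape" split used in cluster analysis.

```python
# Numerical check of the partitioning of squared Euclidean distance into
# components lying in mutually orthogonal subspaces.

def sq_dist(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

x = [1.0, 4.0, 2.0, 7.0]
y = [3.0, 1.0, 5.0, 2.0]
total = sq_dist(x, y)

# Partition 1: disjoint coordinate subspaces (axes 1-2 vs. axes 3-4).
part_a = sq_dist(x[:2], y[:2])
part_b = sq_dist(x[2:], y[2:])

# Partition 2: component along the all-ones vector ("elevation") plus the
# orthogonal residual ("shape"), a split often exploited in cluster analysis.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
elevation = n * (mx - my) ** 2
shape = sum(((a - mx) - (b - my)) ** 2 for a, b in zip(x, y))
```

Both partitions sum back to the full squared distance, which is what lets Monte Carlo designs manipulate each orthogonal component independently.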
Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas
NASA Astrophysics Data System (ADS)
Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.
2017-12-01
Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of the SECoS can be built based on the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using the normalized Hamming distance, is also calculated using the normalized Manhattan distance and the normalized Euclidean distance in order to compare the smallest error value and best learning rate obtained. The accuracy of the measurements resulting from the three distance formulas is calculated using the mean absolute percentage error. In the training phase, with parameters such as the sensitivity threshold, error threshold, first learning rate, and second learning rate, it was found that the normalized Euclidean distance is more accurate than both the normalized Hamming distance and the normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error value is obtained with the normalized Manhattan distance compared to the normalized Euclidean distance and the normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
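The three distance formulas can be sketched side by side. Note that conventions for these normalized distances vary across the ECoS literature; the forms below are one common choice and are my assumptions, not taken from the paper.

```python
import math

# One common set of conventions (assumed here; definitions vary):
# - normalized Hamming: fraction of components that differ,
# - normalized Manhattan: mean absolute difference (inputs scaled to [0, 1]),
# - normalized Euclidean: root mean squared difference.

def norm_hamming(x, y):
    return sum(1 for a, b in zip(x, y) if a != b) / len(x)

def norm_manhattan(x, y):
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def norm_euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))
```

All three map onto [0, 1] for inputs in the unit hypercube, which is what makes them interchangeable as SECoS activation values.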
Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT
Nguyen, Thu L. N.; Shin, Yoan
2016-01-01
Localization in wireless sensor networks (WSNs) is one of the primary functions of the intelligent Internet of Things (IoT) that offers automatically discoverable services, while the localization accuracy is a key issue to evaluate the quality of those services. In this paper, we develop a framework to solve the Euclidean distance matrix completion problem, which is an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-rank dimensional Euclidean distance completion problem with known nodes. The task is to find the sensor locations through recovery of missing entries of a squared distance matrix when the dimension of the data is small compared to the number of data points. We solve a relaxation optimization problem using a modification of Newton’s method, where the cost function depends on the squared distance matrix. The solution obtained in our scheme achieves a lower complexity and can perform better if used as an initial guess for an iterative local search by another, higher-precision localization scheme. Simulation results show the effectiveness of our approach. PMID:27213378
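A hedged aside on the linear-algebra backbone (this is classical MDS, not the paper's Newton-type solver): once a squared Euclidean distance matrix is complete, double-centering it recovers the Gram matrix of the centered points, from which coordinates follow by eigendecomposition.

```python
# Double-centering check: for points x_i with centroid c, the matrix
# B = -1/2 * J D J (J = I - (1/n) 1 1^T) satisfies B_ij = <x_i - c, x_j - c>,
# where D holds squared Euclidean distances. This is the identity classical
# MDS builds on; a completed D can then be embedded via B's eigenvectors.

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
n = len(points)
D = [[sum((a - b) ** 2 for a, b in zip(p, q)) for q in points] for p in points]

row = [sum(D[i]) / n for i in range(n)]   # row means (= column means: D symmetric)
grand = sum(row) / n                       # grand mean of all entries
B = [[-0.5 * (D[i][j] - row[i] - row[j] + grand) for j in range(n)]
     for i in range(n)]

# Direct computation of the centered Gram matrix for comparison.
c = [sum(p[k] for p in points) / n for k in range(2)]
gram = [[sum((p[k] - c[k]) * (q[k] - c[k]) for k in range(2)) for q in points]
        for p in points]
```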
Oppugning the assumptions of spatial averaging of segment and joint orientations.
Pierrynowski, Michael Raymond; Ball, Kevin Arthur
2009-02-09
Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether use of the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data with low dispersion, an isotropic distribution, and second and third angle parameters of less than 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
Li, Longxiang; Gong, Jianhua; Zhou, Jieping
2014-01-01
Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health. PMID:24798197
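The substitution at the heart of this method, replacing the Euclidean distance in IDW with a precomputed path distance, can be sketched generically. This is a hedged illustration; the cost-surface and Gaussian-dispersion steps are omitted.

```python
import math

def idw(target, stations, values, dist=math.dist, p=2):
    """Inverse Distance Weighting with a pluggable distance function."""
    num = den = 0.0
    for s, v in zip(stations, values):
        d = dist(target, s)
        if d == 0.0:
            return v                # exact hit on a station
        w = d ** -p
        num += w * v
        den += w
    return num / den
```

Passing `math.dist` gives ordinary Euclidean IDW; passing a lookup into precomputed shortest wind-field path distances reproduces the substitution the authors describe without touching the interpolator itself.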
Multidimensional Risk Analysis: MRISK
NASA Technical Reports Server (NTRS)
McCollum, Raymond; Brown, Douglas; O'Shea, Sarah Beth; Reith, William; Rabulan, Jennifer; Melrose, Graeme
2015-01-01
Multidimensional Risk (MRISK) calculates the combined multidimensional score using the Mahalanobis distance. MRISK accounts for covariance between consequence dimensions, which de-conflicts the interdependencies of consequence dimensions, providing a clearer depiction of risks. Additionally, in the event the dimensions are not correlated, the Mahalanobis distance reduces to the Euclidean distance normalized by the variance and, therefore, represents the most flexible and optimal method to combine dimensions. MRISK is currently being used in NASA's Environmentally Responsible Aviation (ERA) project to assess risk and prioritize scarce resources.
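The reduction noted above is easy to check numerically. A minimal sketch (my illustration, not MRISK code): for a diagonal covariance matrix, the Mahalanobis distance coincides with the Euclidean distance computed after dividing each coordinate by its standard deviation.

```python
import math

def mahalanobis_diag(x, y, variances):
    # Mahalanobis distance when the covariance matrix is diagonal
    return math.sqrt(sum((a - b) ** 2 / v
                         for a, b, v in zip(x, y, variances)))

def normalized_euclidean(x, y, variances):
    # Euclidean distance on coordinates scaled by their standard deviations
    xs = [a / math.sqrt(v) for a, v in zip(x, variances)]
    ys = [b / math.sqrt(v) for b, v in zip(y, variances)]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xs, ys)))
```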
Lazy orbits: An optimization problem on the sphere
NASA Astrophysics Data System (ADS)
Vincze, Csaba
2018-01-01
Non-transitive subgroups of the orthogonal group play an important role in non-Euclidean geometry. If G is a closed subgroup of the orthogonal group such that the orbit of a single Euclidean unit vector does not cover the (Euclidean) unit sphere centered at the origin, then there always exists a non-Euclidean Minkowski functional such that the elements of G preserve the Minkowskian length of vectors. In other words, the Minkowski geometry is an alternative to the Euclidean geometry for the subgroup G. It is rich in isometries if G is "close enough" to the orthogonal group or at least to one of its transitive subgroups. The measure of non-transitivity is related to the Hausdorff distances of the orbits under the elements of G to the Euclidean sphere. Its maximum/minimum belongs to the so-called lazy/busy orbits, i.e. they are the solutions of an optimization problem on the Euclidean sphere. The extremal distances allow us to characterize the reducible/irreducible subgroups. We also formulate an upper and a lower bound for the ratio of the extremal distances. As another application of the analytic tools we introduce the rank of a closed non-transitive group G. We shall see that if G is of maximal rank then it is finite or reducible. Since the reducible and the finite subgroups form two natural prototypes of non-transitive subgroups, the rank seems to be a fundamental notion in their characterization. Closed, non-transitive groups of rank n - 1 will also be characterized. Using the general results we classify all their possible types in the lower-dimensional cases n = 2, 3 and 4. Finally we present some applications of the results to the holonomy group of a metric linear connection on a connected Riemannian manifold.
An Isometric Mapping Based Co-Location Decision Tree Algorithm
NASA Astrophysics Data System (ADS)
Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.
2018-05-01
Decision tree (DT) induction has been widely used in different pattern classification tasks. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location patterns and decision trees to solve the above-mentioned traditional decision tree problems. Cl-DT overcomes the shortcoming of existing DT algorithms, which create a node for each value of a given attribute, and achieves a higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. To overcome this shortcoming, this paper proposes an isometric mapping method based on Cl-DT (Isomap-based Cl-DT), which combines isometric mapping and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction method of exposed carbonate rocks is of high accuracy; (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared to Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.
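The distinction between Euclidean and geodesic distance that motivates Isomap can be shown on a toy "bent" point chain. This is a generic sketch (Dijkstra over a neighbourhood graph), not the paper's implementation.

```python
import heapq
import math

def dijkstra(adj, src):
    # shortest graph-path distances from src; adj maps node -> [(nbr, weight)]
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Points along a C-shaped (non-linear) 1-D manifold embedded in the plane.
points = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
adj = {i: [] for i in range(len(points))}
for i in range(len(points) - 1):        # neighbourhood graph: chain edges only
    w = math.dist(points[i], points[i + 1])
    adj[i].append((i + 1, w))
    adj[i + 1].append((i, w))

geodesic = dijkstra(adj, 0)[len(points) - 1]   # distance along the manifold
euclid = math.dist(points[0], points[-1])      # straight-line shortcut
```

The Euclidean distance cuts straight across the bend, while the geodesic follows the curve, which is exactly why the two can rank instance pairs differently.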
The distance function effect on k-nearest neighbor classification for medical datasets.
Hu, Li-Yu; Huang, Min-Wei; Ke, Shih-Wen; Tsai, Chih-Fong
2016-01-01
K-nearest neighbor (k-NN) classification is a conventional non-parametric classifier that has been used as the baseline classifier in many pattern classification problems. It is based on measuring the distances between the test data and each of the training data to decide the final classification output. Although the Euclidean distance function is the most widely used distance metric in k-NN, few studies have examined the classification performance of k-NN with different distance functions, especially for various medical domain problems. Therefore, the aim of this paper is to investigate whether the distance function can affect k-NN performance over different medical datasets. Our experiments are based on three different types of medical datasets containing categorical, numerical, and mixed types of data, and four different distance functions, including Euclidean, cosine, Chi square, and Minkowsky, are used during k-NN classification individually. The experimental results show that using the Chi square distance function is the best choice for the three different types of datasets. However, using the cosine and Euclidean (and Minkowsky) distance functions performs the worst over the mixed type of datasets. In this paper, we demonstrate that the chosen distance function can affect the classification accuracy of the k-NN classifier. For the medical domain datasets including the categorical, numerical, and mixed types of data, k-NN based on the Chi square distance function performs the best.
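A minimal k-NN with a pluggable distance function makes the comparison concrete. This is a generic sketch, not the paper's code; the Chi-square form below is one common convention (an assumption, since variants exist), and the toy data are illustrative only.

```python
import math
from collections import Counter

def euclidean(a, b):
    return math.dist(a, b)

def minkowski(a, b, p=3):
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

def cosine(a, b):
    # cosine *distance*: 1 - cosine similarity (undefined for zero vectors)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def chi_square(a, b):
    # one common convention for non-negative features; definitions vary
    return sum((x - y) ** 2 / (x + y) for x, y in zip(a, b) if x + y > 0)

def knn_predict(train_X, train_y, query, k=3, dist=euclidean):
    neighbors = sorted(zip(train_X, train_y), key=lambda t: dist(t[0], query))
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]
```

Swapping `dist` changes which training points count as "nearest", which is the entire mechanism by which the distance function alters k-NN accuracy.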
New descriptor for skeletons of planar shapes: the calypter
NASA Astrophysics Data System (ADS)
Pirard, Eric; Nivart, Jean-Francois
1994-05-01
The mathematical definition of the skeleton as the locus of centers of maximal inscribed discs is a nondigitizable one. The idea presented in this paper is to incorporate the skeleton information and the chain-code of the contour into a single descriptor by associating to each point of a contour the center and radius of the maximal inscribed disc tangent at that point. This new descriptor is called the calypter. The encoding of a calypter is a three-stage algorithm: (1) chain coding of the contour; (2) Euclidean distance transformation; (3) climbing on the distance relief from each point of the contour towards the corresponding maximal inscribed disc center. Here we introduce an integer Euclidean distance transform called the holodisc distance transform. The major interest of this holodisc transform is to confer 8-connectivity to the isolevels of the generated distance relief, thereby allowing a climbing algorithm to proceed step by step towards the centers of the maximal inscribed discs. The calypter has a cyclic structure delivering high-speed access to the skeleton data. Its potential uses are in high-speed Euclidean mathematical morphology, shape processing, and analysis.
NASA Astrophysics Data System (ADS)
Sneath, P. H. A.
A BASIC program is presented for significance tests to determine whether a dendrogram is derived from clustering of points that belong to a single multivariate normal distribution. The significance tests are based on statistics of the Kolmogorov-Smirnov type, obtained by comparing the observed cumulative graph of branch levels with a graph for the hypothesis of multivariate normality. The program also permits testing whether the dendrogram could be from a cluster of lower dimensionality due to character correlations. The program makes provision for three similarity coefficients, (1) Euclidean distances, (2) squared Euclidean distances, and (3) Simple Matching Coefficients, and for five cluster methods (1) WPGMA, (2) UPGMA, (3) Single Linkage (or Minimum Spanning Trees), (4) Complete Linkage, and (5) Ward's Increase in Sums of Squares. The program is entitled DENBRAN.
An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion
Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng
2015-01-01
The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, nowadays the accuracy of WiFi indoor positioning cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: the offline acquisition and the online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition, and it forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints to shorten the positioning time. After that, it uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and further calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation and the improved joint probability introduces the logarithmic calculation to reduce the difference between probability values. Comparing the proposed algorithm, the Euclidean distance based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy. PMID:26334278
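Since the abstract does not give the exact formulas, here is a hedged sketch of the two ingredients: a Euclidean distance in which each RSSI difference is scaled by that access point's signal standard deviation (my assumption for what "improved" means), and a weighted fusion of two intermediate position estimates.

```python
import math

def improved_euclidean(rss, fp_mean, fp_std):
    """Distance from an observed RSSI vector to a fingerprint.

    Each per-AP difference is divided by the AP's signal standard deviation
    (assumed form), which damps the influence of highly fluctuating APs.
    """
    return math.sqrt(sum(((r - m) / s) ** 2
                         for r, m, s in zip(rss, fp_mean, fp_std)))

def weighted_fusion(pos_a, pos_b, w_a=0.5):
    # fuse two intermediate position estimates with complementary weights
    w_b = 1.0 - w_a
    return tuple(w_a * a + w_b * b for a, b in zip(pos_a, pos_b))
```

In the paper's pipeline, `pos_a` and `pos_b` would come from the distance-based and probability-based estimators respectively; here both the weighting and the inputs are placeholders.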
Targeting specific facial variation for different identification tasks.
Aeria, Gillian; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald
2010-09-10
A conceptual framework that allows faces to be studied and compared objectively with biological validity is presented. The framework is a logical extension of modern morphometrics and statistical shape analysis techniques. Three-dimensional (3D) facial scans were collected from 255 healthy young adults. One scan depicted a smiling facial expression and another scan depicted a neutral expression. These facial scans were modelled in a Principal Component Analysis (PCA) space, where Euclidean (ED) and Mahalanobis (MD) distances were used to form similarity measures. Within this PCA space, property pathways were calculated that expressed the direction of change in facial expression. Decomposition of distances into property-independent (D1) and property-dependent (D2) components along these pathways enabled the comparison of two faces in terms of the extent of a smiling expression. The performance of all distances was tested and compared in two types of experiments: classification tasks and a recognition task. In the classification tasks, individual facial scans were assigned to one or more population groups of smiling or neutral scans. The property-dependent (D2) component of both the Euclidean and Mahalanobis distances performed best in the classification task, correctly assigning 99.8% of scans to the right population group. The recognition task tested whether a scan of an individual depicting a smiling/neutral expression could be positively identified when shown a scan of the same person depicting a neutral/smiling expression. ED1 and MD1 performed best, correctly identifying 97.8% and 94.8% of individual scans respectively as belonging to the same person despite differences in facial expression. It was concluded that decomposed components are superior to straightforward distances in achieving positive identifications, and this presents a novel method for quantifying facial similarity.
Additionally, although the undecomposed Mahalanobis distance often used in practice outperformed the Euclidean distance, the opposite was true for the decomposed distances. Crown Copyright 2010. Published by Elsevier Ireland Ltd. All rights reserved.
Artificial immune system via Euclidean Distance Minimization for anomaly detection in bearings
NASA Astrophysics Data System (ADS)
Montechiesi, L.; Cocconcelli, M.; Rubini, R.
2016-08-01
In recent years new diagnostic methodologies have emerged, with particular interest in machinery operating in non-stationary conditions. Continuous speed changes and variable loads make spectrum analysis non-trivial: a variable speed means a variable characteristic fault frequency related to the damage, which is no longer recognizable in the spectrum. To overcome this problem the scientific community has proposed different approaches falling into two main categories: model-based approaches and expert systems. In this context the paper presents a simple expert system derived from the mechanisms of the immune system, called Euclidean Distance Minimization, and its application to a real case of bearing fault recognition. The proposed method is a simplification of the original process, adapted from the class of Artificial Immune Systems, which has proved useful and promising in different application fields. Comparative results are provided, with a complete explanation of the algorithm and its functioning.
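The abstract does not give the algorithm's details; at its core, assigning a sample to the class of its Euclidean-nearest labelled exemplar can be sketched as follows (a minimal illustration; the feature vectors and labels are hypothetical, not from the paper):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(sample, exemplars):
    """Label `sample` with the class of its Euclidean-nearest exemplar.

    `exemplars` is a list of (feature_vector, label) pairs, e.g. features
    extracted from vibration signals of healthy and faulty bearings.
    """
    nearest = min(exemplars, key=lambda e: euclidean(sample, e[0]))
    return nearest[1]
```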
Modified fuzzy c-means applied to a Bragg grating-based spectral imager for material clustering
NASA Astrophysics Data System (ADS)
Rodríguez, Aida; Nieves, Juan Luis; Valero, Eva; Garrote, Estíbaliz; Hernández-Andrés, Javier; Romero, Javier
2012-01-01
We have modified the fuzzy c-means algorithm for an application related to segmentation of hyperspectral images. The classical fuzzy c-means algorithm uses the Euclidean distance to compute sample membership to each cluster. We have introduced a different distance metric, the Spectral Similarity Value (SSV), in order to have a more suitable similarity measure for reflectance information. The SSV metric considers both magnitude difference (through the Euclidean distance) and spectral shape (through the Pearson correlation). Experiments confirmed that the introduction of this metric improves the quality of hyperspectral image segmentation, creating spectrally denser clusters and increasing the number of correctly classified pixels.
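The exact SSV formula is not given in the abstract; a plausible sketch that combines a magnitude term (Euclidean distance) with a shape term (Pearson correlation), as described, might look like this (the combination rule is an assumption, not the paper's formula):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pearson(a, b):
    # assumes non-constant spectra (nonzero standard deviations)
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def ssv(a, b):
    # hypothetical combination of a magnitude term (Euclidean) and a
    # shape term (1 - Pearson correlation)
    return math.sqrt(euclidean(a, b) ** 2 + (1.0 - pearson(a, b)) ** 2)
```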
ERIC Educational Resources Information Center
Henry, Gary T.; And Others
1992-01-01
A statistical technique is presented for developing performance standards based on benchmark groups. The benchmark groups are selected using a multivariate technique that relies on a squared Euclidean distance method. For each observation unit (a school district in the example), a unique comparison group is selected. (SLD)
Complex networks in the Euclidean space of communicability distances
NASA Astrophysics Data System (ADS)
Estrada, Ernesto
2012-06-01
We study the properties of complex networks embedded in a Euclidean space of communicability distances. The communicability distance between two nodes is defined as the difference between the weighted sum of walks self-returning to the nodes and the weighted sum of walks going from one node to the other. We give some indications that the communicability distance identifies the least crowded routes in networks where simultaneous submission of packages is taking place. We define an index Q based on communicability and shortest path distances, which allows reinterpreting the “small-world” phenomenon as the region of minimum Q in the Watts-Strogatz model. It also allows the classification and analysis of networks with different efficiencies of spatial use. Consequently, the communicability distance displays unique features for the analysis of complex networks in different scenarios.
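In terms of the communicability matrix G = e^A of the adjacency matrix A, the distance between nodes p and q is usually written ξ(p,q) = sqrt(Gpp + Gqq − 2Gpq). A small self-contained sketch, using a truncated Taylor series for the matrix exponential (adequate for small graphs):

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=25):
    """Matrix exponential e^A by truncated Taylor series (fine for small A)."""
    n = len(A)
    G = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in G]  # running A^k / k!
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, A)]
        G = [[G[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return G

def communicability_distance(A, p, q):
    """xi(p, q) = sqrt(G_pp + G_qq - 2 G_pq), with G = e^A."""
    G = expm(A)
    return math.sqrt(G[p][p] + G[q][q] - 2.0 * G[p][q])
```

On the path graph 0-1-2, the endpoint pair (0, 2) is farther apart in this metric than the adjacent pair (0, 1), as expected.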
Exploring New Geometric Worlds
ERIC Educational Resources Information Center
Nirode, Wayne
2015-01-01
When students work with a non-Euclidean distance formula, geometric objects such as circles and segment bisectors can look very different from their Euclidean counterparts. Students and even teachers can experience the thrill of creative discovery when investigating these differences among geometric worlds. In this article, the author describes a…
Euclidean, Spherical, and Hyperbolic Shadows
ERIC Educational Resources Information Center
Hoban, Ryan
2013-01-01
Many classical problems in elementary calculus use Euclidean geometry. This article takes such a problem and solves it in hyperbolic and in spherical geometry instead. The solution requires only the ability to compute distances and intersections of points in these geometries. The dramatically different results we obtain illustrate the effect…
Multi-instance multi-label distance metric learning for genome-wide protein function prediction.
Xu, Yonghui; Min, Huaqing; Song, Hengjie; Wu, Qingyao
2016-08-01
Multi-instance multi-label (MIML) learning has been proven to be effective for the genome-wide protein function prediction problems where each training example is associated with not only multiple instances but also multiple class labels. To find an appropriate MIML learning method for genome-wide protein function prediction, many studies in the literature attempted to optimize objective functions in which dissimilarity between instances is measured using the Euclidean distance. But in many real applications, Euclidean distance may be unable to capture the intrinsic similarity/dissimilarity in feature space and label space. Unlike other previous approaches, in this paper, we propose to learn a multi-instance multi-label distance metric learning framework (MIMLDML) for genome-wide protein function prediction. Specifically, we learn a Mahalanobis distance to preserve and utilize the intrinsic geometric information of both feature space and label space for MIML learning. In addition, we try to deal with the sparsely labeled data by giving weight to the labeled data. Extensive experiments on seven real-world organisms covering the biological three-domain system (i.e., archaea, bacteria, and eukaryote; Woese et al., 1990) show that the MIMLDML algorithm is superior to most state-of-the-art MIML learning algorithms. Copyright © 2016 Elsevier Ltd. All rights reserved.
Genetic divergence in the common bean (Phaseolus vulgaris L.) in the Cerrado-Pantanal ecotone.
da Silva, F A; Corrêa, A M; Teodoro, P E; Lopes, K V; Corrêa, C C G
2017-03-30
Evaluating genetic diversity among genotypes is important for providing parameters for the identification of superior genotypes, because the choice of parents that form segregating populations is crucial. Our objectives were to i) evaluate agronomic performance; ii) compare clustering methods; iii) ascertain the relative contributions of the variables evaluated; and iv) identify the most promising hybrids to produce superior segregating populations. The trial was conducted in 2015 at the State University of Mato Grosso do Sul, Brazil. We used a randomized block design with three replications, and recorded the days to emergence, days to flowering, days to maturity, plant height, number of branches, number of pods, number of seeds per pod, weight of 100 grains, and productivity. The genetic diversity of the genotypes was determined by cluster analysis with the Ward hierarchical method using two dissimilarity measures: the Euclidean distance and the standardized mean Mahalanobis distance. The genotypes 'CNFC 10762', 'IAC Alvorada', and 'BRS Style' had the highest grain yields, and clusters based on the Euclidean distance differed from those based on the Mahalanobis distance, the latter being more precise. The grain yield trait made the greatest relative contribution to the divergence. Hybrids with a high heterotic effect can be obtained by crossing 'IAC Alvorada' with 'CNFC 10762', 'IAC Alvorada' with 'CNFC 10764', and 'BRS Style' with 'IAC Alvorada'.
NASA Astrophysics Data System (ADS)
Celenk, Mehmet; Song, Yinglei; Ma, Limin; Zhou, Min
2003-05-01
A new algorithm that can be used to automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes the morphological watershed to extract boundaries of cells from their grey-level images. It generates a sequence of Euclidean distances by selecting pixels in a clockwise direction on the boundary of the cell and calculating the Euclidean distances of the selected pixels from the centroid of the cell. A feature vector associated with each cell is then obtained by applying the auto-regressive moving-average (ARMA) model to the generated sequence of Euclidean distances. The clustering measure J3 = trace(Sw⁻¹Sm), involving the within-class (Sw) and mixed-class (Sm) scattering matrices, is computed for both cell classes to provide an insight into the extent to which different cell classes in the training data are separated. Our test results suggest that the algorithm is highly accurate for the development of an interactive, computer-assisted diagnosis (CAD) tool.
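The boundary-to-centroid distance sequence described above can be sketched as follows (the watershed extraction and ARMA modelling steps are omitted; the boundary coordinates are illustrative):

```python
import math

def radial_signature(boundary):
    """Euclidean distances of boundary pixels (in traversal order) from the
    centroid of the boundary; the input is a list of (x, y) coordinates."""
    n = len(boundary)
    cx = sum(x for x, y in boundary) / n
    cy = sum(y for x, y in boundary) / n
    return [math.hypot(x - cx, y - cy) for x, y in boundary]
```

For a roughly circular cell the signature is nearly constant; irregular cell shapes produce an oscillating sequence, which is what the ARMA model then summarizes.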
Teaching Activity-Based Taxicab Geometry
ERIC Educational Resources Information Center
Ada, Tuba
2013-01-01
This study focused on the process of teaching taxicab geometry, a non-Euclidean geometry that is easy to understand and similar to Euclidean geometry in its axiomatic structure. In this regard, several teaching activities were designed, such as measuring taxicab distance, defining a taxicab circle, finding a geometric locus in taxicab geometry, and…
Jat, Prahlad; Serre, Marc L
2016-12-01
Widespread contamination of surface water by chloride is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles. Copyright © 2016. Published by Elsevier Ltd.
Texture classification using non-Euclidean Minkowski dilation
NASA Astrophysics Data System (ADS)
Florindo, Joao B.; Bruno, Odemir M.
2018-03-01
This study presents a new method to extract meaningful descriptors of gray-scale texture images using Minkowski morphological dilation based on the Lp metric. The proposed approach is motivated by the success previously achieved by Bouligand-Minkowski fractal descriptors on texture classification. In essence, such descriptors are directly derived from the morphological dilation of a three-dimensional representation of the gray-level pixels using the classical Euclidean metric. Here, we generalize the dilation to different values of p in the Lp metric (Euclidean being the particular case p = 2) and obtain the descriptors from the cumulative distribution of the distance transform computed over the texture image. The proposed method is compared to other state-of-the-art approaches (such as local binary patterns and textons) in the classification of two benchmark data sets (UIUC and Outex). The proposed descriptors outperformed all the other approaches in terms of the rate of images correctly classified. These results suggest the potential of the descriptors in this type of task, with a wide range of possible applications to real-world problems.
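As a minimal illustration of the Lp metric underlying the dilation, a brute-force Lp distance transform of a small binary image might look like this (the actual descriptors are computed on a three-dimensional gray-level representation, which is omitted here):

```python
def lp_distance_transform(image, p):
    """Brute-force distance transform under the Lp (Minkowski) metric:
    for each pixel, the Lp distance to the nearest foreground (1) pixel."""
    fg = [(i, j) for i, row in enumerate(image)
          for j, v in enumerate(row) if v]
    def dist(a, b):
        return (abs(a[0] - b[0]) ** p + abs(a[1] - b[1]) ** p) ** (1.0 / p)
    return [[min(dist((i, j), f) for f in fg) for j in range(len(image[0]))]
            for i in range(len(image))]
```

Varying p changes the shape of the dilation "ball" (a diamond for p = 1, a disc for p = 2), which is what produces the family of descriptors.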
USDA-ARS?s Scientific Manuscript database
Mitochondria are essential subcellular organelles found in eukaryotic cells. Knowing information on a protein’s subcellular or sub subcellular location provides in-depth insights about the microenvironment where it interacts with other molecules and is crucial for inferring the protein’s function. T...
Li, Tao; Hua, Zhendong; Meng, Xin; Liu, Cuimei
2018-03-01
Methamphetamine (MA) tablet production imparts characteristic chemical and physical properties to the tablets. This study developed a simple and effective physical characteristic profiling method for MA tablets with capital letter "WY" logos, enabling discrimination between linked and unlinked seizures. Seventeen signature distances extracted from the "WY" logo were explored as factors for multivariate analysis and demonstrated to be effective in representing the features of tablets from a drug intelligence perspective. A receiver operating characteristic (ROC) curve was used to evaluate the efficiency of different pretreatments and distance/correlation metrics, with the "Standardization + Euclidean" and "Logarithm + Euclidean" algorithms outperforming the rest. Finally, hierarchical cluster analysis (HCA) was applied to a data set of 200 MA tablet seizures randomly selected from cases all around China in 2015, and 76% of them were classified into a group named "WY-001." Moreover, the "WY-001" tablets accounted for 51-80% of tablet seizures from 2011 to 2015 in China, indicating the existence of a huge clandestine factory incessantly manufacturing MA tablets. © 2017 American Academy of Forensic Sciences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoecker, Christina; Moltz, Jan H.; Lassen, Bianca
Purpose: Computed tomography (CT) imaging is the modality of choice for lung cancer diagnostics. With the increasing number of lung interventions at the sublobar level in recent years, determining and visualizing pulmonary segments in CT images and, in oncological cases, obtaining reliable segment-related information about the location of tumors has become increasingly desirable. Computer-assisted identification of lung segments in CT images is the subject of this work. Methods: The authors present a new interactive approach for the segmentation of lung segments that uses the Euclidean distance of each point in the lung to the segmental branches of the pulmonary artery. The aim is to analyze the potential of the method. Detailed manual pulmonary artery segmentations are used to achieve the best possible segment approximation results. A detailed description of the method and its evaluation on 11 CT scans from clinical routine are given. Results: An accuracy of 2–3 mm is measured for the segment boundaries computed by the pulmonary artery-based method. On average, maximum deviations of 8 mm are observed. 135 intersegmental pulmonary veins detected in the 11 test CT scans serve as reference data. Furthermore, a comparison of the presented pulmonary artery-based approach to a similar approach that uses the Euclidean distance to the segmental branches of the bronchial tree is presented. It shows a significantly higher accuracy for the pulmonary artery-based approach in lung regions at least 30 mm distal to the lung hilum. Conclusions: A pulmonary artery-based determination of lung segments in CT images is promising. In the tests, the pulmonary artery-based determination has been shown to be superior to the bronchial tree-based determination. The suitability of the segment approximation method for application in the planning of segment resections in clinical practice has already been verified in experimental cases.
However, automation of the method accompanied by an evaluation on a larger number of test cases is required before application in the daily clinical routine.
Probability Distributions of Minkowski Distances between Discrete Random Variables.
ERIC Educational Resources Information Center
Schroger, Erich; And Others
1993-01-01
Minkowski distances are used to indicate the similarity of two vectors in an N-dimensional space. The paper shows how to compute the probability function, the expectation, and the variance for Minkowski distances between discrete random variables, including the special cases of city-block distance and Euclidean distance. Critical values for tests of significance are presented in tables. (SLD)
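The Minkowski family and its two special cases can be sketched in a few lines (an illustration of the distances themselves, not the paper's probability derivations):

```python
def minkowski(a, b, p):
    """Minkowski (Lp) distance between two vectors;
    p = 1 gives city-block (Manhattan) distance, p = 2 gives Euclidean."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)
```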
Ramme, Austin J; Voss, Kevin; Lesporis, Jurinus; Lendhey, Matin S; Coughlin, Thomas R; Strauss, Eric J; Kennedy, Oran D
2017-05-01
MicroCT imaging allows for noninvasive microstructural evaluation of mineralized bone tissue, and is essential in studies of small animal models of bone and joint diseases. Automatic segmentation and evaluation of articular surfaces is challenging. Here, we present a novel method to create knee joint surface models, for the evaluation of PTOA-related joint changes in the rat using an atlas-based diffeomorphic registration to automatically isolate bone from surrounding tissues. As validation, two independent raters manually segment datasets and the resulting segmentations were compared to our novel automatic segmentation process. Data were evaluated using label map volumes, overlap metrics, Euclidean distance mapping, and a time trial. Intraclass correlation coefficients were calculated to compare methods, and were greater than 0.90. Total overlap, union overlap, and mean overlap were calculated to compare the automatic and manual methods and ranged from 0.85 to 0.99. A Euclidean distance comparison was also performed and showed no measurable difference between manual and automatic segmentations. Furthermore, our new method was 18 times faster than manual segmentation. Overall, this study describes a reliable, accurate, and automatic segmentation method for mineralized knee structures from microCT images, and will allow for efficient assessment of bony changes in small animal models of PTOA.
Fluegge, Kyle; Malone, LaShaunda L; Nsereko, Mary; Okware, Brenda; Wejse, Christian; Kisingo, Hussein; Mupere, Ezekiel; Boom, W Henry; Stein, Catherine M
2018-06-26
Appraisal delay is the time a patient takes to consider a symptom as not only noticeable, but a sign of illness. The study's objective was to determine the association between appraisal delay in seeking tuberculosis (TB) treatment and geographic distance measured by network travel (driving and pedestrian) time (in minutes) and distance (Euclidean and self-reported) (in kilometers) and to identify other risk factors from selected covariates and how they modify the core association between delay and distance. This was part of a longitudinal cohort study known as the Kawempe Community Health Study based in Kampala, Uganda. The study enrolled households from April 2002 to July 2012. Multivariable interval regression with multiplicative heteroscedasticity was used to assess the impact of time and distance on delay. The delay interval outcome was defined using a comprehensive set of 28 possible self-reported symptoms. The main independent variables were network travel time (in minutes) and Euclidean distance (in kilometers). Other covariates were organized according to the Andersen utilization conceptual framework. A total of 838 patients with both distance and delay data were included in the network analysis. Bivariate analyses did not reveal a significant association of any distance metric with the delay outcome. However, adjusting for patient characteristics and cavitary disease status, the multivariable model indicated that each minute of driving time to the clinic significantly (p = 0.02) and positively predicted 0.25 days' delay. At the median distance value of 47 min, this represented an additional delay of about 12 (95% CI: [3, 21]) days to the mean of 40 days (95% CI: [25, 56]). Increasing Euclidean distance significantly predicted (p = 0.02) reduced variance in the delay outcome, thereby increasing precision of the mean delay estimate. At the median Euclidean distance of 2.8 km, the variance in the delay was reduced by more than 25%. 
Of the four geographic distance measures, network travel driving time was a better and more robust predictor of mean delay in this setting. Including network travel driving time with other risk factors may be important in identifying populations especially vulnerable to delay.
Interspecific utilisation of wax in comb building by honeybees
NASA Astrophysics Data System (ADS)
Hepburn, H. Randall; Radloff, Sarah E.; Duangphakdee, Orawan; Phaincharoen, Mananya
2009-06-01
Beeswaxes of honeybee species share some homologous neutral lipids; but species-specific differences remain. We analysed behavioural variation for wax choice in honeybees, calculated the Euclidean distances for different beeswaxes and assessed the relationship of Euclidean distances to wax choice. We tested the beeswaxes of Apis mellifera capensis, Apis florea, Apis cerana and Apis dorsata and the plant and mineral waxes Japan, candelilla, bayberry and ozokerite as sheets placed in colonies of A. m. capensis, A. florea and A. cerana. A. m. capensis accepted the four beeswaxes but removed Japan and bayberry wax and ignored candelilla and ozokerite. A. cerana colonies accepted the wax of A. cerana, A. florea and A. dorsata but rejected or ignored that of A. m. capensis, the plant and mineral waxes. A. florea colonies accepted A. cerana, A. dorsata and A. florea wax but rejected that of A. m. capensis. The Euclidean distances for the beeswaxes are consistent with currently prevailing phylogenies for Apis. Despite post-speciation chemical differences in the beeswaxes, they remain largely acceptable interspecifically while the plant and mineral waxes are not chemically close enough to beeswax for their acceptance.
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
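The contrast between Euclidean and "ecological" (least-cost path) distance can be illustrated with a small sketch: Dijkstra's algorithm on a resistance grid, where a high-resistance cell forces the least-cost route to detour (grid values are hypothetical, not from the paper's landscapes):

```python
import heapq
import math

def least_cost_distance(cost, start, goal):
    """Least-cost path length over a 4-connected grid of per-cell
    resistance values; moving into a cell pays that cell's resistance."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == goal:
            return d
        if d > dist.get((i, j), math.inf):
            continue
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < rows and 0 <= nj < cols:
                nd = d + cost[ni][nj]
                if nd < dist.get((ni, nj), math.inf):
                    dist[(ni, nj)] = nd
                    heapq.heappush(pq, (nd, (ni, nj)))
    return math.inf
```

In a uniform landscape the least-cost distance reduces to grid distance; a barrier of high resistance lengthens it even though the Euclidean distance between the endpoints is unchanged, which is exactly the asymmetry SCR encounter models based on ecological distance are designed to capture.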
Body-Earth Mover's Distance: A Matching-Based Approach for Sleep Posture Recognition.
Xu, Xiaowei; Lin, Feng; Wang, Aosen; Hu, Yu; Huang, Ming-Chun; Xu, Wenyao
2016-10-01
Sleep posture is a key component in sleep quality assessment and pressure ulcer prevention. Currently, body pressure analysis has been a popular method for sleep posture recognition. In this paper, a matching-based approach, Body-Earth Mover's Distance (BEMD), for sleep posture recognition is proposed. BEMD treats pressure images as weighted 2D shapes, and combines EMD and Euclidean distance for similarity measure. Compared with existing work, sleep posture recognition is achieved with posture similarity rather than multiple features for specific postures. A pilot study is performed with 14 persons for six different postures. The experimental results show that the proposed BEMD can achieve 91.21% accuracy, which outperforms the previous method with an improvement of 8.01%.
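A full 2D Earth Mover's Distance on pressure images requires an optimal-transport solver; in one dimension, with equal total mass, EMD reduces to the L1 distance between cumulative distributions, which gives a compact illustration of the idea (not the paper's BEMD implementation):

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms of equal total
    mass: the sum of absolute differences of their cumulative sums."""
    total, cp, cq = 0.0, 0.0, 0.0
    for a, b in zip(p, q):
        cp += a
        cq += b
        total += abs(cp - cq)
    return total
```

Unlike a bin-wise Euclidean comparison, this distance grows with how far mass must be moved, so shifting pressure one bin costs less than shifting it across the whole mat.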
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
Ultrasound Image Despeckling Using Stochastic Distance-Based BM3D.
Santos, Cid A N; Martins, Diego L N; Mascarenhas, Nelson D A
2017-06-01
Ultrasound image despeckling is an important research field, since it can improve the interpretability of one of the main categories of medical imaging. Many techniques have been tried over the years for ultrasound despeckling, and more recently, a great deal of attention has been focused on patch-based methods, such as non-local means and block-matching collaborative filtering (BM3D). A common idea in these recent methods is the measure of distance between patches, originally proposed as the Euclidean distance, for filtering additive white Gaussian noise. In this paper, we derive new stochastic distances for the Fisher-Tippett distribution, based on well-known statistical divergences, and use them as patch distance measures in a modified version of the BM3D algorithm for despeckling log-compressed ultrasound images. State-of-the-art results in filtering simulated, synthetic, and real ultrasound images confirm the potential of the proposed approach.
Follicle Detection on the USG Images to Support Determination of Polycystic Ovary Syndrome
NASA Astrophysics Data System (ADS)
Adiwijaya; Purnama, B.; Hasyim, A.; Septiani, M. D.; Wisesty, U. N.; Astuti, W.
2015-06-01
Polycystic Ovary Syndrome (PCOS) is the most common endocrine disorder affecting women in their reproductive years, and it has gained attention from married couples affected by infertility. One of the diagnostic criteria considered by doctors is manual analysis of the ovary USG image to detect the number and size of the ovary's follicles. Manual analysis can suffer from low reliability, reproducibility, and efficiency. To overcome these problems, an automatic scheme is proposed to detect follicles in USG images in support of PCOS diagnosis. The first stage determines initial homogeneous regions, which are then segmented into actual follicle shapes. The next stage selects the regions that satisfy follicle criteria and then measures the attributes of each segmented region. The measured number and size of the follicles are used to categorise the image as PCOS or non-PCOS. The segmentation method used is region growing, in both region-based and seed-based variants. To measure follicle diameter, two different methods are compared: stereology and Euclidean distance. The most accurate system configuration for PCOS detection uses region growing together with Euclidean distance for follicle quantification.
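The abstract gives no formulas for the diameter measurements; one common Euclidean-distance-based choice is the maximum pairwise distance between region pixels (a brute-force sketch under that assumption; the coordinates are illustrative):

```python
import math

def region_diameter(pixels):
    """Approximate diameter of a segmented region as the maximum pairwise
    Euclidean distance between its pixels (brute force, O(n^2))."""
    return max(math.hypot(x1 - x2, y1 - y2)
               for i, (x1, y1) in enumerate(pixels)
               for (x2, y2) in pixels[i + 1:])
```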
Use of units of measurement error in anthropometric comparisons.
Lucas, Teghan; Henneberg, Maciej
2017-09-01
Anthropometrists attempt to minimise measurement errors; however, errors cannot be eliminated entirely. Currently, measurement errors are simply reported. Measurement errors should instead be incorporated into analyses of anthropometric data. This study proposes a method which incorporates measurement errors into reported values, replacing metric units with 'units of technical error of measurement (TEM)', and applies it to forensics, industrial anthropometry, and biological variation. The USA armed forces anthropometric survey (ANSUR) contains 132 anthropometric dimensions of 3982 individuals. Concepts of duplication and Euclidean distance calculations were applied to the forensic-style identification of individuals in this survey. The National Size and Shape Survey of Australia contains 65 anthropometric measurements of 1265 women. This sample was used to show how a woman's body measurements expressed in TEM could be 'matched' to standard clothing sizes. Euclidean distances show that two sets of repeated anthropometric measurements of the same person cannot be matched (distance > 0) when expressed in millimetres but can be (distance = 0) in units of TEM. Only 81 women could fit into any standard clothing size when matched using centimetres; with units of TEM, 1944 women fit. The proposed method can be applied to all fields that use anthropometry. Units of TEM are considered a more reliable unit of measurement for comparisons.
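A minimal sketch of the idea of expressing measurements in units of TEM before matching (the rounding rule is an assumption; the example values are illustrative, not from ANSUR):

```python
import math

def in_tem_units(values, tems):
    """Express each measurement as a whole number of units of technical
    error of measurement (TEM) by rounding to the nearest TEM multiple."""
    return [round(v / t) for v, t in zip(values, tems)]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

Two repeated measurements of the same person that differ by less than their TEMs collapse to the same whole-unit vector, so their Euclidean distance becomes zero even though it is positive in millimetres.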
Fourier Magnitude-Based Privacy-Preserving Clustering on Time-Series Data
NASA Astrophysics Data System (ADS)
Kim, Hea-Suk; Moon, Yang-Sae
Privacy-preserving clustering (PPC in short) is important in publishing sensitive time-series data. Previous PPC solutions, however, have a problem of not preserving distance orders or incurring privacy breach. To solve this problem, we propose a new PPC approach that exploits Fourier magnitudes of time-series. Our magnitude-based method does not cause privacy breach even though its techniques or related parameters are publicly revealed. Using magnitudes only, however, incurs the distance order problem, and we thus present magnitude selection strategies to preserve as many Euclidean distance orders as possible. Through extensive experiments, we showcase the superiority of our magnitude-based approach.
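The key property exploited is that DFT magnitudes reveal no phase, so the original series cannot be reconstructed from them, while they still carry much of the distance information. A small sketch of magnitude computation and its invariance to circular shifts (an illustration, not the paper's magnitude selection strategy):

```python
import cmath
import math

def dft_magnitudes(x):
    """Magnitudes of the discrete Fourier transform of a time-series.
    Phase is discarded, so the series cannot be recovered from the output."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]
```

Because a circular shift only changes the phase of each DFT coefficient, shifted copies of a series publish identical magnitude vectors, which is part of why magnitudes alone can perturb Euclidean distance orders.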
Pearson, Amber L
2016-09-20
Most water access studies rely on self-reported measures, such as time spent, or simple spatial measures, such as the Euclidean distance from home to source. GPS-based measures of access are often considered actual access and have shown little correlation with self-reported measures. One main obstacle to widespread use of GPS-based measurement of access to water has been technological limitations (e.g., battery life); as such, GPS-based measures have been limited in duration and sample size. The aim of this pilot study was to develop and test a novel GPS unit (≤4-week battery life, waterproof) to measure access to water. The GPS-based method was pilot-tested to estimate the number of trips per day, time spent and distance traveled to source for all water collected over a 3-day period in five households in south-western Uganda. This method was then compared to self-reported measures and commonly used spatial measures of access for the same households. Time spent collecting water was significantly overestimated using the self-reported measure compared to the GPS-based one (p < 0.05). In contrast, both the GIS Euclidean distances to the nearest and to the actual primary source significantly underestimated the distances traveled, compared to the GPS-based measurement of actual travel paths to the water source (p < 0.05). Households did not consistently collect water from the source nearest their home. Comparisons between the GPS-based measure and self-reported meters traveled were not made, as respondents did not feel that they could accurately estimate distance. However, there was complete agreement between the self-reported primary source and the GPS-based one. Reliance on cross-sectional self-reported or simple GIS measures leads to misclassification in water access measurement. This new method reduces such errors and may aid in understanding dynamic measures of access to water for health studies.
Intelligent bar chart plagiarism detection in documents.
Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Rehman, Amjad; Alkawaz, Mohammed Hazim; Saba, Tanzila; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah
2014-01-01
This paper presents a novel feature-mining approach for documents that cannot be mined via optical character recognition (OCR). By identifying the intimate relationship between the text and graphical components, the proposed technique extracts the Start, End, and Exact values for each bar. Furthermore, word 2-gram and Euclidean distance methods are used to accurately detect and determine plagiarism in bar charts.
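The word 2-gram component can be sketched as follows; Jaccard overlap is one common scoring choice and an assumption here, as are the example sentences (the paper's exact scoring may differ):

```python
def word_2grams(text):
    # Consecutive word pairs of a caption or label.
    words = text.lower().split()
    return {(words[i], words[i + 1]) for i in range(len(words) - 1)}

def jaccard(a, b):
    # 2-gram set overlap as a plagiarism signal: 1.0 means identical
    # bigram sets, 0.0 means no shared word pairs.
    return len(a & b) / len(a | b) if a | b else 1.0

original = "sales rose sharply in the third quarter"
suspect  = "sales rose sharply in the final quarter"
print(jaccard(word_2grams(original), word_2grams(suspect)))  # 0.5
```

A Euclidean distance over the extracted bar values (Start, End, Exact) would then catch numeric copying even when the wording changes.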
NASA Astrophysics Data System (ADS)
Liu, Hu-Chen; Liu, Long; Li, Ping
2014-10-01
Failure mode and effects analysis (FMEA) has shown its effectiveness in examining potential failures in products, processes, designs and services, and has been extensively used for safety and reliability analysis in a wide range of industries. However, its approach of prioritising failure modes through a crisp risk priority number (RPN) has been criticised as having several shortcomings. The aim of this paper is to develop an efficient and comprehensive risk assessment methodology using an intuitionistic fuzzy hybrid weighted Euclidean distance (IFHWED) operator to overcome these limitations and improve the effectiveness of traditional FMEA. The diversified and uncertain assessments given by FMEA team members are treated as linguistic terms expressed as intuitionistic fuzzy numbers (IFNs). An intuitionistic fuzzy weighted averaging (IFWA) operator is used to aggregate the team members' individual assessments into a group assessment. The IFHWED operator is then applied to the prioritisation and selection of failure modes. In particular, both subjective and objective weights of the risk factors are considered during the risk evaluation process. Finally, a numerical example of risk assessment is given to illustrate the proposed method.
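A sketch of a weighted Euclidean distance between intuitionistic fuzzy assessments, using one common IFN distance definition; the paper's IFHWED operator additionally uses ordered weighting, which is omitted here, and all numbers are hypothetical:

```python
import math

def ifn_weighted_euclidean(a, b, w):
    # a, b: lists of intuitionistic fuzzy numbers (mu, nu) per risk factor;
    # hesitancy is pi = 1 - mu - nu. One common weighted Euclidean distance
    # for IFNs: sqrt(0.5 * sum(w_i * (d_mu^2 + d_nu^2 + d_pi^2))).
    total = 0.0
    for (mu_a, nu_a), (mu_b, nu_b), wi in zip(a, b, w):
        pi_a, pi_b = 1 - mu_a - nu_a, 1 - mu_b - nu_b
        total += wi * ((mu_a - mu_b) ** 2 + (nu_a - nu_b) ** 2
                       + (pi_a - pi_b) ** 2)
    return math.sqrt(0.5 * total)

# Two failure modes rated on three risk factors (hypothetical IFNs),
# measured against an ideal "no risk" assessment (mu=0, nu=1).
ideal = [(0.0, 1.0)] * 3
fm1 = [(0.7, 0.2), (0.5, 0.4), (0.6, 0.3)]
fm2 = [(0.3, 0.6), (0.4, 0.5), (0.2, 0.7)]
w = [0.5, 0.3, 0.2]   # risk-factor weights, summing to 1

# The failure mode farther from the no-risk ideal gets higher priority.
print(ifn_weighted_euclidean(fm1, ideal, w) >
      ifn_weighted_euclidean(fm2, ideal, w))  # True
```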
Bessell, Paul R; Shaw, Darren J; Savill, Nicholas J; Woolhouse, Mark E J
2008-10-03
Models of Foot and Mouth Disease (FMD) transmission have assumed a homogeneous landscape across which Euclidean distance is a suitable measure of the spatial dependency of transmission. This paper investigated features of the landscape and their impact on transmission during the period of predominantly local spread that followed the implementation of the national movement ban during the 2001 UK FMD epidemic. In this study, 113 farms diagnosed with FMD which had a known source of infection within 3 km (cases) were matched to 188 control farms which were either uninfected or infected at a later time point. Cases were matched to controls by Euclidean distance to the source of infection and farm size. Intervening geographical features and connectivity between the source of infection and the cases and controls were compared. Road distance between holdings, access to holdings, presence of forest, elevation change between holdings and the presence of intervening roads had no impact on the risk of local FMD transmission (p > 0.2). However, the presence of linear features in the form of rivers and railways acted as barriers to FMD transmission (odds ratio = 0.507, 95% CI = 0.297-0.887, p = 0.018). This paper demonstrated that although FMD spread can generally be modelled using Euclidean distance and the numbers of animals on susceptible holdings, the presence of rivers and railways has an additional protective effect, reducing the probability of transmission between holdings.
NASA Astrophysics Data System (ADS)
Kadampur, Mohammad Ali; D. v. L. N., Somayajulu
Privacy-preserving data mining is the art of knowledge discovery without revealing the sensitive data of a data set. In this paper a data transformation technique using wavelets is presented for privacy-preserving data mining. Wavelets use a well-known energy compaction approach during data transformation, and only the high-energy coefficients are published to the public domain instead of the actual data. It is found that the transformed data preserve Euclidean distances, so the method can be used in privacy-preserving clustering. Wavelets also offer inherently improved time complexity.
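The distance-preservation claim rests on the wavelet transform being orthonormal; a one-level Haar step illustrates it exactly (publishing only the high-energy coefficients would then preserve distances approximately):

```python
import math

def haar_step(x):
    # One level of the orthonormal Haar transform: pairwise averages and
    # details, scaled by 1/sqrt(2) so the transform preserves lengths.
    s = 1 / math.sqrt(2)
    avgs = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    dets = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return avgs + dets

def euclidean(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

x = [4.0, 2.0, 5.0, 5.0]   # hypothetical records
y = [3.0, 1.0, 7.0, 2.0]

# An orthonormal transform preserves Euclidean distances exactly.
print(abs(euclidean(x, y) - euclidean(haar_step(x), haar_step(y))) < 1e-9)
```

Because most of the energy typically concentrates in a few coefficients, dropping the small ones perturbs inter-record distances only slightly, which is what makes clustering on the published data meaningful.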
NASA Astrophysics Data System (ADS)
Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps of robotic motion estimation, and it largely influences precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For greater robustness in challenging environments, e.g., rough terrain or a planetary surface, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and a Brute Force (BF) matcher is used to find the correspondences between the two images for space intersection. The EDC and RANSAC algorithms are then applied to eliminate mismatches whose distances exceed a predefined threshold. Similarly, when feature points in the next left image are matched against the current left image, EDC and RANSAC are performed again. Even after these steps, a few mismatched points occasionally remain, so RANSAC is applied a third time to eliminate the effect of those outliers on the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
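A minimal sketch of the Euclidean Distance Constraint step, with hypothetical pixel coordinates (the subsequent RANSAC refinement is omitted):

```python
import math

def edc_filter(matches, threshold):
    # matches: list of ((x1, y1), (x2, y2)) matched feature coordinates.
    # Keep only pairs whose Euclidean displacement is within the threshold;
    # in stereo or frame-to-frame matching, gross mismatches show up as
    # implausibly large displacements.
    kept = []
    for p, q in matches:
        if math.hypot(p[0] - q[0], p[1] - q[1]) <= threshold:
            kept.append((p, q))
    return kept

matches = [((10, 20), (12, 20)),    # plausible small displacement
           ((30, 40), (31, 42)),    # plausible
           ((50, 60), (200, 5))]    # gross mismatch
print(len(edc_filter(matches, threshold=10.0)))  # 2
```

The threshold encodes the expected inter-frame motion; survivors go on to RANSAC, which fits the motion model and rejects any remaining outliers geometrically.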
Numerical analysis of interface debonding detection in bonded repair with Rayleigh waves
NASA Astrophysics Data System (ADS)
Xu, Ying; Li, BingCheng; Lu, Miaomiao
2017-01-01
This paper studies how the variation of the group-velocity dispersion curves of Rayleigh waves can be used to detect interfacial debonding damage between an FRP plate and a steel beam. Since an FRP-strengthened steel beam is a two-layer medium, Rayleigh wave velocity dispersion occurs, and interfacial debonding damage has a clear effect on the dispersion curve. The paper first puts forward the average Euclidean distance and the angle separation degree to describe the relationship between different dispersion curves. Numerical results indicate an approximately linear mapping between the average Euclidean distance of the dispersion curves and the length of the interfacial debonding damage.
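The two curve-comparison indicators can be sketched as follows, with hypothetical group-velocity samples; the paper's exact definitions may differ in normalisation:

```python
import math

def average_euclidean_distance(a, b):
    # Mean point-wise distance between two dispersion curves sampled at
    # the same frequencies (a 1-D Euclidean distance per sample).
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def angle_separation(a, b):
    # Angle (radians) between the curves viewed as vectors; 0 means
    # identical shape up to scale.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (norm_a * norm_b))))

intact  = [3000.0, 2900.0, 2800.0]   # group velocities (m/s), hypothetical
damaged = [2950.0, 2850.0, 2750.0]   # curve shifted by debonding
print(average_euclidean_distance(intact, damaged))  # 50.0
```

A roughly linear relation between this average distance and the debonding length is what lets the indicator serve as a damage-size estimate.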
Partially supervised speaker clustering.
Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S
2012-05-01
Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment to the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the Euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm, linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework.
Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the "bag of acoustic features" representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used Euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
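The advocated cosine-versus-Euclidean contrast can be sketched in a few lines; the 3-D vectors below are hypothetical stand-ins for high-dimensional GMM mean supervectors:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    # 1 - cos(angle): compares only the directions of the vectors,
    # ignoring their magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

u = [1.0, 2.0, 2.0]
v = [2.0, 4.0, 4.0]     # same direction as u, larger magnitude
w = [2.0, -1.0, 0.0]    # different direction

print(cosine_distance(u, v))                          # 0.0
print(euclidean(u, v) > 0)                            # True
print(cosine_distance(u, w) > cosine_distance(u, v))  # True
```

If utterances of one speaker scatter mainly along a direction from the origin (the paper's "directional scattering" observation), a magnitude-blind metric keeps them together while Euclidean distance would split them.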
Diagnosing synaesthesia with online colour pickers: maximising sensitivity and specificity.
Rothen, Nicolas; Seth, Anil K; Witzel, Christoph; Ward, Jamie
2013-04-30
The most commonly used method for formally assessing grapheme-colour synaesthesia (i.e., experiencing colours in response to letter and/or number stimuli) involves selecting colours from a large colour palette on several occasions and measuring consistency of the colours selected. However, the ability to diagnose synaesthesia using this method depends on several factors that have not been directly contrasted. These include the type of colour space used (e.g., RGB, HSV, CIELUV, CIELAB) and different measures of consistency (e.g., city block and Euclidean distance in colour space). This study aims to find the most reliable way of diagnosing grapheme-colour synaesthesia based on maximising sensitivity (i.e., ability of a test to identify true synaesthetes) and specificity (i.e., ability of a test to identify true non-synaesthetes). We show, applying ROC (receiver operating characteristics) to binary classification of a large sample of self-declared synaesthetes and non-synaesthetes, that the consistency criterion (i.e., cut-off value) for diagnosing synaesthesia is considerably higher than the current standard in the field. We also show that methods based on perceptual CIELUV and CIELAB colour models (rather than RGB and HSV colour representations) and Euclidean distances offer an even greater sensitivity and specificity than most currently used measures. Together, these findings offer improved heuristics for the behavioural assessment of grapheme-colour synaesthesia. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Di, Nur Faraidah Muhammad; Satari, Siti Zanariah
2017-05-01
Outlier detection in linear data sets has been studied vigorously, but only a small amount of work has addressed outlier detection in circular data. In this study, we propose a method for detecting multiple outliers in circular regression models based on a clustering algorithm. Clustering techniques rely on a distance measure to define the distance between data points. Here, we introduce a similarity distance based on Euclidean distance for the circular model and obtain a cluster tree using the single-linkage clustering algorithm. Then, a stopping rule for the cluster tree based on the mean direction and circular standard deviation of the tree height is proposed. We classify cluster groups that exceed the stopping rule as potential outliers. Our aim is to demonstrate the effectiveness of the proposed algorithms with the similarity distances in detecting the outliers. The proposed methods perform well and are applicable to circular regression models.
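One common Euclidean-based similarity distance for circular data is the chord length between two angles on the unit circle; whether this matches the authors' exact definition is an assumption of this sketch:

```python
import math

def circular_distance(theta_i, theta_j):
    # Chord-length (Euclidean) distance between two angles placed on the
    # unit circle: sqrt(2 * (1 - cos(theta_i - theta_j))). It is 0 for
    # equal directions, maximal (2.0) for opposite ones, and it wraps
    # correctly across the 0/2*pi boundary, unlike |theta_i - theta_j|.
    return math.sqrt(2.0 * (1.0 - math.cos(theta_i - theta_j)))

print(round(circular_distance(0.0, math.pi), 6))            # 2.0
print(round(circular_distance(0.1, 2 * math.pi + 0.1), 6))  # 0.0
```

Feeding such a wrap-aware distance into single-linkage clustering is what lets the cluster tree isolate angular outliers without boundary artefacts.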
An algorithm for calculating minimum Euclidean distance between two geographic features
NASA Astrophysics Data System (ADS)
Peuquet, Donna J.
1992-09-01
An efficient algorithm is presented for determining the shortest Euclidean distance between two features of arbitrary shape that are represented in quadtree form. These features may be disjoint point sets, lines, or polygons. It is assumed that the features do not overlap. Features may be intertwined, and polygons may be complex (i.e., have holes). Utilizing the spatial divide-and-conquer approach inherent in the quadtree data model, the basic rationale is to quickly narrow in on the portions of each feature that lie on a facing edge relative to the other feature, and to minimize the number of point-to-point Euclidean distance calculations that must be performed. Besides offering an efficient, grid-based alternative solution, another unique and useful aspect of the algorithm is that it can be used to rapidly calculate distance approximations at coarser levels of resolution. The overall process can be viewed as a top-down parallel search. Using one list of leafcode addresses for each of the two features as input, the algorithm is implemented by successively dividing these lists into four sublists for each descendant quadrant. The algorithm consists of two primary phases. The first determines facing adjacent quadrant pairs where part or all of the two features are separated between the two quadrants, respectively. The second phase then determines the closest pixel-level subquadrant pairs within each facing quadrant pair at the lowest level. The key element of the second phase is a quick distance-estimate heuristic for further elimination of locations that are not as near as neighboring locations.
A Heuristic Derivation of Minkowski Distance and Lorentz Transformation
ERIC Educational Resources Information Center
Hassani, Sadri
2008-01-01
Students learn new abstract concepts best when these concepts are connected through a well-designed analogy, to familiar ideas. Since the concept of the relativistic spacetime distance is highly abstract, it would be desirable to connect it to the familiar Euclidean distance, but present the latter in such a way that it makes a transparent contact…
Efficient distance calculation using the spherically-extended polytope (s-tope) model
NASA Technical Reports Server (NTRS)
Hamlin, Gregory J.; Kelley, Robert B.; Tornero, Josep
1991-01-01
An object representation scheme which allows for Euclidean distance calculation is presented. The object model extends the polytope model by representing objects as the convex hull of a finite set of spheres. An algorithm for calculating distances between objects is developed which is linear in the total number of spheres specifying the two objects.
Han, Xue; Jiang, Hong; Zhang, Dingkun; Zhang, Yingying; Xiong, Xi; Jiao, Jiaojiao; Xu, Runchun; Yang, Ming; Han, Li; Lin, Junzhi
2017-01-01
Background: Current astringency evaluation for herbs cannot satisfy the requirements of the pharmaceutical process; a new method is needed to assess astringency accurately. Methods: First, quinine, sucrose, citric acid, sodium chloride, monosodium glutamate, and tannic acid (TA) were analyzed by electronic tongue (e-tongue) to determine the approximate region of astringency in a partial least squares (PLS) map. Second, different concentrations of TA were measured to define the standard curve of astringency; the coordinate-concentration relationship was obtained by fitting the PLS abscissa of the standard curve against the corresponding concentration. Third, Chebulae Fructus (CF), Yuganzi throat tablets (YGZTT), and Sanlejiang oral liquid (SLJOL) were tested to define their regions in the PLS map. Finally, the astringent intensities of the samples were calculated from the standard coordinate-concentration relationship and expressed as concentrations of TA. Euclidean distance (Ed) analysis and a human sensory test were then performed to verify the results. Results: The fitted equation between concentration and the abscissa of TA was Y = 0.00498 × e(−X/0.51035) + 0.10905 (r = 0.999). The astringency of 1 and 0.1 mg/mL CF was predicted at 0.28 and 0.12 mg/mL TA; 2 and 0.2 mg/mL YGZTT at 0.18 and 0.11 mg/mL TA; and 0.002 and 0.0002 mg/mL SLJOL at 0.15 and 0.10 mg/mL TA. The validation showed that the e-tongue's predicted astringency was broadly consistent with human sensory results and more accurate than Ed analysis. Conclusion: The study indicates that the established method is objective and feasible, providing a new quantitative method for the astringency of herbs.
SUMMARY: The astringency of Chebulae Fructus, Yuganzi throat tablets, and Sanlejiang oral liquid was predicted by electronic tongue. Euclidean distance analysis and a human sensory test verified the results. A new strategy that is objective, simple, and sensitive for comparing the astringent intensity of herbs and preparations is provided. Abbreviations used: CF: Chebulae Fructus; E-tongue: Electronic tongue; Ed: Euclidean distance; PLS: Partial least square; PCA: Principal component analysis; SLJOL: Sanlejiang oral liquid; TA: Tannic acid; VAS: Visual analog scale; YGZTT: Yuganzi throat tablets. PMID:28839378
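The reported standard curve can be applied directly to turn a PLS abscissa into a TA-equivalent astringency; the coefficients come from the abstract, while the example abscissa value is hypothetical:

```python
import math

def astringency_from_abscissa(x):
    # The paper's fitted standard curve (r = 0.999): maps the PLS abscissa
    # of a sample to an equivalent tannic acid concentration (mg/mL).
    return 0.00498 * math.exp(-x / 0.51035) + 0.10905

# An abscissa of 0 maps to 0.00498 + 0.10905 = 0.11403 mg/mL TA-equivalent;
# more negative abscissas map to stronger astringency.
print(round(astringency_from_abscissa(0.0), 5))  # 0.11403
```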
On-line Adaptive Radiation Treatment of Prostate Cancer
2008-01-01
novel imaging system using a linear x-ray source and a linear detector. This imaging system may significantly improve the quality of online images...yielded the Euclidean voxel distances inside the ROI. The two distance maps were combined with positive distances outside and negative distances inside...is reduced by 1 cm. IMRT is more sensitive to organ motion. Large discrepancies of bladder and rectum doses were observed compared to the actual
Nearest neighbor imputation using spatial–temporal correlations in wireless sensor networks
Li, YuanYuan; Parker, Lynne E.
2016-01-01
Missing data is common in Wireless Sensor Networks (WSNs), especially with multi-hop communications. There are many reasons for this phenomenon, such as unstable wireless communications, synchronization issues, and unreliable sensors. Unfortunately, missing data creates a number of problems for WSNs. First, since most sensor nodes in the network are battery-powered, it is too expensive to have the nodes retransmit missing data across the network. Data re-transmission may also cause time delays when detecting abnormal changes in an environment. Furthermore, localized reasoning techniques on sensor nodes (such as machine learning algorithms to classify states of the environment) are generally not robust enough to handle missing data. Since sensor data collected by a WSN is generally correlated in time and space, we illustrate how replacing missing sensor values with spatially and temporally correlated sensor values can significantly improve the network’s performance. However, our studies show that it is important to determine which nodes are spatially and temporally correlated with each other. Simple techniques based on Euclidean distance are not sufficient for complex environmental deployments. Thus, we have developed a novel Nearest Neighbor (NN) imputation method that estimates missing data in WSNs by learning spatial and temporal correlations between sensor nodes. To improve the search time, we utilize a kd-tree data structure, which is a non-parametric, data-driven binary search tree. Instead of using traditional mean and variance of each dimension for kd-tree construction, and Euclidean distance for kd-tree search, we use weighted variances and weighted Euclidean distances based on measured percentages of missing data. We have evaluated this approach through experiments on sensor data from a volcano dataset collected by a network of Crossbow motes, as well as experiments using sensor data from a highway traffic monitoring application. 
Our experimental results show that our proposed k-NN imputation method has a competitive accuracy with state-of-the-art Expectation-Maximization (EM) techniques, while using much simpler computational techniques, thus making it suitable for use in resource-constrained WSNs. PMID:28435414
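A sketch of the weighted-Euclidean nearest-neighbour imputation idea, using flat lists instead of a kd-tree and a hypothetical weighting (the paper derives its weights from measured missing-data percentages):

```python
import math

def weighted_euclidean(a, b, weights):
    # Weighted Euclidean distance; the weights down-weight dimensions
    # that are frequently missing.
    return math.sqrt(sum(w * (x - y) ** 2
                         for x, y, w in zip(a, b, weights)))

def impute_missing(target, neighbors, weights):
    # Replace None entries in `target` with the corresponding values from
    # the nearest neighbor, compared only on the observed dimensions.
    observed = [i for i, v in enumerate(target) if v is not None]
    def dist(n):
        return weighted_euclidean([target[i] for i in observed],
                                  [n[i] for i in observed],
                                  [weights[i] for i in observed])
    nearest = min(neighbors, key=dist)
    return [v if v is not None else nearest[i]
            for i, v in enumerate(target)]

# A sensor reading with a missing humidity value, imputed from the
# spatially correlated neighbor (hypothetical data).
reading   = [21.5, None, 1012.0]             # temp, humidity, pressure
neighbors = [[21.4, 48.0, 1011.5], [30.0, 20.0, 990.0]]
weights   = [1.0, 0.6, 1.0]
print(impute_missing(reading, neighbors, weights))  # [21.5, 48.0, 1012.0]
```

The kd-tree in the paper serves only to make the `min` over neighbors fast; the distance logic is the same.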
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1991-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If the L(sub 1) or L(sub infinity) norm is used instead of the Euclidean norm to represent distance, the problem becomes a linear programming problem. Next, the stochastic problem is formulated, in which the uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
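The observation that L1 and L-infinity norms turn minimum-distance computation into a linear program can be illustrated in a special case that needs no LP solver: for an axis-aligned box, coordinate-wise clamping of the query point minimises the distance in any p-norm (hypothetical numbers; a general convex polyhedron would require an actual LP):

```python
def clamp_to_box(q, lo, hi):
    # Nearest point of the box [lo, hi] to q: clamp each coordinate.
    # For a box this minimises L1, L2 and Linf distance simultaneously;
    # the general convex-polyhedron case is solved as a linear program.
    return [min(max(x, l), h) for x, l, h in zip(q, lo, hi)]

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def linf(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

# Robot point q against an obstacle box (hypothetical numbers):
q, lo, hi = [5.0, -2.0], [0.0, 0.0], [3.0, 4.0]
p = clamp_to_box(q, lo, hi)
print(p)           # [3.0, 0.0]
print(l1(p, q))    # 4.0 (L1 clearance)
print(linf(p, q))  # 2.0 (Linf clearance)
```

Both norms are piecewise-linear, which is exactly why minimising them over a polyhedron can be written with linear constraints and objective.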
Analysis of k-means clustering approach on the breast cancer Wisconsin dataset.
Dubey, Ashutosh Kumar; Gupta, Umesh; Jain, Sonal
2016-11-01
Breast cancer is one of the most common cancers worldwide and the most frequently found in women. Early detection of breast cancer provides the possibility of its cure; therefore, a large number of studies are currently under way to identify methods that can detect breast cancer in its early stages. This study aimed to find the effects of the k-means clustering algorithm with different computation measures, such as centroid, distance, split method, epoch, attribute, and iteration, and to carefully identify the combination of measures with the potential for highly accurate clustering. The k-means algorithm was used to evaluate the impact of clustering using centroid initialization, distance measures, and split methods. The experiments were performed using the breast cancer Wisconsin (BCW) diagnostic dataset. Foggy and random centroids were used for the centroid initialization. For the foggy centroid, the first centroid was calculated from random values; for the random centroid, the initial centroid was taken as (0, 0). The results were obtained by employing the k-means algorithm and are discussed for different cases of the variable parameters. The calculations were based on the centroid (foggy/random), distance (Euclidean/Manhattan/Pearson), split (simple/variance), threshold (constant epoch/same centroid), attribute (2-9), and iteration (4-10). Approximately 92% average positive prediction accuracy was obtained with this approach. Better results were found for the same centroid and the highest variance. The results achieved using Euclidean and Manhattan distances were better than those with the Pearson correlation. The findings of this work provide an extensive understanding of the computational parameters that can be used with k-means. The results indicate that k-means has the potential to classify the BCW dataset.
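A minimal sketch of the kind of experiment described, with a pluggable distance function; toy 2-D points and fixed initial centroids stand in for the BCW data and the foggy/random initialisation schemes:

```python
import math

def kmeans(points, centroids, dist, iters=10):
    # Minimal k-means with a swappable distance measure (Euclidean or
    # Manhattan), mirroring the study's variable-distance experiments.
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda c: dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [
            [sum(pt[d] for pt in cl) / len(cl) for d in range(len(points[0]))]
            if cl else centroids[j]          # keep an empty cluster's centroid
            for j, cl in enumerate(clusters)
        ]
    return centroids, clusters

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

points = [[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [8.2, 7.9]]
cents, clusters = kmeans(points, [[0.0, 0.0], [10.0, 10.0]], euclidean)
print([len(c) for c in clusters])  # [2, 2]
```

Swapping `euclidean` for `manhattan` (or a correlation-based distance) is all that changes between the study's distance conditions.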
A fast non-local means algorithm based on integral image and reconstructed similar kernel
NASA Astrophysics Data System (ADS)
Lin, Zheng; Song, Enmin
2018-03-01
Image denoising is one of the essential methods in digital image processing. The non-local means (NLM) approach is a remarkable denoising technique, but its computational time complexity is high. In this paper, we design a fast NLM algorithm based on an integral image and a reconstructed similarity kernel. First, the integral image is introduced into the traditional NLM algorithm, which removes a great deal of repetitive computation in parallel processing and greatly improves the running speed of the algorithm. Second, in order to correct the error introduced by the integral image, we construct a similarity window resembling a Gaussian kernel in a pyramidal stacking pattern. Finally, in order to eliminate the influence of replacing the Gaussian-weighted Euclidean distance with the plain Euclidean distance, we construct a 3 × 3 similarity kernel in a neighborhood window, which reduces the effect of noise on a single pixel. Experimental results demonstrate that the proposed algorithm is about seventeen times faster than the traditional NLM algorithm, yet produces comparable results in terms of Peak Signal-to-Noise Ratio (the PSNR increased 2.9% on average) and perceptual image quality.
Peterson, Leif E
2002-01-01
CLUSFAVOR (CLUSter and Factor Analysis with Varimax Orthogonal Rotation) 5.0 is a Windows-based computer program for hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles. CLUSFAVOR 5.0 standardizes input data; sorts data according to gene-specific coefficient of variation, standard deviation, average and total expression, and Shannon entropy; performs hierarchical cluster analysis using nearest-neighbor, unweighted pair-group method using arithmetic averages (UPGMA), or furthest-neighbor joining methods, and Euclidean, correlation, or jack-knife distances; and performs principal-component analysis. PMID:12184816
The depth estimation of 3D face from single 2D picture based on manifold learning constraints
NASA Astrophysics Data System (ADS)
Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia
2018-04-01
The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; selecting the optimal subset to reconstruct the 3D face depth information greatly reduces the computational complexity. First, we apply t-SNE to reduce the key feature points in each 3D face model from 1×249 to 1×2. Second, the K-means method is applied to divide the training 3D database into several subsets. Third, the Euclidean distance between the 83 feature points of the image to be estimated and the pre-reduction feature point information of each cluster centre is calculated, and the category of the image is judged according to the minimum Euclidean distance. Finally, the method of Kong D. is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimates at greatly reduced computational cost. Compared with the traditional traversal-search estimation method, the proposed method reduces the error rate by 0.49, and the number of searches decreases with the category. To validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
Phillipsen, Ivan C; Kirk, Emily H; Bogan, Michael T; Mims, Meryl C; Olden, Julian D; Lytle, David A
2015-01-01
Species occupying the same geographic range can exhibit remarkably different population structures across the landscape, ranging from highly diversified to panmictic. Given limitations on collecting population-level data for large numbers of species, ecologists seek to identify proximate organismal traits (such as dispersal ability, habitat preference and life history) that are strong predictors of realized population structure. We examined how dispersal ability and habitat structure affect the regional balance of gene flow and genetic drift within three aquatic insects that represent the range of dispersal abilities and habitat requirements observed in desert stream insect communities. For each species, we tested for linear relationships between genetic distances and geographic distances using Euclidean and landscape-based metrics of resistance. We found that the moderate disperser Mesocapnia arizonensis (Plecoptera: Capniidae) has a strong isolation-by-distance pattern, suggesting migration-drift equilibrium. By contrast, population structure in the flightless Abedus herberti (Hemiptera: Belostomatidae) is influenced by genetic drift, while gene flow is the dominant force in the strong-flying Boreonectes aequinoctialis (Coleoptera: Dytiscidae). The best-fitting landscape model for M. arizonensis was based on Euclidean distance. Analyses also identified a strong spatial scale-dependence, where landscape genetic methods only performed well for species that were intermediate in dispersal ability. Our results highlight the fact that when either gene flow or genetic drift dominates in shaping population structure, no detectable relationship between genetic and geographic distances is expected at certain spatial scales. This study provides insight into how gene flow and drift interact at the regional scale for these insects as well as the organisms that share similar habitats and dispersal abilities. © 2014 John Wiley & Sons Ltd.
Discrimination of different sub-basins on Tajo River based on water influence factor
NASA Astrophysics Data System (ADS)
Bermudez, R.; Gascó, J. M.; Tarquis, A. M.; Saa-Requejo, A.
2009-04-01
Numeric taxonomy has been applied to classify the waters of the Tajo basin (Spain) up to the Portuguese border. A total of 52 stations, each measuring 15 water variables, were used in this study. The different groups have been obtained by applying a Euclidean distance among stations (distance classification) and a Euclidean distance between each station and the centroid estimated from them (centroid classification), varying the number of parameters and with or without variable standardization. In order to compare the classifications, a log-log relation between the number of groups created and the distances has been established to select the best one. It has been observed that the centroid classification is more appropriate, following the natural constraints more logically than the minimum distance among stations. Variable standardization does not improve the classification except when the centroid method is applied. Taking the ions and their sum as variables improves the classification. Stations are grouped based on electrical conductivity (EC), total anions (TA), total cations (TC) and ion ratios (Na/Ca and Mg/Ca). For a given classification, comparing the different groups created shows a certain variation in ion concentrations and ion ratios. However, the variation of each ion among groups differs depending on the case. For the last group, regardless of the classification, all ions increase. Comparing the dendrograms and the groups they originate, the Tajo river basin can be subdivided into five sub-basins differentiated by the main influence on the water: 1. With a higher ombrogenic influence (rain fed). 2. With ombrogenic and pedogenic influence (rain and groundwater fed). 3. With pedogenic influence. 4. With lithogenic influence (geological bedrock). 5. With a higher combined ombrogenic and lithogenic influence.
Water quality assessment with hierarchical cluster analysis based on Mahalanobis distance.
Du, Xiangjun; Shao, Fengjing; Wu, Shunyao; Zhang, Hanlin; Xu, Si
2017-07-01
Water quality assessment is crucial for the assessment of marine eutrophication, the prediction of harmful algal blooms, and environmental protection. Previous studies have developed many numerical modeling methods and data-driven approaches for water quality assessment. Cluster analysis, an approach widely used for grouping data, has also been employed. However, there are complex correlations between water quality variables, which play important roles in water quality assessment but have often been overlooked. In this paper, we analyze the correlations between water quality variables and propose an alternative method for water quality assessment: hierarchical cluster analysis based on the Mahalanobis distance. Further, we cluster water quality data collected from coastal waters of the Bohai Sea and the North Yellow Sea of China, and apply the clustering results to evaluate their water quality. To assess validity, we also cluster the water quality data with cluster analysis based on the Euclidean distance, which is widely adopted in previous studies. The results show that our method is more suitable for water quality assessment with many correlated water quality variables. To our knowledge, this is the first attempt to apply the Mahalanobis distance to coastal water quality assessment.
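The advantage of the Mahalanobis distance over the Euclidean distance for correlated variables can be illustrated with a small sketch (plain Python with an illustrative 2x2 covariance matrix; this shows the general definition, not the paper's pipeline):

```python
import math

def mahalanobis(x, y, cov):
    """Mahalanobis distance between 2-D points x and y, given a 2x2
    covariance matrix (inverted here in closed form)."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    diff = [x[0] - y[0], x[1] - y[1]]
    # quadratic form diff^T * inv(cov) * diff
    q = sum(diff[i] * inv[i][j] * diff[j] for i in range(2) for j in range(2))
    return math.sqrt(q)

def euclidean(x, y):
    return math.dist(x, y)

# Two strongly correlated variables: a difference along the correlation
# axis counts less in Mahalanobis terms than one across it.
cov = [[1.0, 0.9], [0.9, 1.0]]
along = mahalanobis((0, 0), (1, 1), cov)    # with the correlation
across = mahalanobis((0, 0), (1, -1), cov)  # against the correlation
print(along < across)  # True: the correlation is accounted for
print(euclidean((0, 0), (1, 1)) == euclidean((0, 0), (1, -1)))  # True: Euclidean cannot tell them apart
```

This is exactly the effect the paper exploits: two samples that Euclidean distance treats as equally far apart are separated once the covariance of the water quality variables is taken into account.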
A Case-Based Reasoning Method with Rank Aggregation
NASA Astrophysics Data System (ADS)
Sun, Jinhua; Du, Jiao; Hu, Jian
2018-03-01
In order to improve the accuracy of case-based reasoning (CBR), this paper proposes a new CBR framework built on the basic principle of rank aggregation. First, ranking methods are applied in each attribute subspace of the cases: the ordering relation between cases on each attribute is obtained, yielding a ranking matrix. Second, retrieving similar cases from the ranking matrix is transformed into a rank aggregation optimization problem, solved using the Kemeny optimal aggregation. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of RA-CBR is higher than that of Euclidean distance CBR and Mahalanobis distance CBR, so we can conclude that the RA-CBR method increases the performance and efficiency of CBR.
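The Kemeny optimal aggregation mentioned above can be sketched by brute force (illustrative stdlib Python; the names `kendall_tau` and `kemeny_aggregate` are mine, and a real system would use an approximation, since exact Kemeny aggregation is NP-hard for four or more rankings):

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of item pairs ranked in opposite order by r1 and r2."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    items = list(pos1)
    return sum(
        1
        for i in range(len(items))
        for j in range(i + 1, len(items))
        if (pos1[items[i]] - pos1[items[j]]) * (pos2[items[i]] - pos2[items[j]]) < 0
    )

def kemeny_aggregate(rankings):
    """Brute-force Kemeny optimal aggregation: the ordering minimizing
    the summed Kendall tau distance to all input rankings.
    Exponential in the number of items -- illustration only."""
    items = rankings[0]
    return min(
        permutations(items),
        key=lambda cand: sum(kendall_tau(cand, r) for r in rankings),
    )

# Three per-attribute rankings of four cases; the consensus respects
# the majority pairwise preferences.
rankings = [("a", "b", "c", "d"), ("a", "c", "b", "d"), ("b", "a", "c", "d")]
print(kemeny_aggregate(rankings))  # ('a', 'b', 'c', 'd')
```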
Triangles with Given Distances from a Centre
ERIC Educational Resources Information Center
Maloo, Alok K.; Lal, Arbind K.; Singh, Arindama
2002-01-01
There are four Euclidean centres of a triangle--the circumcentre, the centroid, the incentre and the orthocentre. In this article, the authors prove the following: if the centre is the incentre (resp. orthocentre) then there exists a triangle with given distances of its vertices from its incentre (resp. orthocentre). They also consider uniqueness…
An Axiom System for High School Geometry Based on Isometries.
ERIC Educational Resources Information Center
Beard, Earl M. L.
Presented in this report is an approach to Euclidean geometry that makes use of distance preserving transformations as the primary approach in the development of the proposed course. The foundation of the course consists of an axiom set that is a combination of Birkhoff's, Hilbert's, and Klein's. Transformations and distance preserving…
Discrimination of malignant lymphomas and leukemia using Radon transform based-higher order spectra
NASA Astrophysics Data System (ADS)
Luo, Yi; Celenk, Mehmet; Bejai, Prashanth
2006-03-01
A new algorithm that can automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes morphological watersheds to obtain the boundaries of cells in cell images and isolate them from the surrounding background. The areas of the cells are extracted from the images after background subtraction. The Radon transform and higher-order spectra (HOS) analysis are utilized as image processing tools to generate class feature vectors for the different cell types and to extract the feature vectors of test cells. The test cells' feature vectors are then compared with the known class feature vectors for a possible match by computing Euclidean distances. The cell in question is classified as belonging to one of the existing cell classes in the least-Euclidean-distance sense.
Nearest neighbors by neighborhood counting.
Wang, Hui
2006-06-01
Finding nearest neighbors is a general idea that underlies many artificial intelligence tasks, including machine learning, data mining, natural language understanding, and information retrieval. This idea is explicitly used in the k-nearest neighbors algorithm (kNN), a popular classification method. In this paper, this idea is adopted in the development of a general methodology, neighborhood counting, for devising similarity functions. We turn our focus from neighbors to neighborhoods, regions in the data space covering the data point in question. To measure the similarity between two data points, we consider all neighborhoods that cover both data points, and we propose to use the number of such neighborhoods as a measure of similarity. Neighborhoods can be defined for different types of data in different ways. Here, we consider one definition of neighborhood for multivariate data and derive a formula for the resulting similarity, called the neighborhood counting measure or NCM. NCM was tested experimentally in the framework of kNN. Experiments show that NCM is generally comparable to VDM and its variants, the state-of-the-art distance functions for multivariate data, and, at the same time, is consistently better for relatively large k values. Additionally, NCM consistently outperforms HEOM (a mixture of Euclidean and Hamming distances), the "standard" and most widely used distance function for multivariate data. NCM has a computational complexity of the same order as the standard Euclidean distance function, is task independent, and works for numerical and categorical data in a conceptually uniform way. The neighborhood counting methodology is thus shown experimentally to be sound for multivariate data. We hope it will work for other types of data as well.
Size and shape measurement in contemporary cephalometrics.
McIntyre, Grant T; Mossey, Peter A
2003-06-01
The traditional method of analysing cephalograms--conventional cephalometric analysis (CCA)--involves the calculation of linear distance measurements, angular measurements, area measurements, and ratios. Because shape information cannot be determined from these 'size-based' measurements, an increasing number of studies employ geometric morphometric tools in the cephalometric analysis of craniofacial morphology. Most of the discussions surrounding the appropriateness of CCA, Procrustes superimposition, Euclidean distance matrix analysis (EDMA), thin-plate spline analysis (TPS), finite element morphometry (FEM), elliptical Fourier functions (EFF), and medial axis analysis (MAA) have centred upon mathematical and statistical arguments. Surprisingly, little information is available to assist the orthodontist in the clinical relevance of each technique. This article evaluates the advantages and limitations of the above methods currently used to analyse the craniofacial morphology on cephalograms and investigates their clinical relevance and possible applications.
Evidence for asymptotic safety from lattice quantum gravity.
Laiho, J; Coumbe, D
2011-10-14
We calculate the spectral dimension for nonperturbative quantum gravity defined via Euclidean dynamical triangulations. We find that it runs from a value of ∼3/2 at short distance to ∼4 at large distance scales, similar to results from causal dynamical triangulations. We argue that the short-distance value of 3/2 for the spectral dimension may resolve the tension between asymptotic safety and the holographic principle.
Klein-Júnior, Luiz C; Viaene, Johan; Salton, Juliana; Koetz, Mariana; Gasper, André L; Henriques, Amélia T; Vander Heyden, Yvan
2016-09-09
The evaluation of extraction methods for accessing the plant metabolome is usually performed visually, lacking a reliable method of data handling. The major aim of the present study was to develop reliable, time- and solvent-saving extraction and fractionation methods to access the alkaloid profile of Psychotria nemorosa leaves. Ultrasound-assisted extraction was selected as the extraction method. As determined from a Fractional Factorial Design (FFD) approach, yield, sum of peak areas, and peak numbers were rather meaningless responses. However, Euclidean distance calculations between the UPLC-DAD metabolic profiles and the blank injection showed that the extracts are highly diverse. Coupled with the calculation and plotting of effects per time point, it was possible to identify thermolabile peaks. After screening, time and temperature were selected for optimization, while the plant:solvent ratio was set at 1:50 (m/v), the number of extractions at one, and the particle size at ≤180 μm. From Central Composite Design (CCD) results modeling the heights of important peaks, previously indicated by the FFD metabolic profile analysis, the time was set at 65 min and the temperature at 45 °C, thus avoiding degradation. For the fractionation step, a solid phase extraction method was optimized by a Box-Behnken Design (BBD) approach using the sum of peak areas as the response. The sample concentration was consequently set at 150 mg/mL, the percentage of acetonitrile in dichloromethane at 40% as the eluting solvent, and the eluting volume at 30 mL. In summary, the Euclidean distance and the metabolite profiles provided significant responses for assessing P. nemorosa alkaloids, allowing the development of reliable extraction and fractionation methods that avoid degradation and decrease the required time and solvent volume. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Martinetti, Pierre; Tomassini, Luca
2013-10-01
We study the metric aspect of the Moyal plane from Connes' noncommutative geometry point of view. First, we compute Connes' spectral distance associated with the natural isometric action of on the algebra of the Moyal plane . We show that the distance between any state of and any of its translated states is precisely the amplitude of the translation. As a consequence, we obtain the spectral distance between coherent states of the quantum harmonic oscillator as the Euclidean distance on the plane. We investigate the classical limit, showing that the set of coherent states equipped with Connes' spectral distance tends towards the Euclidean plane as the parameter of deformation goes to zero. The extension of these results to the action of the symplectic group is also discussed, with particular emphasis on the orbits of coherent states under rotations. Second, we compute the spectral distance in the double Moyal plane, intended as the product of (the minimal unitization of) by . We show that on the set of states obtained by translation of an arbitrary state of , this distance is given by the Pythagoras theorem. On the way, we prove some Pythagoras inequalities for the product of arbitrary unital and non-degenerate spectral triples. Applied to the Doplicher-Fredenhagen-Roberts model of quantum spacetime [DFR], these two theorems show that Connes' spectral distance and the DFR quantum length coincide on the set of states of optimal localization.
Methods for computing color anaglyphs
NASA Astrophysics Data System (ADS)
McAllister, David F.; Zhou, Ya; Sullivan, Sophia
2010-02-01
A new computational technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIE L*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs, including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
Learning Human Actions by Combining Global Dynamics and Local Appearance.
Luo, Guan; Yang, Shuang; Tian, Guodong; Yuan, Chunfeng; Hu, Weiming; Maybank, Stephen J
2014-12-01
In this paper, we address the problem of human action recognition by combining global temporal dynamics and local visual spatio-temporal appearance features. For this purpose, in the global temporal dimension, we propose to model the motion dynamics with robust linear dynamical systems (LDSs) and use the model parameters as motion descriptors. Since LDSs live in a non-Euclidean space and the descriptors are in non-vector form, we propose a shift-invariant distance based on subspace angles to measure the similarity between LDSs. In the local visual dimension, we construct curved spatio-temporal cuboids along the trajectories of densely sampled feature points and describe them using histograms of oriented gradients (HOG). The distance between motion sequences is computed with the chi-squared histogram distance in the bag-of-words framework. Finally, we perform classification using the maximum margin distance learning method, combining the global dynamic distances and the local visual distances. We evaluate our approach for action recognition on five short-clip data sets, namely Weizmann, KTH, UCF Sports, Hollywood2 and UCF50, as well as three long continuous data sets, namely VIRAT, ADL and CRIM13. We show competitive results compared with current state-of-the-art methods.
An improved real time image detection system for elephant intrusion along the forest border areas.
Sugumar, S J; Jayaparvathy, R
2014-01-01
Human-elephant conflict is a major problem leading to crop damage, human death and injuries caused by elephants, and elephants being killed by humans. In this paper, we propose an automated unsupervised elephant image detection system (EIDS) as a solution to human-elephant conflict in the context of elephant conservation. The elephant's image is captured in the forest border areas and is sent to a base station via an RF network. The received image is decomposed using the Haar wavelet to obtain multilevel wavelet coefficients, with which we perform image feature extraction and a similarity match between the elephant query image and the database images using image vision algorithms. A GSM message is sent to the forest officials indicating that an elephant has been detected at the forest border and is approaching human habitat. We propose an optimized distance metric to improve the image retrieval time from the database and compare it with the popular Euclidean and Manhattan distance methods. The proposed optimized distance metric retrieves more images in a shorter retrieval time than the other distance metrics, which makes it more efficient and reliable.
NASA Astrophysics Data System (ADS)
Sanhouse-García, Antonio J.; Rangel-Peraza, Jesús Gabriel; Bustos-Terrones, Yaneth; García-Ferrer, Alfonso; Mesas-Carrascosa, Francisco J.
2016-02-01
Land cover classification is often based on differing characteristics between classes, with great homogeneity within each class. This cover information is obtained through field work or by processing satellite images. Field work involves high costs; therefore, digital image processing techniques have become an important alternative for performing this task. However, in some developing countries, and particularly in the Casacoima municipality in Venezuela, geographic information systems are scarce due to the lack of updated information and the high costs of software license acquisition. This research proposes a low-cost methodology to develop thematic mapping of local land use and cover types in areas with scarce resources. Thematic mapping was developed from CBERS-2 images and spatial information available on the network using open source tools. Supervised classification was applied both per pixel and per region, using different classification algorithms and comparing them among themselves. The per-pixel classification was based on the Maxver (maximum likelihood) and Euclidean distance (minimum distance) algorithms, while the per-region classification was based on the Bhattacharya algorithm. Satisfactory results were obtained with the per-region classification, with an overall reliability of 83.93% and a kappa index of 0.81. The Maxver algorithm showed a reliability of 73.36% and a kappa index of 0.69, while the Euclidean distance obtained a reliability of 67.17% and a kappa index of 0.61. The proposed methodology proved very useful for cartographic processing and updating, which in turn supports the development of management and land-use plans. Hence, open source tools are an economically viable alternative not only for forestry organizations but also for the general public, allowing projects in economically depressed and/or environmentally threatened areas.
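The minimum-distance rule used in the per-pixel classification amounts to assigning each pixel to the class whose mean signature is nearest in Euclidean terms. A minimal sketch (the band values and class names are made up):

```python
import math

def minimum_distance_classify(pixel, class_means):
    """Assign a pixel (vector of band values) to the class with the
    nearest mean signature -- the minimum Euclidean distance rule used
    in per-pixel supervised classification."""
    return min(class_means, key=lambda c: math.dist(pixel, class_means[c]))

# Hypothetical mean band signatures for three cover classes.
class_means = {
    "water":  (20.0, 15.0, 10.0),
    "forest": (40.0, 60.0, 30.0),
    "urban":  (90.0, 85.0, 80.0),
}
print(minimum_distance_classify((42.0, 55.0, 28.0), class_means))  # forest
```

The maximum likelihood (Maxver) rule differs in that it also uses each class's covariance matrix, which is why the two algorithms can disagree on the same pixel.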
Solution for a bipartite Euclidean traveling-salesman problem in one dimension
NASA Astrophysics Data System (ADS)
Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.
2018-05-01
The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
DNA methylation intratumor heterogeneity in localized lung adenocarcinomas.
Quek, Kelly; Li, Jun; Estecio, Marcos; Zhang, Jiexin; Fujimoto, Junya; Roarty, Emily; Little, Latasha; Chow, Chi-Wan; Song, Xingzhi; Behrens, Carmen; Chen, Taiping; William, William N; Swisher, Stephen; Heymach, John; Wistuba, Ignacio; Zhang, Jianhua; Futreal, Andrew; Zhang, Jianjun
2017-03-28
Cancers are composed of cells with distinct molecular and phenotypic features within a given tumor, a phenomenon termed intratumor heterogeneity (ITH). Previously, we have demonstrated genomic ITH in localized lung adenocarcinomas; however, the nature of methylation ITH in lung cancers has not been well investigated. In this study, we generated methylation profiles of 48 spatially separated tumor regions from 11 localized lung adenocarcinomas and their matched normal lung tissues using the Illumina Infinium HumanMethylation450 BeadChip array. We observed methylation ITH within the same tumors, but to a much lesser extent than inter-individual heterogeneity. On average, 25% of all probes differentially methylated relative to matched normal lung tissues were shared by all regions from the same tumor. This is in contrast to somatic mutations, of which approximately 77% were shared events among all regions of individual tumors, suggesting that while the majority of somatic mutations were early clonal events, tumor-specific DNA methylation might be associated with the later branched evolution of these 11 tumors. Furthermore, our data showed that a higher extent of DNA methylation ITH was associated with larger tumor size (average Euclidean distance of 35.64 (> 3 cm, median size) versus 27.24 (<= 3 cm), p = 0.014), advanced age (average Euclidean distance of 34.95 (above 65) versus 28.06 (below 65), p = 0.046) and increased risk of postsurgical recurrence (average Euclidean distance of 35.65 (relapsed patients) versus 29.03 (patients without relapse), p = 0.039).
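The Euclidean-distance ITH score compared above can be sketched as the average pairwise distance between the methylation profiles of a tumor's regions (toy beta values, not the study's data):

```python
import math
from itertools import combinations

def average_euclidean_distance(profiles):
    """Average pairwise Euclidean distance between the methylation
    profiles (vectors of beta values) of spatially separated regions of
    one tumor -- a simple score where a larger value means more
    intratumor heterogeneity."""
    pairs = list(combinations(profiles, 2))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

# Toy beta-value vectors for three regions of two hypothetical tumors.
homogeneous = [(0.10, 0.80, 0.30), (0.12, 0.79, 0.31), (0.11, 0.81, 0.29)]
heterogeneous = [(0.10, 0.80, 0.30), (0.60, 0.20, 0.70), (0.90, 0.50, 0.10)]
print(average_euclidean_distance(homogeneous)
      < average_euclidean_distance(heterogeneous))  # True
```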
Novel Data Reduction Based on Statistical Similarity
Lee, Dongeun; Sim, Alex; Choi, Jaesik; ...
2016-07-18
Applications such as scientific simulations and power grid monitoring are generating so much data so quickly that compression is essential to reduce storage requirements or transmission capacity. To achieve better compression, one is often willing to discard some repeated information. These lossy compression methods are primarily designed to minimize the Euclidean distance between the original data and the compressed data. But this measure of distance severely limits either reconstruction quality or compression performance. In this paper, we propose a new class of compression method by redefining the distance measure with a statistical concept known as exchangeability. This approach captures essential features of the data while reducing the storage requirement. We report our design and implementation of such a compression method, named IDEALEM. To demonstrate its effectiveness, we apply it to a set of power grid monitoring data and show that it can reduce the volume of data much more than the best known compression methods while maintaining the quality of the compressed data. In these tests, IDEALEM captures extraordinary events in the data, while its compression ratios can far exceed 100.
Bot, Maarten; van den Munckhof, Pepijn; Bakay, Roy; Stebbins, Glenn; Verhagen Metman, Leo
2017-01-01
Objective To determine the accuracy of intraoperative computed tomography (iCT) in localizing deep brain stimulation (DBS) electrodes by comparing this modality with postoperative magnetic resonance imaging (MRI). Background Optimal lead placement is a critical factor for the outcome of DBS procedures and is preferably confirmed during surgery. iCT offers 3-dimensional verification of both microelectrode and lead location during DBS surgery. However, accurate electrode representation on iCT has not been extensively studied. Methods DBS surgery was performed using the Leksell stereotactic G frame. Stereotactic coordinates of 52 DBS leads were determined on both iCT and postoperative MRI and compared with the intended final target coordinates. The resulting absolute differences in X (medial-lateral), Y (anterior-posterior), and Z (dorsal-ventral) coordinates (ΔX, ΔY, and ΔZ) for both modalities were then used to calculate the Euclidean distance. Results Euclidean distances were 2.7 ± 1.1 and 2.5 ± 1.2 mm for MRI and iCT, respectively (p = 0.2). Conclusion Postoperative MRI and iCT show equivalent DBS lead representation. Intraoperative localization of both the microelectrode and the DBS lead in stereotactic space enables direct adjustments. Verification of lead placement with postoperative MRI, considered to be the gold standard, is unnecessary. PMID:28601874
Isonymy structure of four Venezuelan states.
Rodríguez-Larralde, A; Barrai, I; Alfonzo, J C
1993-01-01
The isonymy structure of four Venezuelan States (Falcón, Mérida, Nueva Esparta and Yaracuy) was studied using the surnames in the Venezuelan register of electors updated in 1984. The surname distributions of 155 counties were obtained and, for each county, estimates of consanguinity due to random isonymy and Fisher's alpha were calculated. It was shown that for large sample sizes the inverse of Fisher's alpha is identical to the unbiased estimate of within-population random isonymy. A three-dimensional isometric surface plot was obtained for each State, based on the counties' random isonymy estimates. The highest estimates of random consanguinity were found in the States of Nueva Esparta and Mérida, while the lowest were found in Yaracuy. Other microdifferentiation indicators from the same data gave similar results, and an interpretation was attempted based on the particular economic and geographic characteristics of each State. Four different genetic distances between all possible pairs of counties were calculated within States; geographic distance shows the highest correlations with random isonymy and Euclidean distance, with the exception of the State of Nueva Esparta, where there is no correlation between geographic distance and random isonymy. It was possible to group counties into clusters from dendrograms based on Euclidean distance. The isonymy clustering was also consistent with the socioeconomic and geographic characteristics of the counties.
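The unbiased within-population random isonymy referred to above is conventionally the probability that two individuals drawn without replacement share a surname, I = Σ nᵢ(nᵢ − 1) / (N(N − 1)). A sketch with hypothetical surname counts (the counts are made up; this is the standard formula, not the paper's data):

```python
def random_isonymy(surname_counts):
    """Unbiased estimate of within-population random isonymy: the
    probability that two individuals drawn without replacement share a
    surname, I = sum n_i(n_i - 1) / (N(N - 1))."""
    n = sum(surname_counts)
    return sum(c * (c - 1) for c in surname_counts) / (n * (n - 1))

# Hypothetical county: four surnames with these bearer counts.
counts = [5, 3, 1, 1]
i_hat = random_isonymy(counts)
print(round(i_hat, 4))  # 0.2889
```

For large samples, 1 / I approximates Fisher's alpha, which is the identity the abstract reports.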
ERIC Educational Resources Information Center
Carbon, Claus-Christian
2010-01-01
Participants with personal and without personal experiences with the Earth as a sphere estimated large-scale distances between six cities located on different continents. Cognitive distances were submitted to a specific multidimensional scaling algorithm in the 3D Euclidean space with the constraint that all cities had to lie on the same sphere. A…
Distance-Based Phylogenetic Methods Around a Polytomy.
Davidson, Ruth; Sullivant, Seth
2014-01-01
Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.
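UPGMA, one of the algorithms compared above, can be sketched as greedy agglomeration with size-weighted average linkage (a minimal stdlib implementation on a toy dissimilarity map, not the paper's code):

```python
def upgma(dist, labels):
    """Minimal UPGMA: repeatedly merge the closest pair of clusters,
    averaging distances weighted by cluster size, and return the tree
    topology as nested tuples."""
    clusters = {i: (labels[i], 1) for i in range(len(labels))}
    d = {(i, j): dist[i][j] for i in clusters for j in clusters if i < j}
    nxt = len(labels)
    while len(clusters) > 1:
        i, j = min(d, key=d.get)          # closest pair of clusters
        (ti, ni), (tj, nj) = clusters.pop(i), clusters.pop(j)
        for k in list(clusters):
            dik = d.pop((min(i, k), max(i, k)))
            djk = d.pop((min(j, k), max(j, k)))
            d[(k, nxt)] = (ni * dik + nj * djk) / (ni + nj)
        del d[(i, j)]
        clusters[nxt] = ((ti, tj), ni + nj)
        nxt += 1
    (tree, _), = clusters.values()
    return tree

# A dissimilarity map on four taxa where a-b and c-d are the cherries.
m = [[0, 2, 6, 6],
     [2, 0, 6, 6],
     [6, 6, 0, 4],
     [6, 6, 4, 0]]
print(upgma(m, ["a", "b", "c", "d"]))  # (('a', 'b'), ('c', 'd'))
```

Dissimilarity maps near a polytomy are exactly those where the `min` step above has near-ties, which is why the regions the paper studies meet along the boundaries of the algorithm's decision cones.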
A flexible new method for 3D measurement based on multi-view image sequences
NASA Astrophysics Data System (ADS)
Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu
2016-11-01
Three-dimensional measurement is a basic part of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm. The Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes the matching robust to weakly textured images. Then a new three-principle filter for the essential matrix calculation is designed, and the essential matrix is computed using an improved a contrario RANSAC filter method. A single-view point cloud is constructed accurately from two view images. After this, the overlapping features are used to eliminate the accumulated errors caused by added view images, which improves the precision of the camera positions. Finally, the method is verified in the application of dental restoration CAD/CAM; experimental results show that the proposed method is fast, accurate and flexible for tooth 3D measurement.
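The Hellinger distance between descriptor histograms, used here in place of the Euclidean distance, can be sketched as follows (general definition on normalized histograms; the toy histograms are illustrative):

```python
import math

def hellinger(h1, h2):
    """Hellinger distance between two histograms (normalized first):
    H(p, q) = (1/sqrt(2)) * sqrt(sum (sqrt(p_i) - sqrt(q_i))^2).
    Bounded in [0, 1], unlike the unbounded Euclidean distance."""
    s1, s2 = sum(h1), sum(h2)
    p = [v / s1 for v in h1]
    q = [v / s2 for v in h2]
    return math.sqrt(
        sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))
    ) / math.sqrt(2)

a = [10, 20, 30, 40]
b = [12, 18, 33, 37]   # similar shape to a
c = [40, 30, 20, 10]   # reversed shape
print(hellinger(a, b) < hellinger(a, c))  # True: b is closer to a
print(hellinger(a, a))  # 0.0
```

Because the square roots dampen the contribution of dominant bins, this distance (equivalently, matching SIFT descriptors under the Hellinger kernel) is less sensitive to a few large gradient bins than the Euclidean distance, which helps on weakly textured surfaces.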
Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C
2016-08-01
Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
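One common form of weighted squared Euclidean distance for interval data sums the squared differences of the interval endpoints under per-feature weights; the paper's generalization may differ in detail. A sketch with made-up samples:

```python
def weighted_interval_distance(x, y, weights):
    """A weighted squared Euclidean distance for interval data: each
    feature is an interval (lo, hi), and the squared differences of the
    endpoints are combined under per-feature weights. This is one common
    form of such a distance, not necessarily the paper's exact variant."""
    return sum(
        w * ((x_lo - y_lo) ** 2 + (x_hi - y_hi) ** 2)
        for w, (x_lo, x_hi), (y_lo, y_hi) in zip(weights, x, y)
    )

# Two interval-valued samples with two features each; the second
# feature is down-weighted, as a feature-selection scheme might do.
x = [(1.0, 3.0), (10.0, 14.0)]
y = [(2.0, 4.0), (11.0, 16.0)]
print(weighted_interval_distance(x, y, [1.0, 0.5]))  # 4.5
```

Setting a feature's weight to zero removes it entirely, which is how a swarm-optimized weight vector can perform automatic feature selection of the kind described above.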
Spatial interpolation of river channel topography using the shortest temporal distance
NASA Astrophysics Data System (ADS)
Zhang, Yanjun; Xian, Cuiling; Chen, Huajin; Grieneisen, Michael L.; Liu, Jiaming; Zhang, Minghua
2016-11-01
It is difficult to interpolate river channel topography due to complex anisotropy. As the anisotropy is often caused by river flow, especially the hydrodynamic and transport mechanisms, it is reasonable to incorporate flow velocity into a topography interpolator to decrease the effect of anisotropy. In this study, two new distance metrics, defined as the time taken by water flow to travel between two locations, are developed to replace the spatial (Euclidean) distance metric currently used to interpolate topography. The first is a shortest temporal distance (STD) metric: the temporal distance (TD) of a path between two nodes is calculated as the spatial distance divided by the tangential component of the flow velocity along the path, and the STD is found with the Dijkstra algorithm over all possible paths between the two nodes. The second is a modified shortest temporal distance (MSTD) metric, in which both the tangential and normal components of the flow velocity are combined. Both are used to construct methods for the interpolation of river channel topography. The proposed methods are used to generate the topography of the Wuhan section of the Changjiang River and are compared with Universal Kriging (UK) and Inverse Distance Weighting (IDW). The results clearly show that the STD and MSTD based on flow velocity are reliable spatial interpolators. The MSTD, followed by the STD, improves prediction accuracy relative to both UK and IDW.
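The STD idea (edge travel time = spatial length divided by the along-flow velocity component, minimized with Dijkstra) can be sketched on a toy channel network (node names, lengths and speeds are made up):

```python
import heapq

def shortest_temporal_distance(graph, src, dst):
    """Dijkstra over edge travel times. Each edge carries
    (length, along_flow_speed): the temporal distance of an edge is its
    spatial length divided by the flow-velocity component along it."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            return t
        if t > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, (length, speed) in graph.get(u, {}).items():
            nt = t + length / speed
            if nt < dist.get(v, float("inf")):
                dist[v] = nt
                heapq.heappush(pq, (nt, v))
    return float("inf")

# Toy channel network: the spatially longer route A-C-D is temporally
# shorter because the flow is faster along it.
graph = {
    "A": {"B": (100.0, 1.0), "C": (150.0, 5.0)},
    "B": {"D": (100.0, 1.0)},
    "C": {"D": (150.0, 5.0)},
}
print(shortest_temporal_distance(graph, "A", "D"))  # 60.0
```

Under this metric, two points connected by fast flow are "close" even when they are far apart spatially, which is exactly how the anisotropy of the channel is folded into the interpolator.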
Phylogenetic trees and Euclidean embeddings.
Layer, Mark; Rhodes, John A
2017-01-01
It was recently observed by de Vienne et al. (Syst Biol 60(6):826-832, 2011) that a simple square root transformation of distances between taxa on a phylogenetic tree allowed for an embedding of the taxa into Euclidean space. While the justification for this was based on a diffusion model of continuous character evolution along the tree, here we give a direct and elementary explanation for it that provides substantial additional insight. We use this embedding to reinterpret the differences between the NJ and BIONJ tree building algorithms, providing one illustration of how this embedding reflects tree structures in data.
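The observation above can be checked numerically with classical multidimensional scaling: if the square roots of additive tree distances embed in Euclidean space, then double-centering the matrix of squared transformed distances (which is the original tree-distance matrix D itself) must give a positive semidefinite Gram matrix. The 4-taxon quartet tree below (pendant edges of length 1, internal edge of length 1) is an illustrative toy example:

```python
import numpy as np

# Additive tree distances for a quartet tree ((A,B),(C,D)).
D = np.array([
    [0, 2, 3, 3],
    [2, 0, 3, 3],
    [3, 3, 0, 2],
    [3, 3, 2, 0],
], dtype=float)

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
G = -0.5 * J @ D @ J                  # Gram matrix of the sqrt-distance points

eigvals = np.linalg.eigvalsh(G)
# All eigenvalues are (numerically) nonnegative, so an exact Euclidean
# embedding of the square-root-transformed distances exists; coordinates
# are recovered from the eigenvectors scaled by sqrt(eigenvalues).
```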
Wang, Bing; Fang, Aiqin; Heim, John; Bogdanov, Bogdan; Pugh, Scott; Libardoni, Mark; Zhang, Xiang
2010-01-01
A novel peak alignment algorithm using a distance and spectrum correlation optimization (DISCO) method has been developed for two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC/TOF-MS) based metabolomics. This algorithm uses the output of the instrument control software, ChromaTOF, as its input data. It detects and merges multiple peak entries of the same metabolite into one peak entry in each input peak list. After a z-score transformation of metabolite retention times, DISCO selects landmark peaks from all samples based on both two-dimensional retention times and mass spectrum similarity of fragment ions measured by Pearson’s correlation coefficient. A local linear fitting method is employed in the original two-dimensional retention time space to correct retention time shifts. A progressive retention time map searching method is used to align metabolite peaks in all samples together based on optimization of the Euclidean distance and mass spectrum similarity. The effectiveness of the DISCO algorithm is demonstrated using data sets acquired under different experiment conditions and a spiked-in experiment. PMID:20476746
NASA Astrophysics Data System (ADS)
Dai, Jian; Song, Xing-Chang
2001-07-01
One of the key ingredients of Connes's noncommutative geometry is a generalized Dirac operator which induces a metric (Connes's distance) on the pure state space. We generalize such a Dirac operator devised by Dimakis et al, whose Connes distance recovers the linear distance on an one-dimensional lattice, to the two-dimensional case. This Dirac operator has the local eigenvalue property and induces a Euclidean distance on this two-dimensional lattice, which is referred to as `natural'. This kind of Dirac operator can be easily generalized into any higher-dimensional lattices.
Mullins, Jacinta; McDevitt, Allan D; Kowalczyk, Rafał; Ruczyńska, Iwona; Górny, Marcin; Wójcik, Jan M
2014-01-01
The red fox ( Vulpes vulpes ) has the widest global distribution among terrestrial carnivore species, occupying most of the Northern Hemisphere in its native range. Because it carries diseases that can be transmitted to humans and domestic animals, it is important to gather information about their movements and dispersal in their natural habitat but it is difficult to do so at a broad scale with trapping and telemetry. In this study, we have described the genetic diversity and structure of red fox populations in six areas of north-eastern Poland, based on samples collected from 2002-2003. We tested 22 microsatellite loci isolated from the dog and the red fox genome to select a panel of nine polymorphic loci suitable for this study. Genetic differentiation between the six studied populations was low to moderate and analysis in Structure revealed a panmictic population in the region. Spatial autocorrelation among all individuals showed a pattern of decreasing relatedness with increasing distance and this was not significantly negative until 93 km, indicating a pattern of isolation-by-distance over a large area. However, there was no correlation between genetic distance and either Euclidean distance or least-cost path distance at the population level. There was a significant relationship between genetic distance and the proportion of large forests and water along the Euclidean distances. These types of habitats may influence dispersal paths taken by red foxes, which is useful information in terms of wildlife disease management.
Geometric comparison of popular mixture-model distances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Scott A.
2010-09-01
Statistical Latent Dirichlet Analysis produces mixture model data that are geometrically equivalent to points lying on a regular simplex in moderate to high dimensions. Numerous other statistical models and techniques also produce data in this geometric category, even though the meaning of the axes and coordinate values differs significantly. A distance function is used to further analyze these points, for example to cluster them. Several different distance functions are popular amongst statisticians; which distance function is chosen is usually driven by the historical preference of the application domain, information-theoretic considerations, or the desirability of the clustering results. Relatively little consideration is usually given to how distance functions geometrically transform data, or to the distances' algebraic properties. Here we take a look at these issues, in the hope of providing complementary insight and inspiring further geometric thought. Several popular distances, χ², Jensen-Shannon divergence, and the square of the Hellinger distance, are shown to be nearly equivalent: in terms of functional forms after transformations, factorizations, and series expansions, and in terms of the shape and proximity of constant-value contours. This is somewhat surprising given that their original functional forms look quite different. Cosine similarity is closely related to the square of the Euclidean distance, and a similar geometric relationship is shown with Hellinger and another cosine. We suggest a geodesic variation of Hellinger. The square-root projection that arises in Hellinger distance is briefly compared to standard normalization for Euclidean distance. We include detailed derivations of some ratio and difference bounds for illustrative purposes. We provide some constructions that nearly achieve the worst-case ratios, relevant for contours.
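The square-root projection mentioned above has a simple concrete form: the Hellinger distance between two discrete distributions equals, up to a factor of 1/√2, the Euclidean distance between their elementwise square roots. A short check (the example distributions are invented):

```python
from math import sqrt

def hellinger(p, q):
    return sqrt(0.5 * sum((sqrt(a) - sqrt(b)) ** 2 for a, b in zip(p, q)))

def euclid(u, v):
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]

lhs = hellinger(p, q)
# Euclidean distance between the sqrt-projected points, rescaled by 1/sqrt(2)
rhs = euclid([sqrt(x) for x in p], [sqrt(x) for x in q]) / sqrt(2)
# lhs and rhs agree up to floating-point error
```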
Cluster analysis of the hot subdwarfs in the PG survey
NASA Technical Reports Server (NTRS)
Thejll, Peter; Charache, Darryl; Shipman, Harry L.
1989-01-01
Application of cluster analysis to the hot subdwarfs in the Palomar Green (PG) survey of faint blue high-Galactic-latitude objects is assessed, with emphasis on data noise and the number of clusters to subdivide the data into. The data used in the study are presented, and cluster analysis, using the CLUSTAN program, is applied to it. Distances are calculated using the Euclidean formula, and clustering is done by Ward's method. The results are discussed, and five groups representing natural divisions of the subdwarfs in the PG survey are presented.
Geodesic-loxodromes for diffusion tensor interpolation and difference measurement.
Kindlmann, Gordon; Estépar, Raúl San José; Niethammer, Marc; Haker, Steven; Westin, Carl-Fredrik
2007-01-01
In algorithms for processing diffusion tensor images, two common ingredients are interpolating tensors and measuring the distance between them. We propose a new class of interpolation paths for tensors, termed geodesic-loxodromes, which explicitly preserve clinically important tensor attributes, such as mean diffusivity or fractional anisotropy, while using basic differential geometry to interpolate tensor orientation. This contrasts with previous Riemannian and Log-Euclidean methods that preserve the determinant. Path integrals of tangents of geodesic-loxodromes generate novel measures of overall difference between two tensors, and of difference in shape and in orientation.
Classification of Company Performance using Weighted Probabilistic Neural Network
NASA Astrophysics Data System (ADS)
Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi
2018-05-01
The performance of a company can be classified by examining its financial status, whether good or bad. Classification of company performance can be achieved by either parametric or non-parametric approaches; the neural network is one of the non-parametric methods. One Artificial Neural Network (ANN) model is the Probabilistic Neural Network (PNN), which consists of four layers: an input layer, a pattern layer, a summation layer, and an output layer. The distance function used is the Euclidean distance, and each class shares the same weight values. This study uses a PNN in which the weighting between the pattern layer and the summation layer has been modified to involve the Mahalanobis distance. This model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling company performance with the WPNN achieves a very high accuracy, reaching 100%.
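The difference between the Euclidean distance of a standard PNN and the Mahalanobis distance used in the weighted variant can be sketched in a couple of lines; the 2x2 covariance matrix below is inverted by hand and all values are illustrative, not from the study:

```python
from math import sqrt

def euclidean(x, mu):
    return sqrt(sum((a - b) ** 2 for a, b in zip(x, mu)))

def mahalanobis2d(x, mu, cov):
    # invert the 2x2 covariance matrix directly
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mu[0], x[1] - mu[1]]
    # quadratic form dx^T * inv(cov) * dx
    q = sum(dx[i] * inv[i][j] * dx[j] for i in range(2) for j in range(2))
    return sqrt(q)

mu = [0.0, 0.0]
cov = [[4.0, 0.0], [0.0, 1.0]]   # feature 1 has 4x the variance of feature 2
x = [2.0, 0.0]
# Euclidean distance reports 2 units; Mahalanobis discounts the
# high-variance direction and reports 1 unit.
```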
2012-01-01
Background: Discovering new biomarkers plays a great role in improving the early diagnosis of hepatocellular carcinoma (HCC). The experimental determination of biomarkers needs a lot of time and money, which motivates this work to use in-silico prediction of biomarkers to reduce the number of experiments required for detecting new ones. This is achieved by extracting the most representative genes from microarrays of HCC. Results: In this work, we provide a method for extracting the differentially expressed (up-regulated) genes that can be considered candidate biomarkers in high-throughput microarrays of HCC. We examine the power of several gene selection methods (such as Pearson's correlation coefficient, cosine coefficient, Euclidean distance, mutual information, and entropy with different estimators) in selecting informative genes. A biological interpretation of the highly ranked genes is done using the KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways, ENTREZ, and DAVID (Database for Annotation, Visualization, and Integrated Discovery) databases. The top ten genes selected using Pearson's correlation coefficient and the cosine coefficient contained six genes that have been implicated in cancer (often multiple cancers) genesis in previous studies. Fewer genes were obtained by the other methods (4 genes using mutual information, 3 genes using Euclidean distance, and only one gene using entropy). A better result was obtained by utilizing a hybrid approach based on intersecting the highly ranked genes in the output of all investigated methods. This hybrid combination yielded seven genes (2 genes for HCC and 5 genes in different types of cancer) in the top ten genes of the intersected list. Conclusions: To strengthen the effectiveness of the univariate selection methods, we propose a hybrid approach that intersects several of these methods in a cascaded manner. This approach surpasses each univariate selection method used individually, according to biological interpretation and the examination of gene expression signal profiles. PMID:22867264
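The hybrid strategy described above amounts to intersecting the top-k genes returned by several univariate rankings. A minimal sketch (gene names and rankings are invented placeholders, not results from the study):

```python
def hybrid_top_genes(rankings, k):
    """rankings: list of gene lists, each ordered best-first.
    Returns genes present in the top-k of every ranking, ordered by the
    first method's ranking."""
    top_sets = [set(r[:k]) for r in rankings]
    common = set.intersection(*top_sets)
    return [g for g in rankings[0][:k] if g in common]

# Hypothetical rankings from three univariate selection methods.
pearson = ["g1", "g2", "g3", "g4"]
cosine  = ["g2", "g1", "g5", "g3"]
mutual  = ["g3", "g2", "g6", "g1"]
```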
Spatio-temporal modeling of the African swine fever epidemic in the Russian Federation, 2007-2012.
Korennoy, F I; Gulenkin, V M; Malone, J B; Mores, C N; Dudnikov, S A; Stevenson, M A
2014-10-01
In 2007 African swine fever (ASF) entered Georgia and in the same year the disease entered the Russian Federation. From 2007 to 2012 ASF spread throughout the southern region of the Russian Federation. At the same time several cases of ASF were detected in the central and northern regions of the Russian Federation, forming a northern cluster of outbreaks in 2011. This northern cluster is of concern because of its proximity to mainland Europe. The aim of this study was to use details of recorded ASF outbreaks and human and swine population details to estimate the spatial distribution of ASF risk in the southern region of the European part of the Russian Federation. Our model of ASF risk comprised two components. The first was an estimate of ASF suitability scores calculated using maximum entropy methods. The second was an estimate of ASF risk as a function of Euclidean distance from index cases. An exponential distribution fitted to a frequency histogram of the Euclidean distance between consecutive ASF cases had a mean value of 156 km, a distance greater than the surveillance zone radius of 100-150 km stated in the ASF control regulations for the Russian Federation. We show that the spatial and temporal risk of ASF expansion is related to the suitability of the area of potential expansion, which is in turn a function of socio-economic and geographic variables. We propose that the methodology presented in this paper provides a useful tool to optimize surveillance for ASF in affected areas. Copyright © 2014 Elsevier Ltd. All rights reserved.
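Fitting an exponential distribution to distances between consecutive cases, as done above, is particularly simple because the maximum-likelihood estimate of the exponential mean is just the sample mean. The distances below are invented illustrative values, not the study's data:

```python
def fit_exponential_mean(distances):
    # MLE of the mean of an exponential distribution = sample mean
    return sum(distances) / len(distances)

dists_km = [40.0, 120.0, 260.0, 80.0, 200.0]   # hypothetical case-to-case distances
mean_km = fit_exponential_mean(dists_km)
rate = 1.0 / mean_km   # corresponding exponential rate parameter
# The fitted mean can then be compared against a surveillance zone radius,
# as the study does with its 156 km estimate versus the 100-150 km radius.
```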
Protein space: a natural method for realizing the nature of protein universe.
Yu, Chenglong; Deng, Mo; Cheng, Shiu-Yuen; Yau, Shek-Chung; He, Rong L; Yau, Stephen S-T
2013-02-07
Current methods cannot tell us concretely what the nature of the protein universe is. They are based on different models of amino acid substitution and on multiple sequence alignment, which is an NP-hard problem and requires manual intervention. Protein structural analysis also gives a direction for mapping the protein universe; unfortunately, only a minuscule fraction of proteins' 3-dimensional structures are currently known. Furthermore, phylogenetic tree representations are not unique for any existing tree construction method. Here we develop a novel method to realize the nature of the protein universe. We show the protein universe can be realized as a protein space in 60-dimensional Euclidean space using a distance based on a normalized distribution of amino acids. Every protein is in one-to-one correspondence with a point in protein space, where proteins with similar properties stay close together. Thus the distance between two points in protein space represents the biological distance of the corresponding two proteins. We also propose a natural graphical representation for inferring phylogenies. The representation is natural and unique, based on the biological distances of proteins in protein space. This will solve the fundamental question of how proteins are distributed in the protein universe. Copyright © 2012 Elsevier Ltd. All rights reserved.
Real-time segmentation in 4D ultrasound with continuous max-flow
NASA Astrophysics Data System (ADS)
Rajchl, M.; Yuan, J.; Peters, T. M.
2012-02-01
We present a novel continuous Max-Flow based method to segment the inner left ventricular wall from 3D trans-esophageal echocardiography image sequences, which minimizes an energy functional encoding two Fisher-Tippett distributions and a geometrical constraint in the form of a Euclidean distance map in a numerically efficient and accurate way. After initialization the method is fully automatic and is able to perform at up to 10 Hz, making it suitable for image-guided interventions. Results are shown on 4D TEE data sets from 18 patients with pathological cardiac conditions, and the speed of the algorithm is assessed under a variety of conditions.
Exploratory Lattice QCD Study of the Rare Kaon Decay K^{+}→π^{+}νν[over ¯].
Bai, Ziyuan; Christ, Norman H; Feng, Xu; Lawson, Andrew; Portelli, Antonin; Sachrajda, Christopher T
2017-06-23
We report a first, complete lattice QCD calculation of the long-distance contribution to the K^{+}→π^{+}νν[over ¯] decay within the standard model. This is a second-order weak process involving two four-Fermi operators that is highly sensitive to new physics and being studied by the NA62 experiment at CERN. While much of this decay comes from perturbative, short-distance physics, there is a long-distance part, perhaps as large as the planned experimental error, which involves nonperturbative phenomena. The calculation presented here, with unphysical quark masses, demonstrates that this contribution can be computed using lattice methods by overcoming three technical difficulties: (i) a short-distance divergence that results when the two weak operators approach each other, (ii) exponentially growing, unphysical terms that appear in Euclidean, second-order perturbation theory, and (iii) potentially large finite-volume effects. A follow-on calculation with physical quark masses and controlled systematic errors will be possible with the next generation of computers.
Spatial weighting approach in numerical method for disaggregation of MDGs indicators
NASA Astrophysics Data System (ADS)
Permai, S. D.; Mukhaiyar, U.; Satyaning PP, N. L. P.; Soleh, M.; Aini, Q.
2018-03-01
Disaggregation is used to separate and classify data based on certain characteristics or by administrative level. Disaggregated data are very important because some indicators are not measured across all characteristics. Detailed disaggregation of development indicators is important to ensure that everyone benefits from development and to support better development-related policymaking. This paper explores different methods to disaggregate the national employment-to-population ratio indicator to province and city level. A numerical approach is applied to overcome the unavailability of disaggregated data by constructing several spatial weight matrices based on neighbourhood, Euclidean distance, and correlation. These methods can potentially be used and further developed to disaggregate development indicators to a lower spatial level, even by several demographic characteristics.
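One of the weighting schemes mentioned above can be sketched as a row-normalized inverse-Euclidean-distance spatial weight matrix between region centroids. The coordinates below are invented illustrative values:

```python
from math import dist  # Euclidean distance, Python 3.8+

def inverse_distance_weights(coords):
    """Build a row-normalized inverse-distance spatial weight matrix."""
    n = len(coords)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i][j] = 1.0 / dist(coords[i], coords[j])
        s = sum(W[i])
        W[i] = [w / s for w in W[i]]   # normalize so each row sums to 1
    return W

coords = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]   # hypothetical region centroids
W = inverse_distance_weights(coords)
# Row-normalization makes W usable as a spatial averaging operator: the
# weighted value at region i is sum_j W[i][j] * value[j].
```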
The Perspective Structure of Visual Space
2015-01-01
Luneburg's model has been the reference for experimental studies of visual space for almost seventy years. His claim of a curved visual space has been a source of inspiration for visual scientists as well as philosophers. The conclusion of many experimental studies has been that Luneburg's model does not describe visual space in various tasks and conditions. Remarkably, no alternative model has been suggested. The current study explores perspective transformations of Euclidean space as a model for visual space. Computations show that the geometry of perspective spaces is considerably different from that of Euclidean space. Collinearity but not parallelism is preserved in perspective space, and angles are not invariant under translation and rotation. Similar relationships have been shown to be properties of visual space. Alley experiments performed early in the twentieth century were instrumental in hypothesizing curved visual spaces. Alleys were computed in perspective space and compared with reconstructed alleys of Blumenfeld. Parallel alleys were accurately described by perspective geometry. Accurate distance alleys were derived from parallel alleys by adjusting the interstimulus distances according to the size-distance invariance hypothesis. Agreement between computed and experimental alleys, and accommodation of experimental results that rejected Luneburg's model, show that perspective space is an appropriate model for how we perceive orientations and angles. The model is also appropriate for perceived distance ratios between stimuli but fails to predict perceived distances. PMID:27648222
Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems
NASA Technical Reports Server (NTRS)
Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.
2012-01-01
Due to the large number of operations that spaceborne processing systems must carry out, new methodologies and techniques are being presented as good alternatives to offload the main processor and improve overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations faster, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, an FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the multi-spectral Euclidean distance calculation. For that, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same model of FPGA used by SpaceCube. Previous results have shown that communication between the embedded processor and the circuit creates a bottleneck that negatively affects overall performance. In order to avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections, and data burst transfers have been used.
Euclidean commute time distance embedding and its application to spectral anomaly detection
NASA Astrophysics Data System (ADS)
Albano, James A.; Messinger, David W.
2012-06-01
Spectral image analysis problems often begin with a preprocessing step that applies a transformation generating an alternative representation of the spectral data. In this paper, a transformation based on a Markov-chain model of a random walk on a graph is introduced. More precisely, we quantify the random walk using a quantity known as the average commute time distance and find a nonlinear transformation that embeds the nodes of a graph in a Euclidean space where the separation between them is equal to the square root of this quantity. This has been referred to as the Commute Time Distance (CTD) transformation, and it has the important characteristic of increasing when the number of paths between two nodes decreases and/or the lengths of those paths increase. Remarkably, a closed-form solution exists for computing the average commute time distance that avoids running an iterative process and is found by simply performing an eigendecomposition on the graph Laplacian matrix. This paper discusses the particular graph constructed on the spectral data from which the commute time distance is calculated, introduces some important properties of the graph Laplacian matrix, and presents a subspace projection that approximately preserves the maximal variance of the square-root commute time distance. Finally, RX anomaly detection and Topological Anomaly Detection (TAD) algorithms are applied to the CTD subspace, followed by a discussion of their results.
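The closed-form solution mentioned above is the standard identity CTD(i, j) = vol(G) · (L⁺ᵢᵢ + L⁺ⱼⱼ − 2 L⁺ᵢⱼ), where L⁺ is the Moore-Penrose pseudoinverse of the graph Laplacian and vol(G) is the sum of node degrees. A sketch on a tiny path graph (the graph is illustrative, not the paper's spectral-data graph):

```python
import numpy as np

# Adjacency matrix of the path graph 0 - 1 - 2.
A = np.array([
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
], dtype=float)

L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
L_pinv = np.linalg.pinv(L)           # Moore-Penrose pseudoinverse
vol = A.sum()                        # sum of degrees = 2 * number of edges

def commute_time(i, j):
    # vol * effective resistance between i and j
    return vol * (L_pinv[i, i] + L_pinv[j, j] - 2 * L_pinv[i, j])

# An embedding whose Euclidean distances equal sqrt(commute time) can be
# built from the eigendecomposition of L_pinv (eigenvectors scaled by the
# square roots of the eigenvalues), as the CTD transformation does.
```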
Application of meta-analysis methods for identifying proteomic expression level differences.
Amess, Bob; Kluge, Wolfgang; Schwarz, Emanuel; Haenisch, Frieder; Alsaif, Murtada; Yolken, Robert H; Leweke, F Markus; Guest, Paul C; Bahn, Sabine
2013-07-01
We present new statistical approaches for identification of proteins with expression levels that are significantly changed when applying meta-analysis to two or more independent experiments. We showed that the Euclidean distance measure has reduced risk of false positives compared to the rank product method. Our Ψ-ranking method has advantages over the traditional fold-change approach by incorporating both the fold-change direction as well as the p-value. In addition, the second novel method, Π-ranking, considers the ratio of the fold-change and thus integrates all three parameters. We further improved the latter by introducing our third technique, Σ-ranking, which combines all three parameters in a balanced nonparametric approach. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hierarchical clustering using correlation metric and spatial continuity constraint
Stork, Christopher L.; Brewer, Luke N.
2012-10-02
Large data sets are analyzed by hierarchical clustering using correlation as a similarity measure. This provides results that are superior to those obtained using a Euclidean distance similarity measure. A spatial continuity constraint may be applied in hierarchical clustering analysis of images.
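The correlation dissimilarity (1 − Pearson r) used above differs from Euclidean distance in being invariant to scaling and shifting of a profile, which is often why it gives superior clusters. A minimal sketch with invented data:

```python
from math import sqrt

def pearson_dissimilarity(x, y):
    """1 - Pearson correlation; 0 for perfectly correlated profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return 1.0 - cov / (sx * sy)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]   # same shape as x, different scale
# The correlation dissimilarity is 0 even though the Euclidean distance
# between x and y is large: the two profiles would cluster together.
```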
Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey
NASA Astrophysics Data System (ADS)
Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.
2017-02-01
Different global and local color histogram methods for content-based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global, which misses the spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research: ways of extracting local histograms to obtain spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and fuzzy linking methods to reduce the size of the histogram. The color space and the distance metric used are vital in obtaining a color histogram. In this paper, the performance of CBIR based on different global and local color histograms is surveyed in three color spaces, namely RGB, HSV, and L*a*b*, and with three distance measures, Euclidean, quadratic, and histogram intersection, in order to choose an appropriate method for future research.
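Two of the surveyed distance measures have very compact definitions on normalized histograms; a sketch with invented 4-bin histograms:

```python
from math import sqrt

def euclidean_dist(h1, h2):
    return sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def histogram_intersection(h1, h2):
    # similarity in [0, 1] for normalized histograms; 1 means identical
    return sum(min(a, b) for a, b in zip(h1, h2))

h1 = [0.4, 0.3, 0.2, 0.1]
h2 = [0.1, 0.2, 0.3, 0.4]
# Euclidean is a distance (0 for identical histograms), while histogram
# intersection is a similarity (1 for identical histograms), so retrieval
# rankings sort in opposite directions for the two measures.
```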
Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.
Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli
2016-05-01
Epileptic seizure detection plays an important role in the diagnosis of epilepsy and in reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs, and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
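A sketch of the log-Euclidean Gaussian kernel described above, restricted to diagonal SPD matrices so the matrix logarithm reduces to an elementwise log of the diagonal (the full method applies the matrix logarithm to dense SPD covariance descriptors; the bandwidth sigma is an illustrative choice):

```python
from math import log, exp

def log_euclidean_gaussian_kernel(d1, d2, sigma=1.0):
    """d1, d2: diagonals of SPD matrices (all entries > 0).
    k(X, Y) = exp(-||log(X) - log(Y)||_F^2 / (2 sigma^2))."""
    sq = sum((log(a) - log(b)) ** 2 for a, b in zip(d1, d2))
    return exp(-sq / (2.0 * sigma ** 2))

k_same = log_euclidean_gaussian_kernel([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
# identical matrices give kernel value exactly 1; the kernel decays toward 0
# as the matrices move apart on the log-Euclidean metric
```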
Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis.
Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng
2015-01-01
Brain morphometry study plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By the Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane, or the hyperbolic plane. The uniformization map will distort the surface area elements. The area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any 2 probability measures, there is a unique optimal mass transport map between them; the transportation cost defines the Wasserstein distance between them. The Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on the geodesic power Voronoi diagram. Compared to conventional methods, our approach solely depends on Riemannian metrics and is invariant under rigid motions and scalings, thus it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method.
Du, Shaoyi; Xu, Yiting; Wan, Teng; Hu, Huaizhong; Zhang, Sirui; Xu, Guanglin; Zhang, Xuetao
2017-01-01
The iterative closest point (ICP) algorithm is efficient and accurate for rigid registration, but it requires good initial parameters and easily fails when the rotation angle between two point sets is large. To deal with this problem, a new objective function is proposed by introducing a rotation invariant feature based on the Euclidean distance between each point and a global reference point, where the global reference point is a rotation invariant. This optimization problem is then solved by a variant of the ICP algorithm, which is an iterative method. Firstly, the accurate correspondence is established by using the weighted rotation invariant feature distance and position distance together. Secondly, the rigid transformation is solved by the singular value decomposition method. Thirdly, the weight is adjusted to control the relative contributions of the positions and features. Finally, the new algorithm accomplishes the registration in a coarse-to-fine way regardless of the initial rotation angle, and is demonstrated to converge monotonically. The experimental results validate that the proposed algorithm is more accurate and robust than the original ICP algorithm.
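A minimal sketch of the idea, with the centroid standing in as the global reference point and an illustrative weighting scheme (both are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def rotation_invariant_feature(P):
    """Distance from each point to the point set's centroid (acting as the
    global reference point); invariant under rotation of the whole set."""
    c = P.mean(axis=0)
    return np.linalg.norm(P - c, axis=1)

def weighted_correspondence(P, Q, w):
    """For each p in P, find the q in Q minimizing the weighted sum
    (1 - w) * ||p - q||^2 + w * (f(p) - f(q))^2."""
    fP, fQ = rotation_invariant_feature(P), rotation_invariant_feature(Q)
    pos = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)   # position term
    feat = (fP[:, None] - fQ[None, :]) ** 2                # feature term
    return np.argmin((1 - w) * pos + w * feat, axis=1)

# The feature is unchanged by a rotation of the point set:
rng = np.random.default_rng(1)
P = rng.standard_normal((6, 2))
theta = 1.2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(rotation_invariant_feature(P),
                  rotation_invariant_feature(P @ R.T)))  # True
```

With the weight fully on the feature term, corresponding points match even under a large rotation, which is the mechanism that removes ICP's sensitivity to the initial angle.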
Connectivity in a pond system influences migration and genetic structure in threespine stickleback.
Seymour, Mathew; Räsänen, Katja; Holderegger, Rolf; Kristjánsson, Bjarni K
2013-03-01
Neutral genetic structure of natural populations is primarily influenced by migration (the movement of individuals and, subsequently, their genes) and drift (the statistical chance of losing genetic diversity over time). Migration between populations is influenced by several factors, including individual behavior, physical barriers, and environmental heterogeneity among populations, whereas drift is expected to be stronger in populations with low immigration rates and small effective population sizes. With technological advances in geographic information systems and spatial analysis tools, landscape genetics now allows the development of realistic migration models and increased insight into important processes influencing the diversity of natural populations. In this study, we investigated the relationship between landscape connectivity and genetic distance of threespine stickleback (Gasterosteus aculeatus) inhabiting a pond complex in Belgjarskógur, Northeast Iceland. We used two landscape genetic approaches (least-cost-path and isolation-by-resistance) and asked whether gene flow, as measured by genetic distance, was more strongly associated with Euclidean distance (isolation-by-distance) or with landscape connectivity provided by areas prone to flooding (as indicated by Carex sp. cover). We found substantial genetic structure across the study area, with pairwise genetic distances among populations (DPS) ranging from 0.118 to 0.488. Genetic distances among populations were more strongly correlated with least-cost-path and isolation-by-resistance than with Euclidean distance, whereas the relative contributions of isolation-by-resistance and Euclidean distance could not be disentangled. These results indicate that migration among stickleback populations occurs via periodically flooded areas. Overall, this study highlights the importance of transient landscape elements influencing migration and genetic structure of populations at small spatial scales.
Evaluation of procedures for prediction of unconventional gas in the presence of geologic trends
Attanasi, E.D.; Coburn, T.C.
2009-01-01
This study extends the application of local spatial nonparametric prediction models to the estimation of recoverable gas volumes in continuous-type gas plays in regimes where there is a single geologic trend. A transformation, originally proposed by Tomczak, is presented that offsets the distortions caused by the trend. This article reports on numerical experiments that compare the predictive and classification performance of local nonparametric prediction models based on the transformation with models based on Euclidean distance. The transformation offers improvement in average root mean square error when the trend is not severely misspecified. Because of the local nature of the models, even those based on Euclidean distance are reasonably robust in the presence of trends. Tests based on other model performance metrics, such as the prediction error associated with high-grade tracts and the ability of the models to identify sites with the largest gas volumes, also demonstrate the robustness of both local modeling approaches. © International Association for Mathematical Geology 2009.
Wheat, J S; Choppin, S; Goyal, A
2014-06-01
Three-dimensional surface imaging technologies have been used in the planning and evaluation of breast reconstructive and cosmetic surgery. The aim of this study was to develop a 3D surface imaging system based on the Microsoft Kinect and to assess the accuracy and repeatability with which the system could image the breast. A system comprising two Kinects, calibrated to provide a complete 3D image of a mannequin, was developed. Digital measurements of Euclidean and surface distances between landmarks showed acceptable agreement with manual measurements; the mean differences for Euclidean and surface distances were 1.9 mm and 2.2 mm, respectively. The system also demonstrated good intra- and inter-rater reliability (ICCs > 0.999). The Kinect-based 3D surface imaging system offers a low-cost, readily accessible alternative to more expensive, commercially available systems, which have had limited clinical use. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
Adhikari, Badri; Trieu, Tuan; Cheng, Jianlin
2016-11-07
Reconstructing three-dimensional structures of chromosomes is useful for visualizing their shapes in a cell and interpreting their function. In this work, we reconstruct chromosomal structures from Hi-C data by translating contact counts in Hi-C data into Euclidean distances between chromosomal regions and then satisfying these distances using a structure reconstruction method rigorously tested in the field of protein structure determination. We first evaluate the robustness of the overall reconstruction algorithm on noisy simulated data at various levels of noise by comparing with some of the state-of-the-art reconstruction methods. Then, using simulated data, we validate that Spearman's rank correlation coefficient between pairwise distances in the reconstructed chromosomal structures and the experimental chromosomal contact counts can be used to find optimum conversion rules for transforming interaction frequencies to wish distances. This strategy is then applied to real Hi-C data at the chromosome level for the optimal transformation of interaction frequencies to wish distances and for ranking and selecting structures. The chromosomal structures reconstructed from a real-world human Hi-C dataset by our method were validated by the known two-compartment feature of human chromosome organization. We also show that our method is robust with respect to changes in the granularity of Hi-C data, and consistently produces similar structures at different chromosomal resolutions. Chromosome3D is a robust method for reconstructing three-dimensional chromosome models using distance restraints obtained from Hi-C interaction frequency data. It is available as a web application and as an open source tool at http://sysbio.rnet.missouri.edu/chromosome3d/.
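The count-to-distance conversion and the Spearman-based selection can be sketched as follows; the power-law form d = c**(-alpha) and the simulated counts are illustrative assumptions, not the tool's exact conversion rule:

```python
import numpy as np
from scipy.stats import spearmanr

def wish_distances(counts, alpha=0.5):
    """Convert contact counts to wish distances via d = c**(-alpha);
    alpha is a tunable conversion parameter, not a fixed constant."""
    return np.asarray(counts, float) ** (-alpha)

# Simulated truth: regions on a line; counts fall off as distance**(-2).
pos = np.arange(1, 8, dtype=float)
true_d = np.abs(pos[:, None] - pos[None, :])
iu = np.triu_indices(len(pos), k=1)
counts = true_d[iu] ** (-2.0)

# Rank correlation between converted distances and the true geometry is
# the criterion used to pick alpha; here the ordering is recovered exactly.
rho, _ = spearmanr(wish_distances(counts), true_d[iu])
print(round(rho, 6))  # 1.0: perfectly monotone relationship
```

Because Spearman's coefficient only depends on ranks, it scores the monotone agreement between reconstructed distances and contact counts without assuming the exact functional form of the decay.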
Crystallization mosaic effect generation by superpixels
NASA Astrophysics Data System (ADS)
Xie, Yuqi; Bo, Pengbo; Yuan, Ye; Wang, Kuanquan
2015-03-01
Art effect generation from digital images using computational tools has been a hot research topic in recent years. We propose a new method for generating crystallization mosaic effects from color images. Two key problems in generating a pleasant mosaic effect are studied: grouping pixels into mosaic tiles and arranging mosaic tiles to adapt to image features. To produce a visually pleasant mosaic effect, we create mosaic tiles by pixel clustering in the feature space of color information, taking the compactness of tiles into consideration as well. Moreover, we propose a method for processing feature boundaries in images, which guides the arrangement of mosaic tiles near image features. This method gives mosaic tiles of nearly uniform shape that adapt to feature lines in an esthetic way. The new approach considers both the color distance and the Euclidean distance of pixels, and is thus capable of producing mosaic tiles in a more pleasing manner. Experiments are included to demonstrate the computational efficiency of the present method and its capability of generating visually pleasant mosaic tiles. Comparisons with existing approaches are also included to show the superiority of the new method.
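The combined color-plus-position distance is in the spirit of SLIC-style superpixel clustering; the exact weighting used in the paper may differ. A sketch, with S the sampling interval and m the compactness weight (both illustrative):

```python
import numpy as np

def combined_distance(color1, xy1, color2, xy2, S, m):
    """SLIC-style combined distance between a pixel and a cluster center:
    D = sqrt(d_color^2 + (d_xy / S)^2 * m^2), where S is the grid interval
    and m trades color similarity against spatial compactness."""
    d_c = np.linalg.norm(np.asarray(color1, float) - np.asarray(color2, float))
    d_s = np.linalg.norm(np.asarray(xy1, float) - np.asarray(xy2, float))
    return np.sqrt(d_c**2 + (d_s / S) ** 2 * m**2)

# Two pixels identical in color: the spatially distant one is penalized,
# which is what keeps tiles compact.
near = combined_distance([10, 10, 10], (0, 0), [10, 10, 10], (5, 5), S=10, m=10)
far = combined_distance([10, 10, 10], (0, 0), [10, 10, 10], (50, 50), S=10, m=10)
print(near < far)  # True
```

Raising m forces more uniform, rounder tiles; lowering it lets tiles stretch along color boundaries, which matches the paper's trade-off between uniform tile shape and adaptation to feature lines.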
View-invariant gait recognition method by three-dimensional convolutional neural network
NASA Astrophysics Data System (ADS)
Xing, Weiwei; Li, Ying; Zhang, Shunli
2018-01-01
Gait, as an important biometric feature, can identify a human at a long distance. View change is one of the most challenging factors for gait recognition. To address cross-view issues in gait recognition, we propose a view-invariant gait recognition method based on a three-dimensional (3-D) convolutional neural network. First, the 3-D convolutional neural network (3DCNN) is introduced to learn view-invariant features, capturing spatial and temporal information simultaneously from normalized silhouette sequences. Second, a network training method based on cross-domain transfer learning is proposed to cope with the limited number of gait training samples: we choose C3D as the basic model, pretrained on Sports-1M, and then fine-tune it for gait recognition. In the recognition stage, we use the fine-tuned model to extract gait features and use the Euclidean distance to measure the similarity of gait sequences. Extensive experiments are carried out on the CASIA-B dataset, and the results demonstrate that our method outperforms many other methods.
Kim, Jungmin; Park, Juyong; Lee, Wonjae
2018-01-01
The quality of life for people in urban regions can be improved by predicting urban human mobility and adjusting urban planning accordingly. In this study, we compared several possible variables to verify whether a gravity model (a human mobility prediction model borrowed from Newtonian mechanics) works as well within a city as it does between cities. We reviewed the resident population, the number of employees, and the number of SNS posts as candidate mass values for an urban traffic gravity model. We also compared the straight-line distance, travel distance, and travel time as possible distance values. We defined the functions of urban regions on the basis of public records and SNS data to reflect the diverse social factors in urban regions. In this process, we applied a dimension reduction method to the public record data and a machine learning-based clustering algorithm to the SNS data. In doing so, we found that functional distance could be defined as the Euclidean distance between the social function vectors of urban regions. Finally, we examined whether functional distance is a variable that has a significant impact on urban human mobility.
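A sketch of the gravity model with a functional distance, using made-up masses and function vectors (the exponent and constant are illustrative, not fitted values from the study):

```python
import numpy as np

def gravity_flow(m_i, m_j, d_ij, k=1.0, beta=2.0):
    """Gravity model of mobility: T_ij = k * m_i * m_j / d_ij**beta,
    with masses m (population, employees, or SNS posts) and some
    distance d (straight-line, travel, time, or functional)."""
    return k * m_i * m_j / d_ij**beta

def functional_distance(f_i, f_j):
    """Euclidean distance between the social-function vectors of regions."""
    return np.linalg.norm(np.asarray(f_i, float) - np.asarray(f_j, float))

# Two regions with similar functions are 'functionally close', so at equal
# mass the predicted flow between them is larger than to a dissimilar one.
f_a, f_b, f_c = [1.0, 0.0, 2.0], [1.1, 0.1, 2.0], [4.0, 3.0, 0.0]
flow_ab = gravity_flow(100, 100, functional_distance(f_a, f_b))
flow_ac = gravity_flow(100, 100, functional_distance(f_a, f_c))
print(flow_ab > flow_ac)  # True
```

Swapping the physical distance for the functional one changes only the denominator, so the same fitting machinery applies to every candidate distance.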
Comparison of Histograms for Use in Cloud Observation and Modeling
NASA Technical Reports Server (NTRS)
Green, Lisa; Xu, Kuan-Man
2005-01-01
Cloud observation and cloud modeling data can be presented in histograms for each characteristic to be measured. Combining information from single-cloud histograms yields a summary histogram. Summary histograms can be compared to each other to reach conclusions about the behavior of an ensemble of clouds in different places at different times or about the accuracy of a particular cloud model. As in any scientific comparison, it is necessary to decide whether any apparent differences are statistically significant. The usual methods of deciding statistical significance when comparing histograms do not apply in this case because they assume independent data. Thus, a new method is necessary. The proposed method uses the Euclidean distance metric and bootstrapping to calculate the significance level.
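A sketch of such a test under simplifying assumptions: summary histograms are taken as means of single-cloud histograms, and the resampling scheme shown is one plausible reading of the proposed method, not its exact specification:

```python
import numpy as np

def euclidean_hist_distance(h1, h2):
    """Euclidean distance between two (binned) summary histograms."""
    return np.linalg.norm(np.asarray(h1, float) - np.asarray(h2, float))

def bootstrap_pvalue(cloud_hists_a, cloud_hists_b, n_boot=2000, seed=0):
    """Resample single-cloud histograms with replacement from the pooled
    set, rebuild two summary histograms each round, and report how often
    the resampled distance reaches the observed one."""
    rng = np.random.default_rng(seed)
    a = np.asarray(cloud_hists_a, float)
    b = np.asarray(cloud_hists_b, float)
    observed = euclidean_hist_distance(a.mean(axis=0), b.mean(axis=0))
    pooled = np.vstack([a, b])
    n_a = len(a)
    hits = 0
    for _ in range(n_boot):
        sample = pooled[rng.integers(0, len(pooled), size=len(pooled))]
        d = euclidean_hist_distance(sample[:n_a].mean(axis=0),
                                    sample[n_a:].mean(axis=0))
        if d >= observed:
            hits += 1
    return hits / n_boot

# Identical ensembles can never look 'significantly' different:
same = bootstrap_pvalue(np.tile([1.0, 2.0, 3.0], (20, 1)),
                        np.tile([1.0, 2.0, 3.0], (20, 1)))
print(same)  # 1.0
```

Because the resampling treats single-cloud histograms, not individual observations, as the exchangeable units, it avoids the independence assumption that invalidates the standard chi-squared comparison here.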
Multi-resolution analysis for ear recognition using wavelet features
NASA Astrophysics Data System (ADS)
Shoaib, M.; Basit, A.; Faye, I.
2016-11-01
Security is very important, and identification of humans while they are moving, without any physical contact, is necessary. Ear biometrics is one of the methods by which a person can be identified using surveillance cameras. Various techniques have been proposed to improve ear-based recognition systems. In this work, a feature extraction method for human ear recognition based on wavelet transforms is proposed. The proposed features are the approximation coefficients and specific level-two details obtained after applying various types of wavelet transforms; different wavelet transforms are applied to find the most suitable wavelet. The minimum Euclidean distance is used as the matching criterion. Results achieved by the proposed method are promising and can be used in a real-time ear recognition system.
Sidek, Khairul; Khali, Ibrahim
2012-01-01
In this paper, a person identification mechanism implemented with a Cardioid-based graph using the electrocardiogram (ECG) is presented. The Cardioid-based graph has given reasonably good classification accuracy in differentiating between individuals. However, the current feature extraction method using Euclidean distance can be further improved by using the Mahalanobis distance measurement, producing extracted coefficients that take into account the correlations of the data set. Identification is then done by applying these extracted features to a Radial Basis Function Network. A total of 30 ECG recordings from the MIT-BIH Normal Sinus Rhythm database (NSRDB) and the MIT-BIH Arrhythmia database (MITDB) were used for development and evaluation purposes. Our experimental results suggest that the proposed feature extraction method significantly increases the classification performance of subjects in both databases, with accuracy rising from 97.50% to 99.80% in NSRDB and from 96.50% to 99.40% in MITDB. High sensitivity, specificity, and positive predictive values of 99.17%, 99.91%, and 99.23% for NSRDB and 99.30%, 99.90%, and 99.40% for MITDB also validate the proposed method. This result also indicates that the right feature extraction technique plays a vital role in determining the persistency of the classification accuracy of a Cardioid-based person identification mechanism.
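The advantage of the Mahalanobis distance over the Euclidean distance on correlated data can be illustrated directly (the covariance matrix and points below are made up for illustration):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance: whitens by the inverse covariance, so it
    accounts for correlations that plain Euclidean distance ignores."""
    diff = np.asarray(x, float) - np.asarray(mean, float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Strongly correlated 2-D data: a point lying along the correlation axis
# is 'closer' in the Mahalanobis sense than an off-axis point at the
# same Euclidean distance from the mean.
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
mean = np.zeros(2)
on_axis = np.array([1.0, 1.0])
off_axis = np.array([1.0, -1.0])
print(np.linalg.norm(on_axis) == np.linalg.norm(off_axis))  # True
print(mahalanobis(on_axis, mean, cov) <
      mahalanobis(off_axis, mean, cov))                     # True
```

This is exactly the property the abstract appeals to: correlated ECG-derived coefficients are judged by how typical they are of the class distribution, not by raw geometric distance.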
Fast Laplace solver approach to pore-scale permeability
NASA Astrophysics Data System (ADS)
Arns, C. H.; Adler, P. M.
2018-02-01
We introduce a powerful and easily implemented method to calculate the permeability of porous media at the pore scale, using an approximation based on the Poiseuille equation to calculate permeability to fluid flow with a Laplace solver. The method consists of calculating the Euclidean distance map of the fluid phase to assign local conductivities and lends itself naturally to the treatment of multiscale problems. We compare with analytical solutions as well as experimental measurements and lattice Boltzmann calculations of permeability for Fontainebleau sandstone. The solver is significantly more stable than the lattice Boltzmann approach, uses less memory, and is significantly faster. Permeabilities are in excellent agreement over a wide range of porosities.
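A sketch of the first step, computing the Euclidean distance map of the fluid phase and assigning Poiseuille-like local conductivities (the slit geometry and the g ~ d**2 rule are illustrative assumptions; the paper's solver then uses such conductivities in a Laplace equation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Binary pore geometry: 1 = fluid, 0 = solid (a simple slit channel).
pore = np.zeros((7, 7), dtype=int)
pore[2:5, :] = 1

# Euclidean distance map of the fluid phase: each fluid voxel gets its
# distance to the nearest solid voxel.
d = distance_transform_edt(pore)

# Poiseuille-like local conductivity grows with the square of the
# distance to the wall (wider channels conduct much better).
conductivity = d**2

print(d[3, 3])  # 2.0: the channel centre is two voxels from the walls
```

Feeding these voxel conductivities into a standard Laplace (pressure) solve replaces the full Stokes problem with a far cheaper and more stable computation, which is the source of the speed and memory advantages the abstract reports.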
A Riemannian framework for orientation distribution function computing.
Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid
2009-01-01
Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Diffusion Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in Information Geometry theory and has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on Information Geometry and sparse representation in orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map, and geodesic have closed forms, and the weighted Frechet mean exists uniquely on the manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between an ODF and the isotropic ODF. The Rényi entropy H1/2 of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on an ODF field is proposed based on the weighted Frechet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks for ODFs, our framework is model-free, and the estimation of the parameters, i.e., the Riemannian coordinates, is robust and linear. Moreover, our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation.
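In the square-root parameterization, the Fisher-metric geodesic distance between discretized PDFs has a closed form, which gives a direct analogue of the GA measurement (the discretization below is a toy example, not the paper's ODF basis):

```python
import numpy as np

def fisher_geodesic_distance(p, q):
    """Geodesic distance between two discretized PDFs under the Fisher
    information metric: in the square-root parameterization PDFs lie on
    a unit sphere, so the distance is the arc length
    arccos(<sqrt(p), sqrt(q)>)."""
    inner = np.clip(np.sum(np.sqrt(p) * np.sqrt(q)), -1.0, 1.0)
    return float(np.arccos(inner))

# Geometric-Anisotropy-style measurement: distance from an ODF to the
# isotropic (uniform) distribution over n directions.
n = 64
iso = np.full(n, 1.0 / n)
peaked = np.zeros(n)
peaked[0] = 1.0  # all probability on one direction

print(fisher_geodesic_distance(iso, iso))  # 0.0: identical ODFs
print(fisher_geodesic_distance(peaked, iso))
```

Because the square roots of discretized PDFs have unit Euclidean norm, the geodesic is a great-circle arc, which is why the exponential and logarithmic maps admit the closed forms the abstract mentions.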
Local coding based matching kernel method for image classification.
Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong
2014-01-01
This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
Superintegrable three-body systems on the line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chanu, Claudia; Degiovanni, Luca; Rastelli, Giovanni
2008-11-15
We consider classical three-body interactions on a Euclidean line depending on the reciprocal distance of the particles and admitting four functionally independent first integrals quadratic in the momenta. These systems are multiseparable, superintegrable, and equivalent (up to rescalings) to a one-particle system in three-dimensional Euclidean space. Common features of the dynamics are discussed. We show how to determine quantum symmetry operators associated with the first integrals considered here but do not analyze the corresponding quantum dynamics. The conformal multiseparability is discussed and examples of conformal first integrals are given. The systems considered here include, in generality, the Calogero, Wolfes, and other three-body interactions widely studied in mathematical physics.
Ab initio nanostructure determination
NASA Astrophysics Data System (ADS)
Gujarathi, Saurabh
Reconstruction of complex structures is an inverse problem arising in virtually all areas of science and technology, from protein structure determination to bulk heterostructure solar cells and the structure of nanoparticles. The problem is cast as a complex network problem in which the edges of the network have weights equal to the Euclidean distance between their endpoints. A method, called Tribond, is presented for reconstructing the locations of the nodes of the network given only the edge weights of the Euclidean network. Timing results indicate that, in two dimensions, the algorithm is a low-order polynomial in the number of nodes: Euclidean networks of about one thousand nodes are reconstructed in approximately twenty-four hours on a desktop computer using this implementation. In three dimensions, the computational cost of reconstruction is a higher-order polynomial in the number of nodes, and reconstruction of small three-dimensional Euclidean networks is shown. If a starting network of size five is assumed to be given, then for a network of size 100 the remaining reconstruction can be done in about two hours on a desktop computer. In situations with less precise data, modifications of the method may be necessary; these are discussed. A related problem in one dimension, the Optimal Golomb Ruler (OGR), is also studied. A statistical physics Hamiltonian describing the OGR problem is introduced, and the first-order phase transition from a symmetric low-constraint phase to a complex symmetry-broken phase at high constraint is studied. Despite the fact that the Hamiltonian is not disordered, the asymmetric phase is highly irregular, with geometric frustration. The phase diagram is obtained, and it is seen that even at very low temperature T there is a phase transition at a finite, non-zero value of the constraint parameter γ/μ.
Analytic calculations of the scaling of the density and free energy of the ruler are carried out and compared with those from the mean-field approach. A scaling law is also derived for the length of the OGR, which is consistent with the Erdős conjecture and with numerical results.
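A Golomb ruler can be checked directly from its defining property that all pairwise mark differences are distinct:

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """A Golomb ruler has all pairwise differences between marks distinct,
    so every inter-mark distance occurs exactly once."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

# The optimal order-5 ruler, of length 11:
print(is_golomb_ruler([0, 1, 4, 9, 11]))  # True
# Not a Golomb ruler: the difference 1 occurs twice (1-0 and 2-1).
print(is_golomb_ruler([0, 1, 2, 5]))      # False
```

The OGR problem asks for the shortest such ruler with a given number of marks; the constraint that no difference repeats is what generates the geometric frustration studied in the thesis.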
Multi-Frequency Analysis for Landmine Detection with Forward-Looking Ground Penetrating Radar
2010-10-12
CIEXYZ tristimulus values of the red, green, and blue primaries and the white point defining the RGB images' color gamut, and the illuminant under... distance in CIELAB color space for the color imagery and Euclidean distance between grey levels for the IR imagery. Use of CIELAB color space was... motivated by its superior perceptual uniformity compared to RGB and slight illuminant invariance, as explained in our previous work. Use of CIELAB color
Local Subspace Classifier with Transform-Invariance for Image Classification
NASA Astrophysics Data System (ADS)
Hotta, Seiji
A family of linear subspace classifiers called the local subspace classifier (LSC) outperforms the k-nearest neighbor rule (kNN) and conventional subspace classifiers in handwritten digit classification. However, LSC suffers from very high sensitivity to image transformations because it uses projection and Euclidean distances for classification. In this paper, I present a combination of the local subspace classifier (LSC) and the tangent distance (TD) for improving the accuracy of handwritten digit recognition. In this classification rule, transform-invariance can be handled easily because tangent vectors can be used to approximate transformations. However, tangent vectors cannot be used for other types of images, such as color images. Hence, a kernel LSC (KLSC) is proposed for incorporating transform-invariance into LSC via kernel mapping. The performance of the proposed methods is verified with experiments on handwritten digit and color image classification.
Li, Lian-Hui; Mo, Rong
2015-01-01
The production task queue is of great significance for manufacturing resource allocation and scheduling decisions. Manual, qualitative queue optimization methods perform poorly and are difficult to apply. A production task queue optimization method based on multi-attribute evaluation is therefore proposed. According to the task attributes, a hierarchical multi-attribute model is established and indicator quantization methods are given. To calculate the objective indicator weight, criteria importance through intercriteria correlation (CRITIC) is selected from three common methods. To calculate the subjective indicator weight, a BP neural network is used to determine the judges' importance degrees, and a trapezoid fuzzy scale-rough AHP that takes these importance degrees into account is put forward. The balanced weight, which integrates the objective weight and the subjective weight, is calculated based on a multi-weight contribution balance model. The technique for order preference by similarity to ideal solution (TOPSIS), improved by replacing Euclidean distance with a relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator values. A case study is given to illustrate the method's correctness and feasibility. PMID:26414758
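A sketch of TOPSIS with an entropy-based separation measure; since plain KL divergence can be negative on an unnormalized weighted matrix, the generalized (non-normalized) KL divergence is used here, and the decision matrix, weights, and benefit-type assumption are all illustrative, not the paper's case-study data:

```python
import numpy as np

def gen_kl(p, q, eps=1e-12):
    """Generalized (non-normalized) KL divergence, >= 0 for any
    nonnegative vectors; plays the role of the Euclidean separation."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    return float(np.sum(p * np.log(p / q) - p + q))

def topsis_relative_entropy(decision, weights):
    """Rank alternatives (rows) by relative closeness to the ideal
    solution, with entropy-based separations instead of Euclidean ones.
    All criteria are treated as benefit-type."""
    X = decision / decision.sum(axis=0)       # column normalization
    W = X * weights                           # weighted decision matrix
    ideal, anti = W.max(axis=0), W.min(axis=0)
    d_plus = np.array([gen_kl(ideal, row) for row in W])
    d_minus = np.array([gen_kl(anti, row) for row in W])
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(-closeness), closeness

# Three tasks scored on three benefit criteria (hypothetical numbers):
decision = np.array([[7.0, 9.0, 9.0],
                     [8.0, 7.0, 8.0],
                     [9.0, 6.0, 8.0]])
weights = np.array([0.2, 0.4, 0.4])
order, closeness = topsis_relative_entropy(decision, weights)
print(order[0])  # 0: the task strong on the heavily weighted criteria
```

Replacing the Euclidean separation with an entropy-based one makes the ranking sensitive to relative rather than absolute indicator differences, which is the motivation stated in the abstract.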
Liu, Xu; Jia, Shi-qiang; Wang, Chun-ying; Liu, Zhe; Gu, Jian-cheng; Zhai, Wei; Li, Shao-ming; Zhang, Xiao-dong; Zhu, De-hai; Huang, Hua-jun; An, Dong
2015-09-01
This paper explored the relationship among genetic distances, NIR spectral distances, and the performance of NIR-based identification models for the seeds of maize inbred lines. Using 3 groups (15 pairs in total) of maize inbred lines with differing genetic distances as experimental materials, we calculated the genetic distance between these seeds with SSR markers and used the Euclidean distance between the distribution centers of the maize NIR spectra in PCA space as the spectral distance. The BPR method was used to build the inbred line identification model, and identification accuracy was used as the measure of model performance. The results showed that the correlation between genetic distance and spectral distance is 0.9868, and genetic distance has a correlation of 0.9110 with identification accuracy, which is highly correlated. This means the near-infrared spectra of seeds can reflect the genetic relationships of maize inbred lines: the smaller the genetic distance, the smaller the spectral distance, and the poorer the model's ability to discriminate. In practical applications, near-infrared spectroscopy has the potential to be used to analyze the genetic relationships of maize inbred lines, contributing much to genetic breeding, variety identification, purity sorting, and so on. Moreover, when creating a NIR-based identification model, the impact of maize inbred lines with close genetic relationships should be fully considered.
Yu, Kaixin; Wang, Xuetong; Li, Qiongling; Zhang, Xiaohui; Li, Xinwei; Li, Shuyu
2018-01-01
Morphological brain networks play a key role in investigating abnormalities in neurological diseases such as mild cognitive impairment (MCI) and Alzheimer's disease (AD). However, most morphological brain network construction methods consider only a single morphological feature. Each type of morphological feature has specific neurological and genetic underpinnings, and a combination of morphological features has been proven to have better diagnostic performance than a single feature, which suggests that an individual morphological brain network based on multiple morphological features would be beneficial in disease diagnosis. Here, we propose a novel method to construct individual morphological brain networks for two datasets by calculating the exponential function of the multivariate Euclidean distance as the similarity between two regions. The first dataset included 24 healthy subjects who were scanned twice within a 3-month period. The topological properties of these brain networks were analyzed and compared with previous studies that used different methods and modalities. Small-world properties were observed in all subjects, and the high reproducibility indicated the robustness of our method. The second dataset included 170 patients with MCI (86 stable MCI and 84 progressive MCI cases) and 169 normal controls (NC). The edge features extracted from the individual morphological brain networks were used to distinguish MCI from NC and to separate MCI subgroups (progressive vs. stable) with a support vector machine in order to validate our method. The results showed that our method achieved an accuracy of 79.65% (MCI vs. NC) and 70.59% (stable MCI vs. progressive MCI) in the single-feature case. With multiple features, our method improved the classification performance to an accuracy of 80.53% (MCI vs. NC) and 77.06% (stable MCI vs. progressive MCI) compared with the single-feature method.
The results indicated that our method could effectively construct an individual morphological brain network based on multiple morphological features and could accurately discriminate MCI from NC and stable MCI from progressive MCI, and may provide a valuable tool for the investigation of individual morphological brain networks.
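The similarity rule described above, an exponential function of the multivariate Euclidean distance between two regions' morphological feature vectors, can be sketched as follows. The feature values and the unit decay rate are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def similarity_matrix(features):
    """Pairwise similarity between brain regions.

    features: (n_regions, n_features) array of morphological features,
    each column assumed z-scored so no single feature dominates.
    Similarity is exp(-d), where d is the multivariate Euclidean
    distance between two regions' feature vectors (decay rate assumed 1).
    """
    diff = features[:, None, :] - features[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    return np.exp(-d)

# three hypothetical regions; the first two have identical features
feats = np.array([[0.0, 1.0], [0.0, 1.0], [3.0, 5.0]])
S = similarity_matrix(feats)
# identical regions get similarity 1; similarity decays with distance
```

The off-diagonal entries of S would then serve as the edge features fed to the classifier.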
Research on measurement method of optical camouflage effect of moving object
NASA Astrophysics Data System (ADS)
Wang, Juntang; Xu, Weidong; Qu, Yang; Cui, Guangzhen
2016-10-01
Camouflage effectiveness measurement is an important part of camouflage technology: it tests and measures the camouflage effect of a target and the performance of camouflage equipment against tactical and technical requirements. Current optical-band camouflage effectiveness measurement is mainly aimed at static targets and cannot objectively reflect the dynamic camouflage effect of a moving target. This paper combines moving-object detection with camouflage effect detection, taking the digital camouflage of a moving object as the research object. The Surendra adaptive background update algorithm was improved, and a method for measuring optical camouflage effect during moving-object detection using the Lab color space is presented. A binary image of the moving object is extracted, and in the image sequence, characteristic parameters such as dispersion, eccentricity, complexity, and moment invariants are used to construct a feature vector space. The Euclidean distance of the moving target with digital camouflage was calculated; the average Euclidean distance over 375 frames was 189.45, indicating that the dispersion, eccentricity, complexity, and moment invariants of the digitally camouflaged graphics differ greatly from those of the moving target without sprayed digital camouflage. The measurement results showed a good camouflage effect. Meanwhile, with the performance evaluation module, the correlation coefficient of the dynamic target image ranged from 0.0035 to 0.1275, with some fluctuation, reflecting the adaptability of target and background under dynamic conditions. As a next step, in view of existing infrared camouflage technology, we plan to extend the measurement technique to moving targets in the infrared band.
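The core measurement step, a Euclidean distance between shape feature vectors (dispersion, eccentricity, complexity, moment invariants) per frame, can be sketched as follows; the feature values are hypothetical, not the paper's data:

```python
import numpy as np

def feature_distance(target_feats, camo_feats):
    """Per-frame Euclidean distance between feature vectors.

    Each row holds [dispersion, eccentricity, complexity, moment invariant]
    for one frame; the values below are illustrative only.
    """
    return np.linalg.norm(target_feats - camo_feats, axis=1)

plain = np.array([[0.8, 0.6, 0.4, 0.2]])   # uncamouflaged target, one frame
camo  = np.array([[0.1, 0.9, 0.9, 0.7]])   # digitally camouflaged target
d = feature_distance(plain, camo)
avg = d.mean()   # averaged over all frames in the sequence
```

A large average distance over the sequence, as in the paper's 375-frame result, indicates a strong difference between camouflaged and uncamouflaged appearance.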
Using Multidimensional Scaling To Assess the Dimensionality of Dichotomous Item Data.
ERIC Educational Resources Information Center
Meara, Kevin; Robin, Frederic; Sireci, Stephen G.
2000-01-01
Investigated the usefulness of multidimensional scaling (MDS) for assessing the dimensionality of dichotomous test data. Focused on two MDS proximity measures, one based on the PC statistic (T. Chen and M. Davidson, 1996) and the other on interitem Euclidean distances. Simulation results show that both MDS procedures correctly identify…
The Equivalence of Three Statistical Packages for Performing Hierarchical Cluster Analysis
ERIC Educational Resources Information Center
Blashfield, Roger
1977-01-01
Three different software programs which contain hierarchical agglomerative cluster analysis procedures were shown to generate different solutions on the same data set using apparently the same options. The basis for the differences in the solutions was the formulae used to calculate Euclidean distance. (Author/JKS)
ERIC Educational Resources Information Center
Hossain, Md. Mokter
2012-01-01
This mixed methods study examined preservice secondary mathematics teachers' perceptions of a blogging activity used as a supportive teaching-learning tool in a college Euclidean Geometry course. The effect of a 12-week blogging activity that was a standard component of a college Euclidean Geometry course offered for preservice secondary…
Handwritten document age classification based on handwriting styles
NASA Astrophysics Data System (ADS)
Ramaiah, Chetan; Kumar, Gaurav; Govindaraju, Venu
2012-01-01
Handwriting styles are constantly changing over time. We approach the novel problem of estimating the approximate age of Historical Handwritten Documents using Handwriting styles. This system will have many applications in handwritten document processing engines where specialized processing techniques can be applied based on the estimated age of the document. We propose to learn a distribution over styles across centuries using Topic Models and to apply a classifier over weights learned in order to estimate the approximate age of the documents. We present a comparison of different distance metrics such as Euclidean Distance and Hellinger Distance within this application.
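A minimal sketch of the two distance metrics compared in the paper, applied to topic-weight vectors such as those produced by a topic model (the weights below are hypothetical):

```python
import numpy as np

def euclidean(p, q):
    # ordinary Euclidean distance between weight vectors
    return np.linalg.norm(p - q)

def hellinger(p, q):
    # Hellinger distance for probability vectors (non-negative, summing to 1);
    # bounded in [0, 1], often better suited to distributions than Euclidean
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2)

p = np.array([0.7, 0.2, 0.1])   # hypothetical topic weights, document A
q = np.array([0.5, 0.3, 0.2])   # hypothetical topic weights, document B
e, h = euclidean(p, q), hellinger(p, q)
```

The Hellinger distance treats the vectors as distributions, which is why it is a natural candidate alongside Euclidean distance when comparing topic-model weights.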
Distance to nearest road in the conterminous United States
Watts, Raymond D.
2005-01-01
The new dataset is the first member of the National Overview Road Metrics (NORM) family of road-related indicators. This indicator measures straight-line, or Euclidean, distance (ED) to the nearest road and is given the compound name NORM ED. NORM ED data can be viewed and downloaded from the transportation section of the web viewer for The National Map, http://nationalmap.usgs.gov. The full-resolution dataset for the conterminous states comprises 8.7 billion values.
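A NORM-ED-style indicator can be computed with a Euclidean distance transform. This sketch uses SciPy's distance_transform_edt on a toy raster with an assumed 30 m cell size; the actual NORM ED grid and resolution may differ:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# 1 = off-road cell, 0 = road cell; a hypothetical 5x5 grid, not NORM ED data
grid = np.ones((5, 5), dtype=np.uint8)
grid[2, :] = 0                      # a road running across row 2

cell_size = 30.0                    # metres per cell (assumed)
ed = distance_transform_edt(grid) * cell_size
# ed[0, 0] is two cells (60 m) from the road; ed[2, j] is 0 on the road
```

distance_transform_edt assigns each nonzero cell its Euclidean distance to the nearest zero cell, which is exactly the "distance to nearest road" semantics described above.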
A new edge detection algorithm based on Canny idea
NASA Astrophysics Data System (ADS)
Feng, Yingke; Zhang, Jinmin; Wang, Siming
2017-10-01
The traditional Canny algorithm has a poorly self-adapting threshold and is sensitive to noise. To overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. First, median filtering and filtering based on Euclidean distance are applied to the image; second, the Frei-Chen algorithm is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to partial gradient amplitudes to obtain threshold values for the images. The average of all calculated thresholds is then computed; half of this average is taken as the high threshold, and half of the high threshold as the low threshold. Experimental results show that the new method effectively suppresses noise, preserves edge information, and improves edge detection accuracy.
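The threshold rule described above (average the block-wise Otsu thresholds, take half the average as the high threshold, and half of that as the low threshold) can be sketched directly; the per-block threshold values below are hypothetical:

```python
import numpy as np

def hysteresis_thresholds(block_thresholds):
    """Derive Canny-style hysteresis thresholds as described in the abstract:
    average the per-block Otsu thresholds on the gradient amplitude,
    take half the average as the high threshold, and half of the high
    threshold as the low threshold.
    """
    avg = np.mean(block_thresholds)
    high = avg / 2.0
    low = high / 2.0
    return high, low

# hypothetical Otsu thresholds computed on three image blocks
high, low = hysteresis_thresholds([120.0, 100.0, 140.0])
# avg = 120 -> high = 60, low = 30
```

These two values would then feed the standard Canny hysteresis step in place of manually chosen thresholds.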
Symmetric nonnegative matrix factorization: algorithms and applications to probabilistic clustering.
He, Zhaoshui; Xie, Shengli; Zdunek, Rafal; Zhou, Guoxu; Cichocki, Andrzej
2011-12-01
Nonnegative matrix factorization (NMF) is an unsupervised learning method useful in various applications including image processing and semantic analysis of documents. This paper focuses on symmetric NMF (SNMF), which is a special case of NMF decomposition. Three parallel multiplicative update algorithms using level 3 basic linear algebra subprograms directly are developed for this problem. First, by minimizing the Euclidean distance, a multiplicative update algorithm is proposed, and its convergence under mild conditions is proved. Based on it, we further propose another two fast parallel methods: the α-SNMF and β-SNMF algorithms. All of them are easy to implement. These algorithms are applied to probabilistic clustering. We demonstrate their effectiveness for facial image clustering, document categorization, and pattern clustering in gene expression.
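A minimal sketch of a multiplicative update for SNMF minimizing the Euclidean (Frobenius) distance ||A − WWᵀ||. The 0.5 damping factor is a commonly used stabilization; the paper's α-SNMF and β-SNMF variants differ in their details:

```python
import numpy as np

def snmf(A, r, iters=200, eps=1e-9, seed=0):
    """Approximate a symmetric non-negative matrix A by W @ W.T (W >= 0),
    minimising the Euclidean (Frobenius) distance ||A - W W^T||.
    Sketch of a damped multiplicative update; not the paper's exact
    parallel algorithms.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    W = rng.random((n, r))
    for _ in range(iters):
        num = A @ W                      # gradient numerator
        den = W @ (W.T @ W) + eps        # gradient denominator, kept positive
        W *= 0.5 * (1.0 + num / den)     # damped multiplicative step
    return W

# a small hypothetical similarity matrix with two apparent clusters
A = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
W = snmf(A, r=2)
err = np.linalg.norm(A - W @ W.T)
```

For probabilistic clustering, the rows of W (suitably normalized) can be read as cluster membership weights.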
Estimating gene function with least squares nonnegative matrix factorization.
Wang, Guoli; Ochs, Michael F
2007-01-01
Nonnegative matrix factorization is a machine learning algorithm that has extracted information from data in a number of fields, including imaging and spectral analysis, text mining, and microarray data analysis. One limitation with the method for linking genes through microarray data in order to estimate gene function is the high variance observed in transcription levels between different genes. Least squares nonnegative matrix factorization uses estimates of the uncertainties on the mRNA levels for each gene in each condition, to guide the algorithm to a local minimum in normalized chi2, rather than a Euclidean distance or divergence between the reconstructed data and the data itself. Herein, application of this method to microarray data is demonstrated in order to predict gene function.
Delgado-González, José-Carlos; Florensa-Vila, José; Mansilla-Legorburo, Francisco; Insausti, Ricardo; Artacho-Pérula, Emilio
2017-01-01
The medial temporal lobe (MTL), and in particular the hippocampal formation, is essential in the processing and consolidation of declarative memory. The 3D environment of the anatomical structures contained in the MTL is an important issue. Our aim was to explore the spatial relationship of the anatomical structures of the MTL and changes in aging and/or Alzheimer's disease (AD). MTL anatomical landmarks are identified and registered to create a 3D network. The brain network is quantitatively described as a plane, rostrocaudally-oriented, and presenting Euclidean/real distances. Correspondence between 1.5T RM, 3T RM, and histological sections were assessed to determine the most important recognizable changes in AD, based on statistical significance. In both 1.5T and 3T RM images and histology, inter-rater reliability was high. Sex and hemisphere had no influence on network pattern. Minor changes were found in relation to aging. Distances from the temporal pole to the dentate gyrus showed the most significant differences when comparing control and AD groups. The best discriminative distance between control and AD cases was found in the temporal pole/dentate gyrus rostrocaudal length in histological sections. Moreover, more distances between landmarks were required to obtain 100% discrimination between control (divided into <65 years or >65 years) and AD cases. Changes in the distance between MTL anatomical landmarks can successfully be detected by using measurements of 3D network patterns in control and AD cases.
Kim, Heekang; Kwon, Soon; Kim, Sungho
2016-07-08
This paper proposes a vehicle light detection method using a hyperspectral camera instead of a Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) camera for adaptive car headlamp control. To apply Intelligent Headlight Control (IHC), the vehicle headlights need to be detected. Headlights comprise a variety of lighting sources, such as Light Emitting Diodes (LEDs), High-Intensity Discharge (HID) lamps, and halogen lamps. In addition, rear lamps are made of LEDs and halogen lamps. This paper refers to the recent research in IHC. Some problems exist in the detection of headlights, such as the erroneous detection of street lights or sign lights and the reflection plate of the ego-car in CCD or CMOS images. To solve these problems, this study uses hyperspectral images, because they have hundreds of bands and provide more information than a CCD or CMOS camera. Recent methods to detect headlights used the Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), and Euclidean Distance Mapper (EDM). The experimental results highlight the feasibility of the proposed method for three types of lights (LED, HID, and halogen).
Optimal steering for kinematic vehicles with applications to spatially distributed agents
NASA Astrophysics Data System (ADS)
Brown, Scott; Praeger, Cheryl E.; Giudici, Michael
While there is no universal method to address control problems involving networks of autonomous vehicles, there exist a few promising schemes that apply to different specific classes of problems, which have attracted the attention of many researchers from different fields. In particular, one way to extend techniques that address problems involving a single autonomous vehicle to those involving teams of autonomous vehicles is to use the concept of Voronoi diagram. The Voronoi diagram provides a spatial partition of the environment the team of vehicles operates in, where each element of this partition is associated with a unique vehicle from the team. The partition induces a graph abstraction of the operating space that is in a one-to-one correspondence with the network abstraction of the team of autonomous vehicles; a fact that can provide both conceptual and analytical advantages during mission planning and execution. In this dissertation, we propose the use of a new class of Voronoi-like partitioning schemes with respect to state-dependent proximity (pseudo-) metrics rather than the Euclidean distance or other generalized distance functions, which are typically used in the literature. An important nuance here is that, in contrast to the Euclidean distance, state-dependent metrics can succinctly capture system theoretic features of each vehicle from the team (e.g., vehicle kinematics), as well as the environment-vehicle interactions, which are induced, for example, by local winds/currents. We subsequently illustrate how the proposed concept of state-dependent Voronoi-like partition can induce local control schemes for problems involving networks of spatially distributed autonomous vehicles by examining a sequential pursuit problem of a maneuvering target by a group of pursuers distributed in the plane. The construction of generalized Voronoi diagrams with respect to state-dependent metrics poses some significant challenges.
First, the generalized distance metric may be a function of the direction of motion of the vehicle (anisotropic pseudo-distance function) and/or may not be expressible in closed form. Second, such problems fall under the general class of partitioning problems for which the vehicles' dynamics must be taken into account. The topology of the vehicle's configuration space may be non-Euclidean, for example, it may be a manifold embedded in a Euclidean space. In other words, these problems may not be reducible to generalized Voronoi diagram problems for which efficient construction schemes, analytical and/or computational, exist in the literature. This research effort pursues three main objectives. First, we present the complete solution of different steering problems involving a single vehicle in the presence of motion constraints imposed by the maneuverability envelope of the vehicle and/or the presence of a drift field induced by winds/currents in its vicinity. The analysis of each steering problem involving a single vehicle provides us with a state-dependent generalized metric, such as the minimum time-to-go/come. We subsequently use these state-dependent generalized distance functions as the proximity metrics in the formulation of generalized Voronoi-like partitioning problems. The characterization of the solutions of these state-dependent Voronoi-like partitioning problems using either analytical or computational techniques constitutes the second main objective of this dissertation. The third objective of this research effort is to illustrate the use of the proposed concept of state-dependent Voronoi-like partition as a means for passing from control techniques that apply to problems involving a single vehicle to problems involving networks of spatially distributed autonomous vehicles. 
To this aim, we formulate the problem of sequential/relay pursuit of a maneuvering target by a group of spatially distributed pursuers and subsequently propose a distributed group pursuit strategy that directly derives from the solution of a state-dependent Voronoi-like partitioning problem. (Abstract shortened by UMI.)
Distance learning in discriminative vector quantization.
Schneider, Petra; Biehl, Michael; Hammer, Barbara
2009-10-01
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
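The generalized metric used in matrix LVQ schemes has the form d(x, w) = (x − w)ᵀΛ(x − w) with Λ = ΩᵀΩ, which is positive semi-definite by construction and reduces to the squared Euclidean distance when Ω is the identity. A sketch, with illustrative vectors:

```python
import numpy as np

def matrix_distance(x, w, omega):
    """Generalised distance of matrix LVQ:
    d(x, w) = (x - w)^T Lambda (x - w), Lambda = Omega^T Omega.
    With Omega = I this is the squared Euclidean distance.
    """
    diff = x - w
    return diff @ (omega.T @ omega) @ diff

x = np.array([1.0, 2.0])            # a data point (illustrative)
w = np.array([0.0, 0.0])            # a class prototype (illustrative)
d_euclid = matrix_distance(x, w, np.eye(2))            # 1 + 4 = 5
d_scaled = matrix_distance(x, w, np.diag([1.0, 0.0]))  # ignores feature 2
```

In the approaches above, Ω itself is adapted from the training data, so the learned metric can stretch, shrink, or rotate feature directions instead of assuming isotropic clusters.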
ERIC Educational Resources Information Center
Vos, Pauline
2009-01-01
When studying correlations, how do the three bivariate correlation coefficients between three variables relate? After transforming Pearson's correlation coefficient r into a Euclidean distance, undergraduate students can tackle this problem using their secondary school knowledge of geometry (Pythagoras' theorem and similarity of triangles).…
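The transformation alluded to above follows from a standard identity: for z-scored variables of length n, the squared Euclidean distance equals 2n(1 − r). A sketch, with illustrative data values:

```python
import numpy as np

def corr_as_distance(x, y):
    """Turn Pearson's r into a Euclidean distance.

    After z-scoring with the population standard deviation,
    ||zx - zy||^2 = 2 n (1 - r), so d = sqrt(2 n (1 - r)):
    perfectly correlated variables sit at distance 0,
    uncorrelated ones at sqrt(2 n).
    """
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    d = np.linalg.norm(zx - zy)                  # direct distance
    d_from_r = np.sqrt(2 * len(x) * (1 - np.corrcoef(x, y)[0, 1]))
    return d, d_from_r

x = np.array([1.0, 2.0, 3.0, 4.0])   # illustrative values
y = np.array([1.5, 1.9, 3.2, 4.4])
d, d_from_r = corr_as_distance(x, y)
# the two computations agree up to floating-point error
```

This is the geometric reading that lets Pythagoras' theorem and similar triangles apply to correlation problems.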
The K-INDSCAL Model for Heterogeneous Three-Way Dissimilarity Data
ERIC Educational Resources Information Center
Bocci, Laura; Vichi, Maurizio
2011-01-01
A weighted Euclidean distance model for analyzing three-way dissimilarity data (stimuli by stimuli by subjects) for heterogeneous subjects is proposed. First, it is shown that INDSCAL may fail to identify a common space representative of the observed data structure in presence of heterogeneity. A new model that removes the rotational invariance of…
ERIC Educational Resources Information Center
Kynigos, Chronis
1993-01-01
Used 2 12-year-old children to investigate deductive and inductive reasoning in plane geometry. A LOGO microworld was programmed to measure distances and turns relative to points on the plane. Learning environments like this may enhance formation of inductive geometrical understandings. (Contains 44 references.) (LDR)
Graviton propagator from background-independent quantum gravity.
Rovelli, Carlo
2006-10-13
We study the graviton propagator in Euclidean loop quantum gravity. We use spin foam, boundary-amplitude, and group-field-theory techniques. We compute a component of the propagator to first order, under some approximations, obtaining the correct large-distance behavior. This indicates a way for deriving conventional spacetime quantities from a background-independent theory.
Abstract A measurable “sustainability footprint” can be constructed from the chosen indicators to assess relative sustainability. We show in a step by step manner with three case studies how the sustainability footprint based on Euclidean distance of a system from a ...
Prediction of acoustic feature parameters using myoelectric signals.
Lee, Ki-Seung
2010-07-01
It is well-known that a clear relationship exists between human voices and myoelectric signals (MESs) from the area of the speaker's mouth. In this study, we utilized this information to implement a speech synthesis scheme in which MES alone was used to predict the parameters characterizing the vocal-tract transfer function of specific speech signals. Several feature parameters derived from MES were investigated to find the optimal feature for maximization of the mutual information between the acoustic and the MES features. After the optimal feature was determined, an estimation rule for the acoustic parameters was proposed, based on a minimum mean square error (MMSE) criterion. In a preliminary study, 60 isolated words were used for both objective and subjective evaluations. The results showed that the average Euclidean distance between the original and predicted acoustic parameters was reduced by about 30% compared with the average Euclidean distance of the original parameters. The intelligibility of the synthesized speech signals using the predicted features was also evaluated. A word-level identification ratio of 65.5% and a syllable-level identification ratio of 73% were obtained through a listening test.
On the complexity of some quadratic Euclidean 2-clustering problems
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Pyatkin, A. V.
2016-03-01
Some problems of partitioning a finite set of points of Euclidean space into two clusters are considered. In these problems, the following criteria are minimized: (1) the sum over both clusters of the sums of squared pairwise distances between the elements of the cluster and (2) the sum of the (multiplied by the cardinalities of the clusters) sums of squared distances from the elements of the cluster to its geometric center, where the geometric center (or centroid) of a cluster is defined as the mean value of the elements in that cluster. Additionally, another problem close to (2) is considered, where the desired center of one of the clusters is given as input, while the center of the other cluster is unknown (is the variable to be optimized) as in problem (2). Two variants of the problems are analyzed, in which the cardinalities of the clusters are (1) parts of the input or (2) optimization variables. It is proved that all the considered problems are strongly NP-hard and that, in general, there is no fully polynomial-time approximation scheme for them (unless P = NP).
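Criteria (1) and (2) are tightly linked by a classical identity: within a single cluster, the sum of squared pairwise Euclidean distances equals the cluster cardinality times the sum of squared distances to the centroid. A small numerical check on illustrative points:

```python
import numpy as np

def pairwise_sq_sum(X):
    # sum over unordered pairs {i, j} of squared Euclidean distances
    diff = X[:, None, :] - X[None, :, :]
    return (diff ** 2).sum() / 2.0   # halve: the full sum counts ordered pairs

def centroid_sq_sum(X):
    # |C| times the sum of squared distances to the cluster centroid
    c = X.mean(axis=0)
    return len(X) * ((X - c) ** 2).sum()

X = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # one illustrative cluster
a, b = pairwise_sq_sum(X), centroid_sq_sum(X)
# the identity: a == b for any cluster
```

The identity explains why the two minimization criteria in the abstract are so closely related, even though the problems differ in how cluster centers and cardinalities enter the input.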
Modeling ECM fiber formation: structure information extracted by analysis of 2D and 3D image sets
NASA Astrophysics Data System (ADS)
Wu, Jun; Voytik-Harbin, Sherry L.; Filmer, David L.; Hoffman, Christoph M.; Yuan, Bo; Chiang, Ching-Shoei; Sturgis, Jennis; Robinson, Joseph P.
2002-05-01
Recent evidence supports the notion that biological functions of extracellular matrix (ECM) are highly correlated to its structure. Understanding this fibrous structure is very crucial in tissue engineering to develop the next generation of biomaterials for restoration of tissues and organs. In this paper, we integrate confocal microscopy imaging and image-processing techniques to analyze the structural properties of ECM. We describe a 2D fiber middle-line tracing algorithm and apply it via Euclidean distance maps (EDM) to extract accurate fibrous structure information, such as fiber diameter, length, orientation, and density, from single slices. Based on a 2D tracing algorithm, we extend our analysis to 3D tracing via Euclidean distance maps to extract 3D fibrous structure information. We use computer simulation to construct the 3D fibrous structure which is subsequently used to test our tracing algorithms. After further image processing, these models are then applied to a variety of ECM constructions from which results of 2D and 3D traces are statistically analyzed.
Park, Juyong
2018-01-01
The quality of life for people in urban regions can be improved by predicting urban human mobility and adjusting urban planning accordingly. In this study, we compared several possible variables to verify whether a gravity model (a human mobility prediction model borrowed from Newtonian mechanics) worked as well in inner-city regions as it did in intra-city regions. We reviewed the resident population, the number of employees, and the number of SNS posts as variables for generating mass values for an urban traffic gravity model. We also compared the straight-line distance, travel distance, and the impact of time as possible distance values. We defined the functions of urban regions on the basis of public records and SNS data to reflect the diverse social factors in urban regions. In this process, we conducted a dimension reduction method for the public record data and used a machine learning-based clustering algorithm for the SNS data. In doing so, we found that functional distance could be defined as the Euclidean distance between social function vectors in urban regions. Finally, we examined whether the functional distance was a variable that had a significant impact on urban human mobility. PMID:29432440
Gu, Dongxiao; Liang, Changyong; Zhao, Huimin
2017-03-01
We present the implementation and application of a case-based reasoning (CBR) system for breast cancer related diagnoses. By retrieving similar cases in a breast cancer decision support system, oncologists can obtain powerful information or knowledge, complementing their own experiential knowledge, in their medical decision making. We observed two problems in applying standard CBR to this context: the abundance of different types of attributes and the difficulty in eliciting appropriate attribute weights from human experts. We therefore used a distance measure named weighted heterogeneous value distance metric, which can better deal with both continuous and discrete attributes simultaneously than the standard Euclidean distance, and a genetic algorithm for learning the attribute weights involved in this distance measure automatically. We evaluated our CBR system in two case studies, related to benign/malignant tumor prediction and secondary cancer prediction, respectively. Weighted heterogeneous value distance metric with genetic algorithm for weight learning outperformed several alternative attribute matching methods and several classification methods by at least 3.4%, reaching 0.938, 0.883, 0.933, and 0.984 in the first case study, and 0.927, 0.842, 0.939, and 0.989 in the second case study, in terms of accuracy, sensitivity×specificity, F measure, and area under the receiver operating characteristic curve, respectively. The evaluation result indicates the potential of CBR in the breast cancer diagnosis domain. Copyright © 2017 Elsevier B.V. All rights reserved.
Odontological approach to sexual dimorphism in southeastern France.
Lladeres, Emilie; Saliba-Serre, Bérengère; Sastre, Julien; Foti, Bruno; Tardivo, Delphine; Adalian, Pascal
2013-01-01
The aim of this study was to establish a prediction formula allowing for the determination of sex in the southeastern French population using dental measurements. The sample consisted of 105 individuals (57 males and 48 females, aged between 18 and 25 years). Dental measurements were calculated using Euclidean distances, in three-dimensional space, from point coordinates obtained by a Microscribe. A multiple logistic regression analysis was performed to establish the prediction formula. Among 12 selected dental distances, a stepwise logistic regression analysis highlighted the two most significant discriminant predictors of sex: one located at the mandible and the other at the maxilla. A cutpoint was proposed for the prediction of true sex. The prediction formula was then tested on a validation sample (20 males and 34 females, aged between 18 and 62 years and with a history of orthodontics or restorative care) to evaluate the accuracy of the method. © 2012 American Academy of Forensic Sciences.
Aras, N; Altinel, I K; Oommen, J
2003-01-01
In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic that is based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean traveling salesman problem and the Euclidean Hamiltonian path problem. Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems, and then patching these solutions together. No such all-neural solution has ever been reported.
Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.
Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang
2017-11-01
Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features to binary codes, the original Euclidean distance is approximated via Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, one first needs to build the local linear embedding in the original feature space, and then quantize such embedding to binary codes. Such a two-step coding is problematic and less optimal. Besides, the off-line learning is extremely time and memory consuming, since it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which well addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationship of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is further introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, have shown superior performance of the proposed DLLH over state-of-the-art approaches.
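The basic approximation referred to above, Hamming distance between binary codes standing in for distance between the original features, can be sketched as follows; the 8-bit codes are hypothetical, not produced by DLLH:

```python
import numpy as np

def hamming(a, b):
    # Hamming distance between binary codes: number of differing bits
    return int(np.sum(a != b))

# hypothetical 8-bit codes produced by some hashing function
c1 = np.array([1, 0, 1, 1, 0, 0, 1, 0])
c2 = np.array([1, 0, 0, 1, 0, 1, 1, 0])
d = hamming(c1, c2)   # 2 bits differ
```

In practice codes are packed into machine words so the distance is a single XOR plus popcount, which is what makes Hamming-space search fast at scale.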
Role of dimensionality in preferential attachment growth in the Bianconi-Barabási model
NASA Astrophysics Data System (ADS)
Nunes, Thiago C.; Brito, Samurai; da Silva, Luciano R.; Tsallis, Constantino
2017-09-01
Scale-free networks are quite popular nowadays since many systems are well represented by such structures. In order to study these systems, several models were proposed. However, most of them do not take into account the node-to-node Euclidean distance, i.e., the geographical distance. In real networks, the distance between sites can be very relevant, e.g., in those cases where it is intended to minimize costs. Within this scenario we studied the role of dimensionality d in the Bianconi-Barabási model with a preferential attachment growth involving Euclidean distances. The preferential attachment in this model follows the rule Π_i ∝ η_i k_i / r_ij^(α_A) (1 ≤ i < j; α_A ≥ 0), where η_i characterizes the fitness of the ith site and is randomly chosen within the (0, 1] interval. We verified that the degree distribution P(k) for dimensions d = 1, 2, 3, 4 is well fitted by P(k) ∝ e_q^(−k/κ), where e_q^(−k/κ) is the q-exponential function naturally emerging within nonextensive statistical mechanics. We determine the indices q and κ as functions of the quantities α_A and d, and numerically verify that both present a universal behavior with respect to the scaled variable α_A/d. The same behavior is also displayed by the dynamical exponent β, which characterizes the steadily growing number of links of a given site.
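The q-exponential used to fit P(k) is defined as e_q^x = [1 + (1 − q)x]^(1/(1−q)) where the base is positive (and 0 otherwise), recovering the ordinary exponential as q → 1. A sketch with illustrative κ and q values, not the fitted ones:

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential: e_q^x = [1 + (1 - q) x]_+^(1/(1 - q)),
    which reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

k = np.arange(1, 6, dtype=float)
kappa = 2.0                                  # illustrative scale
p_tail = q_exponential(-k / kappa, q=1.3)    # heavier tail than exp(-k/2)
p_exp  = q_exponential(-k / kappa, q=1.0)    # ordinary exponential limit
```

For q > 1 the tail decays as a power law rather than exponentially, which is why this form can interpolate between exponential and scale-free degree distributions.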
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sasahara, M; Arimura, H; Hirose, T
Purpose: The current image-guided radiotherapy (IGRT) procedure is bone-based patient positioning, followed by subjective manual correction using cone beam computed tomography (CBCT). This procedure might cause misalignment of the patient positioning. Automatic target-based patient positioning systems achieve better reproducibility of patient setup. The aim of this study was to develop an automatic target-based patient positioning framework for IGRT with CBCT images in prostate cancer treatment. Methods: Seventy-three CBCT images of 10 patients and 24 planning CT images with digital imaging and communications in medicine for radiotherapy (DICOM-RT) structures were used for this study. Our proposed framework started from the generation of probabilistic atlases of bone and prostate from 24 planning CT images and prostate contours, which were made in the treatment planning. Next, the gray-scale histograms of CBCT values within CTV regions in the planning CT images were obtained as the occurrence probability of the CBCT values. Then, CBCT images were registered to the atlases using a rigid registration with mutual information. Finally, prostate regions were estimated by applying Bayesian inference to CBCT images with the probabilistic atlases and CBCT value occurrence probability. The proposed framework was evaluated by calculating the Euclidean distance of errors between the two centroids of prostate regions determined by our method and ground truths of manual delineations by a radiation oncologist and a medical physicist on CBCT images for 10 patients. Results: The average Euclidean distance between the centroids of extracted prostate regions determined by our proposed method and ground truths was 4.4 mm. The average errors for each direction were 1.8 mm in the anteroposterior direction, 0.6 mm in the lateral direction, and 2.1 mm in the craniocaudal direction.
Conclusion: Our proposed framework based on probabilistic atlases and Bayesian inference might be feasible to automatically determine prostate regions on CBCT images.« less
Fuzzy logic applied to prospecting for areas for installation of wood panel industries.
Dos Santos, Alexandre Rosa; Paterlini, Ewerthon Mattos; Fiedler, Nilton Cesar; Ribeiro, Carlos Antonio Alvares Soares; Lorenzon, Alexandre Simões; Domingues, Getulio Fonseca; Marcatti, Gustavo Eduardo; de Castro, Nero Lemos Martins; Teixeira, Thaisa Ribeiro; Dos Santos, Gleissy Mary Amaral Dino Alves; Juvanhol, Ronie Silva; Branco, Elvis Ricardo Figueira; Mota, Pedro Henrique Santos; da Silva, Lilianne Gomes; Pirovani, Daiani Bernardo; de Jesus, Waldir Cintra; Santos, Ana Carolina de Albuquerque; Leite, Helio Garcia; Iwakiri, Setsuo
2017-05-15
Prospecting for suitable areas for forestry operations, where the objective is a reduction in production and transportation costs, as well as the maximization of profits and available resources, constitutes an optimization problem. Fuzzy logic is an alternative method for solving this problem. In the context of prospecting for suitable areas for the installation of wood panel industries, we propose applying fuzzy logic analysis to simulate the planting of different species and eucalyptus hybrids in Espírito Santo State, Brazil. The necessary methodological steps for this study are as follows: a) agriclimatological zoning of different species and eucalyptus hybrids; b) the selection of the vector variables; c) the application of the Euclidean distance to the vector variables; d) the application of fuzzy logic to the matrix variables of the Euclidean distance; and e) the application of overlap fuzzy logic to locate areas for the installation of wood panel industries. Among all the species and hybrids, Corymbia citriodora showed the highest percentage values for the combined very good and good classes, with 8.60%, followed by Eucalyptus grandis with 8.52%, Eucalyptus urophylla with 8.35% and Urograndis with 8.34%. The fuzzy logic analysis afforded flexibility in prospecting for suitable areas for the installation of wood panel industries in Espírito Santo State, and it can bring great economic and social benefits to the local population through the generation of jobs, income and tax revenue, and an increase in GDP for the State and municipalities involved. The proposed methodology can be adapted to other areas and agricultural crops. Copyright © 2017 Elsevier Ltd. All rights reserved.
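Step d) above, converting a Euclidean-distance raster value into a [0, 1] fuzzy suitability score, can be sketched with a linear membership function (the breakpoints here are illustrative, not taken from the paper):

```python
def linear_membership(d, d_best, d_worst):
    """Fuzzy suitability sketch: map a Euclidean distance value to [0, 1],
    1 at the most suitable distance and 0 at the least suitable, with a
    linear decay in between; d_best/d_worst are hypothetical breakpoints."""
    if d_best == d_worst:
        return 1.0 if d <= d_best else 0.0
    t = (d - d_best) / (d_worst - d_best)
    return max(0.0, min(1.0, 1.0 - t))
```

The overlay step e) would then combine such membership rasters, e.g. with a fuzzy AND (minimum) or fuzzy gamma operator across variables.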
ERIC Educational Resources Information Center
Meulman, Jacqueline J.; Verboon, Peter
1993-01-01
Points of view analysis, as a way to deal with individual differences in multidimensional scaling, was largely supplanted by the weighted Euclidean model. It is argued that the approach deserves new attention, especially as a technique to analyze group differences. A streamlined and integrated process is proposed. (SLD)
Visual Analytics for Exploration of a High-Dimensional Structure
2013-04-01
Excerpt (list of figures and body text): Figure 3 compares Euclidean vs. geodesic distance, where a manifold-based method succeeds and an LDR fails. Figure 4 shows the WEKA GUI for data mining high-dimensional data (HDD) using FRFS-ACO. Classical multidimensional scaling (CMDS) is a linear dimensionality reduction (LDR); an LDR is based on a linear combination of the feature data, and LDRs keep similar data points close together.
2012-01-01
Background Previous studies have provided mixed evidence with regards to associations between food store access and dietary outcomes. This study examines the most commonly applied measures of locational access to assess whether associations between supermarket access and fruit and vegetable consumption are affected by the choice of access measure and scale. Method Supermarket location data from Glasgow, UK (n = 119), and fruit and vegetable intake data from the ‘Health and Well-Being’ Survey (n = 1041) were used to compare various measures of locational access. These exposure variables included proximity estimates (with different points-of-origin used to vary levels of aggregation) and density measures using three approaches (Euclidean and road network buffers and Kernel density estimation) at distances ranging from 0.4 km to 5 km. Further analysis was conducted to assess the impact of using smaller buffer sizes for individuals who did not own a car. Associations between these multiple access measures and fruit and vegetable consumption were estimated using linear regression models. Results Levels of spatial aggregation did not affect the proximity estimates. Counts of supermarkets within Euclidean buffers were associated with fruit and vegetable consumption at 1 km, 2 km and 3 km, and for our road network buffers at 2 km, 3 km, and 4 km. Kernel density estimates provided the strongest associations and were significant at distances of 2 km, 3 km, 4 km and 5 km. Presence of a supermarket within 0.4 km of road network distance from where people lived was positively associated with fruit consumption amongst those without a car (coef. 0.657; s.e. 0.247; p = 0.008). Conclusions The associations between locational access to supermarkets and individual-level dietary behaviour are sensitive to the method by which the food environment variable is captured. 
Care needs to be taken to ensure robust and conceptually appropriate measures of access are used and these should be grounded in a clear a priori reasoning. PMID:22839742
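A Euclidean-buffer density measure of the kind compared in this study can be sketched as a straight-line-radius count (the planar coordinates in km are an assumption; a real analysis would first project the lat/lon data):

```python
import math

def stores_within_buffer(home, stores, radius_km):
    """Euclidean-buffer density sketch: count supermarkets whose straight-line
    distance from a residence is within the buffer radius (coordinates in km)."""
    return sum(1 for s in stores if math.dist(home, s) <= radius_km)
```

Road-network buffers would replace `math.dist` with network shortest-path distances, and kernel density estimation would replace the hard cutoff with a distance-decay weight.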
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Khandeev, V. I.
2016-02-01
The strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters of given sizes (cardinalities) minimizing the sum (over both clusters) of the intracluster sums of squared distances from the elements of the clusters to their centers is considered. It is assumed that the center of one of the sought clusters is specified at the desired (arbitrary) point of space (without loss of generality, at the origin), while the center of the other one is unknown and determined as the mean value over all elements of this cluster. It is shown that unless P = NP, there is no fully polynomial-time approximation scheme for this problem, and such a scheme is substantiated in the case of a fixed space dimension.
The Principle of the Micro-Electronic Neural Bridge and a Prototype System Design.
Huang, Zong-Hao; Wang, Zhi-Gong; Lu, Xiao-Ying; Li, Wen-Yuan; Zhou, Yu-Xuan; Shen, Xiao-Yan; Zhao, Xin-Tai
2016-01-01
The micro-electronic neural bridge (MENB) aims to rebuild lost motor function of paralyzed humans by routing movement-related signals from the brain, around the damage part in the spinal cord, to the external effectors. This study focused on the prototype system design of the MENB, including the principle of the MENB, the neural signal detecting circuit and the functional electrical stimulation (FES) circuit design, and the spike detecting and sorting algorithm. In this study, we developed a novel improved amplitude threshold spike detecting method based on variable forward difference threshold for both training and bridging phase. The discrete wavelet transform (DWT), a new level feature coefficient selection method based on Lilliefors test, and the k-means clustering method based on Mahalanobis distance were used for spike sorting. A real-time online spike detecting and sorting algorithm based on DWT and Euclidean distance was also implemented for the bridging phase. Tested by the data sets available at Caltech, in the training phase, the average sensitivity, specificity, and clustering accuracies are 99.43%, 97.83%, and 95.45%, respectively. Validated by the three-fold cross-validation method, the average sensitivity, specificity, and classification accuracy are 99.43%, 97.70%, and 96.46%, respectively.
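The bridging-phase assignment by Euclidean distance can be sketched as nearest-template matching on the extracted waveform features (the function name and templates are hypothetical; the paper's DWT feature extraction is not reproduced here):

```python
import numpy as np

def assign_spike(spike, templates):
    """Assign a detected spike waveform to the nearest template by Euclidean
    distance. templates: dict mapping unit label -> mean waveform of the same
    length as spike. Returns (label, distance)."""
    spike = np.asarray(spike, dtype=float)
    best_label, best_dist = None, np.inf
    for label, tpl in templates.items():
        d = np.linalg.norm(spike - np.asarray(tpl, dtype=float))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label, best_dist
```

In an online setting this runs per detected spike, with templates learned during the training phase (there via DWT features and Mahalanobis k-means).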
The algorithm of fast image stitching based on multi-feature extraction
NASA Astrophysics Data System (ADS)
Yang, Chunde; Wu, Ge; Shi, Jing
2018-05-01
This paper proposes an improved image registration method combining Hu-based invariant moment contour information with feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as a time-consuming feature point extraction process, redundant invalid information, and inefficiency. First, the neighborhood of pixels is used to extract contour information, and the Hu invariant moment is employed as a similarity measure to extract SIFT feature points in those similar regions. Then the Euclidean distance is replaced with the Hellinger kernel function to improve the initial matching efficiency and obtain fewer mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and the improved multiresolution fusion algorithm is used to fuse the mosaic images and realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the method proposed in this paper.
Anomaly detection of flight routes through optimal waypoint
NASA Astrophysics Data System (ADS)
Pusadan, M. Y.; Buliali, J. L.; Ginardi, R. V. H.
2017-01-01
One deciding factor of a flight is its route. A flight route is determined by coordinates (latitude and longitude) that define waypoints. An anomaly occurs if the aircraft flies outside the specified waypoint area. For flight data, anomalies are identified from problems in the flight route based on ADS-B data. This study aims to determine the optimal waypoints of the flight route. The proposed methods are: i) Agglomerative Hierarchical Clustering (AHC) in several segments based on the range of area coordinates (latitude and longitude) at every waypoint; ii) the cophenetic correlation coefficient (c) to determine the correlation between the members in each cluster; iii) cubic spline interpolation as a graphic representation of the connection between the coordinates at every waypoint; and iv) Euclidean distance to measure the distances between waypoints and the two centroids resulting from AHC clustering. The experimental results give a cophenetic correlation coefficient of 0.691 ≤ c ≤ 0.974, five segments generated from the range of waypoint area coordinates, and shortest and longest distances between the centroids and waypoints of 0.46 and 2.18, respectively. We conclude that the shortest distance can be used as the reference coordinate of an optimal waypoint, while the farthest distance can indicate a potentially detected anomaly.
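Step iv — the Euclidean distances between waypoints and the two AHC centroids — can be sketched with a vectorized distance matrix (array shapes and names are assumptions):

```python
import numpy as np

def nearest_centroid_distances(waypoints, centroids):
    """For each waypoint (lat, lon), compute the Euclidean distance to each
    cluster centroid and keep the minimum; also return the shortest and
    farthest of those per-waypoint minima."""
    W = np.asarray(waypoints, dtype=float)          # shape (n, 2)
    C = np.asarray(centroids, dtype=float)          # shape (k, 2)
    d = np.linalg.norm(W[:, None, :] - C[None, :, :], axis=2)  # (n, k)
    dmin = d.min(axis=1)
    return dmin, float(dmin.min()), float(dmin.max())
```

The shortest minimum then serves as the reference for the optimal waypoint, while unusually large minima flag potential anomalies.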
NASA Astrophysics Data System (ADS)
Kozoderov, V. V.; Kondranin, T. V.; Dmitriev, E. V.
2017-12-01
The basic model for the recognition of natural and anthropogenic objects using their spectral and textural features is described in the problem of hyperspectral airborne and spaceborne imagery processing. The model is based on improvements of the Bayesian classifier, a computational procedure for statistical decision making in machine-learning methods of pattern recognition. The principal component method is implemented to decompose the hyperspectral measurements on the basis of empirical orthogonal functions. Application examples are shown for various modifications of the Bayesian classifier and the Support Vector Machine method. Examples are provided comparing these classifiers with a metrical classifier that operates by finding the minimal Euclidean distance between different points and sets in the multidimensional feature space. A comparison is also carried out with the "K-weighted neighbors" method, which is close to the nonparametric Bayesian classifier.
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
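The closest probability distribution in Euclidean distance is the projection onto the probability simplex. A standard O(n log n) sort-based sketch is below (the paper reports a linear-time variant; this version produces the same projection):

```python
def project_to_simplex(v):
    """Closest (Euclidean) probability vector to the real numbers v:
    shift all entries down by a common threshold theta and clip at zero,
    where theta is chosen so the result sums to one."""
    u = sorted(v, reverse=True)
    cumulative, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumulative += ui
        t = (cumulative - 1.0) / i
        if ui - t > 0:      # entry i still survives the clipping
            theta = t
    return [max(x - theta, 0.0) for x in v]
```

Applied to the eigenvalues of the candidate matrix μ, this yields the eigenvalues of the nearest physical (positive semidefinite, trace-one) state in the same eigenbasis.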
NASA Astrophysics Data System (ADS)
Wu, Wei; Zhao, Dewei; Zhang, Huan
2015-12-01
Super-resolution image reconstruction is an effective method to improve the image quality. It has important research significance in the field of image processing. However, the choice of the dictionary directly affects the efficiency of image reconstruction. A sparse representation theory is introduced into the problem of the nearest neighbor selection. Based on the sparse representation of super-resolution image reconstruction method, a super-resolution image reconstruction algorithm based on multi-class dictionary is analyzed. This method avoids the redundancy problem of only training a hyper complete dictionary, and makes the sub-dictionary more representatives, and then replaces the traditional Euclidean distance computing method to improve the quality of the whole image reconstruction. In addition, the ill-posed problem is introduced into non-local self-similarity regularization. Experimental results show that the algorithm is much better results than state-of-the-art algorithm in terms of both PSNR and visual perception.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maffei, Nicola; Guidi, Gabriele, E-mail: guidi.gab
Purpose: A susceptible-infected-susceptible (SIS) epidemic model was applied to radiation therapy (RT) treatments to predict morphological variations in head and neck (H&N) anatomy. Methods: 360 daily MVCT images of 12 H&N patients treated by tomotherapy were analyzed in this retrospective study. Deformable image registration (DIR) algorithms, mesh grids, and structure recontouring, implemented in the RayStation treatment planning system (TPS), were applied to assess the daily organ warping. The parotid’s warping was evaluated using the epidemiological approach considering each vertex as a single subject and its deformed vector field (DVF) as an infection. Dedicated IronPython scripts were developed to export daily coordinates and displacements of the region of interest (ROI) from the TPS. MATLAB tools were implemented to simulate the SIS modeling. Finally, the fully trained model was applied to a new patient. Results: A QUASAR phantom was used to validate the model. The patients’ validation was obtained setting 0.4 cm of vertex displacement as threshold and splitting susceptible (S) and infectious (I) cases. The correlation between the epidemiological model and the parotids’ trend for further optimization of alpha and beta was carried out by Euclidean and dynamic time warping (DTW) distances. The best fit with experimental conditions across all patients (Euclidean distance of 4.09 ± 1.12 and DTW distance of 2.39 ± 0.66) was obtained setting the contact rate at 7.55 ± 0.69 and the recovery rate at 2.45 ± 0.26; birth rate was disregarded in this constant population. Conclusions: Combining an epidemiological model with adaptive RT (ART), the authors’ novel approach could support image-guided radiation therapy (IGRT) to validate daily setup and to forecast anatomical variations. The SIS-ART model developed could support clinical decisions in order to optimize timing of replanning achieving personalized treatments.
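The SIS dynamics used to model vertex "infection" (displacement above the 0.4 cm threshold) can be sketched with a simple Euler step of the standard SIS equations (parameter names are generic; the fitted contact and recovery rates from the study would be plugged in):

```python
def sis_step(S, I, beta, gamma, dt=1.0):
    """One Euler step of the SIS model: dI/dt = beta*S*I/N - gamma*I,
    with S + I = N constant (births/deaths disregarded).
    S: susceptible count, I: infectious count, beta: contact rate,
    gamma: recovery rate."""
    N = S + I
    new_infections = beta * S * I / N * dt
    recoveries = gamma * I * dt
    return S - new_infections + recoveries, I + new_infections - recoveries
```

Iterating this per treatment fraction gives a predicted trajectory of the infectious (deformed-vertex) fraction that can be compared against the observed parotid trend via Euclidean or DTW distance.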
Enhancing colposcopy with polarized light.
Ferris, Daron G; Li, Wenjing; Gustafsson, Ulf; Lieberman, Richard W; Galdos, Oscar; Santos, Carlos
2010-07-01
To determine the potential utility of polarized light used during colposcopic examinations. Matched polarized and unpolarized colposcopic images and diagnostic annotations from 31 subjects receiving excisional treatment of cervical neoplasia were compared. Sensitivity, specificity, and mean Euclidean distances between the centroids of the Gaussian ellipsoids for the different epithelial types were calculated for unpolarized and polarized images. The sensitivities of polarized colposcopic annotations for discriminating cervical intraepithelial neoplasia (CIN) 2 or higher were greater for all 3 acetowhite categories when compared with unpolarized annotations (58% [44/76] vs 45% [34/76], 68% [50/74] vs 59% [45/76], and 68% [49/72] vs 66% [50/76], respectively). The average percent differences in Euclidean distances between the epithelial types for unpolarized and polarized cervical images were as follows: CIN 2/3 versus CIN 1 = 33% (10/30, p = .03), CIN 2/3 versus columnar epithelium = 22% (p = .004), CIN 2/3 versus immature metaplasia = 29% (14/47, p = .11), and CIN 1 versus immature metaplasia = 27% (4.4/16, p = .16). Because of its ability to interrogate at a deeper plane and eliminate obscuring glare, polarized light colposcopy may enhance the evaluation and detection of cervical neoplasias.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giampaolo, Salvatore M.; CNR-INFM Coherentia, Naples; CNISM Unita di Salerno and INFN Sezione di Napoli, Gruppo collegato di Salerno, Baronissi
2007-10-15
We investigate the geometric characterization of pure state bipartite entanglement of (2xD)- and (3xD)-dimensional composite quantum systems. To this aim, we analyze the relationship between states and their images under the action of particular classes of local unitary operations. We find that invariance of states under the action of single-qubit and single-qutrit transformations is a necessary and sufficient condition for separability. We demonstrate that in the (2xD)-dimensional case the von Neumann entropy of entanglement is a monotonic function of the minimum squared Euclidean distance between states and their images over the set of single qubit unitary transformations. Moreover, both in the (2xD)- and in the (3xD)-dimensional cases the minimum squared Euclidean distance exactly coincides with the linear entropy [and thus as well with the tangle measure of entanglement in the (2xD)-dimensional case]. These results provide a geometric characterization of entanglement measures originally established in informational frameworks. Consequences and applications of the formalism to quantum critical phenomena in spin systems are discussed.
Zhang, Yue; Zou, Huanxin; Luo, Tiancheng; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng
2016-01-01
The superpixel segmentation algorithm, as a preprocessing technique, should show good performance in fast segmentation speed, accurate boundary adherence and homogeneous regularity. A fast superpixel segmentation algorithm by iterative edge refinement (IER) works well on optical images. However, it may generate poor superpixels for Polarimetric synthetic aperture radar (PolSAR) images due to the influence of strong speckle noise and many small-sized or slim regions. To solve these problems, we utilized a fast revised Wishart distance instead of Euclidean distance in the local relabeling of unstable pixels, and initialized unstable pixels as all the pixels substituted for the initial grid edge pixels in the initialization step. Then, postprocessing with the dissimilarity measure is employed to remove the generated small isolated regions as well as to preserve strong point targets. Finally, the superiority of the proposed algorithm is validated with extensive experiments on four simulated and two real-world PolSAR images from Experimental Synthetic Aperture Radar (ESAR) and Airborne Synthetic Aperture Radar (AirSAR) data sets, which demonstrate that the proposed method shows better performance with respect to several commonly used evaluation measures, even with about nine times higher computational efficiency, as well as fine boundary adherence and strong point targets preservation, compared with three state-of-the-art methods. PMID:27754385
Sequence comparison alignment-free approach based on suffix tree and L-words frequency.
Soares, Inês; Goios, Ana; Amorim, António
2012-01-01
The vast majority of methods available for sequence comparison rely on a first sequence alignment step, which requires a number of assumptions on evolutionary history and is sometimes very difficult or impossible to perform due to the abundance of gaps (insertions/deletions). In such cases, an alternative alignment-free method would prove valuable. Our method starts with the computation of a generalized suffix tree of all sequences, which is completed in linear time. Using this tree, the frequency of all possible words with a preset length L (L-words) in each sequence is rapidly calculated. Based on the L-word frequency profile of each sequence, a pairwise standard Euclidean distance is then computed, producing a symmetric genetic distance matrix, which can be used to generate a neighbor-joining dendrogram or a multidimensional scaling graph. We present an improvement to word-counting alignment-free approaches for sequence comparison, determining a single optimal word length and combining suffix tree structures with the word-counting tasks. Our approach is thus a fast and simple application that proved to be efficient and powerful when applied to mitochondrial genomes. The algorithm was implemented in the Python language and is freely available on the web.
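The word-counting and distance steps can be sketched directly with a naive dictionary count (a stand-in for the paper's suffix tree, so not linear-time, but it produces the same profiles):

```python
import itertools, math

def l_word_profile(seq, L, alphabet="ACGT"):
    """Relative frequency of every possible word of length L in seq,
    counted with a sliding window over the sequence."""
    counts = {"".join(w): 0 for w in itertools.product(alphabet, repeat=L)}
    for i in range(len(seq) - L + 1):
        w = seq[i:i + L]
        if w in counts:
            counts[w] += 1
    total = max(sum(counts.values()), 1)
    return [counts[k] / total for k in sorted(counts)]

def profile_distance(a, b, L=2):
    """Standard Euclidean distance between two L-word frequency profiles."""
    pa, pb = l_word_profile(a, L), l_word_profile(b, L)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(pa, pb)))
```

Computing `profile_distance` over all sequence pairs fills the symmetric distance matrix fed to neighbor joining or multidimensional scaling.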
NASA Astrophysics Data System (ADS)
Shen, Fei; Chen, Chao; Yan, Ruqiang
2017-05-01
Classical bearing fault diagnosis methods, being designed for one specific task, always pay attention to the effectiveness of extracted features and the final diagnostic performance. However, most of these approaches suffer from inefficiency when multiple tasks exist, especially in a real-time diagnostic scenario. A fault diagnosis method based on Non-negative Matrix Factorization (NMF) and a co-clustering strategy is proposed to overcome this limitation. Firstly, some high-dimensional matrices are constructed using Short-Time Fourier Transform (STFT) features, where the dimension of each matrix equals the number of target tasks. Then, the NMF algorithm is carried out to obtain different components in each dimension direction through optimized matching, such as Euclidean distance and divergence distance. Finally, a co-clustering technique based on information entropy is utilized to realize classification of each component. To verify the effectiveness of the proposed approach, a series of bearing data sets was analysed in this research. The tests indicated that although the diagnostic performance for a single task is comparable to traditional clustering methods such as the k-means algorithm and the Gaussian Mixture Model, the accuracy and computational efficiency in multi-task fault diagnosis are improved.
Kim, Heekang; Kwon, Soon; Kim, Sungho
2016-01-01
This paper proposes a vehicle light detection method using a hyperspectral camera instead of a Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) camera for adaptive car headlamp control. To apply Intelligent Headlight Control (IHC), the vehicle headlights need to be detected. Headlights comprise a variety of lighting sources, such as Light Emitting Diodes (LEDs), High-Intensity Discharge (HID) lamps, and halogen lamps. In addition, rear lamps are made of LEDs and halogen lamps. This paper refers to recent research in IHC. Some problems exist in the detection of headlights, such as the erroneous detection of street lights or sign lights and of the reflection plate of the ego-car in CCD or CMOS images. To solve these problems, this study uses hyperspectral images because they have hundreds of bands and provide more information than a CCD or CMOS camera. Recent methods to detect headlights used the Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), and Euclidean Distance Mapper (EDM). The experimental results highlight the feasibility of the proposed method for three types of lights (LED, HID, and halogen). PMID:27399720
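The Spectral Angle Mapper similarity mentioned here can be sketched as the angle between a pixel spectrum and a reference spectrum (a generic formulation; the band count and spectra below are hypothetical):

```python
import math

def spectral_angle(pixel, reference):
    """SAM: angle (radians) between a pixel spectrum and a reference spectrum.
    Smaller angle = more similar; the measure is invariant to overall
    illumination scale, unlike a plain Euclidean distance."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    return math.acos(max(-1.0, min(1.0, dot / (norm_p * norm_r))))
```

A pixel is assigned to the light class (LED, HID, halogen) whose reference spectrum gives the smallest angle; the EDM variant instead uses the Euclidean distance between the spectra.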
Sequence spaces [Formula: see text] and [Formula: see text] with application in clustering.
Khan, Mohd Shoaib; Alamri, Badriah As; Mursaleen, M; Lohani, Qm Danish
2017-01-01
Distance measures play a central role in evolving the clustering technique. Due to their rich mathematical background and natural implementation, [Formula: see text] distance measures have motivated researchers to use them in almost every clustering process. Besides [Formula: see text] distance measures, several other distance measures exist. Sargent introduced a special type of distance measures [Formula: see text] and [Formula: see text] which is closely related to [Formula: see text]. In this paper, we generalize the Sargent sequence spaces through the introduction of the [Formula: see text] and [Formula: see text] sequence spaces. Moreover, it is shown that both spaces are BK-spaces, and one is the dual of the other. Further, we have clustered the two-moon dataset by using an induced [Formula: see text]-distance measure (induced by the Sargent sequence space [Formula: see text]) in the k-means clustering algorithm. The clustering result establishes the efficacy of replacing the Euclidean distance measure by the [Formula: see text]-distance measure in the k-means algorithm.
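Replacing the Euclidean metric in k-means only changes the assignment step; a minimal sketch with a pluggable distance function follows (the induced Sargent-space distance itself is not reproduced here, so a plain Euclidean callable stands in):

```python
import math, random

def kmeans(points, k, distance, iters=20, seed=0):
    """k-means sketch with a pluggable distance measure: points are assigned
    to the centroid minimizing `distance`, while centroids are still updated
    as coordinate means of their members."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: distance(p, centroids[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = [sum(c) / len(members) for c in zip(*members)]
    return labels, centroids

euclid = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

Passing a different callable for `distance` is all that is needed to try an alternative measure on a dataset like two-moon.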
Segmentation and Visual Analysis of Whole-Body Mouse Skeleton microSPECT
Khmelinskii, Artem; Groen, Harald C.; Baiker, Martin; de Jong, Marion; Lelieveldt, Boudewijn P. F.
2012-01-01
Whole-body SPECT small animal imaging is used to study cancer, and plays an important role in the development of new drugs. Comparing and exploring whole-body datasets can be a difficult and time-consuming task due to the inherent heterogeneity of the data (high volume/throughput, multi-modality, postural and positioning variability). The goal of this study was to provide a method to align and compare side-by-side multiple whole-body skeleton SPECT datasets in a common reference, thus eliminating acquisition variability that exists between the subjects in cross-sectional and multi-modal studies. Six whole-body SPECT/CT datasets of BALB/c mice injected with bone targeting tracers 99mTc-methylene diphosphonate (99mTc-MDP) and 99mTc-hydroxymethane diphosphonate (99mTc-HDP) were used to evaluate the proposed method. An articulated version of the MOBY whole-body mouse atlas was used as a common reference. Its individual bones were registered one-by-one to the skeleton extracted from the acquired SPECT data following an anatomical hierarchical tree. Sequential registration was used while constraining the local degrees of freedom (DoFs) of each bone in accordance to the type of joint and its range of motion. The Articulated Planar Reformation (APR) algorithm was applied to the segmented data for side-by-side change visualization and comparison of data. To quantitatively evaluate the proposed algorithm, bone segmentations of extracted skeletons from the correspondent CT datasets were used. Euclidean point to surface distances between each dataset and the MOBY atlas were calculated. The obtained results indicate that after registration, the mean Euclidean distance decreased from 11.5±12.1 to 2.6±2.1 voxels. The proposed approach yielded satisfactory segmentation results with minimal user intervention. 
It proved to be robust for “incomplete” data (large chunks of skeleton missing) and for an intuitive exploration and comparison of multi-modal SPECT/CT cross-sectional mouse data. PMID:23152834
NASA Technical Reports Server (NTRS)
Li, Z. K.
1985-01-01
A specialized program was developed for flow cytometric list-mode data, using a hierarchical tree method for identifying and enumerating individual subpopulations, the method of principal components for a two-dimensional display of a 6-parameter data array, and a standard sorting algorithm for characterizing subpopulations. The program was tested against a published data set subjected to cluster analysis and against experimental data sets from controlled flow cytometry experiments using a Coulter Electronics EPICS V Cell Sorter. A version of the program in compiled BASIC is usable on a 16-bit microcomputer with the MS-DOS operating system. It is specialized for 6 parameters and up to 20,000 cells. Its two-dimensional display of Euclidean distances reveals clusters clearly, as does its one-dimensional display. The identified subpopulations can, in suitable experiments, be related to functional subpopulations of cells.
NASA Technical Reports Server (NTRS)
Tavana, Madjid
2005-01-01
"To understand and protect our home planet, to explore the universe and search for life, and to inspire the next generation of explorers" is NASA's mission. The Systems Management Office at Johnson Space Center (JSC) is searching for methods to effectively manage the Center's resources to meet NASA's mission. D-Side is a group multi-criteria decision support system (GMDSS) developed to support facility decisions at JSC. D-Side uses a series of sequential and structured processes to plot facilities in a three-dimensional (3-D) graph on the basis of each facility alignment with NASA's mission and goals, the extent to which other facilities are dependent on the facility, and the dollar value of capital investments that have been postponed at the facility relative to the facility replacement value. A similarity factor rank orders facilities based on their Euclidean distance from Ideal and Nadir points. These similarity factors are then used to allocate capital improvement resources across facilities. We also present a parallel model that can be used to support decisions concerning allocation of human resources investments across workforce units. Finally, we present results from a pilot study where 12 experienced facility managers from NASA used D-Side and the organization's current approach to rank order and allocate funds for capital improvement across 20 facilities. Users evaluated D-Side favorably in terms of ease of use, the quality of the decision-making process, decision quality, and overall value-added. Their evaluations of D-Side were significantly more favorable than their evaluations of the current approach. Keywords: NASA, Multi-Criteria Decision Making, Decision Support System, AHP, Euclidean Distance, 3-D Modeling, Facility Planning, Workforce Planning.
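A similarity factor based on Euclidean distances from Ideal and Nadir points can be sketched TOPSIS-style (the exact weighting D-Side uses is not specified in this summary, so the unweighted closeness coefficient below is an assumption):

```python
import math

def similarity_factor(scores, ideal, nadir):
    """Closeness of a facility's criteria vector to the Ideal point relative
    to the Nadir point: d_nadir / (d_ideal + d_nadir), in [0, 1];
    higher means closer to Ideal, supporting a higher rank."""
    d_ideal = math.sqrt(sum((s - i) ** 2 for s, i in zip(scores, ideal)))
    d_nadir = math.sqrt(sum((s - n) ** 2 for s, n in zip(scores, nadir)))
    return d_nadir / (d_ideal + d_nadir)
```

Rank-ordering facilities by this factor, then allocating the capital improvement budget proportionally to it, mirrors the allocation scheme described above.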
A CT-based software tool for evaluating compensator quality in passively scattered proton therapy
NASA Astrophysics Data System (ADS)
Li, Heng; Zhang, Lifei; Dong, Lei; Sahoo, Narayan; Gillin, Michael T.; Zhu, X. Ronald
2010-11-01
We have developed a quantitative computed tomography (CT)-based quality assurance (QA) tool for evaluating the accuracy of manufactured compensators used in passively scattered proton therapy. The thickness of a manufactured compensator was measured from its CT images and compared with the planned thickness defined by the treatment planning system. The difference between the measured and planned thicknesses was calculated with use of the Euclidean distance transformation and the kd-tree search method. Compensator accuracy was evaluated by examining several parameters including mean distance, maximum distance, global thickness error and central axis shifts. Two rectangular phantoms were used to validate the performance of the QA tool. Nine patients and 20 compensators were included in this study. We found that mean distances, global thickness errors and central axis shifts were all within 1 mm for all compensators studied, with maximum distances ranging from 1.1 to 3.8 mm. Although all compensators passed manual verification at selected points, about 5% of the pixels still had maximum distances of >2 mm, most of which correlated with large depth gradients. The correlation between the mean depth gradient of the compensator and the percentage of pixels with mean distance <1 mm is -0.93 with p < 0.001, which suggests that the mean depth gradient is a good indicator of compensator complexity. These results demonstrate that the CT-based compensator QA tool can be used to quantitatively evaluate manufactured compensators.
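The measured-vs-planned comparison can be sketched as nearest-point Euclidean distances between the two compensator surfaces (a brute-force stand-in for the paper's kd-tree search; the point sets are hypothetical):

```python
import math

def surface_distances(measured, planned):
    """For each measured surface point, find the Euclidean distance to the
    nearest planned surface point; return the per-point distances together
    with the mean-distance and maximum-distance QA metrics."""
    dists = []
    for m in measured:
        d = min(math.dist(m, p) for p in planned)
        dists.append(d)
    return dists, sum(dists) / len(dists), max(dists)
```

A kd-tree (e.g. `scipy.spatial.cKDTree`) would reduce each nearest-neighbor query from O(n) to roughly O(log n), which matters at CT-image point counts.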
NASA Astrophysics Data System (ADS)
Osseiran, Sam; Roider, Elisabeth M.; Wang, Hequn; Suita, Yusuke; Murphy, Michael; Fisher, David E.; Evans, Conor L.
2017-12-01
Chemical sun filters are commonly used as active ingredients in sunscreens due to their efficient absorption of ultraviolet (UV) radiation. Yet, it is known that these compounds can photochemically react with UV light and generate reactive oxygen species and oxidative stress in vitro, though this has yet to be validated in vivo. One label-free approach to probe oxidative stress is to measure and compare the relative endogenous fluorescence generated by cellular coenzymes nicotinamide adenine dinucleotides and flavin adenine dinucleotides. However, chemical sun filters are fluorescent, with emissive properties that contaminate endogenous fluorescent signals. To accurately distinguish the source of fluorescence in ex vivo skin samples treated with chemical sun filters, fluorescence lifetime imaging microscopy data were processed on a pixel-by-pixel basis using a non-Euclidean separation algorithm based on Mahalanobis distance and validated on simulated data. Applying this method, ex vivo samples exhibited a small oxidative shift when exposed to sun filters alone, though this shift was much smaller than that imparted by UV irradiation. Given the need for investigative tools to further study the clinical impact of chemical sun filters in patients, the reported methodology may be applied to visualize chemical sun filters and measure oxidative stress in patients' skin.
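The Mahalanobis distance underlying such a separation algorithm accounts for the covariance of a reference cluster, unlike the plain Euclidean distance. Below is a minimal 2-D sketch with an invented "endogenous fluorescence" cluster; the paper's pixel-wise lifetime-imaging algorithm is considerably more involved.

```python
import math

def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance of a 2-vector x from a distribution with the
    given mean and 2x2 covariance (inverted in closed form)."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx = (x[0] - mean[0], x[1] - mean[1])
    y = (inv[0][0] * dx[0] + inv[0][1] * dx[1],
         inv[1][0] * dx[0] + inv[1][1] * dx[1])
    return math.sqrt(dx[0] * y[0] + dx[1] * y[1])

# Invented cluster statistics: elongated along the first axis, so a point
# 3 units away along axis 1 is *closer* in Mahalanobis terms than a point
# only 2 units away along axis 2.
mean, cov = (0.0, 0.0), ((9.0, 0.0), (0.0, 1.0))
print(mahalanobis_2d((3.0, 0.0), mean, cov))  # 1.0
print(mahalanobis_2d((0.0, 2.0), mean, cov))  # 2.0
```

This directional sensitivity is what lets a Mahalanobis-based rule separate sun-filter fluorescence from an endogenous-signal cluster that a spherical Euclidean threshold would conflate.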
Fractal Clustering and Knowledge-driven Validation Assessment for Gene Expression Profiling.
Wang, Lu-Yong; Balasubramanian, Ammaiappan; Chakraborty, Amit; Comaniciu, Dorin
2005-01-01
DNA microarray experiments generate a substantial amount of information about global gene expression. Gene expression profiles can be represented as points in multi-dimensional space, and identifying relevant groups of genes is essential in biomedical research. Clustering is helpful for pattern recognition in gene expression profiles, and a number of clustering techniques have been introduced. However, these traditional methods mainly rely on shape-based assumptions or some distance metric to cluster points in a linear Euclidean space, and their results show poor consistency with the functional annotation of genes in previous validation studies. From a different perspective, we propose a fractal clustering method that clusters genes using the intrinsic (fractal) dimension from modern geometry. This method clusters points in such a way that points in the same cluster are more self-affine among themselves than they are to points in other clusters. We assess this method using annotation-based validation for gene clusters and show that it is superior to other traditional methods in identifying functionally related gene groups.
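The intrinsic (fractal) dimension mentioned above is commonly estimated by box counting. The sketch below only shows that estimate for a single point set (points on a line embedded in 2-D, whose intrinsic dimension is about 1); it does not reproduce the paper's clustering step, which assigns points to whichever cluster their addition perturbs least in fractal dimension.

```python
import math

def box_counting_dimension(points, scales):
    """Estimate the box-counting dimension: the least-squares slope of
    log N(eps) versus log(1/eps), where N(eps) counts occupied boxes."""
    xs, ys = [], []
    for eps in scales:
        boxes = {tuple(math.floor(c / eps) for c in p) for p in points}
        xs.append(math.log(1.0 / eps))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Points on a straight segment have intrinsic dimension ~1 even though
# they live in a 2-D ambient space.
line = [(i / 1000.0, i / 1000.0) for i in range(1000)]
dim = box_counting_dimension(line, [0.1, 0.05, 0.02, 0.01])
print(dim)  # prints an estimate near 1
```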
Orthogonal Array Testing for Transmit Precoding based Codebooks in Space Shift Keying Systems
NASA Astrophysics Data System (ADS)
Al-Ansi, Mohammed; Alwee Aljunid, Syed; Sourour, Essam; Mat Safar, Anuar; Rashidi, C. B. M.
2018-03-01
In Space Shift Keying (SSK) systems, transmit precoding codebook approaches have been proposed to improve performance over limited-feedback channels. The receiver performs an exhaustive search of a predefined Full-Combination (FC) codebook to select the optimal codeword, the one that maximizes the Minimum Euclidean Distance (MED) between the received constellations. This research aims to reduce the codebook size in order to minimize the selection time and the number of feedback bits. We therefore propose to construct the codebooks using Orthogonal Array Testing (OAT) methods, exploiting their powerful inherent properties. These methods yield a short codebook whose codewords cover almost all the possible effects included in the FC codebook. Numerical results show the effectiveness of the proposed OAT codebooks in terms of system performance and complexity.
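The exhaustive MED-based codeword selection can be sketched as follows. The two-antenna "channel" (per-antenna gains applied to fixed antenna signatures) and the three-codeword codebook are invented stand-ins, not a faithful SSK system model.

```python
import itertools
import math

def min_euclidean_distance(points):
    """Smallest pairwise Euclidean distance in a received constellation."""
    return min(math.dist(p, q) for p, q in itertools.combinations(points, 2))

def select_codeword(codebook, channel):
    """Exhaustive search: pick the codeword whose precoded constellation
    maximizes the minimum Euclidean distance (MED) at the receiver."""
    return max(codebook, key=lambda w: min_euclidean_distance(channel(w)))

# Toy model: codewords are per-antenna gains; the "channel" scales two
# fixed antenna signatures (all values invented for illustration).
signatures = [(1.0, 0.0), (0.6, 0.8)]
codebook = [(1.0, 1.0), (1.0, -1.0), (2.0, 0.5)]
channel = lambda w: [(g * s[0], g * s[1]) for g, s in zip(w, signatures)]

best = select_codeword(codebook, channel)
print(best)
```

Shrinking the codebook, as the OAT construction does, directly shortens this search loop and reduces the number of feedback bits needed to index the chosen codeword.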
2014-09-18
[Extraction fragment: only table-of-contents and acronym-list entries are recoverable: Operations and Developing Issues; Next-Generation Air Transportation System (NextGen); Air Traffic Management; ESP, Euclidean Shortest Path; FAA, Federal Aviation Administration; FCFS, First-Come-First-Served; HCS, Hybrid Control System; KKT, Karush-Kuhn-Tucker; LGR, Legendre-Gauss-Radau; MLD, Minimum Lateral Distance; NAS, National Airspace System; NASA, National Aeronautics and Space Administration.]
Orientation estimation of anatomical structures in medical images for object recognition
NASA Astrophysics Data System (ADS)
Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian
2011-03-01
Recognition of anatomical structures is an important step in model-based medical image segmentation. It provides pose estimation of objects and information about roughly "where" the objects are in the image, distinguishing them from other object-like entities. In [1], we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation parameters can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that the mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than the other Euclidean and non-Euclidean metrics.
Toward the optimization of normalized graph Laplacian.
Xie, Bo; Wang, Meng; Tao, Dacheng
2011-04-01
Normalized graph Laplacian has been widely used in many practical machine learning algorithms, e.g., spectral clustering and semisupervised learning. However, all of them use the Euclidean distance to construct the graph Laplacian, which does not necessarily reflect the inherent distribution of the data. In this brief, we propose a method to directly optimize the normalized graph Laplacian by using pairwise constraints. The learned graph is consistent with equivalence and nonequivalence pairwise relationships, and thus it can better represent similarity between samples. Meanwhile, our approach, unlike metric learning, automatically determines the scale factor during the optimization. The learned normalized Laplacian matrix can be directly applied in spectral clustering and semisupervised learning algorithms. Comprehensive experiments demonstrate the effectiveness of the proposed approach.
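For reference, the normalized graph Laplacian the brief starts from is L = I - D^{-1/2} W D^{-1/2}, where W is the (typically Euclidean-kernel) weight matrix and D the degree matrix. A minimal construction follows; the 3-node weight matrix is invented, and in the paper's approach W would instead be learned from pairwise constraints.

```python
import math

def normalized_laplacian(W):
    """L = I - D^{-1/2} W D^{-1/2} for a symmetric weight matrix W
    (assumes no isolated node, i.e. every row sum is positive)."""
    n = len(W)
    d = [sum(row) for row in W]          # degrees
    s = [1.0 / math.sqrt(x) for x in d]  # diagonal of D^{-1/2}
    return [[(1.0 if i == j else 0.0) - s[i] * W[i][j] * s[j]
             for j in range(n)] for i in range(n)]

# Tiny star graph on 3 nodes: node 0 connected to nodes 1 and 2.
W = [[0.0, 1.0, 1.0],
     [1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0]]
L = normalized_laplacian(W)
print([round(v, 3) for v in L[0]])
```

The eigenvectors of L are then the input to spectral clustering, which is why the quality of W (fixed Euclidean kernel versus learned graph) directly affects the clustering result.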
Lin, Jui-Ching; Heeschen, William; Reffner, John; Hook, John
2012-04-01
The combination of integrated focused ion beam-scanning electron microscope (FIB-SEM) serial sectioning and imaging techniques with image analysis provided quantitative characterization of three-dimensional (3D) pigment dispersion in dried paint films. The focused ion beam in a FIB-SEM dual beam system enables great control in slicing paints, and the sectioning process can be synchronized with SEM imaging providing high quality serial cross-section images for 3D reconstruction. Application of Euclidean distance map and ultimate eroded points image analysis methods can provide quantitative characterization of 3D particle distribution. It is concluded that 3D measurement of binder distribution in paints is effective to characterize the order of pigment dispersion in dried paint films.
Tang, Kujin; Lu, Yang Young; Sun, Fengzhu
2018-01-01
Horizontal gene transfer (HGT) plays an important role in the evolution of microbial organisms including bacteria. Alignment-free methods based on single genome compositional information have been used to detect HGT. Currently, Manhattan and Euclidean distances based on tetranucleotide frequencies are the most commonly used alignment-free dissimilarity measures to detect HGT. By testing on simulated bacterial sequences and real data sets with known horizontally transferred genomic regions, we found that more advanced alignment-free dissimilarity measures such as CVTree and [Formula: see text] that take into account the background Markov sequences can solve HGT detection problems with significantly improved performance. We also studied the influence of different factors such as evolutionary distance between host and donor sequences, size of sliding window, and host genome composition on the performances of alignment-free methods to detect HGT. Our study showed that alignment-free methods can predict HGT accurately when host and donor genomes are in different order levels. Among all methods, CVTree with word length 3, [Formula: see text] with word length 3 and Markov order 1, and [Formula: see text] with word length 4 and Markov order 1 outperform the others in terms of their highest F1-score and their robustness under the influence of different factors.
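The baseline measures here, Manhattan and Euclidean distances on tetranucleotide frequencies, are easy to sketch. The two 16-base "genome windows" below are invented toys; a real analysis slides a window of kilobase scale along a full genome.

```python
import math
from itertools import product

def kmer_freqs(seq, k=4):
    """Normalized k-mer (tetranucleotide for k=4) frequency vector,
    in a fixed order over all 4**k possible k-mers."""
    counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    total = max(1, len(seq) - k + 1)
    return [c / total for c in counts.values()]

def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

host  = "ACGTACGTACGTACGT"   # invented host-like window
donor = "TTTTGGGGCCCCAAAA"   # invented compositionally distinct window
print(manhattan(kmer_freqs(host), kmer_freqs(host)))       # identical: 0.0
print(manhattan(kmer_freqs(host), kmer_freqs(donor)) > 0)  # distinct: True
```

Measures such as CVTree differ from this sketch by first subtracting the k-mer counts expected under a background Markov model before comparing windows.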
Isonymy structure of Sucre and Táchira, two Venezuelan states.
Rodríguez-Larralde, A; Barrai, I
1997-10-01
The isonymy structure of two Venezuelan states, Sucre and Táchira, is described using the surnames of the Register of Electors updated in 1991. The frequency distribution of surnames pooled together by sex was obtained for the 57 counties of Sucre and the 52 counties of Táchira, based on total population sizes of 158,705 and 160,690 individuals, respectively. The coefficient of consanguinity resulting from random isonymy (phi ii), Karlin and McGregor's ni (identical to v), and the proportion of the population included in surnames represented only once (estimator A) and in the seven most frequent surnames (estimator B) were calculated for each county. RST, a measure of microdifferentiation, was estimated for each state. The Euclidean distance between pairs of counties within states was calculated together with the corresponding geographic distances. The correlations between their logarithmic transformations were significant in both cases, indicating differentiation of surnames by distance. Dendrograms based on the Euclidean distance matrix were constructed. From them a first approximation of the effect of internal migration within states was obtained. Ninety-six percent of the coefficient of consanguinity resulting from random isonymy is determined by the proportion of the population included in the seven most frequent surnames, whereas between 72% and 88% of Karlin and McGregor's ni for Sucre and Táchira, respectively, is determined by the proportion of population included in surnames represented only once. Surnames with generalized and with focal distribution were identified for both states, to be used as possible indicators of the geographic origin of their carriers. Our results indicate that Táchira's counties, on average, tend to be more isolated than Sucre's counties, as measured by RST, estimator B, and phi ii. Comparisons with the results obtained for other Venezuelan states and other non-Venezuelan populations are also given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
I. W. Ginsberg
Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
Advertisement call and genetic structure conservatism: good news for an endangered Neotropical frog
Costa, William P.; Martins, Lucas B.; Nunes-de-Almeida, Carlos H. L.; Toledo, Luís Felipe
2016-01-01
Background: Many amphibian species are negatively affected by habitat change due to anthropogenic activities. Populations distributed over modified landscapes may be subject to local extinction or may be relegated to the remaining—likely isolated and possibly degraded—patches of available habitat. Isolation without gene flow could lead to variability in phenotypic traits owing to differences in local selective pressures such as environmental structure, microclimate, or site-specific species assemblages. Methods: Here, we tested the microevolution hypothesis by evaluating the acoustic parameters of 349 advertisement calls from 15 males from six populations of the endangered amphibian species Proceratophrys moratoi. In addition, we analyzed the genetic distances among populations and the genetic diversity with a haplotype network analysis. We performed cluster analysis on acoustic data based on the Bray-Curtis index of similarity, using the UPGMA method. We correlated acoustic dissimilarities (calculated by Euclidean distance) with geographical and genetic distances among populations. Results: Spectral traits of the advertisement call of P. moratoi presented lower coefficients of variation than did temporal traits, both within and among males. Cluster analyses placed individuals without congruence in population or geographical distance, but recovered the species topology in relation to sister species. The genetic distance among populations was low; it did not exceed 0.4% for the most distant populations, and was not correlated with acoustic distance. Discussion: Both acoustic features and genetic sequences are highly conserved, suggesting that populations could be connected by recent migrations, and that they are subject to stabilizing selective forces. 
Although further studies are required, these findings add to a growing body of literature suggesting that this species would be a good candidate for a reintroduction program without negative effects on communication or genetic impact. PMID:27190717
NASA Astrophysics Data System (ADS)
Briceño, Raúl A.; Hansen, Maxwell T.; Monahan, Christopher J.
2017-07-01
Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Finally we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.
Pettengill, James B; Pightling, Arthur W; Baugher, Joseph D; Rand, Hugh; Strain, Errol
2016-01-01
The adoption of whole-genome sequencing within the public health realm for molecular characterization of bacterial pathogens has been followed by an increased emphasis on real-time detection of emerging outbreaks (e.g., food-borne Salmonellosis). In turn, large databases of whole-genome sequence data are being populated. These databases currently contain tens of thousands of samples and are expected to grow to hundreds of thousands within a few years. For these databases to be of optimal use one must be able to quickly interrogate them to accurately determine the genetic distances among a set of samples. Being able to do so is challenging due to both biological (evolutionary diverse samples) and computational (petabytes of sequence data) issues. We evaluated seven measures of genetic distance, which were estimated from either k-mer profiles (Jaccard, Euclidean, Manhattan, Mash Jaccard, and Mash distances) or nucleotide sites (NUCmer and an extended multi-locus sequence typing (MLST) scheme). When analyzing empirical data (whole-genome sequence data from 18,997 Salmonella isolates) there are features (e.g., genomic, assembly, and contamination) that cause distances inferred from k-mer profiles, which treat absent data as informative, to fail to accurately capture the distance between samples when compared to distances inferred from differences in nucleotide sites. Thus, site-based distances, like NUCmer and extended MLST, are superior in performance, but accessing the computing resources necessary to perform them may be challenging when analyzing large databases.
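Among the k-mer-profile measures evaluated here, the Jaccard distance illustrates most directly why absent k-mers are treated as informative. The 10-base sequences below are invented toys; real profiles come from whole-genome assemblies with k around 18-31.

```python
def kmer_set(seq, k=4):
    """Presence/absence k-mer profile of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b):
    """Jaccard distance between two k-mer profiles. A k-mer missing from
    one profile (e.g. due to an incomplete assembly or contamination)
    shrinks the intersection, which is exactly how such measures can be
    misled relative to site-based distances like NUCmer or extended MLST."""
    return 1.0 - len(a & b) / len(a | b)

s1 = kmer_set("ACGTACGTAA")  # invented isolate 1
s2 = kmer_set("ACGTACGTCC")  # invented isolate 2, differs at the 3' end
print(round(jaccard(s1, s2), 3))
```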
Young, Sean G; Carrel, Margaret; Kitchen, Andrew; Malanson, George P; Tamerius, James; Ali, Mohamad; Kayali, Ghazi
2017-04-01
First introduced to Egypt in 2006, H5N1 highly pathogenic avian influenza has resulted in the death of millions of birds and caused over 350 infections and at least 117 deaths in humans. After a decade of viral circulation, outbreaks continue to occur and diffusion mechanisms between poultry farms remain unclear. Using landscape genetics techniques, we identify the distance models most strongly correlated with the genetic relatedness of the viruses, suggesting the most likely methods of viral diffusion within Egyptian poultry. Using 73 viral genetic sequences obtained from infected birds throughout northern Egypt between 2009 and 2015, we calculated the genetic dissimilarity between H5N1 viruses for all eight gene segments. Spatial correlation was evaluated using Mantel tests and correlograms and multiple regression of distance matrices within causal modeling and relative support frameworks. These tests examine spatial patterns of genetic relatedness, and compare different models of distance. Four models were evaluated: Euclidean distance, road network distance, road network distance via intervening markets, and a least-cost path model designed to approximate wild waterbird travel using niche modeling and circuit theory. Samples from backyard farms were most strongly correlated with least cost path distances. Samples from commercial farms were most strongly correlated with road network distances. Results were largely consistent across gene segments. Results suggest wild birds play an important role in viral diffusion between backyard farms, while commercial farms experience human-mediated diffusion. These results can inform avian influenza surveillance and intervention strategies in Egypt. Copyright © 2017 Elsevier B.V. All rights reserved.
Sensor Network Localization by Eigenvector Synchronization Over the Euclidean Group
CUCURINGU, MIHAI; LIPMAN, YARON; SINGER, AMIT
2013-01-01
We present a new approach to localization of sensors from noisy measurements of a subset of their Euclidean distances. Our algorithm starts by finding, embedding, and aligning uniquely realizable subsets of neighboring sensors called patches. In the noise-free case, each patch agrees with its global positioning up to an unknown rigid motion of translation, rotation, and possibly reflection. The reflections and rotations are estimated using the recently developed eigenvector synchronization algorithm, while the translations are estimated by solving an overdetermined linear system. The algorithm is scalable as the number of nodes increases and can be implemented in a distributed fashion. Extensive numerical experiments show that it compares favorably to other existing algorithms in terms of robustness to noise, sparse connectivity, and running time. While our approach is applicable to higher dimensions, in the current article, we focus on the two-dimensional case. PMID:23946700
An ensemble of dissimilarity based classifiers for Mackerel gender determination
NASA Astrophysics Data System (ADS)
Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.
2014-03-01
Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify specimens by sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrozen) to find differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect the sample proximities accurately. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers, inducing diversity by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm improves on classifiers based on a single dissimilarity.
Euclidean bridge to the relativistic constituent quark model
NASA Astrophysics Data System (ADS)
Hobbs, T. J.; Alberg, Mary; Miller, Gerald A.
2017-03-01
Background: Knowledge of nucleon structure is today ever more of a precision science, with heightened theoretical and experimental activity expected in coming years. At the same time, a persistent gap lingers between theoretical approaches grounded in Euclidean methods (e.g., lattice QCD, Dyson-Schwinger equations [DSEs]) as opposed to traditional Minkowski field theories (such as light-front constituent quark models). Purpose: Seeking to bridge these complementary world views, we explore the potential of a Euclidean constituent quark model (ECQM). This formalism enables us to study the gluonic dressing of the quark-level axial-vector vertex, which we undertake as a test of the framework. Method: To access its indispensable elements with a minimum of inessential detail, we develop our ECQM using the simplified quark + scalar diquark picture of the nucleon. We construct a hyperspherical formalism involving polynomial expansions of diquark propagators to marry our ECQM with the results of Bethe-Salpeter equation (BSE) analyses, and constrain model parameters by fitting electromagnetic form factor data. Results: From this formalism, we define and compute a new quantity—the Euclidean density function (EDF)—an object that characterizes the nucleon's various charge distributions as functions of the quark's Euclidean momentum. Applying this technology and incorporating information from BSE analyses, we find the quenched dressing effect on the proton's axial-singlet charge to be small in magnitude and consistent with zero, while use of recent determinations of unquenched BSEs results in a large suppression. Conclusions: The quark + scalar diquark ECQM is a step toward a realistic quark model in Euclidean space, and needs additional refinements. The substantial effect we obtain for the impact on the axial-singlet charge of the unquenched dressed vertex compared to the quenched demands further investigation.
Bayesian extraction of the parton distribution amplitude from the Bethe-Salpeter wave function
NASA Astrophysics Data System (ADS)
Gao, Fei; Chang, Lei; Liu, Yu-xin
2017-07-01
We propose a new numerical method to compute the parton distribution amplitude (PDA) from the Euclidean Bethe-Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe-Salpeter wave function in Euclidean space, which is an ill-posed inversion problem, via the maximum entropy method (MEM). The Nakanishi weight function, as well as the corresponding light-front PDA, can be well determined. We confirm prior work on PDA computations that was based on different methods.
Sun, LiJun; Hwang, Hyeon-Shik; Lee, Kyung-Min
2018-03-01
The purpose of this study was to examine changes in registration accuracy after including occlusal surface and incisal edge areas in addition to the buccal surface when integrating laser-scanned and maxillofacial cone-beam computed tomography (CBCT) dental images. CBCT scans and maxillary dental casts were obtained from 30 patients. Three methods were used to integrate the images: R1, only the buccal and labial surfaces were used; R2, the incisal edges of the anterior teeth and the buccal and distal marginal ridges of the second molars were used; and R3, labial surfaces, including incisal edges of anterior teeth, and buccal surfaces, including buccal and distal marginal ridges of the second molars, were used. Differences between the 2 images were evaluated by color-mapping methods and average surface distances by measuring the 3-dimensional Euclidean distances between the surface points on the 2 images. The R1 method showed more discrepancies between the laser-scanned and CBCT images than did the other methods. The R2 method did not show a significant difference in registration accuracy compared with the R3 method. The results of this study indicate that accuracy when integrating laser-scanned dental images into maxillofacial CBCT images can be increased by including occlusal surface and incisal edge areas as registration areas. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Detecting changes in forced climate attractors with Wasserstein distance
NASA Astrophysics Data System (ADS)
Robin, Yoann; Yiou, Pascal; Naveau, Philippe
2017-07-01
The climate system can be described by a dynamical system and its associated attractor. The dynamics of this attractor depend on the external forcings that influence the climate. Such forcings can affect the mean values or variances, but regions of the attractor that are seldom visited can also be affected. It is an important challenge to measure how the climate attractor responds to different forcings. Currently, the Euclidean distance or similar measures like the Mahalanobis distance have been favored to measure discrepancies between two climatic situations. Those distances have no natural mechanism for taking the attractor dynamics into account. In this paper, we argue that the Wasserstein distance, stemming from optimal transport theory, offers an efficient and practical way to discriminate between dynamical systems. After treating a toy example, we explore how the Wasserstein distance can be applied and interpreted to detect non-autonomous dynamics from a Lorenz system driven by seasonal cycles and a warming trend.
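In one dimension, the Wasserstein-1 distance between equal-size empirical samples reduces to matching sorted values pairwise, which makes the idea easy to sketch. The samples below are invented toys; the paper works with multi-dimensional snapshots of an attractor, where the optimal transport plan must be computed numerically.

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-D Wasserstein-1 distance between equal-size samples:
    the optimal transport plan matches sorted values pairwise."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# A mean shift (e.g. a warming trend) moves the whole "attractor"
# without changing its shape; Wasserstein reports the transport cost.
base    = [0.0, 1.0, 2.0, 3.0]
shifted = [x + 0.5 for x in base]
print(wasserstein_1d(base, shifted))  # 0.5
```

Unlike a difference of means, this distance also responds to changes in variance and in rarely visited regions of the distribution, which is the property the paper exploits.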
Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms
NASA Astrophysics Data System (ADS)
Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.
2017-09-01
Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral norms, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further work is needed to develop methods and algorithms for solving the reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.
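For the Euclidean norm, Tikhonov's regularized least squares has the familiar closed form x = (AᵀA + λI)⁻¹Aᵀb; with polyhedral norms such as l1 or l∞ the same problem instead becomes a mathematical program, as the abstract notes. Below is a minimal sketch of the Euclidean case only, with a toy matrix chosen for illustration.

```python
import numpy as np

def tikhonov_lsq(A, b, lam):
    """Tikhonov-regularised least squares in the Euclidean norm:
    minimise ||A x - b||_2^2 + lam * ||x||_2^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Nearly singular toy system: regularisation stabilises the solution.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
print(tikhonov_lsq(A, b, 0.0))    # unregularised: recovers [1, 1]
print(tikhonov_lsq(A, b, 0.1))    # shrunk toward the origin
```

Increasing λ trades fidelity to b for a smaller solution norm, which is the essence of the regularization.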
Teichmuller Space Resolution of the EPR Paradox
NASA Astrophysics Data System (ADS)
Winterberg, Friedwardt
2013-04-01
The mystery of Newton's action-at-a-distance law of gravity was resolved by Einstein with Riemann's non-Euclidean geometry, which permitted the explanation of the departure from Newton's law for the motion of Mercury. It is here proposed that the similarly mysterious non-local EPR-type quantum correlations may be explained by a Teichmuller space geometry below the Planck length, for which an experiment for its verification is proposed.
Anninos, Dionysios; Denef, Frederik
2016-06-30
We show that the late time Hartle-Hawking wave function for a free massless scalar in a fixed de Sitter background encodes a sharp ultrametric structure for the standard Euclidean distance on the space of field configurations. This implies a hierarchical, tree-like organization of the state space, reflecting its genesis as a branched diffusion process. An equivalent mathematical structure organizes the state space of the Sherrington-Kirkpatrick model of a spin glass.
Microarray gene expression profiling using core biopsies of renal neoplasia.
Rogers, Craig G; Ditlev, Jonathon A; Tan, Min-Han; Sugimura, Jun; Qian, Chao-Nan; Cooper, Jeff; Lane, Brian; Jewett, Michael A; Kahnoski, Richard J; Kort, Eric J; Teh, Bin T
2009-01-01
We investigate the feasibility of using microarray gene expression profiling technology to analyze core biopsies of renal tumors for classification of tumor histology. Core biopsies were obtained ex vivo from 7 renal tumors, comprising four histological subtypes, following radical nephrectomy using 18-gauge biopsy needles. RNA was isolated from these samples and, in the case of biopsy samples, amplified by in vitro transcription. Microarray analysis was then used to quantify the mRNA expression patterns in these samples relative to non-diseased renal tissue mRNA. Genes with significant variation across all non-biopsy tumor samples were identified, and the relationship between tumor and biopsy samples in terms of expression levels of these genes was then quantified by Euclidean distance and visualized by complete linkage clustering. Final pathologic assessment of the kidney tumors demonstrated clear cell renal cell carcinoma (4), oncocytoma (1), angiomyolipoma (1) and adrenocortical carcinoma (1). Five of the seven biopsy samples were closest, in terms of Euclidean distance over gene expression, to the resected tumors from which they were derived. All seven biopsies were assigned to the correct histological class by hierarchical clustering. We demonstrate the feasibility of gene expression profiling of core biopsies of renal tumors to classify tumor histology.
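The clustering step described here, Euclidean distances followed by complete linkage, can be sketched with SciPy. The mock expression profiles below are invented; biopsies are simulated as noisy copies of their source tumours.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
tumours = rng.normal(0.0, 1.0, size=(4, 50))             # mock expression profiles
biopsies = tumours + rng.normal(0.0, 0.1, size=(4, 50))  # noisy near-copies
profiles = np.vstack([tumours, biopsies])                # rows 0-3 tumours, 4-7 biopsies

dists = pdist(profiles, metric="euclidean")         # condensed distance matrix
tree = linkage(dists, method="complete")            # complete linkage clustering
labels = fcluster(tree, t=4, criterion="maxclust")  # cut the tree into 4 clusters
print(labels[:4], labels[4:])    # each biopsy lands in its tumour's cluster
```

Because biopsy-to-source distances are far smaller than tumour-to-tumour distances, cutting the dendrogram at four clusters pairs every biopsy with its source, mirroring the paper's classification check.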
Efficient Irregular Wavefront Propagation Algorithms on Hybrid CPU-GPU Machines
Teodoro, George; Pan, Tony; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Saltz, Joel
2013-01-01
We address the problem of efficient execution of a computation pattern, referred to here as the irregular wavefront propagation pattern (IWPP), on hybrid systems with multiple CPUs and GPUs. The IWPP is common in several image processing operations. In the IWPP, data elements in the wavefront propagate waves to their neighboring elements on a grid if a propagation condition is satisfied. Elements receiving the propagated waves become part of the wavefront. This pattern results in irregular data accesses and computations. We develop and evaluate strategies for efficient computation and propagation of wavefronts using a multi-level queue structure. This queue structure improves the utilization of fast memories in a GPU and reduces synchronization overheads. We also develop a tile-based parallelization strategy to support execution on multiple CPUs and GPUs. We evaluate our approaches on a state-of-the-art GPU accelerated machine (equipped with 3 GPUs and 2 multicore CPUs) using the IWPP implementations of two widely used image processing operations: morphological reconstruction and euclidean distance transform. Our results show significant performance improvements on GPUs. The use of multiple CPUs and GPUs cooperatively attains speedups of 50× and 85× with respect to single core CPU executions for morphological reconstruction and euclidean distance transform, respectively. PMID:23908562
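The Euclidean distance transform used as a benchmark operation here is available in SciPy; a tiny sequential example (not the paper's GPU implementation) shows what the operation computes.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Binary mask: a 3x3 square of foreground pixels inside a background frame.
mask = np.zeros((5, 5), dtype=int)
mask[1:4, 1:4] = 1

# For every foreground pixel, the Euclidean distance to the nearest
# background pixel; the IWPP computes this by propagating wavefronts.
edt = distance_transform_edt(mask)
print(edt[2, 2])   # centre pixel: two rows/columns from the background
print(edt[1, 1])   # square's corner: adjacent to the background
```

In the IWPP formulation, the wavefront starts at the background boundary and propagates distances inward until no element's value can be improved.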
Bot, Maarten; van den Munckhof, Pepijn; Bakay, Roy; Stebbins, Glenn; Verhagen Metman, Leo
2017-01-01
To determine the accuracy of intraoperative computed tomography (iCT) in localizing deep brain stimulation (DBS) electrodes by comparing this modality with postoperative magnetic resonance imaging (MRI). Optimal lead placement is a critical factor for the outcome of DBS procedures and preferably confirmed during surgery. iCT offers 3-dimensional verification of both microelectrode and lead location during DBS surgery. However, accurate electrode representation on iCT has not been extensively studied. DBS surgery was performed using the Leksell stereotactic G frame. Stereotactic coordinates of 52 DBS leads were determined on both iCT and postoperative MRI and compared with intended final target coordinates. The resulting absolute differences in X (medial-lateral), Y (anterior-posterior), and Z (dorsal-ventral) coordinates (ΔX, ΔY, and ΔZ) for both modalities were then used to calculate the euclidean distance. Euclidean distances were 2.7 ± 1.1 and 2.5 ± 1.2 mm for MRI and iCT, respectively (p = 0.2). Postoperative MRI and iCT show equivalent DBS lead representation. Intraoperative localization of both microelectrode and DBS lead in stereotactic space enables direct adjustments. Verification of lead placement with postoperative MRI, considered to be the gold standard, is unnecessary. © 2017 The Author(s) Published by S. Karger AG, Basel.
Traffic Management for Emergency Vehicle Priority Based on Visual Sensing.
Nellore, Kapileswar; Hancke, Gerhard P
2016-11-10
Vehicular traffic is endlessly increasing everywhere in the world and can cause terrible traffic congestion at intersections. Most traffic lights today use a fixed green light sequence that is determined without taking the presence of emergency vehicles into account. As a result, emergency vehicles such as ambulances, police cars and fire engines can become stuck in traffic jams and delayed in reaching their destination, which can lead to loss of property and valuable lives. This paper presents an approach to schedule emergency vehicles in traffic. The approach combines the measurement of the distance between the emergency vehicle and an intersection using visual sensing methods, vehicle counting and time-sensitive alert transmission within the sensor network. The distance between the emergency vehicle and the intersection is calculated for comparison using Euclidean distance, Manhattan distance and Canberra distance techniques. The experimental results show that the Euclidean distance outperforms the other distance measurement techniques. Along with visual sensing techniques to collect emergency vehicle information, it is very important to have a Medium Access Control (MAC) protocol that delivers the emergency vehicle information to the Traffic Management Center (TMC) with little delay; only then can the emergency vehicle be served quickly and reach its destination in time. In this paper, we have also investigated the MAC layer in WSNs to prioritize the emergency vehicle data and to reduce the transmission delay for emergency messages. We have modified the medium access procedure used in standard IEEE 802.11p with the PE-MAC protocol, a new back-off selection and contention window adjustment scheme that achieves low broadcast delay for emergency messages. A VANET model for the UTMS is developed and simulated in NS-2. The performance of the standard IEEE 802.11p and the proposed PE-MAC is analysed in detail.
The NS-2 simulation results have shown that the PE-MAC outperforms the IEEE 802.11p in terms of average end-to-end delay, throughput and energy consumption. The performance evaluation results have proven that the proposed PE-MAC prioritizes the emergency vehicle data and delivers the emergency messages to the TMC with less delay compared to the IEEE 802.11p. The transmission delay of the proposed PE-MAC is also compared with the standard IEEE 802.15.4, and Enhanced Back-off Selection scheme for IEEE 802.15.4 protocol [EBSS, an existing protocol to ensure fast transmission of the detected events on the road towards the TMC] and the comparative results have proven the effectiveness of the PE-MAC over them. Furthermore, this research work will provide an insight into the design of an intelligent urban traffic management system for the effective management of emergency vehicles and will help to save lives and property.
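The three distance measures compared above are all standard and available in SciPy; the sketch below uses invented image-plane coordinates, not the paper's data set.

```python
import numpy as np
from scipy.spatial.distance import canberra, cityblock, euclidean

# Invented pixel coordinates of the emergency vehicle and the intersection.
vehicle = np.array([120.0, 45.0])
intersection = np.array([300.0, 210.0])

print(euclidean(vehicle, intersection))   # straight-line distance
print(cityblock(vehicle, intersection))   # Manhattan (L1) distance
print(canberra(vehicle, intersection))    # sum of scaled absolute differences
```

Euclidean distance matches physical straight-line separation, which is presumably why it tracked true vehicle-to-intersection distance best in the experiments; Canberra weights each coordinate difference by the coordinate magnitudes, so it behaves quite differently near the origin.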
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomizawa, Shinya; Nozawa, Masato
2006-06-15
We study vacuum solutions of five-dimensional Einstein equations generated by the inverse scattering method. We reproduce the black ring solution which was found by Emparan and Reall by taking the Euclidean Levi-Civita metric plus one-dimensional flat space as a seed. This transformation consists of two successive processes; the first step is to perform the three-solitonic transformation of the Euclidean Levi-Civita metric with one-dimensional flat space as a seed. The resulting metric is the Euclidean C-metric with extra one-dimensional flat space. The second is to perform the two-solitonic transformation by taking it as a new seed. Our result may serve as a stepping stone to find new exact solutions in higher dimensions.
Wang, Mei-Fei; Lian, Hong-Zhen; Mao, Li; Zhou, Jing-Ping; Gong, Hui-Juan; Qian, Bao-Yong; Fang, Yan; Li, Jie
2007-07-11
A capillary gas chromatographic (GC) method has been developed for the separation and determination of policosanol components extracted from rice bran wax. A Varian CP-sil 8 CB column was employed with a programmed oven temperature. Gas chromatography-mass spectrometry (GC-MS) was used to identify the composition of policosanol. Quantitative analysis was carried out by means of a hydrogen flame ionization detector (FID) with dinonyl phthalate (DNP) as internal standard. The results indicated that, among all the extraction methods used (dry saponification, saponification in alcohol, saponification in water, both neutralized and non-neutralized, and transesterification), the extract obtained by dry saponification has the highest contents of octacosanol and triacontanol. Meanwhile, the GC-MS fingerprint of policosanol extracted by dry saponification has been established. Euclidean distance similarity calculation showed remarkable consistency of compositions and contents among 12 batches of policosanol from a rice bran wax variety. This protocol provided a rapid and feasible method for quality control of policosanol products.
Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.
Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One of the contributions of the paper is to use multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus based on electrical distance to perform algorithms like K-means clustering. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
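The embedding step, multidimensional scaling from a distance matrix followed by K-means, can be sketched as follows. Classical MDS and the synthetic two-group "bus" distances are assumptions for illustration; the paper's definition of electrical distance is not reproduced here.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.spatial.distance import pdist, squareform

def classical_mds(D, dims=2):
    """Embed a symmetric distance matrix into 'dims' Euclidean coordinates
    via double centring and eigendecomposition (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centring matrix
    B = -0.5 * J @ (D ** 2) @ J            # inner-product (Gram) matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dims]    # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Hypothetical coordinates of 6 buses forming two tight groups; only
# their pairwise distances (standing in for electrical distances) are
# handed to MDS.
buses = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                  [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
D = squareform(pdist(buses))
coords = classical_mds(D, dims=2)                # Euclidean bus coordinates
_, clusters = kmeans2(coords, 2, minit="++", seed=0)
print(clusters)      # buses 0-2 share one cluster, buses 3-5 the other
```

Once every bus has Euclidean coordinates, any standard clustering algorithm that expects point data, such as K-means, can partition the grid into subsystems.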
Color-coded depth information in volume-rendered magnetic resonance angiography
NASA Astrophysics Data System (ADS)
Smedby, Orjan; Edsborg, Karin; Henriksson, John
2004-05-01
Magnetic Resonance Angiography (MRA) and Computed Tomography Angiography (CTA) data are usually presented using Maximum Intensity Projection (MIP) or Volume Rendering Technique (VRT), but these often fail to demonstrate a stenosis if the projection angle is not suitably chosen. In order to make vascular stenoses visible in projection images independent of the choice of viewing angle, a method is proposed to supplement these images with colors representing the local caliber of the vessel. After preprocessing the volume image with a median filter, segmentation is performed by thresholding, and a Euclidean distance transform is applied. The distance to the background from each voxel in the vessel is mapped to a color. These colors can either be rendered directly using MIP or be presented together with opacity information based on the original image using VRT. The method was tested in a synthetic dataset containing a cylindrical vessel with stenoses in varying angles. The results suggest that the visibility of stenoses is enhanced by the color information. In clinical feasibility experiments, the technique was applied to clinical MRA data. The results are encouraging and indicate that the technique can be used with clinical images.
NASA Astrophysics Data System (ADS)
Rojek, Barbara; Wesolowski, Marek; Suchacz, Bogdan
2013-12-01
In this paper, infrared (IR) spectroscopy and two multivariate exploration techniques, principal component analysis (PCA) and cluster analysis (CA), were applied as supportive methods for the detection of physicochemical incompatibilities between baclofen and excipients. In the course of the research, the most useful rotational strategy in PCA proved to be varimax normalized, while in CA Ward's hierarchical agglomeration with the Euclidean distance measure yielded the most interpretable results. Chemometric calculations confirmed the suitability of PCA and CA as auxiliary methods for the interpretation of infrared spectra in order to recognize whether compatibilities or incompatibilities between the active substance and excipients occur. On the basis of the IR spectra and the results of PCA and CA it was possible to demonstrate that the presence of lactose, β-cyclodextrin and meglumine in binary mixtures produces interactions with baclofen. The results were verified using differential scanning calorimetry, differential thermal analysis, thermogravimetry/differential thermogravimetry and X-ray powder diffraction analyses.
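Ward's agglomeration on Euclidean distances between spectra can be sketched with SciPy. The spectra below are invented stand-ins: a "compatible" mixture is modelled as the average of the pure-component spectra, while an "incompatible" one is modelled as a shifted spectrum with changed features.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)
drug = rng.random(100)                  # mock IR spectrum of the drug
excipient = rng.random(100)             # mock IR spectrum of an excipient
compatible = 0.5 * (drug + excipient)   # mixture spectrum = simple average
incompatible = drug + 0.8               # interaction: spectrum altered

spectra = np.vstack([drug, excipient, compatible, incompatible])
tree = linkage(spectra, method="ward", metric="euclidean")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)   # the incompatible mixture separates from the other three
```

This mirrors the diagnostic logic: a mixture spectrum that is roughly additive clusters with its components, while one whose features have changed ends up in its own branch of the dendrogram.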
Quantitative imaging of aggregated emulsions.
Penfold, Robert; Watson, Andrew D; Mackie, Alan R; Hibberd, David J
2006-02-28
Noise reduction, restoration, and segmentation methods are developed for the quantitative structural analysis in three dimensions of aggregated oil-in-water emulsion systems imaged by fluorescence confocal laser scanning microscopy. Mindful of typical industrial formulations, the methods are demonstrated for concentrated (30% volume fraction) and polydisperse emulsions. Following a regularized deconvolution step using an analytic optical transfer function and appropriate binary thresholding, novel application of the Euclidean distance map provides effective discrimination of closely clustered emulsion droplets with size variation over at least 1 order of magnitude. The a priori assumption of spherical nonintersecting objects provides crucial information to combat the ill-posed inverse problem presented by locating individual particles. Position coordinates and size estimates are recovered with sufficient precision to permit quantitative study of static geometrical features. In particular, aggregate morphology is characterized by a novel void distribution measure based on the generalized Apollonius problem. This is also compared with conventional Voronoi/Delauney analysis.
The Development of Euclidean and Non-Euclidean Cosmologies
ERIC Educational Resources Information Center
Norman, P. D.
1975-01-01
Discusses early Euclidean cosmologies, inadequacies in classical Euclidean cosmology, and the development of non-Euclidean cosmologies. Explains the present state of the theory of cosmology including the work of Dirac, Sandage, and Gott. (CP)
Mapping growing stock volume and forest live biomass: a case study of the Polissya region of Ukraine
NASA Astrophysics Data System (ADS)
Bilous, Andrii; Myroniuk, Viktor; Holiaka, Dmytrii; Bilous, Svitlana; See, Linda; Schepaschenko, Dmitry
2017-10-01
Forest inventory and biomass mapping are important tasks that require inputs from multiple data sources. In this paper we implement two methods for the Ukrainian region of Polissya: random forest (RF) for tree species prediction and k-nearest neighbors (k-NN) for growing stock volume and biomass mapping. We examined the suitability of the five-band RapidEye satellite image to predict the distribution of six tree species. The accuracy of RF is quite high: ~99% for forest/non-forest mask and 89% for tree species prediction. Our results demonstrate that inclusion of elevation as a predictor variable in the RF model improved the performance of tree species classification. We evaluated different distance metrics for the k-NN method, including Euclidean or Mahalanobis distance, most similar neighbor (MSN), gradient nearest neighbor, and independent component analysis. The MSN with the four nearest neighbors (k = 4) is the most precise (according to the root-mean-square deviation) for predicting forest attributes across the study area. The k-NN method allowed us to estimate growing stock volume with an accuracy of 3 m3 ha-1 and for live biomass of about 2 t ha-1 over the study area.
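A k-NN predictor that can switch between Euclidean and Mahalanobis distances, two of the metrics evaluated here, can be sketched in a few lines. The mock band values and linear volume response are invented for illustration; this is not the MSN method the authors ultimately preferred.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=4, VI=None):
    """k-NN regression using Euclidean distance, or Mahalanobis distance
    when an inverse covariance matrix VI is supplied."""
    diff = X_train - x
    if VI is None:
        d = np.sqrt((diff ** 2).sum(axis=1))                   # Euclidean
    else:
        d = np.sqrt(np.einsum("ij,jk,ik->i", diff, VI, diff))  # Mahalanobis
    nearest = np.argsort(d)[:k]             # indices of the k closest plots
    return y_train[nearest].mean()          # average their attribute values

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))           # mock 5-band reflectance per plot
y = X[:, 0] * 50.0 + 200.0              # mock growing stock volume response
VI = np.linalg.inv(np.cov(X, rowvar=False))

x_new = X[0] + 0.01                     # a new pixel near a known plot
print(knn_predict(X, y, x_new, k=4))            # Euclidean neighbours
print(knn_predict(X, y, x_new, k=4, VI=VI))     # Mahalanobis neighbours
```

The Mahalanobis variant rescales the feature space by the band covariance, so correlated spectral bands are not double-counted when neighbours are ranked.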
An active learning representative subset selection method using net analyte signal.
He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi
2018-05-05
To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference of Euclidean norm of net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vector, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying projection matrix with spectra of samples. Scalar value of NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to selected set sequentially. Last, the concentration of the analyte is measured such that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
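The selection loop described above can be sketched as follows: project candidate spectra onto the orthogonal complement of the interferent subspace to obtain scalar NAS values, then greedily add the sample farthest (in NAS distance) from those already selected. All data below are simulated, and the QR-based projector is one standard construction of the NAS projection, not necessarily the authors' exact formulation.

```python
import numpy as np

def nas_scalars(spectra, interferents):
    """Scalar net analyte signal: project each spectrum onto the orthogonal
    complement of the interferent subspace, then take the Euclidean norm."""
    Q, _ = np.linalg.qr(interferents.T)     # orthonormal interferent basis
    P = np.eye(spectra.shape[1]) - Q @ Q.T  # orthogonal projector
    return np.linalg.norm(spectra @ P, axis=1)

def select_representative(scalars, n_select):
    """Greedily pick samples whose NAS scalar is farthest from all
    already-selected samples (max-min criterion)."""
    chosen = [int(np.argmax(scalars))]      # seed with the extreme sample
    while len(chosen) < n_select:
        gap = np.min(np.abs(scalars[:, None] - scalars[chosen]), axis=1)
        gap[chosen] = -1.0                  # never re-pick a sample
        chosen.append(int(np.argmax(gap)))
    return chosen

rng = np.random.default_rng(6)
spectra = rng.normal(size=(20, 50))         # candidate sample spectra
interferents = rng.normal(size=(3, 50))     # spectra of other constituents
scalars = nas_scalars(spectra, interferents)
print(select_representative(scalars, 5))    # measure these 5 samples first
```

Only the samples returned by the loop then need reference concentration measurements, which is where the claimed savings over random selection come from.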
Thornton, Lukar E; Pearce, Jamie R; Macdonald, Laura; Lamb, Karen E; Ellaway, Anne
2012-07-27
Previous studies have provided mixed evidence with regards to associations between food store access and dietary outcomes. This study examines the most commonly applied measures of locational access to assess whether associations between supermarket access and fruit and vegetable consumption are affected by the choice of access measure and scale. Supermarket location data from Glasgow, UK (n = 119), and fruit and vegetable intake data from the 'Health and Well-Being' Survey (n = 1041) were used to compare various measures of locational access. These exposure variables included proximity estimates (with different points-of-origin used to vary levels of aggregation) and density measures using three approaches (Euclidean and road network buffers and Kernel density estimation) at distances ranging from 0.4 km to 5 km. Further analysis was conducted to assess the impact of using smaller buffer sizes for individuals who did not own a car. Associations between these multiple access measures and fruit and vegetable consumption were estimated using linear regression models. Levels of spatial aggregation did not impact on the proximity estimates. Counts of supermarkets within Euclidean buffers were associated with fruit and vegetable consumption at 1 km, 2 km and 3 km, and for our road network buffers at 2 km, 3 km and 4 km. Kernel density estimates provided the strongest associations and were significant at distances of 2 km, 3 km, 4 km and 5 km. Presence of a supermarket within 0.4 km of road network distance from where people lived was positively associated with fruit consumption amongst those without a car (coef. 0.657; s.e. 0.247; p = 0.008). The associations between locational access to supermarkets and individual-level dietary behaviour are sensitive to the method by which the food environment variable is captured. Care needs to be taken to ensure robust and conceptually appropriate measures of access are used, and these should be grounded in clear a priori reasoning.
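Two of the density measures compared, counts within a Euclidean buffer and a kernel density estimate, can be sketched with NumPy and SciPy. The coordinates are simulated, and a Gaussian KDE stands in for whatever kernel the study actually used.

```python
import numpy as np
from scipy.stats import gaussian_kde

def buffer_count(home, stores, radius_km):
    """Number of supermarkets within a Euclidean buffer around a home."""
    d = np.linalg.norm(stores - home, axis=1)
    return int((d <= radius_km).sum())

rng = np.random.default_rng(4)
homes = rng.uniform(0.0, 10.0, size=(5, 2))     # respondent locations, km
stores = rng.uniform(0.0, 10.0, size=(30, 2))   # supermarket locations, km

kde = gaussian_kde(stores.T)                    # smoothed store density
for home in homes:
    # hard buffer count at 2 km versus smoothed density at the home
    print(buffer_count(home, stores, 2.0), kde(home)[0])
```

The buffer count changes abruptly as a store crosses the radius, whereas the kernel estimate varies smoothly with distance, one plausible reason the KDE exposure yielded the strongest associations.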
Cooper, David T; Behrens, Claus F
2016-01-01
Objective: In cervical radiotherapy, it is essential that the uterine position is correctly determined prior to treatment delivery. The aim of this study was to evaluate an autoscan ultrasound (A-US) probe, a motorized transducer creating three-dimensional (3D) images by sweeping, by comparing it with a conventional ultrasound (C-US) probe, where manual scanning is required to acquire 3D images. Methods: Nine healthy volunteers were scanned by seven operators, using the Clarity® system (Elekta, Stockholm, Sweden). In total, 72 scans, 36 scans from the C-US and 36 scans from the A-US probes, were acquired. Two observers delineated the uterine structure, using the software-assisted segmentation in the Clarity workstation. The data of uterine volume, uterine centre of mass (COM) and maximum uterine lengths, in three orthogonal directions, were analyzed. Results: In 53% of the C-US scans, the whole uterus was captured, compared with 89% using the A-US. F-test on 36 scans demonstrated statistically significant differences in interobserver COM standard deviation (SD) when comparing the C-US with the A-US probe for the inferior–superior (p < 0.006), left–right (p < 0.012) and anteroposterior directions (p < 0.001). The median of the interobserver COM distance (Euclidean distance for 36 scans) was reduced from 8.5 (C-US) to 6.0 mm (A-US). An F-test on the 36 scans showed strong significant differences (p < 0.001) in the SD of the Euclidean interobserver distance when comparing the C-US with the A-US scans. The average Dice coefficient when comparing the two observers was 0.67 (C-US) and 0.75 (A-US). The predictive interval demonstrated better interobserver delineation concordance using the A-US probe. Conclusion: The A-US probe imaging might be a better choice of image-guided radiotherapy system for correcting for daily uterine positional changes in cervical radiotherapy. 
Advances in knowledge: Using a novel A-US probe might reduce the uncertainty in interoperator variability during ultrasound scanning. PMID:27452268
A straightforward method to compute average stochastic oscillations from data samples.
Júlvez, Jorge
2015-10-19
Many biological systems exhibit sustained stochastic oscillations in their steady state. Assessing these oscillations is usually a challenging task due to the potential variability of the amplitude and frequency of the oscillations over time. As a result of this variability, when several stochastic replications are averaged, the oscillations are flattened and can be overlooked. This can easily lead to the erroneous conclusion that the system reaches a constant steady state. This paper proposes a straightforward method to detect and assess stochastic oscillations. The basis of the method is the use of polar coordinates for systems with two species, and cylindrical coordinates for systems with more than two species. By slightly modifying these coordinate systems, it is possible to compute the total angular distance run by the system and the average Euclidean distance to a reference point. This allows us to compute confidence intervals, both for the average angular speed and for the distance to a reference point, from a set of replications. The use of polar (or cylindrical) coordinates provides a new perspective of the system dynamics. The mean trajectory that can be obtained by averaging the usual cartesian coordinates of the samples informs about the trajectory of the center of mass of the replications. In contrast to such a mean cartesian trajectory, the mean polar trajectory can be used to compute the average circular motion of those replications, and therefore can yield evidence about sustained steady state oscillations. Both the coordinate transformation and the computation of confidence intervals can be carried out efficiently. This results in an efficient method to evaluate stochastic oscillations.
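The core of the method, converting a two-species trajectory to polar coordinates and accumulating the angle, can be sketched directly. The noisy limit cycle below is an invented test signal, not one of the paper's biological examples.

```python
import numpy as np

def angular_stats(x, y, centre=(0.0, 0.0)):
    """Total angular distance run and mean Euclidean distance to a
    reference point, for a two-species trajectory in polar coordinates."""
    theta = np.arctan2(y - centre[1], x - centre[0])
    theta = np.unwrap(theta)                # undo the 2*pi wrap-arounds
    total_angle = theta[-1] - theta[0]      # signed cumulative angle
    radius = np.hypot(x - centre[0], y - centre[1]).mean()
    return total_angle, radius

# Invented noisy limit cycle: two revolutions around the origin.
t = np.linspace(0.0, 4.0 * np.pi, 500)
x = np.cos(t) + 0.01 * np.sin(7.0 * t)
y = np.sin(t) + 0.01 * np.cos(7.0 * t)
angle, r = angular_stats(x, y)
print(angle / (2.0 * np.pi))    # close to 2: two full oscillations detected
print(r)                        # close to 1: mean distance to the origin
```

Averaging `angle / duration` and `r` across replications, rather than averaging the Cartesian trajectories themselves, is what prevents the oscillations from being flattened away.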
Osseiran, Sam; Roider, Elisabeth M; Wang, Hequn; Suita, Yusuke; Murphy, Michael; Fisher, David E; Evans, Conor L
2017-12-01
Chemical sun filters are commonly used as active ingredients in sunscreens due to their efficient absorption of ultraviolet (UV) radiation. Yet, it is known that these compounds can photochemically react with UV light and generate reactive oxygen species and oxidative stress in vitro, though this has yet to be validated in vivo. One label-free approach to probe oxidative stress is to measure and compare the relative endogenous fluorescence generated by cellular coenzymes nicotinamide adenine dinucleotides and flavin adenine dinucleotides. However, chemical sun filters are fluorescent, with emissive properties that contaminate endogenous fluorescent signals. To accurately distinguish the source of fluorescence in ex vivo skin samples treated with chemical sun filters, fluorescence lifetime imaging microscopy data were processed on a pixel-by-pixel basis using a non-Euclidean separation algorithm based on Mahalanobis distance and validated on simulated data. Applying this method, ex vivo samples exhibited a small oxidative shift when exposed to sun filters alone, though this shift was much smaller than that imparted by UV irradiation. Given the need for investigative tools to further study the clinical impact of chemical sun filters in patients, the reported methodology may be applied to visualize chemical sun filters and measure oxidative stress in patients' skin. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
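A Mahalanobis distance from a reference cloud, the building block of the separation algorithm described here, can be sketched as follows. The two-channel "signatures" are invented stand-ins; this is not the authors' full non-Euclidean separation algorithm.

```python
import numpy as np

def mahalanobis(x, samples):
    """Mahalanobis distance of one observation from a reference sample
    cloud, accounting for correlation between the two channels."""
    mu = samples.mean(axis=0)
    VI = np.linalg.inv(np.cov(samples, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ VI @ d))

rng = np.random.default_rng(5)
# Invented two-channel signatures of endogenous fluorescence.
endogenous = rng.multivariate_normal([1.5, 0.8],
                                     [[0.02, 0.01], [0.01, 0.02]], size=500)
pixel_endo = np.array([1.5, 0.8])     # pixel at the endogenous centroid
pixel_filter = np.array([3.0, 0.2])   # pixel dominated by the sun filter

print(mahalanobis(pixel_endo, endogenous))     # small: endogenous signal
print(mahalanobis(pixel_filter, endogenous))   # large: separate this pixel
```

Thresholding such a distance, pixel by pixel, is one way to decide which fluorescence lifetime signatures belong to the endogenous coenzymes and which to the contaminating sun filter.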
Modelling non-Euclidean movement and landscape connectivity in highly structured ecological networks
Sutherland, Christopher; Fuller, Angela K.; Royle, J. Andrew
2015-01-01
The ecological distance SCR model uses spatially indexed capture-recapture data to estimate how activity patterns are influenced by landscape structure. As well as reducing bias in estimates of abundance, this approach provides biologically realistic representations of home range geometry, and direct information about species-landscape interactions. The incorporation of both structural (landscape) and functional (movement) components of connectivity provides a direct measure of species-specific landscape connectivity.
2014-02-01
installation based on a Euclidean distance allocation and assigned that installation's threshold values. The second approach used a thin-plate spline ...installation critical nLS+ thresholds involved spatial interpolation. A thin-plate spline radial basis function (RBF) was selected as the... the interpolation of installation results using a thin-plate spline radial basis function technique. 6.5 OBJECTIVE #5: DEVELOP AND
Molnár, Emil
2005-11-01
A new method, developed in previous works by the author (partly with co-authors), is presented which decides algorithmically, in principle by computer, whether a combinatorial space tiling (Tau, Gamma) is realizable in the d-dimensional Euclidean space E(d) (think of d = 2, 3, 4) or in other homogeneous spaces, e.g. in Thurston's 3-geometries: E(3), S(3), H(3), S(2) x R, H(2) x R, SL(2)R, Nil, Sol. Then our group Gamma will be an isometry group of a projective metric 3-sphere PiS(3) (R, < , >), acting discontinuously on its above tiling Tau. The method is illustrated by a plane example and by the well known rhombohedron tiling (Tau, Gamma), where Gamma = R3m is the Euclidean space group No. 166 in International Tables for Crystallography.
Spatial analysis of groundwater levels using Fuzzy Logic and geostatistical tools
NASA Astrophysics Data System (ADS)
Theodoridou, P. G.; Varouchakis, E. A.; Karatzas, G. P.
2017-12-01
The spatial variability evaluation of the water table of an aquifer provides useful information for water resources management plans. Geostatistical methods are often employed to map the free surface of an aquifer. In geostatistical analysis using Kriging techniques, the selection of the optimal variogram is very important for optimal method performance. This work compares three different criteria for assessing the theoretical variogram that fits the experimental one: the Least Squares Sum method, the Akaike Information Criterion, and Cressie's Indicator. Moreover, different distance metrics such as the Euclidean, Minkowski, Manhattan, Canberra, and Bray-Curtis are applied to calculate the distance between the observation and the prediction points, which affects both the variogram calculation and the Kriging estimator. A Fuzzy Logic System is then applied to define the appropriate neighbors for each estimation point used in the Kriging algorithm. The two criteria used during the Fuzzy Logic process are the distance between observation and estimation points and the groundwater level value at each observation point. The proposed techniques are applied to a data set of 250 hydraulic head measurements distributed over an alluvial aquifer. The analysis showed that the power-law variogram model and the Manhattan distance metric within ordinary Kriging provide the best results when the comprehensive geostatistical analysis process is applied. On the other hand, the Fuzzy Logic approach leads to a Gaussian variogram model and significantly improves the estimation performance. The two different variogram models can be explained in terms of a fractional Brownian motion approach and of aquifer behavior at local scale. Finally, maps of hydraulic head spatial variability and of prediction uncertainty are constructed for the area with the two different approaches, comparing their advantages and drawbacks.
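The alternative distance metrics named above can be sketched in a few lines; this is a generic illustration of their definitions, not the study's geostatistical code:

```python
def minkowski(u, v, p):
    """Minkowski distance; p=1 gives Manhattan, p=2 Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

def canberra(u, v):
    """Canberra distance: sum of |a-b| / (|a|+|b|), skipping 0/0 terms."""
    return sum(abs(a - b) / (abs(a) + abs(b))
               for a, b in zip(u, v) if abs(a) + abs(b) > 0)

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity: sum|a-b| / sum|a+b|."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(abs(a + b) for a, b in zip(u, v))
    return num / den if den else 0.0
```

Swapping one of these for the Euclidean metric changes both the experimental variogram (which bins point pairs by separation) and the neighbourhood search of the Kriging estimator.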
Spatial accessibility to vaccination sites in a campaign against rabies in São Paulo city, Brazil.
Polo, Gina; Acosta, Carlos Mera; Dias, Ricardo Augusto
2013-08-01
It is estimated that the city of São Paulo has over 2.5 million dogs and 560 thousand cats. These populations are irregularly distributed throughout the territory, making it difficult to appropriately allocate health services focused on these species. To reasonably allocate vaccination sites, it is necessary to identify social groups and their access to the referred service. Rabies in dogs and cats has been an important zoonotic health issue in São Paulo, and the key component of rabies control is vaccination. The present study aims to introduce an approach to quantify the potential spatial accessibility to the vaccination sites of the 2009 campaign against rabies in the city of São Paulo and to address the overestimation associated with the classic methodology, which applies buffer zones around vaccination sites based on Euclidean (straight-line) distance. To achieve this, a Gaussian-based two-step floating catchment area method with a travel-friction coefficient was adapted in a geographic information system environment, using distances along a street network computed with Dijkstra's shortest-path algorithm. The choice of the distance calculation method affected the results in terms of the population covered. In general, areas with low accessibility for both dogs and cats were observed, especially in densely populated areas. The eastern zone of the city had higher accessibility values compared with the peripheral and central zones. The Gaussian-based two-step floating catchment method with a travel-friction coefficient was used to assess the overestimation of the straight-line distance method, which is the most widely used method for coverage analysis. We conclude that this approach has the potential to improve the efficiency of resource use when planning rabies control programs in large urban environments such as São Paulo. The findings emphasize the need for surveillance and intervention in isolated areas. Copyright © 2013 Elsevier B.V. All rights reserved.
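A minimal sketch of a Gaussian-based two-step floating catchment area computation, assuming the street-network distances have already been obtained (e.g. via Dijkstra's algorithm); the weight function and all inputs are illustrative, not the study's calibrated values:

```python
import math

def gaussian_weight(d, d0):
    """Gaussian distance-decay weight, normalised to 1 at d=0 and 0 at
    the catchment radius d0; zero outside the catchment."""
    if d > d0:
        return 0.0
    return ((math.exp(-0.5 * (d / d0) ** 2) - math.exp(-0.5))
            / (1.0 - math.exp(-0.5)))

def two_step_fca(supply, demand, dist, d0):
    """Gaussian 2SFCA. supply: {site: capacity}; demand: {loc: population};
    dist: {(loc, site): network distance}. Returns accessibility per loc."""
    # Step 1: distance-weighted supply-to-demand ratio for each site
    R = {}
    for j, cap in supply.items():
        weighted_pop = sum(p * gaussian_weight(dist[(i, j)], d0)
                           for i, p in demand.items())
        R[j] = cap / weighted_pop if weighted_pop else 0.0
    # Step 2: sum the reachable sites' ratios at each demand location
    return {i: sum(R[j] * gaussian_weight(dist[(i, j)], d0) for j in supply)
            for i in demand}
```

Replacing `dist` with straight-line Euclidean distances reproduces the classic buffer-style analysis whose overestimation the study set out to quantify.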
Sun, Liping; Luo, Yonglong; Ding, Xintao; Zhang, Ji
2014-01-01
An important component of a spatial clustering algorithm is the distance measure between sample points in object space. In this paper, the traditional Euclidean distance measure is replaced with an obstacle distance measure for spatial clustering under obstacle constraints. Firstly, we present a path-searching algorithm to approximate the obstacle distance between two points while dealing with obstacles and facilitators. Taking obstacle distance as the similarity metric, we then propose the artificial immune clustering with obstacle entity (AICOE) algorithm for clustering spatial point data in the presence of obstacles and facilitators. Finally, the paper presents a comparative analysis of the AICOE algorithm and classical clustering algorithms. Our clustering model based on the artificial immune system is also applied to a public facility location problem to establish the practical applicability of our approach. By using the clonal selection principle and updating the cluster centers based on the elite antibodies, the AICOE algorithm is able to achieve the global optimum and a better clustering effect.
Flood, Jessica S; Porphyre, Thibaud; Tildesley, Michael J; Woolhouse, Mark E J
2013-10-08
When modelling infectious diseases, accurately capturing the pattern of dissemination through space is key to providing optimal recommendations for control. Mathematical models of disease spread in livestock, such as for foot-and-mouth disease (FMD), have done this by incorporating a transmission kernel which describes the decay in transmission rate with increasing Euclidean distance from an infected premises (IP). However, this assumes a homogeneous landscape, and is based on the distance between point locations of farms. Indeed, underlying the spatial pattern of spread are the contact networks involved in transmission. Accordingly, area-weighted tessellation around farm point locations has been used to approximate field-contiguity and simulate the effect of contiguous premises (CP) culling for FMD. Here, geographic data were used to determine contiguity based on distance between premises' fields and presence of landscape features for two sample areas in Scotland. Sensitivity, positive predictive value, and the True Skill Statistic (TSS) were calculated to determine how point distance measures and area-weighted tessellation compared to the 'gold standard' of the map-based measures in identifying CPs. In addition, the mean degree and density of the different contact networks were calculated. Utilising point distances <1 km and <5 km as a measure for contiguity resulted in poor discrimination between map-based CPs/non-CPs (TSS 0.279-0.344 and 0.385-0.400, respectively). Point distance <1 km missed a high proportion of map-based CPs; <5 km point distance picked up a high proportion of map-based non-CPs as CPs. Area-weighted tessellation performed best, with reasonable discrimination between map-based CPs/non-CPs (TSS 0.617-0.737) and comparable mean degree and density. Landscape features altered network properties considerably when taken into account. The farming landscape is not homogeneous.
Basing contiguity on geographic locations of field boundaries and including landscape features known to affect transmission into FMD models are likely to improve individual farm-level accuracy of spatial predictions in the event of future outbreaks. If a substantial proportion of FMD transmission events are by contiguous spread, and CPs should be assigned an elevated relative transmission rate, the shape of the kernel could be significantly altered since ability to discriminate between map-based CPs and non-CPs is different over different Euclidean distances.
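The distance-decay transmission kernel and the elevated rate for contiguous premises can be illustrated as follows; the kernel form and every parameter value here are hypothetical stand-ins, not fitted FMD values:

```python
def transmission_kernel(d, k0=1.0, d0=1.0, alpha=2.0):
    """Generic distance-decay kernel: transmission rate falls off with
    Euclidean distance d (km) from an infected premises. The power-law
    form and parameters are illustrative only."""
    return k0 / (1.0 + (d / d0) ** alpha)

def infection_rate(d, contiguous, cp_multiplier=3.0, **kw):
    """Elevate the kernel rate for contiguous premises (CPs), as the
    abstract suggests; the multiplier is hypothetical."""
    rate = transmission_kernel(d, **kw)
    return rate * cp_multiplier if contiguous else rate
```

Under such a model, misclassifying CPs as non-CPs (or vice versa) at short Euclidean distances distorts the fitted kernel shape, which is the concern the abstract raises.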
A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure.
Zhou, Yang; Wu, Dewei
2016-01-01
It is an important task in bioinspired navigation to generate visual place cells (VPCs). By analyzing the firing characteristics of biological place cells and existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similarity measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruitment of a new place cell. Following this process, a specific method for generating VPCs is presented. External reference landmarks are obtained from local invariant image features, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and VPC firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate threshold (FRT). PMID:27597859
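One plausible reading of the Euclidean-plus-Gaussian similarity measure and the recruitment rule, with illustrative names (`sigma` standing in for the role of the firing-field adjustment factor):

```python
import math

def similarity(landmarks_a, landmarks_b, sigma=1.0):
    """Similarity between two matched landmark descriptor vectors:
    squared Euclidean distance mapped through a Gaussian, so 1.0 means
    identical views and values fall toward 0 as scenes diverge."""
    d2 = sum((a - b) ** 2 for a, b in zip(landmarks_a, landmarks_b))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def should_recruit_new_cell(best_similarity, frt=0.5):
    """Recruit a new visual place cell when no existing cell's firing
    rate (similarity) exceeds the threshold FRT."""
    return best_similarity < frt
```

Increasing `sigma` widens each cell's firing field; raising `frt` makes the model recruit new cells more readily, matching the adjustability the abstract describes.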
Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data
NASA Astrophysics Data System (ADS)
Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.
2018-04-01
With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. Firstly, the road surface is segmented through edge detection on scan lines. Then an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted using adaptive threshold segmentation based on an integral image, without intensity calibration; noise is reduced by removing small plaque pixels from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms including template matching and feature-attribute filtering is used for the classification of linear markings, arrow markings, and guidelines. Processing point cloud data collected by a RIEGL VUX-1 in the case area shows that the F-score of marking extraction is 0.83 and the average classification rate is 0.9.
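The Euclidean-distance clustering step can be sketched as a simple flood-fill over a distance graph; this is an illustrative stand-in (2D points, naive neighbour search), not the authors' implementation:

```python
def euclidean_cluster(points, tol):
    """Group 2D points into clusters: two points share a cluster when a
    chain of points, each within `tol` of the next, connects them."""
    clusters, unvisited = [], set(range(len(points)))
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            # neighbours of point i still unassigned, within tolerance
            near = [j for j in unvisited
                    if (points[i][0] - points[j][0]) ** 2
                    + (points[i][1] - points[j][1]) ** 2 <= tol ** 2]
            for j in near:
                unvisited.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(sorted(cluster))
    return clusters
```

Each resulting cluster would then be passed to template matching and attribute filtering to decide whether it is a linear marking, an arrow, or a guideline.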
Fractal spectral triples on Kellendonk's C∗-algebra of a substitution tiling
NASA Astrophysics Data System (ADS)
Mampusti, Michael; Whittaker, Michael F.
2017-02-01
We introduce a new class of noncommutative spectral triples on Kellendonk's C∗-algebra associated with a nonperiodic substitution tiling. These spectral triples are constructed from fractal trees on tilings, which define a geodesic distance between any two tiles in the tiling. Since fractals typically have infinite Euclidean length, the geodesic distance is defined using Perron-Frobenius theory, and is self-similar with scaling factor given by the Perron-Frobenius eigenvalue. We show that each spectral triple is θ-summable, and respects the hierarchy of the substitution system. To elucidate our results, we construct a fractal tree on the Penrose tiling, and explicitly show how it gives rise to a collection of spectral triples.
Multivariate Spectral Analysis to Extract Materials from Multispectral Data
1993-09-01
Euclidean minimum distance and conventional Bayesian classifier suggest some fundamental instabilities. Two candidate sources are (1) inadequate... [the remainder of this excerpt is garbled OCR residue of a classification table for water and other material classes]
Attenuation relation for strong motion in Eastern Java based on appropriate database and method
NASA Astrophysics Data System (ADS)
Mahendra, Rian; Rohadi, Supriyanto; Rudyanto, Ariska
2017-07-01
The selection and determination of an attenuation relation has become important for seismic hazard assessment in active seismic regions. This research first constructs an appropriate strong motion database, including site condition and earthquake type. The data set consists of a large number of earthquakes with 5 ≤ Mw ≤ 9 and distances less than 500 km that occurred around Java from 2009 until 2016. Earthquake locations and depths were relocated using the double-difference method to improve the quality of the database. Strong motion data from twelve BMKG accelerographs located in East Java are used. Site conditions are characterized using the dominant period and Vs30. Earthquakes are classified into crustal, interface, and intraslab events based on slab geometry analysis. A total of 10 Ground Motion Prediction Equations (GMPEs) are tested using the Likelihood method (Scherbaum et al., 2004) and the Euclidean Distance Ranking method (Kale and Akkar, 2012) with the associated database. The evaluation leads to a set of GMPEs that can be applied for seismic hazard in East Java, where the strong motion data were collected. Because these methods still showed high deviations for the candidate GMPEs, some GMPEs were modified using an inversion method. Validation was performed by analysing the attenuation curves of the selected GMPE against observation data from 2015 to 2016. The results show that the selected GMPE is suitable for estimating PGA values in East Java.
Functional Connectivity in Islets of Langerhans from Mouse Pancreas Tissue Slices
Stožer, Andraž; Gosak, Marko; Dolenšek, Jurij; Perc, Matjaž; Marhl, Marko; Rupnik, Marjan Slak; Korošak, Dean
2013-01-01
We propose a network representation of electrically coupled beta cells in islets of Langerhans. Beta cells are functionally connected on the basis of correlations between calcium dynamics of individual cells, obtained by means of confocal laser-scanning calcium imaging in islets from acute mouse pancreas tissue slices. Obtained functional networks are analyzed in the light of known structural and physiological properties of islets. Focusing on the temporal evolution of the network under stimulation with glucose, we show that the dynamics are more correlated under stimulation than under non-stimulated conditions and that the highest overall correlation, largely independent of Euclidean distances between cells, is observed in the activation and deactivation phases when cells are driven by the external stimulus. Moreover, we find that the range of interactions in networks during activity shows a clear dependence on the Euclidean distance, lending support to previous observations that beta cells are synchronized via calcium waves spreading throughout islets. Most interestingly, the functional connectivity patterns between beta cells exhibit small-world properties, suggesting that beta cells do not form a homogeneous geometric network but are connected in a functionally more efficient way. Presented results provide support for the existing knowledge of beta cell physiology from a network perspective and shed important new light on the functional organization of beta cell syncytia whose structural topology is probably not as trivial as believed so far. PMID:23468610
Size-Dictionary Interpolation for Robot's Adjustment.
Daneshmand, Morteza; Aabloo, Alvo; Anbarjafari, Gholamreza
2015-01-01
This paper describes the classification and size-dictionary interpolation of three-dimensional data obtained by a laser scanner for use in a realistic virtual fitting room, where automatic activation of the chosen mannequin robot is also considered, so that it mimics body shapes and sizes instantly while several mannequin robots of different genders and sizes are simultaneously connected to the same computer. The classification process consists of two layers, dealing with gender and size, respectively. The interpolation procedure determines which set of positions of the biologically inspired actuators activating the mannequin robots leads to the closest possible resemblance to the shape of the scanned person's body. It linearly maps the distances between subsequent size templates to the corresponding actuator position sets, and then calculates control measures that maintain the same distance proportions, where minimizing the Euclidean distance between the size-dictionary template vectors and the vector of desired body sizes provides the mathematical description. In this research work, the experimental results of implementing the proposed method on Fits.me's mannequin robots are visually illustrated, and the remaining steps toward completing the whole realistic online fitting package are explained.
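A rough sketch of the Euclidean nearest-template lookup and the linear distance-proportion mapping described above; all measurement vectors and actuator values are invented for illustration:

```python
import math

def nearest_template(body, templates):
    """Pick the size-dictionary template whose measurement vector is
    closest (Euclidean) to the scanned body's measurements."""
    def dist(name):
        return math.sqrt(sum((a - b) ** 2
                             for a, b in zip(body, templates[name])))
    return min(templates, key=dist)

def interpolate_actuators(body, t_lo, t_hi, act_lo, act_hi):
    """Linearly map the body's position between two neighbouring size
    templates onto the corresponding actuator position sets; a sketch of
    the distance-proportion idea, not the robots' actual control law."""
    d_lo = math.sqrt(sum((a - b) ** 2 for a, b in zip(body, t_lo)))
    d_hi = math.sqrt(sum((a - b) ** 2 for a, b in zip(body, t_hi)))
    w = d_lo / (d_lo + d_hi) if d_lo + d_hi else 0.0
    return [lo + w * (hi - lo) for lo, hi in zip(act_lo, act_hi)]
```

A body exactly midway between two templates would receive actuator positions midway between the two templates' position sets, preserving the distance proportions.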
NASA Astrophysics Data System (ADS)
Jiang, Yicheng; Cheng, Ping; Ou, Yangkui
2001-09-01
A new method for target classification with high-range-resolution radar is proposed. It uses neural learning to obtain invariant subclass features of training range profiles. A modified Euclidean metric based on the Box-Cox transformation is investigated to improve nearest-neighbor target classification. Classification experiments using real radar data from three different aircraft demonstrate that the classification error can be reduced by 8% when the method proposed in this paper is chosen instead of the conventional method. The results show that, by choosing an optimized metric, it is indeed possible to reduce the classification error without increasing the number of samples.
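One way a Box-Cox-modified Euclidean metric for nearest-neighbor classification could look; this is a sketch assuming a single per-feature transform parameter, not the paper's exact formulation:

```python
import math

def box_cox(x, lam):
    """Box-Cox transform of a positive value x with parameter lam."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def box_cox_distance(u, v, lam):
    """Euclidean distance computed after transforming every feature
    with Box-Cox, one realisation of a 'modified Euclidean metric'."""
    return math.sqrt(sum((box_cox(a, lam) - box_cox(b, lam)) ** 2
                         for a, b in zip(u, v)))

def nearest_neighbor(x, training, lam):
    """1-NN classification of a range profile under the modified metric;
    training is a list of (profile, label) pairs."""
    return min(training, key=lambda t: box_cox_distance(x, t[0], lam))[1]
```

With `lam = 1` the transform is affine and the metric ranks neighbours exactly as ordinary Euclidean distance does; tuning `lam` reshapes the feature space, which is where the claimed error reduction would come from.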
Traffic Management for Emergency Vehicle Priority Based on Visual Sensing
Nellore, Kapileswar; Hancke, Gerhard P.
2016-01-01
Vehicular traffic is endlessly increasing everywhere in the world and can cause terrible traffic congestion at intersections. Most traffic lights today feature a fixed green-light sequence, determined without taking the presence of emergency vehicles into account. Consequently, emergency vehicles such as ambulances, police cars, and fire engines can get stuck in traffic jams, and the resulting delays in reaching their destinations can lead to loss of property and valuable lives. This paper presents an approach to schedule emergency vehicles in traffic. The approach combines measurement of the distance between the emergency vehicle and an intersection using visual sensing methods, vehicle counting, and time-sensitive alert transmission within the sensor network. The distance between the emergency vehicle and the intersection is calculated for comparison using Euclidean, Manhattan, and Canberra distance techniques. The experimental results show that the Euclidean distance outperforms the other distance measurement techniques. Along with visual sensing techniques to collect emergency vehicle information, it is very important to have a Medium Access Control (MAC) protocol that delivers the emergency vehicle information to the Traffic Management Center (TMC) with low delay; only then can the emergency vehicle be served quickly and reach its destination in time. In this paper, we have also investigated the MAC layer in WSNs to prioritize emergency vehicle data and to reduce the transmission delay for emergency messages. We have modified the medium access procedure of standard IEEE 802.11p with the PE-MAC protocol, a new back-off selection and contention window adjustment scheme that achieves low broadcast delay for emergency messages. A VANET model for the UTMS is developed and simulated in NS-2. The performance of the standard IEEE 802.11p and the proposed PE-MAC is analysed in detail.
The NS-2 simulation results have shown that the PE-MAC outperforms the IEEE 802.11p in terms of average end-to-end delay, throughput and energy consumption. The performance evaluation results have proven that the proposed PE-MAC prioritizes the emergency vehicle data and delivers the emergency messages to the TMC with less delay compared to the IEEE 802.11p. The transmission delay of the proposed PE-MAC is also compared with the standard IEEE 802.15.4, and Enhanced Back-off Selection scheme for IEEE 802.15.4 protocol [EBSS, an existing protocol to ensure fast transmission of the detected events on the road towards the TMC] and the comparative results have proven the effectiveness of the PE-MAC over them. Furthermore, this research work will provide an insight into the design of an intelligent urban traffic management system for the effective management of emergency vehicles and will help to save lives and property. PMID:27834924
Surnames in Chile: a study of the population of Chile through isonymy.
Barrai, I; Rodriguez-Larralde, A; Dipierri, J; Alfaro, E; Acevedo, N; Mamolini, E; Sandri, M; Carrieri, A; Scapoli, C
2012-03-01
In Chile, the Hispanic dual surname system is used. To describe the isonymic structure of this country, the distribution of 16,277,255 surnames of 8,178,209 persons was studied in the 15 regions, the 54 provinces, and the 346 communes of the nation. The number of different surnames found was 72,667. The effective surname number (Fisher's α) for the entire country was 309.0; the average for regions was 240.8 ± 17.6, for provinces 209.2 ± 8.9, and for communes 178.7 ± 4.7. These values display a variation of inbreeding between administrative levels in the Chilean population, which can be attributed to the 'Prefecture effect' of Nei and Imaizumi. Matrices of isonymic distances between units within administrative levels were tested for correlation with geographic distance. The correlations were highest for provinces (r = 0.630 ± 0.019 for Euclidean distance) and lowest for communes (r = 0.366 ± 0.009 for Lasker's). The geographical distribution of the first three dimensions of the Euclidean distance matrix suggests that population diffusion may have taken place from the north of the country toward the center and south. The prevalence of European plus European-Amerindian (95.4%) over Amerindian ethnicity (4.6%, CIA World Factbook) is compatible with diffusion of Caucasian groups over a low-density area populated by indigenous groups. The significant excess of maternal over paternal indigenous surnames indicates some asymmetric mating between non-Amerindian and Amerindian Chileans. The available studies of Y-markers and mt-markers are in agreement with this asymmetry. In the present work, we investigate the Chilean population with the aim of detecting its structure through the study of isonymy (Crow and Mange, 1965) in the three administrative levels of the nation, namely 15 regions, 54 provinces, and 346 communes. Copyright © 2012 Wiley Periodicals, Inc.
Metrics in Keplerian orbits quotient spaces
NASA Astrophysics Data System (ADS)
Milanov, Danila V.
2018-03-01
Quotient spaces of Keplerian orbits are important instruments for the modelling of orbit samples of celestial bodies on a large time span. We suppose that variations of the orbital eccentricities, inclinations and semi-major axes remain sufficiently small, while arbitrary perturbations are allowed for the arguments of pericentres or longitudes of the nodes, or both. The distance between orbits or their images in quotient spaces serves as a numerical criterion for such problems of Celestial Mechanics as search for common origin of meteoroid streams, comets, and asteroids, asteroid families identification, and others. In this paper, we consider quotient sets of the non-rectilinear Keplerian orbits space H. Their elements are identified irrespective of the values of pericentre arguments or node longitudes. We prove that distance functions on the quotient sets, introduced in Kholshevnikov et al. (Mon Not R Astron Soc 462:2275-2283, 2016), satisfy metric space axioms and discuss the theoretical and practical importance of this result. Isometric embeddings of the quotient spaces into R^n, and into a space of compact subsets of H with the Hausdorff metric, are constructed. The Euclidean representations of the orbit spaces find their applications in a problem of orbit averaging and in computational algorithms specific to Euclidean space. We also explore completions of H and its quotient spaces with respect to the corresponding metrics and establish a relation between elements of the extended spaces and rectilinear trajectories. The distance between an orbit and the subsets of elliptic and hyperbolic orbits is calculated. This quantity provides an upper bound for the metric value in a problem of close orbits identification. Finally, the invariance of the equivalence relations in H under coordinate change is discussed.
Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios
2017-02-01
Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.
Relationship of some upland rice genotype after gamma irradiation
NASA Astrophysics Data System (ADS)
Suliartini, N. W. S.; Wijayanto, T.; Madiki, A.; Boer, D.; Muhidin; Juniawan
2018-02-01
The objective of the research was to group local upland rice genotypes after treatment with gamma irradiation. The research materials were upland rice genotypes resulting from second-generation mutation and two parent cultivars: Pae Loilo (K3D0) and Pae Pongasi (K2D0). The research was conducted at the Indonesian Sweetener and Fiber Crops Research Institute, Malang Regency, and used the augmented design method. Research data were analyzed with the R program. Eight hundred and seventy-one genotypes were selected, the selection criterion being a yield exceeding the parental average by 1.5 standard deviations. From this selection, eighty genotypes were subjected to cluster analysis. Nine observed variables were used to develop a cluster dendrogram using the average-linkage method. Genetic distance was measured by Euclidean distance. The cluster dendrogram showed that the tested genotypes were divided into eight groups. Groups 1, 2, 7, and 8 each had one genotype, groups 3 and 6 each had two genotypes, group 4 had 25 genotypes, and group 5 had 51 genotypes. Check genotypes formed a separate group. Group 6 had the highest yield per plant at 126.11 grams, followed by groups 5 and 4 at 97.63 and 94.08 grams, respectively.
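Average-linkage hierarchical clustering with Euclidean distance, as used for the dendrogram above, can be sketched as follows; this is a naive O(n^3) illustration, not the R code used in the study:

```python
import math

def euclid(u, v):
    """Euclidean distance between two trait vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def average_link_cluster(points, k):
    """Agglomerative clustering with average linkage: repeatedly merge
    the two clusters with the smallest mean pairwise Euclidean distance
    until k clusters remain. Returns lists of point indices."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = (sum(euclid(points[i], points[j])
                         for i in clusters[a] for j in clusters[b])
                     / (len(clusters[a]) * len(clusters[b])))
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters
```

Recording the merge order and heights, rather than stopping at k clusters, yields the dendrogram from which groups are read off.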
Mallick, Pijush; Sikdar, Samir Ranjan
2014-08-01
Nine inter-generic somatic hybrids, named pfle, were produced through PEG-mediated protoplast fusion between Pleurotus florida and Lentinula edodes using a double selection method. Hybridity of the newly developed strains was established on the basis of colony morphology, mycelial growth, hyphal traits, fruit-body productivity, and inter simple sequence repeat (ISSR) marker profiling. The hybrid population was assessed on different phenotypic variables by one-way analysis of variance. Principal component matrices were analyzed for the six phenotypic variables in a scatter plot, showing maximum positive correlation between variables for all strains examined. Six ISSR primers generated 66 reproducible fragments with 98.48 % polymorphism. The dendrogram, created using the unweighted pair-group method with arithmetic averages (UPGMA) and Euclidean distance, exhibited three major groups among the parents and pfle hybrids. Although the P. florida parent remained in one group, it showed different degrees of genetic distance from all the hybrid lines belonging to the other two groups, while L. edodes was most distantly related to all the hybrid lines. An L. edodes-specific sequence-rich ISSR amplicon was recorded in all the hybrid lines and in L. edodes, but not in P. florida. All the fruit-body-generating pfle hybrid lines could produce basidiocarps on paddy straw in a sub-tropical climate and showed phenotypic resemblance to the P. florida parent.
A Novel Method for Reconstructing Broken Contour Lines Extracted from Scanned Topographic Maps
NASA Astrophysics Data System (ADS)
Wang, Feng; Liu, Pingzhi; Yang, Yun; Wei, Haiping; An, Xiaoya
2018-05-01
It is known that after segmentation and morphological operations on scanned topographic maps, gaps occur in contour lines. It is also well known that filling these gaps and reconstructing contour lines with high accuracy and completeness is not an easy problem. In this paper, a novel method is proposed for automatically or semi-automatically filling gaps and reconstructing broken contour lines in binary images. After the reconstruction procedure is introduced, the key step of automatically matching and reconnecting end points is discussed in depth, and several key algorithms and mechanisms are presented and realized: multiple incremental backward tracing to obtain the weighted average direction angle of end points, a maximum-constraint-angle control mechanism based on multiple gradient ranks, a combination of weighted Euclidean distance and deviation angle to determine the optimum matching end point, bidirectional parabola control, etc. Finally, experimental comparisons on typical samples between the proposed method and another representative method indicate that the former achieves higher accuracy and completeness, as well as better stability and applicability.
A data-based conservation planning tool for Florida panthers
Murrow, Jennifer L.; Thatcher, Cindy A.; Van Manen, Frank T.; Clark, Joseph D.
2013-01-01
Habitat loss and fragmentation are the greatest threats to the endangered Florida panther (Puma concolor coryi). We developed a data-based habitat model and user-friendly interface so that land managers can objectively evaluate Florida panther habitat. We used a geographic information system (GIS) and the Mahalanobis distance statistic (D2) to develop a model based on broad-scale landscape characteristics associated with panther home ranges. Variables in our model were Euclidean distance to natural land cover, road density, distance to major roads, human density, amount of natural land cover, amount of semi-natural land cover, amount of permanent or semi-permanent flooded area–open water, and a cost–distance variable. We then developed a Florida Panther Habitat Estimator tool, which automates and replicates the GIS processes used to apply the statistical habitat model. The estimator can be used by persons with moderate GIS skills to quantify effects of land-use changes on panther habitat at local and landscape scales. Example applications of the tool are presented.
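The Mahalanobis distance statistic (D²) at the core of the habitat model can be sketched for a two-variable case; the actual model uses the full set of landscape variables listed above, and the numbers here are purely illustrative.

```python
def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance D^2 for a 2-variable example.

    x    -- observed (v1, v2) at a pixel
    mean -- mean (v1, v2) over known panther home ranges
    cov  -- 2x2 covariance matrix of the variables
    Small D^2 means the pixel's landscape profile resembles the
    average profile of occupied habitat.
    """
    dx = [x[0] - mean[0], x[1] - mean[1]]
    a, b, c, d = cov[0][0], cov[0][1], cov[1][0], cov[1][1]
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # 2x2 inverse
    # D^2 = dx^T * cov^{-1} * dx
    return (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
            + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
```

With an identity covariance the statistic reduces to squared Euclidean distance, which shows how it generalizes distance by accounting for variable correlations and scales.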
Batchelder, Kendra A; Tanenbaum, Aaron B; Albert, Seth; Guimond, Lyne; Kestener, Pierre; Arneodo, Alain; Khalil, Andre
2014-01-01
The 2D Wavelet-Transform Modulus Maxima (WTMM) method was used to detect microcalcifications (MC) in human breast tissue seen in mammograms and to characterize the fractal geometry of benign and malignant MC clusters. This was done in the context of a preliminary analysis of a small dataset, via a novel way to partition the wavelet-transform space-scale skeleton. For the first time, the estimated 3D fractal structure of a breast lesion was inferred by pairing the information from two separate 2D projected mammographic views of the same breast, i.e. the cranial-caudal (CC) and mediolateral-oblique (MLO) views. As a novelty, we define the "CC-MLO fractal dimension plot", where a "fractal zone" and "Euclidean zones" (non-fractal) are defined. 118 images (59 cases, 25 malignant and 34 benign) obtained from a digital databank of mammograms with known radiologist diagnostics were analyzed to determine which cases would be plotted in the fractal zone and which cases would fall in the Euclidean zones. 92% of malignant breast lesions studied (23 out of 25 cases) were in the fractal zone while 88% of the benign lesions were in the Euclidean zones (30 out of 34 cases). Furthermore, a Bayesian statistical analysis shows that, with 95% credibility, the probability that fractal breast lesions are malignant is between 74% and 98%. Alternatively, with 95% credibility, the probability that Euclidean breast lesions are benign is between 76% and 96%. These results support the notion that the fractal structure of malignant tumors is more likely to be associated with an invasive behavior into the surrounding tissue compared to the less invasive, Euclidean structure of benign tumors. Finally, based on indirect 3D reconstructions from the 2D views, we conjecture that all breast tumors considered in this study, benign and malignant, fractal or Euclidean, restrict their growth to 2-dimensional manifolds within the breast tissue.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briceno, Raul A.; Hansen, Maxwell T.; Monahan, Christopher J.
Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Lastly, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.
Identification of hydrometeor mixtures in polarimetric radar measurements and their linear de-mixing
NASA Astrophysics Data System (ADS)
Besic, Nikola; Ventura, Jordi Figueras i.; Grazioli, Jacopo; Gabella, Marco; Germann, Urs; Berne, Alexis
2017-04-01
The issue of hydrometeor mixtures affects radar sampling volumes without a clear dominant hydrometeor type. Containing a number of different hydrometeor types that contribute significantly to the polarimetric variables, these volumes are likely to occur in the vicinity of the melting layer and, mainly, at large distances from a given radar. Motivated by potential benefits for both quantitative and qualitative applications of dual-pol radar, we propose a method for the identification of hydrometeor mixtures and their subsequent linear de-mixing. This method is intrinsically related to our recently proposed semi-supervised approach for hydrometeor classification. The classification approach [1] labels radar sampling volumes using as a criterion the Euclidean distance to five-dimensional centroids depicting nine hydrometeor classes. The positions of the centroids in the space formed by four radar moments and one external parameter (phase indicator) are derived through k-medoids clustering, applied to a selected representative set of radar observations and coupled with statistical testing that introduces the assumed microphysical properties of the different hydrometeor types. Aside from a hydrometeor type label, each radar sampling volume is characterized by an entropy estimate, indicating the uncertainty of the classification. Here, we revisit the concept of entropy presented in [1] in order to emphasize its presumed potential for the identification of hydrometeor mixtures. The calculation of entropy is based on the estimate of the probability (p_i) that the observation corresponds to hydrometeor type i (i = 1, ..., 9). The probability is derived from the Euclidean distance (d_i) of the observation to the centroid characterizing hydrometeor type i. The parametrization of the d -> p transform is conducted in a controlled environment, using synthetic polarimetric radar datasets.
It ensures balanced entropy values: low for pure volumes, and high for different possible combinations of mixed hydrometeors. The parametrized entropy is then applied to real polarimetric C- and X-band radar datasets, where we demonstrate the potential of linear de-mixing using a simplex formed by a set of pre-defined centroids in the five-dimensional space. As the main outcome, the proposed approach provides plausible proportions of the different hydrometeors contained in a given radar sampling volume. [1] Besic, N., Figueras i Ventura, J., Grazioli, J., Gabella, M., Germann, U., and Berne, A.: Hydrometeor classification through statistical clustering of polarimetric radar measurements: a semi-supervised approach, Atmos. Meas. Tech., 9, 4425-4445, doi:10.5194/amt-9-4425-2016, 2016.
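The d -> p transform and entropy described above can be sketched as follows. The paper parametrizes its own transform against synthetic data; the exponential kernel here is only an illustrative stand-in showing the intended behavior (pure volumes give low entropy, ambiguous ones give high entropy).

```python
import math

def memberships(distances, beta=1.0):
    """Turn Euclidean distances to the class centroids into
    pseudo-probabilities p_i.  An exponential kernel exp(-beta*d) is
    assumed here for illustration, normalized to sum to one."""
    w = [math.exp(-beta * d) for d in distances]
    s = sum(w)
    return [x / s for x in w]

def entropy(p):
    """Shannon entropy of the membership vector, normalized to [0, 1]
    by log(n) so that a uniform mixture scores exactly 1."""
    n = len(p)
    h = -sum(x * math.log(x) for x in p if x > 0)
    return h / math.log(n)
```

An observation sitting on one centroid (one small distance, the rest large) yields entropy near 0; equal distances to all centroids yield entropy 1, flagging a likely mixture.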
Ho, Jeff C; Russel, Kory C; Davis, Jennifer
2014-03-01
Support is growing for the incorporation of fetching time and/or distance considerations in the definition of access to improved water supply used for global monitoring. Current efforts typically rely on self-reported distance and/or travel time data that have been shown to be unreliable. To date, however, there has been no head-to-head comparison of such indicators with other possible distance/time metrics. This study provides such a comparison. We examine the association between both straight-line distance and self-reported one-way travel time with measured route distances to water sources for 1,103 households in Nampula province, Mozambique. We find straight-line, or Euclidean, distance to be a good proxy for route distance (R² = 0.98), while self-reported travel time is a poor proxy (R² = 0.12). We also apply a variety of time- and distance-based indicators proposed in the literature to our sample data, finding that the share of households classified as having versus lacking access would differ by more than 70 percentage points depending on the particular indicator employed. This work highlights the importance of the ongoing debate regarding valid, reliable, and feasible strategies for monitoring progress in the provision of improved water supply services.
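The R² comparison underlying the proxy claim is a standard coefficient of determination for a simple linear fit; a minimal sketch (with made-up distances, not the survey data):

```python
def r_squared(x, y):
    """Coefficient of determination for the least-squares line y ~ a + b*x,
    computed as the squared Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Hypothetical households: straight-line vs measured route distance (m).
straight = [120, 250, 400, 610]
route    = [130, 270, 430, 650]   # tracks straight-line closely -> high R^2
```

A high R² between straight-line and route distance, as found in the study, means the cheap GIS-computable metric can substitute for expensive route measurement.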
Representational constraints on children's suggestibility.
Ceci, Stephen J; Papierno, Paul B; Kulkofsky, Sarah
2007-06-01
In a multistage experiment, twelve 4- and 9-year-old children participated in a triad rating task. Their ratings were mapped with multidimensional scaling, from which Euclidean distances were computed to operationalize semantic distance between items in target pairs. These children and age-mates then participated in an experiment that employed these target pairs in a story, which was followed by a misinformation manipulation. Analyses linked individual and developmental differences in suggestibility to children's representations of the target items. Semantic proximity was a strong predictor of differences in suggestibility: the closer a suggested distractor was to the original item's representation, the greater was the distractor's suggestive influence. The triad participants' semantic proximity subsequently served as the basis for correctly predicting memory performance in the larger group. Semantic proximity enabled a priori counterintuitive predictions of reverse age-related trends to be confirmed whenever the distance between representations of items in a target pair was greater for younger than for older children.
Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing
Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud
2015-01-01
This paper puts forward a new automatic clustering algorithm based on multi-objective particle swarm optimization and simulated annealing, "MOPSOSA". The proposed algorithm is capable of automatic clustering, which is appropriate for partitioning datasets into a suitable number of clusters. MOPSOSA combines features of multi-objective particle swarm optimization (PSO) and multi-objective simulated annealing (MOSA). Three cluster validity indices were optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset. The first cluster validity index is based on the Euclidean distance, the second on the point symmetry distance, and the third on short distance. A number of algorithms were compared with the MOPSOSA algorithm in resolving clustering problems by determining the actual number of clusters and optimal clustering. Computational experiments were carried out on fourteen artificial and five real-life datasets. PMID:26132309
NASA Astrophysics Data System (ADS)
Bracken, Paul
2007-05-01
The generalized Weierstrass (GW) system is introduced and its correspondence with the associated two-dimensional nonlinear sigma model is reviewed. The method of symmetry reduction is systematically applied to derive several classes of invariant solutions for the GW system. The solutions can be used to induce constant mean curvature surfaces in Euclidean three space. Some properties of the system for the case of nonconstant mean curvature are introduced as well.
ΛCDM Cosmology for Astronomers
NASA Astrophysics Data System (ADS)
Condon, J. J.; Matthews, A. M.
2018-07-01
The homogeneous, isotropic, and flat ΛCDM universe favored by observations of the cosmic microwave background can be described using only Euclidean geometry, locally correct Newtonian mechanics, and the basic postulates of special and general relativity. We present simple derivations of the most useful equations connecting astronomical observables (redshift, flux density, angular diameter, brightness, local space density, ...) with the corresponding intrinsic properties of distant sources (lookback time, distance, spectral luminosity, linear size, specific intensity, source counts, ...). We also present an analytic equation for lookback time that is accurate within 0.1% for all redshifts z. The exact equation for comoving distance is an elliptic integral that must be evaluated numerically, but we found a simple approximation with errors <0.2% for all redshifts up to z ≈ 50.
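The comoving-distance integral mentioned above is straightforward to evaluate numerically for a flat ΛCDM universe; a minimal sketch using simple trapezoidal integration follows. The parameter values are illustrative defaults, not the paper's fit, and the paper's own approximation formula is not reproduced here.

```python
import math

def comoving_distance(z, omega_m=0.3, omega_l=0.7, h0=70.0, steps=10000):
    """Comoving distance in Mpc for a flat LCDM universe:
    D_C = (c/H0) * integral_0^z dz' / E(z'),
    with E(z) = sqrt(Omega_m (1+z)^3 + Omega_Lambda)."""
    c = 299792.458  # speed of light, km/s

    def inv_e(zp):
        return 1.0 / math.sqrt(omega_m * (1 + zp) ** 3 + omega_l)

    dz = z / steps
    total = 0.5 * (inv_e(0.0) + inv_e(z))   # trapezoid end points
    for i in range(1, steps):
        total += inv_e(i * dz)
    return (c / h0) * total * dz
```

For these parameters the result at z = 1 is roughly 3300 Mpc, which is why a cheap closed-form approximation with sub-0.2% error, as the paper provides, is convenient for everyday use.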
A three-dimensional evaluation of human facial asymmetry.
Ferrario, V F; Sforza, C; Miani, A; Serrao, G
1995-01-01
Soft-tissue facial asymmetry was studied in a group of 80 young healthy white Caucasian adults (40 men, 40 women) with no craniofacial, dental or mandibular disorders. For each subject, the 3-dimensional coordinates of 16 standardised soft-tissue facial landmarks (trichion, nasion, pronasale, subnasale, B point, pogonion, eye lateral canthi, nasal alae, labial commissures, tragi, gonia) were measured by infrared photogrammetry with an automated instrument. The form of the right and left hemifaces was assessed by calculating all the possible linear distances between pairs of landmarks within each side. Side differences were tested using Euclidean distance matrix analysis. The mean faces of both groups were significantly asymmetric, i.e. the 2 sides of the face showed significant differences in shape, but no differences in size. PMID:7649806
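Euclidean distance matrix analysis compares two landmark configurations through ratios of corresponding inter-landmark distances, which makes it coordinate-system-free. A minimal sketch with hypothetical 3-D landmarks (requires Python 3.8+ for `math.dist`):

```python
import math

def pairwise(landmarks):
    """All inter-landmark Euclidean distances of a 3-D configuration,
    in a fixed (i < j) order."""
    n = len(landmarks)
    return [math.dist(landmarks[i], landmarks[j])
            for i in range(n) for j in range(i + 1, n)]

def form_difference(side_a, side_b):
    """EDMA-style form difference matrix (flattened): ratios of
    corresponding distances between two configurations.  Ratios all
    near 1 mean the two hemifaces share the same form; a constant
    ratio != 1 means same shape but different size."""
    return [a / b for a, b in zip(pairwise(side_a), pairwise(side_b))]
```

The study's finding (shape differences but not size differences between hemifaces) corresponds to ratios that deviate from 1 non-uniformly rather than by a common scale factor.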
Advertisement call and genetic structure conservatism: good news for an endangered Neotropical frog.
Forti, Lucas R; Costa, William P; Martins, Lucas B; Nunes-de-Almeida, Carlos H L; Toledo, Luís Felipe
2016-01-01
Many amphibian species are negatively affected by habitat change due to anthropogenic activities. Populations distributed over modified landscapes may be subject to local extinction or may be relegated to the remaining patches of available habitat, which are likely isolated and possibly degraded. Isolation without gene flow could lead to variability in phenotypic traits owing to differences in local selective pressures such as environmental structure, microclimate, or site-specific species assemblages. Here, we tested the microevolution hypothesis by evaluating the acoustic parameters of 349 advertisement calls from 15 males from six populations of the endangered amphibian species Proceratophrys moratoi. In addition, we analyzed the genetic distances among populations and the genetic diversity with a haplotype network analysis. We performed cluster analysis on acoustic data based on the Bray-Curtis index of similarity, using the UPGMA method. We correlated acoustic dissimilarities (calculated by Euclidean distance) with geographical and genetic distances among populations. Spectral traits of the advertisement call of P. moratoi presented lower coefficients of variation than did temporal traits, both within and among males. Cluster analyses placed individuals without congruence in population or geographical distance, but recovered the species topology in relation to sister species. The genetic distance among populations was low; it did not exceed 0.4% for the most distant populations, and was not correlated with acoustic distance. Both acoustic features and genetic sequences are highly conserved, suggesting that populations could be connected by recent migrations, and that they are subject to stabilizing selective forces. Although further studies are required, these findings add to a growing body of literature suggesting that this species would be a good candidate for a reintroduction program without negative effects on communication or genetic impact.
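The Bray-Curtis index used for the acoustic cluster analysis has a simple closed form; a minimal sketch (the call-parameter vectors below are hypothetical):

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two non-negative feature
    vectors (e.g. acoustic parameter profiles):
    sum(|u_i - v_i|) / sum(u_i + v_i).
    0 = identical profiles, 1 = completely disjoint profiles."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den
```

Unlike Euclidean distance, Bray-Curtis is bounded in [0, 1] and driven by relative rather than absolute magnitudes, which is why it is popular for ecological and bioacoustic profiles.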
Mapping the Similarities of Spectra: Global and Locally-biased Approaches to SDSS Galaxies
NASA Astrophysics Data System (ADS)
Lawlor, David; Budavári, Tamás; Mahoney, Michael W.
2016-12-01
We present a novel approach to studying the diversity of galaxies. It is based on a spectral graph technique, that of locally-biased semi-supervised eigenvectors. Our method introduces new coordinates that summarize an entire spectrum, similar to but going well beyond the widely used Principal Component Analysis (PCA). Unlike PCA, however, this technique does not assume that the Euclidean distance between galaxy spectra is a good global measure of similarity. Instead, we relax that condition to only the most similar spectra, and we show that doing so yields more reliable results for many astronomical questions of interest. The global variant of our approach can finely identify numerous astronomical phenomena of interest. The locally-biased variants of our basic approach enable us to explore subtle trends around a set of chosen objects. The power of the method is demonstrated on the Sloan Digital Sky Survey Main Galaxy Sample, by illustrating that the derived spectral coordinates carry an unprecedented amount of information.
Spot detection and image segmentation in DNA microarray data.
Qin, Li; Rueda, Luis; Ali, Adnan; Ngom, Alioune
2005-01-01
Following the invention of microarrays in 1994, the development and applications of this technology have grown exponentially. The numerous applications of microarray technology include clinical diagnosis and treatment, drug design and discovery, tumour detection, and environmental health research. One of the key issues in the experimental approaches utilising microarrays is to extract quantitative information from the spots, which represent genes in a given experiment. For this process, the initial stages are important and they influence future steps in the analysis. Identifying the spots and separating the background from the foreground is a fundamental problem in DNA microarray data analysis. In this review, we present an overview of state-of-the-art methods for microarray image segmentation. We discuss the foundations of the circle-shaped approach, adaptive shape segmentation, histogram-based methods and the recently introduced clustering-based techniques. We analytically show that clustering-based techniques are equivalent to the one-dimensional, standard k-means clustering algorithm that utilises the Euclidean distance.
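The equivalence noted above reduces clustering-based spot segmentation to one-dimensional k-means on pixel intensities. A minimal sketch of Lloyd's algorithm for the two-cluster (background vs. spot) case, with made-up intensity values:

```python
def kmeans_1d(values, iters=50):
    """Plain 1-D k-means (Lloyd's algorithm, k = 2) with Euclidean
    distance, separating background from foreground intensities.
    Centers are initialized at the min and max intensity."""
    centers = [min(values), max(values)]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)),
                          key=lambda c: abs(v - centers[c]))
            groups[nearest].append(v)
        # Recompute each center as its group's mean (keep old if empty).
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# Hypothetical pixel intensities: dim background, bright spot.
centers = kmeans_1d([10, 12, 11, 200, 210, 190])
```

In 1-D the resulting partition is equivalent to a single intensity threshold halfway between the two final centers.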
CLUSTERING OF INTERICTAL SPIKES BY DYNAMIC TIME WARPING AND AFFINITY PROPAGATION
Thomas, John; Jin, Jing; Dauwels, Justin; Cash, Sydney S.; Westover, M. Brandon
2018-01-01
Epilepsy is often associated with the presence of spikes in electroencephalograms (EEGs). The spike waveforms vary vastly among epilepsy patients, and also for the same patient across time. In order to develop semi-automated and automated methods for detecting spikes, it is crucial to obtain a better understanding of the various spike shapes. In this paper, we develop several approaches to extract exemplars of spikes. We generate spike exemplars by applying clustering algorithms to a database of spikes from 12 patients. As similarity measures for clustering, we consider the Euclidean distance and Dynamic Time Warping (DTW). We assess two clustering algorithms, namely, K-means clustering and affinity propagation. The clustering methods are compared based on the mean squared error, and the similarity measures are assessed based on the number of generated spike clusters. Affinity propagation with DTW is shown to be the best combination for clustering epileptic spikes, since it generates fewer spike templates and does not require pre-specifying the number of spike templates. PMID:29527130
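Unlike Euclidean distance, DTW aligns waveforms that are shifted or stretched in time before accumulating cost, which suits spikes whose timing varies. A minimal dynamic-programming sketch with absolute difference as the local cost (a common minimal form; spike pipelines typically add windowing constraints):

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D waveforms.
    d[i][j] = cost of aligning a[:i] with b[:j]; each step may
    repeat a sample in either sequence (insertion/deletion/match)."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # repeat b[j-1]
                                 d[i][j - 1],      # repeat a[i-1]
                                 d[i - 1][j - 1])  # step both
    return d[n][m]
```

A time-shifted copy of a spike has DTW distance 0 but a large Euclidean distance, which is the property that lets DTW-based clustering merge same-shape spikes into fewer templates.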
Tan, Shan; Zhang, Hao; Zhang, Yongxue; Chen, Wengen; D’Souza, Warren D.; Lu, Wei
2013-01-01
Purpose: A family of fluorine-18 (18F)-fluorodeoxyglucose (18F-FDG) positron-emission tomography (PET) features based on histogram distances is proposed for predicting pathologic tumor response to neoadjuvant chemoradiotherapy (CRT). These features describe the longitudinal change of FDG uptake distribution within a tumor. Methods: Twenty patients with esophageal cancer treated with CRT plus surgery were included in this study. All patients underwent PET/CT scans before (pre-) and after (post-) CRT. The two scans were first rigidly registered, and the original tumor sites were then manually delineated on the pre-PET/CT by an experienced nuclear medicine physician. Two histograms representing the FDG uptake distribution were extracted from the pre- and the registered post-PET images, respectively, both within the delineated tumor. Distances between the two histograms quantify longitudinal changes in FDG uptake distribution resulting from CRT, and thus are potential predictors of tumor response. A total of 19 histogram distances were examined and compared to both traditional PET response measures and Haralick texture features. Receiver operating characteristic analyses and Mann-Whitney U test were performed to assess their predictive ability. Results: Among all tested histogram distances, seven bin-to-bin and seven crossbin distances outperformed traditional PET response measures using maximum standardized uptake value (AUC = 0.70) or total lesion glycolysis (AUC = 0.80). The seven bin-to-bin distances were: L2 distance (AUC = 0.84), χ2 distance (AUC = 0.83), intersection distance (AUC = 0.82), cosine distance (AUC = 0.83), squared Euclidean distance (AUC = 0.83), L1 distance (AUC = 0.82), and Jeffrey distance (AUC = 0.82). 
The seven crossbin distances were: quadratic-chi distance (AUC = 0.89), earth mover distance (AUC = 0.86), fast earth mover distance (AUC = 0.86), diffusion distance (AUC = 0.88), Kolmogorov-Smirnov distance (AUC = 0.88), quadratic form distance (AUC = 0.87), and match distance (AUC = 0.84). These crossbin histogram distance features showed slightly higher prediction accuracy than texture features on post-PET images. Conclusions: The results suggest that longitudinal patterns in 18F-FDG uptake characterized using histogram distances provide useful information for predicting the pathologic response of esophageal cancer to CRT. PMID:24089897
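Several of the bin-to-bin distances listed above have one-line definitions over normalized histograms; a minimal sketch (the χ² form assumed here is the common symmetric variant, which may differ from the paper's exact definition):

```python
import math

def l1(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def l2(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def sq_euclid(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def intersection(p, q):
    # Distance form: 1 minus the sum of bin-wise minima
    # (for histograms normalized to sum to 1).
    return 1.0 - sum(min(a, b) for a, b in zip(p, q))

def chi2(p, q):
    # Symmetric chi-squared variant; bins with a + b = 0 contribute 0.
    return sum((a - b) ** 2 / (a + b) for a, b in zip(p, q) if a + b > 0)
```

Cross-bin distances such as the earth mover distance additionally account for how far mass must move between bins, which is why they can capture shifts of the FDG uptake distribution that bin-to-bin measures miss.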
Constructing financial network based on PMFG and threshold method
NASA Astrophysics Data System (ADS)
Nie, Chun-Xiao; Song, Fu-Tie
2018-04-01
Based on planar maximally filtered graph (PMFG) and threshold method, we introduced a correlation-based network named PMFG-based threshold network (PTN). We studied the community structure of PTN and applied ISOMAP algorithm to represent PTN in low-dimensional Euclidean space. The results show that the community corresponds well to the cluster in the Euclidean space. Further, we studied the dynamics of the community structure and constructed the normalized mutual information (NMI) matrix. Based on the real data in the market, we found that the volatility of the market can lead to dramatic changes in the community structure, and the structure is more stable during the financial crisis.
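The normalized mutual information used to compare community structures across time has a compact definition; a minimal sketch over two partitions of the same node set (toy labels, not market data):

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information between two partitions of the
    same nodes, with sqrt(H(A)*H(B)) normalization (one common choice).
    1 = identical community structure, 0 = independent partitions."""
    n = len(labels_a)
    pa, pb = Counter(labels_a), Counter(labels_b)
    pab = Counter(zip(labels_a, labels_b))
    mi = sum((c / n) * math.log(c * n / (pa[a] * pb[b]))
             for (a, b), c in pab.items())
    ha = -sum((c / n) * math.log(c / n) for c in pa.values())
    hb = -sum((c / n) * math.log(c / n) for c in pb.values())
    return mi / math.sqrt(ha * hb)
```

Computing NMI between the community partitions of consecutive time windows, as in the abstract's NMI matrix, makes abrupt drops (e.g. around a crisis) directly visible.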
Euclidean scalar field theory in the bilocal approximation
NASA Astrophysics Data System (ADS)
Nagy, S.; Polonyi, J.; Steib, I.
2018-04-01
The blocking step of the renormalization group method is usually carried out by restricting it to fluctuations and to a local blocked action. The tree-level, bilocal saddle-point contribution to the blocking, defined by the infinitesimal decrease of the sharp cutoff in momentum space, is followed within the three-dimensional Euclidean ϕ^6 model in this work. The phase structure is changed, new phases and relevant operators are found, and certain universality classes are restricted by the bilocal saddle point.
Jia, Rui; Monk, Paul; Murray, David; Noble, J Alison; Mellon, Stephen
2017-09-06
Optoelectronic motion capture systems are widely employed to measure the movement of human joints. However, there can be a significant discrepancy between the data obtained by a motion capture system (MCS) and the actual movement of the underlying bony structures, which is attributed to soft tissue artefact. In this paper, a computer-aided tracking and motion analysis with ultrasound (CAT & MAUS) system with an augmented globally optimal registration algorithm is presented to dynamically track the underlying bony structure during movement. The augmented registration part of CAT & MAUS was validated with a system accuracy of 80%. The Euclidean distance between the marker-based bony landmark and the bony landmark tracked by CAT & MAUS was calculated to quantify the measurement error of an MCS caused by soft tissue artefact during movement. The average Euclidean distance between the target bony landmark as measured by the CAT & MAUS system and by the MCS alone varied from 8.32 mm to 16.87 mm in gait, indicating the discrepancy between the MCS-measured bony landmark and the actual underlying bony landmark. Moreover, Procrustes analysis was applied to demonstrate that CAT & MAUS reduces the deformation of the body segment shape modeled by markers during motion. The augmented CAT & MAUS system shows its potential to dynamically detect and locate actual underlying bony landmarks, which reduces the MCS measurement error caused by soft tissue artefact during movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
Effect of planning for connectivity on linear reserve networks.
Lentini, Pia E; Gibbons, Philip; Carwardine, Josie; Fischer, Joern; Drielsma, Michael; Martin, Tara G
2013-08-01
Although the concept of connectivity is decades old, it remains poorly understood and defined, and some argue that habitat quality and area should take precedence in conservation planning instead. However, fragmented landscapes are often characterized by linear features that are inherently connected, such as streams and hedgerows. For these, both representation and connectivity targets may be met with little effect on the cost, area, or quality of the reserve network. We assessed how connectivity approaches affect planning outcomes for linear habitat networks by using the stock-route network of Australia as a case study. With the objective of representing vegetation communities across the network at a minimal cost, we ran scenarios with a range of representation targets (10%, 30%, 50%, and 70%) and used 3 approaches to account for connectivity (boundary length modifier, Euclidean distance, and landscape-value [LV]). We found that decisions regarding the target and connectivity approach used affected the spatial allocation of reserve systems. At targets ≥50%, networks designed with the Euclidean distance and LV approaches consisted of a greater number of small reserves. Hence, by maximizing both representation and connectivity, these networks compromised on larger contiguous areas. However, targets this high are rarely used in real-world conservation planning. Approaches for incorporating connectivity into the planning of linear reserve networks that account for both the spatial arrangement of reserves and the characteristics of the intervening matrix highlight important sections that link the landscape and that may otherwise be overlooked. © 2013 Society for Conservation Biology.
Tackling higher derivative ghosts with the Euclidean path integral
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fontanini, Michele; Department of Physics, Syracuse University, Syracuse, New York 13244; Trodden, Mark
2011-05-15
An alternative to the effective field theory approach to treat ghosts in higher derivative theories is to attempt to integrate them out via the Euclidean path integral formalism. It has been suggested that this method could provide a consistent framework within which we might tolerate the ghost degrees of freedom that plague, among other theories, the higher derivative gravity models that have been proposed to explain cosmic acceleration. We consider the extension of this idea to treating a class of terms with order six derivatives, and find that for a general term the Euclidean path integral approach works in the most trivial background, Minkowski. Moreover, we see that even in a de Sitter background, despite some difficulties, it is possible to define a probability distribution for tensorial perturbations of the metric.
Zhang, Li; Qian, Liqiang; Ding, Chuntao; Zhou, Weida; Li, Fanzhang
2015-09-01
The family of discriminant neighborhood embedding (DNE) methods comprises typical graph-based methods for dimension reduction, and has been successfully applied to face recognition. This paper proposes a new variant of DNE, called similarity-balanced discriminant neighborhood embedding (SBDNE), and applies it to cancer classification using gene expression data. By introducing a novel similarity function, SBDNE treats pairs of data points from the same class and from different classes differently. The homogeneous and heterogeneous neighbors are selected according to the new similarity function instead of the Euclidean distance. SBDNE constructs two adjacency graphs, a between-class adjacency graph and a within-class adjacency graph, using the new similarity function. From these two graphs, the local between-class scatter and the local within-class scatter are generated, respectively. Thus, SBDNE can maximize the between-class scatter and simultaneously minimize the within-class scatter to find the optimal projection matrix. Experimental results on six microarray datasets show that SBDNE is a promising method for cancer classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
Multitask SVM learning for remote sensing data classification
NASA Astrophysics Data System (ADS)
Leiva-Murillo, Jose M.; Gómez-Chova, Luis; Camps-Valls, Gustavo
2010-10-01
Many remote sensing data processing problems are inherently constituted by several tasks that can be solved either individually or jointly. For instance, each image in a multitemporal classification setting could be taken as an individual task, but its relation to previous acquisitions should be properly considered. In such problems, different modalities of the data (temporal, spatial, angular) give rise to changes between the training and test distributions, which constitutes a difficult learning problem known as covariate shift. Multitask learning methods aim at jointly solving a set of prediction problems in an efficient way by sharing information across tasks. This paper presents a novel kernel method for multitask learning in remote sensing data classification. The proposed method alleviates the dataset shift problem by imposing cross-information in the classifiers through matrix regularization. We consider the support vector machine (SVM) as the core learner, and two regularization schemes are introduced: 1) the Euclidean distance of the predictors in the Hilbert space; and 2) the inclusion of relational operators between tasks. Experiments are conducted on the challenging remote sensing problems of cloud screening from multispectral MERIS images and of landmine detection.
Measurement of the PPN parameter γ by testing the geometry of near-Earth space
NASA Astrophysics Data System (ADS)
Luo, Jie; Tian, Yuan; Wang, Dian-Hong; Qin, Cheng-Gang; Shao, Cheng-Gang
2016-06-01
The Beyond Einstein Advanced Coherent Optical Network (BEACON) mission was designed to achieve an accuracy of 10^{-9} in measuring the Eddington parameter γ, which is perhaps the most fundamental Parameterized Post-Newtonian parameter. However, this ideal accuracy was estimated only as the ratio of the measurement accuracy of the inter-spacecraft distances to the magnitude of the departure from Euclidean geometry. Based on the BEACON concept, we construct a measurement model to estimate the parameter γ with the least squares method. Influences of the measurement noise and the out-of-plane error on the estimation accuracy are evaluated based on a white noise model. Though the BEACON mission does not require expensive drag-free systems and avoids physical dynamical models of spacecraft, the relatively low accuracy of the initial inter-spacecraft distances poses a great challenge, reducing the estimation accuracy by about two orders of magnitude. Thus the noise requirements may need to be more stringent in the design in order to achieve the target accuracy, as demonstrated in this work. Accordingly, we give limits on the power spectral density of both noise sources required for the accuracy of 10^{-9}.
Luo, He; Liang, Zhengzheng; Zhu, Moning; Hu, Xiaoxuan; Wang, Guoqiang
2018-01-01
Wind has a significant effect on the control of fixed-wing unmanned aerial vehicles (UAVs), resulting in changes in their ground speed and direction, which has an important influence on the results of integrated optimization of UAV task allocation and path planning. The objective of this integrated optimization problem changes from minimizing flight distance to minimizing flight time. In this study, the Euclidean distance between any two targets is expanded to the Dubins path length, considering the minimum turning radius of fixed-wing UAVs. According to the vector relationship between wind speed, UAV airspeed, and UAV ground speed, a method is proposed to calculate the flight time of a UAV between targets. On this basis, a variable-speed Dubins path vehicle routing problem (VS-DP-VRP) model is established with the purpose of minimizing the time required for UAVs to visit all the targets and return to the starting point. By designing a crossover operator and mutation operator, the genetic algorithm is used to solve the model; the results show that it provides an effective UAV task allocation and path planning solution under steady wind.
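The vector relationship between wind speed, UAV airspeed, and ground speed described in this abstract can be sketched as follows; this is an illustrative planar kinematic model with assumed function names, not the authors' implementation.

```python
import math

def ground_speed(track_angle, airspeed, wind_speed, wind_angle):
    """Ground speed along a desired track, given UAV airspeed and a steady
    wind. Angles are in radians, speeds in m/s. Returns None when the
    crosswind exceeds the airspeed, i.e. the UAV cannot hold the track."""
    # wind components along and across the desired track
    along = wind_speed * math.cos(wind_angle - track_angle)
    cross = wind_speed * math.sin(wind_angle - track_angle)
    if airspeed <= abs(cross):
        return None
    # ground velocity = airspeed heading vector + wind vector, projected
    # onto the track: g = w_along + sqrt(v_a^2 - w_cross^2)
    return along + math.sqrt(airspeed ** 2 - cross ** 2)

def flight_time(path_length, track_angle, airspeed, wind_speed, wind_angle):
    """Flight time over a path flown at a fixed track angle."""
    g = ground_speed(track_angle, airspeed, wind_speed, wind_angle)
    return None if g is None or g <= 0 else path_length / g
```

With a pure tailwind the ground speed is simply airspeed plus wind speed, and with a pure headwind it is the difference, which matches the intuition behind replacing distance by time in the objective.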
Einstein-Podolsky-Rosen steering: Its geometric quantification and witness
NASA Astrophysics Data System (ADS)
Ku, Huan-Yu; Chen, Shin-Liang; Budroni, Costantino; Miranowicz, Adam; Chen, Yueh-Nan; Nori, Franco
2018-02-01
We propose a measure of quantum steerability, namely, a convex steering monotone, based on the trace distance between a given assemblage and its corresponding closest assemblage admitting a local-hidden-state (LHS) model. We provide methods to estimate such a quantity, via lower and upper bounds, based on semidefinite programming. One of these upper bounds has a clear geometrical interpretation as a linear function of rescaled Euclidean distances in the Bloch sphere between the normalized quantum states of (i) a given assemblage and (ii) an LHS assemblage. For a qubit-qubit quantum state, these ideas also allow us to visualize various steerability properties of the state in the Bloch sphere via the so-called LHS surface. In particular, some steerability properties can be obtained by comparing such an LHS surface with a corresponding quantum steering ellipsoid. Thus, we propose a witness of steerability corresponding to the difference of the volumes enclosed by these two surfaces. This witness (which reveals the steerability of a quantum state) enables one to find an optimal measurement basis, which can then be used to determine the proposed steering monotone (which describes the steerability of an assemblage) optimized over all mutually unbiased bases.
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
Function representation with circle inversion map systems
NASA Astrophysics Data System (ADS)
Boreland, Bryson; Kunze, Herb
2017-01-01
The fractals literature develops the now well-known concept of local iterated function systems (using affine maps) with grey-level maps (LIFSM) as an approach to function representation in terms of the associated fixed point of the so-called fractal transform. While originally explored as a method to achieve signal (and 2-D image) compression, more recent work has explored various aspects of signal and image processing using this machinery. In this paper, we develop a similar framework for function representation using circle inversion map systems. Given a circle C with centre õ and radius r, inversion with respect to C transforms the point p̃ to the point p̃′, such that p̃ and p̃′ lie on the same radial half-line from õ and d(õ, p̃)·d(õ, p̃′) = r², where d is Euclidean distance. We demonstrate the results with an example.
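A minimal sketch of the inversion map defined in this abstract (the function name and the tuple representation of points are assumptions):

```python
def invert(center, r, p):
    """Inversion of point p with respect to the circle with the given
    center and radius r: the image p' lies on the same radial half-line
    from the center as p, and d(center, p) * d(center, p') = r**2."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    d2 = dx * dx + dy * dy          # squared Euclidean distance d(center, p)^2
    if d2 == 0:
        raise ValueError("the centre has no inverse point")
    s = r * r / d2                  # scale factor so that d * d' = r^2
    return (center[0] + s * dx, center[1] + s * dy)
```

Applying the map twice returns the original point, since inversion is an involution on the punctured plane.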
Clustering Tree-structured Data on Manifold
Lu, Na; Miao, Hongyu
2016-01-01
Tree-structured data usually contain both topological and geometrical information, and must be considered on a manifold instead of in Euclidean space for appropriate data parameterization and analysis. In this study, we propose a novel tree-structured data parameterization, called the Topology-Attribute matrix (T-A matrix), so that the data clustering task can be conducted on a matrix manifold. We incorporate the structure constraints embedded in the data into the non-negative matrix factorization method to determine meta-trees from the T-A matrix, and the signature vector of each single tree can then be extracted by meta-tree decomposition. The meta-tree space turns out to be a cone space, in which we explore the distance metric and implement the clustering algorithm based on concepts like the Fréchet mean. Finally, the T-A matrix based clustering (TAMBAC) framework is evaluated and compared using both simulated data and real retinal images to illustrate its efficiency and accuracy. PMID:26660696
Fontes, Cristiano Hora; Budman, Hector
2017-11-01
A clustering problem involving multivariate time series (MTS) requires the selection of similarity metrics. This paper shows the limitations of the PCA similarity factor (SPCA) as a single metric in nonlinear problems where there are differences in magnitude of the same process variables due to expected changes in operation conditions. A novel method for clustering MTS based on a combination between SPCA and the average-based Euclidean distance (AED) within a fuzzy clustering approach is proposed. Case studies involving either simulated or real industrial data collected from a large scale gas turbine are used to illustrate that the hybrid approach enhances the ability to recognize normal and fault operating patterns. This paper also proposes an oversampling procedure to create synthetic multivariate time series that can be useful in commonly occurring situations involving unbalanced data sets.
Mathematical modeling of the malignancy of cancer using graph evolution.
Gunduz-Demir, Cigdem
2007-10-01
We report a novel computational method based on a graph evolution process to model the malignancy of the brain cancer called glioma. In this work, we analyze the phases that a graph passes through during its evolution and demonstrate a strong relation between the malignancy of cancer and the phase of its graph. From photomicrographs of tissues diagnosed as normal, low-grade cancerous, and high-grade cancerous, we construct cell-graphs based on the locations of cells; we probabilistically generate an edge between every pair of cells depending on the Euclidean distance between them. For a cell-graph, we extract connectivity information, including the properties of its connected components, in order to analyze the phase of the cell-graph. Working with brain tissue samples surgically removed from 12 patients, we demonstrate that cell-graphs generated for different tissue types evolve differently and exhibit different phase properties, which distinguish one tissue type from another.
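The probabilistic edge generation step can be sketched as below. The abstract does not specify the decay law linking edge probability to Euclidean distance, so the inverse-power form and the exponent used here are assumptions for illustration only.

```python
import math
import random

def cell_graph(cells, alpha=2.0, rng=None):
    """Build a cell-graph from cell locations: an edge between each pair
    of cells is generated with a probability that decays with their
    Euclidean distance. Here P(u, v) = min(1, d(u, v) ** -alpha); the
    decay form and alpha are assumptions, not the paper's choice."""
    rng = rng or random.Random(0)   # seeded for reproducibility
    edges = []
    for i in range(len(cells)):
        for j in range(i + 1, len(cells)):
            d = math.dist(cells[i], cells[j])
            p = 1.0 if d <= 1.0 else d ** -alpha
            if rng.random() < p:
                edges.append((i, j))
    return edges
```

Connectivity features (e.g. sizes of connected components) would then be extracted from the resulting edge list to characterize the graph's phase.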
Describing spatial pattern in stream networks: A practical approach
Ganio, L.M.; Torgersen, C.E.; Gresswell, R.E.
2005-01-01
The shape and configuration of branched networks influence ecological patterns and processes. Recent investigations of network influences in riverine ecology stress the need to quantify spatial structure not only in a two-dimensional plane, but also in networks. An initial step in understanding data from stream networks is discerning non-random patterns along the network. On the other hand, data collected in the network may be spatially autocorrelated and thus not suitable for traditional statistical analyses. Here we provide a method that uses commercially available software to construct an empirical variogram to describe spatial pattern in the relative abundance of coastal cutthroat trout in headwater stream networks. We describe the mathematical and practical considerations involved in calculating a variogram using a non-Euclidean distance metric to incorporate the network pathway structure in the analysis of spatial variability, and use a non-parametric technique to ascertain if the pattern in the empirical variogram is non-random.
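The core idea above, an empirical semivariogram driven by a precomputed distance matrix so that a non-Euclidean (e.g. along-network) metric can be plugged in directly, can be sketched as follows; the binning scheme is an illustrative assumption.

```python
def empirical_variogram(dist, values, bin_edges):
    """Empirical semivariogram from a precomputed distance matrix `dist`
    (any metric, e.g. network pathway distance) and observations `values`.
    Returns the mean semivariance 0.5 * (z_i - z_j)^2 per lag bin."""
    n = len(values)
    nbins = len(bin_edges) - 1
    sums, counts = [0.0] * nbins, [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            h = dist[i][j]
            for b in range(nbins):
                if bin_edges[b] <= h < bin_edges[b + 1]:
                    sums[b] += 0.5 * (values[i] - values[j]) ** 2
                    counts[b] += 1
                    break
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]
```

Because only the distance matrix changes, the same routine serves both the Euclidean and the network-distance analyses; the non-random structure of the resulting curve would then be assessed with the non-parametric technique the abstract mentions.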
On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.
Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba
2013-01-01
Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.
Principal Curves on Riemannian Manifolds.
Hauberg, Soren
2016-09-01
Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves of Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of Riemannian principal curves on several manifolds and datasets.
Enjoyment of Euclidean Planar Triangles
ERIC Educational Resources Information Center
Srinivasan, V. K.
2013-01-01
This article adopts the following classification for a Euclidean planar [triangle]ABC, purely based on angles alone. A Euclidean planar triangle is said to be acute angled if all the three angles of the Euclidean planar [triangle]ABC are acute angles. It is said to be right angled at a specific vertex, say B, if the angle ∠ABC is a right angle…
Gifted Mathematicians Constructing Their Own Geometries--Changes in Knowledge and Attitude.
ERIC Educational Resources Information Center
Shillor, Irith
1997-01-01
Using Taxi-Cab Geometry (a non-Euclidean geometry program) as the starting point, 14 mathematically gifted British secondary students (ages 12-14) were asked to consider the differences between Euclidean and Non-Euclidean geometries, then to construct their own geometry and to consider the non-Euclidean elements within it. The positive effects of…
Damage localization of marine risers using time series of vibration signals
NASA Astrophysics Data System (ADS)
Liu, Hao; Yang, Hezhen; Liu, Fushun
2014-10-01
Based on dynamic response signals, a damage detection algorithm is developed for marine risers. Damage detection methods based on modal properties have encountered issues in offshore oil research; for example, a significant increase in structural mass due to marine plant/animal growth, or changes in modal properties caused by equipment noise, are not the result of damage to riser structures. In an attempt to eliminate the need to determine modal parameters, a data-based method is developed. The implementation of the method requires that vibration data first be standardized to remove the influence of different loading conditions; the autoregressive moving average (ARMA) model is then used to fit the vibration response signals. In addition, a damage feature factor is introduced based on the autoregressive (AR) parameters. After that, the Euclidean distance between ARMA models is adopted as a damage indicator for damage detection and localization, and a top tensioned riser simulation model with different damage scenarios is analyzed using the proposed method, with dynamic acceleration responses of the marine riser as sensor data. Finally, the influence of measurement noise is analyzed. According to the damage localization results, the proposed method provides accurate damage locations for risers and is robust to noise.
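The damage indicator step can be sketched as below, with a plain least-squares AR(2) fit standing in for the full ARMA fitting described in the abstract; the model order and function names are illustrative assumptions.

```python
import math

def ar2_coeffs(x):
    """Least-squares AR(2) fit x[t] ~ a1*x[t-1] + a2*x[t-2], a simple
    stand-in for the ARMA fitting step (model order 2 is an assumption)."""
    n = len(x)
    # normal equations for the two lag regressors
    s11 = sum(x[t - 1] * x[t - 1] for t in range(2, n))
    s12 = sum(x[t - 1] * x[t - 2] for t in range(2, n))
    s22 = sum(x[t - 2] * x[t - 2] for t in range(2, n))
    b1 = sum(x[t] * x[t - 1] for t in range(2, n))
    b2 = sum(x[t] * x[t - 2] for t in range(2, n))
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (b2 * s11 - b1 * s12) / det)

def damage_indicator(ref, test):
    """Euclidean distance between the AR coefficient vectors of a
    reference response and a test response; larger values suggest
    damage near the sensor that produced `test`."""
    return math.dist(ar2_coeffs(ref), ar2_coeffs(test))
```

An undamaged sensor channel yields a near-zero indicator against its own baseline, while a change in the underlying dynamics shifts the AR coefficients and raises the distance.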
Convolutional neural network features based change detection in satellite images
NASA Astrophysics Data System (ADS)
Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong
2016-07-01
With the popular use of high resolution remote sensing (HRRS) satellite images, a huge research effort has been devoted to the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) sidestep this problem by learning hierarchical representations in an unsupervised manner directly from data, without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel change detection method for HR satellite images, based on deep Convolutional Neural Network (CNN) features, is proposed. The main guideline is to produce a change detection map directly from two images using a pretrained CNN, thereby avoiding the limited performance of hand-crafted features. Firstly, CNN features are extracted through different convolutional layers. Then, after a normalization step, a concatenation step produces a single higher dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
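The final step, a change map from the pixel-wise Euclidean distance between two co-registered feature maps, can be sketched as follows (plain nested lists stand in for image arrays):

```python
def change_map(feat_a, feat_b):
    """Pixel-wise Euclidean distance between two H x W x C feature maps
    (nested lists), one per acquisition date. Thresholding the returned
    H x W distance map yields a binary change map."""
    H, W = len(feat_a), len(feat_a[0])
    return [[sum((a - b) ** 2
                 for a, b in zip(feat_a[i][j], feat_b[i][j])) ** 0.5
             for j in range(W)]
            for i in range(H)]
```

Pixels whose feature vectors moved far apart between the two dates receive large distances and are flagged as changed.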
Hyperspectral feature mapping classification based on mathematical morphology
NASA Astrophysics Data System (ADS)
Liu, Chang; Li, Junwei; Wang, Guangping; Wu, Jingli
2016-03-01
This paper proposes a hyperspectral feature mapping classification algorithm based on mathematical morphology. Without prior information such as a spectral library, the spectral and spatial information can be used to realize hyperspectral feature mapping classification. The mathematical morphological erosion and dilation operations are performed respectively to extract endmembers. The spectral feature mapping algorithm is then used to carry out hyperspectral image classification. A hyperspectral image collected by AVIRIS is used to evaluate the proposed algorithm, which is compared with the minimum Euclidean distance mapping algorithm, the minimum Mahalanobis distance mapping algorithm, the SAM algorithm, and the binary encoding mapping algorithm. The experiments show that the proposed algorithm performs better than the other algorithms under the same conditions and has higher classification accuracy.
Alternative of raw material’s suppliers using TOPSIS method in chicken slaughterhouse industry
NASA Astrophysics Data System (ADS)
Sari, R. M.; Rizkya, I.; Syahputri, K.; Anizar; Siregar, I.
2018-02-01
The chicken slaughterhouse industry is one of the fastest growing industries and depends on the freshness of its raw materials. The quality of the raw materials arriving at the company depends heavily on the suppliers. Fresh chicken and frozen chicken meat are the main raw materials for this industry. Problems caused by suppliers include delivered quantities of raw material that do not match requirements, as well as delays during the delivery process. This condition disrupts the production process in the company. Therefore, it is necessary to determine the best suppliers of the main raw materials, fresh and frozen chicken meat, for the chicken slaughterhouse industry. This study analyzes the suppliers' capability using the TOPSIS method to find the best supplier. TOPSIS is based on the principle that the chosen alternative must have the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution, using the Euclidean distance to determine the relative proximity of each alternative to the optimum solution. The TOPSIS method ranked the suppliers as follows: supplier A, followed by supplier D, supplier B, supplier C, supplier E, supplier F, and supplier G. This ranking will assist the company in prioritizing orders to the best-ranked suppliers. Total supply from all suppliers is 885,994 kg per month. Based on the results of the research, the top five suppliers are sufficient to meet the needs of the company.
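A minimal TOPSIS sketch consistent with the procedure described above; the decision matrix, weights, and vector normalization used here are illustrative assumptions, not the paper's data.

```python
def topsis(matrix, weights, benefit):
    """TOPSIS scores: `matrix` rows are alternatives (suppliers), columns
    are criteria; `benefit[j]` is True if larger is better for criterion j.
    Returns the relative closeness to the ideal solution (higher = better)."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply the criterion weights
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # positive ideal (best) and negative ideal (worst) solutions
    best = [max(V[i][j] for i in range(m)) if benefit[j]
            else min(V[i][j] for i in range(m)) for j in range(n)]
    worst = [min(V[i][j] for i in range(m)) if benefit[j]
             else max(V[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = sum((V[i][j] - best[j]) ** 2 for j in range(n)) ** 0.5
        d_neg = sum((V[i][j] - worst[j]) ** 2 for j in range(n)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

Ranking the suppliers then amounts to sorting them by their closeness scores in descending order.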
2011-01-01
Background The Prospective Space-Time scan statistic (PST) is widely used for the evaluation of space-time clusters of point event data. Usually a window of cylindrical shape is employed, with a circular or elliptical base in the space domain. Recently, the concept of Minimum Spanning Tree (MST) was applied to specify the set of potential clusters, through the Density-Equalizing Euclidean MST (DEEMST) method, for the detection of arbitrarily shaped clusters. The original map is cartogram transformed, such that the control points are spread uniformly. That method is quite effective, but the cartogram construction is computationally expensive and complicated. Results A fast method for the detection and inference of point data set space-time disease clusters is presented, the Voronoi Based Scan (VBScan). A Voronoi diagram is built for points representing population individuals (cases and controls). The number of Voronoi cells boundaries intercepted by the line segment joining two cases points defines the Voronoi distance between those points. That distance is used to approximate the density of the heterogeneous population and build the Voronoi distance MST linking the cases. The successive removal of edges from the Voronoi distance MST generates sub-trees which are the potential space-time clusters. Finally, those clusters are evaluated through the scan statistic. Monte Carlo replications of the original data are used to evaluate the significance of the clusters. An application for dengue fever in a small Brazilian city is presented. Conclusions The ability to promptly detect space-time clusters of disease outbreaks, when the number of individuals is large, was shown to be feasible, due to the reduced computational load of VBScan. Instead of changing the map, VBScan modifies the metric used to define the distance between cases, without requiring the cartogram construction. 
Numerical simulations showed that VBScan has higher power of detection, sensitivity and positive predictive value than the Elliptic PST. Furthermore, as VBScan also incorporates topological information from the point neighborhood structure, in addition to the usual geometric information, it is more robust than purely geometric methods such as the elliptic scan. Those advantages were illustrated in a real setting for dengue fever space-time clusters. PMID:21513556
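The Voronoi distance used by VBScan (the number of Voronoi cell boundaries intercepted by the segment joining two case points) can be approximated without constructing the diagram explicitly, by counting nearest-control-point changes along sampled points of the segment. This sampling shortcut is an illustration, not the authors' algorithm.

```python
def voronoi_distance(p, q, controls, samples=200):
    """Approximate Voronoi distance between case points p and q: the number
    of Voronoi cell boundaries crossed by the segment pq, estimated by
    counting changes of the nearest control point along the segment."""
    def nearest(x, y):
        return min(range(len(controls)),
                   key=lambda k: (controls[k][0] - x) ** 2
                               + (controls[k][1] - y) ** 2)
    crossings = 0
    prev = nearest(*p)
    for t in range(1, samples + 1):
        x = p[0] + (q[0] - p[0]) * t / samples
        y = p[1] + (q[1] - p[1]) * t / samples
        cur = nearest(x, y)
        if cur != prev:
            crossings += 1
            prev = cur
    return crossings
```

Because each cell corresponds to one population individual, segments through densely populated areas cross many boundaries, so this count behaves like a population-density-adjusted distance between cases.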
The global climate change effect on the Altai region's climate in the first half of XXI century
NASA Astrophysics Data System (ADS)
Lagutin, Anatoly A.; Volkov, Nikolai V.; Makushev, Konstantin M.; Mordvin, Egor Yu.
2017-11-01
We investigate the effect of global climate system change on the climate of the Altai region. It is shown that data from the RegCM4 regional climate model, obtained for the contemporary and future periods, together with an approach based on the standard Euclidean distance, make it possible to define specific zones in which climate change is forecast. Such zones have been defined for the territory of the Altai region within the framework of the global radiative forcing scenarios RCP 4.5 and RCP 8.5 for the middle of the XXI century.
2013-05-01
for initial test of object coverage for these scanning trajectories. I have also acquired real data of physical phantoms by using a clinical CBCT system...scan. To test the extension of axial coverage, I carried out a simulated data study using numerical disk and anthropomorphic XCAT phantoms [15]. As an...imaging model in Eq. (1), I investigated the choice of data divergence, such as the Euclidean distance or Kullback-Leibler (K-L) divergence, which are
ERIC Educational Resources Information Center
Walwyn, Amy L.; Navarro, Daniel J.
2010-01-01
An experiment is reported comparing human performance on two kinds of visually presented traveling salesperson problems (TSPs), those reliant on Euclidean geometry and those reliant on city block geometry. Across multiple array sizes, human performance was near-optimal in both geometries, but was slightly better in the Euclidean format. Even so,…
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images
Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu
2017-01-01
The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point cloud for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slices extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method with lower negative normalized correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
Shi, Xiaoping; Wu, Yuehua; Rao, Calyampudi Radhakrishna
2018-06-05
The change-point detection has been carried out in terms of the Euclidean minimum spanning tree (MST) and shortest Hamiltonian path (SHP), with successful applications in the determination of authorship of a classic novel, the detection of change in a network over time, the detection of cell divisions, etc. However, these Euclidean graph-based tests may fail if a dataset contains random interferences. To solve this problem, we present a powerful non-Euclidean SHP-based test, which is consistent and distribution-free. The simulation shows that the test is more powerful than both Euclidean MST- and SHP-based tests and the non-Euclidean MST-based test. Its applicability in detecting both landing and departure times in video data of bees' flower visits is illustrated.
Févotte, Cédric; Bertin, Nancy; Durrieu, Jean-Louis
2009-03-01
This letter presents theoretical, algorithmic, and experimental results about nonnegative matrix factorization (NMF) with the Itakura-Saito (IS) divergence. We describe how IS-NMF is underlaid by a well-defined statistical model of superimposed Gaussian components and is equivalent to maximum likelihood estimation of variance parameters. This setting can accommodate regularization constraints on the factors through Bayesian priors. In particular, inverse-gamma and gamma Markov chain priors are considered in this work. Estimation can be carried out using a space-alternating generalized expectation-maximization (SAGE) algorithm; this leads to a novel type of NMF algorithm, whose convergence to a stationary point of the IS cost function is guaranteed. We also discuss the links between the IS divergence and other cost functions used in NMF, in particular, the Euclidean distance and the generalized Kullback-Leibler (KL) divergence. As such, we describe how IS-NMF can also be performed using a gradient multiplicative algorithm (a standard algorithm structure in NMF) whose convergence is observed in practice, though not proven. Finally, we report a furnished experimental comparative study of Euclidean-NMF, KL-NMF, and IS-NMF algorithms applied to the power spectrogram of a short piano sequence recorded in real conditions, with various initializations and model orders. Then we show how IS-NMF can successfully be employed for denoising and upmix (mono to stereo conversion) of an original piece of early jazz music. These experiments indicate that IS-NMF correctly captures the semantics of audio and is better suited to the representation of music signals than NMF with the usual Euclidean and KL costs.
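The three cost functions compared in this abstract can be written elementwise as below. Note the scale invariance of the IS divergence, one motivation for using it on power spectrograms, whose entries span many orders of magnitude.

```python
import math

def euc(v, w):
    """Squared Euclidean distance cost (elementwise), d(v|w) = (v - w)^2 / 2."""
    return 0.5 * (v - w) ** 2

def kl(v, w):
    """Generalized Kullback-Leibler divergence, d(v|w) = v*log(v/w) - v + w."""
    return v * math.log(v / w) - v + w

def isdiv(v, w):
    """Itakura-Saito divergence, d(v|w) = v/w - log(v/w) - 1.
    Scale-invariant: isdiv(c*v, c*w) == isdiv(v, w) for any c > 0."""
    return v / w - math.log(v / w) - 1.0

def divergence(V, W, d):
    """Total divergence between a nonnegative matrix V (e.g. a power
    spectrogram) and a model W (e.g. the NMF approximation WH)."""
    return sum(d(v, w)
               for row_v, row_w in zip(V, W)
               for v, w in zip(row_v, row_w))
```

All three vanish exactly when the model matches the data, and each corresponds to a different noise model in the underlying statistical interpretation.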
Geometrical and quantum mechanical aspects in observers' mathematics
NASA Astrophysics Data System (ADS)
Khots, Boris; Khots, Dmitriy
2013-10-01
When we create mathematical models for Quantum Mechanics we assume that the mathematical apparatus used in modeling, at least the simplest mathematical apparatus, is infallible. In particular, this relates to the use of "infinitely small" and "infinitely large" quantities in arithmetic and the use of the Newton-Cauchy definitions of a limit and derivative in analysis. We believe that this is where the main problem lies in the contemporary study of nature. We have introduced a new concept of Observer's Mathematics (see www.mathrelativity.com). Observer's Mathematics creates new arithmetic, algebra, geometry, topology, analysis and logic which do not contain the concept of the continuum, but locally coincide with the standard fields. We prove that Euclidean geometry works in a sufficiently small neighborhood of a given line, but when we enlarge the neighborhood, non-Euclidean geometry takes over. We prove that the physical speed is a random variable, cannot exceed some constant, and that this constant does not depend on the inertial coordinate system. We prove the following theorems: Theorem A (Lagrangian). Let L be the Lagrange function of a free material point with mass m and speed v. Then the probability P that L = (m/2)v² is less than 1: P(L = (m/2)v²) < 1. Theorem B (Nadezhda effect). On the plane (x, y), on every line y = kx there is a point (x0, y0) with no existing Euclidean distance between the origin (0, 0) and this point. Conjecture (Black Hole). Our space-time nature is a black hole: light cannot go out infinitely far from the origin.
Escamilla, Veronica; Chibwesha, Carla J.; Gartland, Matthew; Chintu, Namwinga; Mubiana-Mbewe, Mwangelwa; Musokotwane, Kebby; Musonda, Patrick; Miller, William C.; Stringer, Jeffrey S. A.; Chi, Benjamin H.
2016-01-01
Background In rural settings, HIV-infected pregnant women often live significant distances from facilities that provide prevention of mother-to-child transmission (PMTCT) services. Methods We implemented a pilot project to offer universal maternal combination antiretroviral regimens in 4 clinics in rural Zambia. To evaluate the impact of services, we conducted a household survey in communities surrounding each facility. We collected information about HIV status and antenatal service utilization from women who delivered in the past two years. Using household global positioning system coordinates collected in the survey, we measured Euclidean (i.e., straight line) distance between individual households and clinics. Multivariable logistic regression and predicted probabilities were used to determine associations between distance and uptake of any PMTCT regimen and combination antiretroviral regimens specifically. Results From March to December 2011, 390 HIV-infected mothers were surveyed across four communities. Of these, 254 (65%) had household geographical coordinates documented. 168 women reported use of a PMTCT regimen during pregnancy, including 102 who initiated a combination antiretroviral regimen. The probability of PMTCT regimen initiation was highest within 1.9 km of the facility and gradually declined. Overall, 103 of 145 (71%) who lived within 1.9 km of the facility initiated PMTCT, versus 65 of 109 (60%) who lived farther away. With every kilometer increase in distance, the odds of PMTCT regimen uptake (adjusted odds ratio [AOR]: 0.90, 95%CI: 0.82-0.99) and combination antiretroviral regimen uptake (AOR: 0.88, 95%CI: 0.80-0.97) decreased. Conclusions In this rural African setting, uptake of PMTCT regimens was influenced by distance to the health facility. Program models that further decentralize care into remote communities are urgently needed. PMID:26470035
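For illustration only (not the study's code), straight-line distance between a household and a clinic can be approximated from GPS coordinates with a local flat-earth projection; the function name and per-degree constants are assumptions that hold well for the short distances considered here:

```python
import math

def euclidean_distance_km(lat1, lon1, lat2, lon2):
    """Approximate straight-line distance (km) between two GPS points.

    Uses a local equirectangular projection: adequate for household-to-clinic
    distances of a few kilometers, where Earth curvature is negligible.
    """
    mean_lat = math.radians((lat1 + lat2) / 2.0)
    kx = 111.32 * math.cos(mean_lat)   # km per degree of longitude at this latitude
    ky = 110.57                        # km per degree of latitude
    dx = (lon2 - lon1) * kx
    dy = (lat2 - lat1) * ky
    return math.hypot(dx, dy)
```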
An Information-Theoretic-Cluster Visualization for Self-Organizing Maps.
Brito da Silva, Leonardo Enzo; Wunsch, Donald C
2018-06-01
Improved data visualization is a significant tool for enhancing cluster analysis. In this paper, an information-theoretic-based method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set whose cardinality controls the granularity of the IT-vis and with which the first- and second-order statistics are computed and used to estimate their probability density functions. These are used to calculate the similarity measure, based on Renyi's quadratic cross entropy and cross information potential (CIP). The introduced visualizations combine the low computational cost and kernel estimation properties of the representative CIP and the data structure representation of a single-linkage-based grouping algorithm to generate an enhanced SOM-based visualization. The visual quality of the IT-vis is assessed by comparing it with other visualization methods for several real-world and synthetic benchmark data sets. Thus, this paper also contains a significant literature survey. The experiments demonstrate the cluster-revealing capabilities of IT-vis, in which cluster boundaries are sharply captured. Additionally, the information-theoretic visualizations are used to perform clustering of the SOM. Compared with other methods, IT-vis of large SOMs yielded the best results in this paper, for which the quality of the final partitions was evaluated using external validity indices.
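A minimal sketch (not the paper's implementation) of a Parzen-window estimate of the cross information potential between two 1-D sample sets, such as those attached to adjacent SOM neurons, with Renyi's quadratic cross entropy as its negative logarithm; the Gaussian kernel and its bandwidth are illustrative choices:

```python
import numpy as np

def cross_information_potential(x, y, sigma=1.0):
    """Parzen-window CIP estimate: mean Gaussian kernel over all cross-pairs."""
    diff = x[:, None] - y[None, :]
    k = np.exp(-diff**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)
    return k.mean()

def renyi_quadratic_cross_entropy(x, y, sigma=1.0):
    """Renyi's quadratic cross entropy as the negative log of the CIP."""
    return -np.log(cross_information_potential(x, y, sigma))
```

Two neurons with similarly distributed samples yield a high CIP (low cross entropy); distant distributions yield the opposite.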
Individual Biometric Identification Using Multi-Cycle Electrocardiographic Waveform Patterns.
Lee, Wonki; Kim, Seulgee; Kim, Daeeun
2018-03-28
The electrocardiogram (ECG) waveform conveys information regarding the electrical properties of the heart, and its patterns vary depending on individual heart characteristics. ECG features can therefore potentially be used for biometric recognition. This study presents a new method using the entire ECG waveform pattern for matching and demonstrates that the approach can potentially be employed for individual biometric identification. Multi-cycle ECG signals were acquired using an ECG measuring circuit, with three electrodes patched on the wrists or fingers to allow various measurement configurations. For biometric identification, four-fold cross-validation was used in the experiments to assess how the results of the statistical analysis would generalize to an independent data set. Four different pattern matching algorithms, i.e., cosine similarity, cross correlation, city block distance, and Euclidean distance, were tested to compare individual identification performance with a single channel of ECG signal (3-wire ECG). To evaluate the pattern matching for biometric identification, the ECG recordings for each subject were partitioned into training and test sets. The suggested method obtained a maximum performance of 89.9% accuracy with two heartbeats of ECG signals measured on the wrist and 93.3% accuracy with three heartbeats for 55 subjects. The performance rate with ECG signals measured on the fingers improved up to 99.3% with two heartbeats and 100% with three heartbeats of signals for 20 subjects.
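The four pattern matching measures can be sketched as follows for two equally long waveform vectors; this is an illustrative implementation, not the study's code, and the zero-lag normalized form of cross correlation is an assumption:

```python
import numpy as np

def match_scores(template, sample):
    """Four similarity/distance measures between two aligned waveforms."""
    t = np.asarray(template, dtype=float)
    s = np.asarray(sample, dtype=float)
    # cosine similarity: angle between the raw waveform vectors
    cosine = (t @ s) / (np.linalg.norm(t) * np.linalg.norm(s))
    # normalized cross-correlation at zero lag (mean-centered)
    tc, sc = t - t.mean(), s - s.mean()
    xcorr = (tc @ sc) / (np.linalg.norm(tc) * np.linalg.norm(sc))
    # city block (L1) and Euclidean (L2) distances
    cityblock = np.abs(t - s).sum()
    euclidean = np.linalg.norm(t - s)
    return {"cosine": cosine, "xcorr": xcorr,
            "cityblock": cityblock, "euclidean": euclidean}
```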
NASA Astrophysics Data System (ADS)
Gu, Yameng; Zhang, Xuming
2017-05-01
Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).
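The NLM weighting step described above reduces to a Gaussian function of the Euclidean distance between feature vectors; a minimal sketch, assuming the NMI features of candidate patches are stacked as rows and with h an illustrative decay parameter:

```python
import numpy as np

def nlm_weights(ref_feat, cand_feats, h):
    """Normalized NLM weights from Euclidean feature distances.

    ref_feat: feature vector of the considered patch.
    cand_feats: (n, d) array of features of patches in the search window.
    h: decay parameter controlling how fast weights fall with distance.
    """
    d2 = np.sum((cand_feats - ref_feat) ** 2, axis=1)
    w = np.exp(-d2 / h**2)
    return w / w.sum()  # weights sum to 1 for the weighted average
```

Each restored pixel is then the weighted average of the corresponding pixels under these weights.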
NASA Astrophysics Data System (ADS)
Galanis, George; Famelis, Ioannis; Kalogeri, Christina
2014-10-01
In recent years a highly demanding framework has been set for environmental sciences and applied mathematics by issues of interest not only to the scientific community but to today's society in general: global warming, renewable energy resources, and natural hazards can be listed among them. The research community today follows two main directions to address these problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, to reach credible local forecasts, these two data sources are combined by algorithms that are essentially based on optimization processes. The conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study, adopting least squares methods based on classical Euclidean geometry. In the present work new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, information geometry. The latter shows that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities such as Riemannian metrics, distances, curvature and affine connections are utilized to define the optimum distributions fitting the environmental data in specific areas and to form the differential systems that describe the optimization procedures. The proposed methodology is illustrated by an application to wind speed forecasts on the island of Kefalonia, Greece.
Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.
Zhang, Ying-Ying; Yang, Cai; Zhang, Ping
2017-05-01
In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on Riemannian manifolds. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given background dictionary on image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on foreground dictionary, which is extracted from the first stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods (highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
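A Log-Euclidean kernel between two region covariance matrices can be sketched as below; this is an illustrative implementation, with the matrix logarithm computed by eigendecomposition (region covariances are symmetric positive definite) and sigma an assumed bandwidth:

```python
import numpy as np

def spd_logm(A):
    """Matrix logarithm of a symmetric positive-definite matrix via eigh."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def log_euclidean_kernel(X, Y, sigma=1.0):
    """Gaussian kernel on the Log-Euclidean distance between SPD matrices."""
    d = np.linalg.norm(spd_logm(X) - spd_logm(Y), "fro")
    return np.exp(-d**2 / (2 * sigma**2))
```

Mapping covariances through the matrix logarithm flattens the Riemannian manifold locally, so standard (Euclidean) sparse coding machinery can be applied in the kernel-induced space.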
Dynamic hyperbolic geometry: building intuition and understanding mediated by a Euclidean model
NASA Astrophysics Data System (ADS)
Moreno-Armella, Luis; Brady, Corey; Elizondo-Ramirez, Rubén
2018-05-01
This paper explores a deep transformation in mathematical epistemology and its consequences for teaching and learning. With the advent of non-Euclidean geometries, direct, iconic correspondences between physical space and the deductive structures of mathematical inquiry were broken. For non-Euclidean ideas even to become thinkable the mathematical community needed to accumulate over twenty centuries of reflection and effort: a precious instance of distributed intelligence at the cultural level. In geometry education after this crisis, relations between intuitions and geometrical reasoning must be established philosophically, rather than taken for granted. One approach seeks intuitive supports only for Euclidean explorations, viewing non-Euclidean inquiry as fundamentally non-intuitive in nature. We argue for moving beyond such an impoverished approach, using dynamic geometry environments to develop new intuitions even in the extremely challenging setting of hyperbolic geometry. Our efforts reverse the typical direction, using formal structures as a source for a new family of intuitions that emerge from exploring a digital model of hyperbolic geometry. This digital model is elaborated within a Euclidean dynamic geometry environment, enabling a conceptual dance that re-configures Euclidean knowledge as a support for building intuitions in hyperbolic space: intuitions based not directly on physical experience but on analogies extending Euclidean concepts.
Vértes, Petra E.; Stidd, Reva; Lalonde, François; Clasen, Liv; Rapoport, Judith; Giedd, Jay; Bullmore, Edward T.; Gogtay, Nitin
2013-01-01
The human brain is a topologically complex network embedded in anatomical space. Here, we systematically explored relationships between functional connectivity, complex network topology, and anatomical (Euclidean) distance between connected brain regions, in the resting-state functional magnetic resonance imaging brain networks of 20 healthy volunteers and 19 patients with childhood-onset schizophrenia (COS). Normal between-subject differences in average distance of connected edges in brain graphs were strongly associated with variation in topological properties of functional networks. In addition, a club or subset of connector hubs was identified, in lateral temporal, parietal, dorsal prefrontal, and medial prefrontal/cingulate cortical regions. In COS, there was reduced strength of functional connectivity over short distances especially, and therefore, global mean connection distance of thresholded graphs was significantly greater than normal. As predicted from relationships between spatial and topological properties of normal networks, this disorder-related proportional increase in connection distance was associated with reduced clustering and modularity and increased global efficiency of COS networks. Between-group differences in connection distance were localized specifically to connector hubs of multimodal association cortex. In relation to the neurodevelopmental pathogenesis of schizophrenia, we argue that the data are consistent with the interpretation that spatial and topological disturbances of functional network organization could arise from excessive “pruning” of short-distance functional connections in schizophrenia. PMID:22275481
Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao
2018-01-09
River monitoring networks play an important role in water environmental management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis and Euclidean distance, and the Hun River was taken as a validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month from January 2009 to December 2010. The results showed that the main monitoring items in the surface water of the Hun River were ammonia nitrogen (NH4+-N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, and 57% of the cost was saved. In addition, there were no significant differences between the non-optimized and optimized monitoring networks, and the optimized monitoring network could correctly represent the original one. The duplicate setting degree of monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. Therefore, the optimization method was shown to be feasible, efficient, and economical.
The remapping of space in motor learning and human-machine interfaces
Mussa-Ivaldi, F.A.; Danziger, Z.
2009-01-01
Studies of motor adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. One of the most fundamental elements of our environment is space itself. This article focuses on the notion of Euclidean space as it applies to common sensory motor experiences. Starting from the assumption that we interact with the world through a system of neural signals, we observe that these signals are not inherently endowed with metric properties of the ordinary Euclidean space. The ability of the nervous system to represent these properties depends on adaptive mechanisms that reconstruct the Euclidean metric from signals that are not Euclidean. Gaining access to these mechanisms will reveal the process by which the nervous system handles novel sophisticated coordinate transformation tasks, thus highlighting possible avenues to create functional human-machine interfaces that can make that task much easier. A set of experiments is presented that demonstrate the ability of the sensory-motor system to reorganize coordination in novel geometrical environments. In these environments multiple degrees of freedom of body motions are used to control the coordinates of a point in a two-dimensional Euclidean space. We discuss how practice leads to the acquisition of the metric properties of the controlled space. Methods of machine learning based on the reduction of reaching errors are tested as a means to facilitate learning by adaptively changing the map from body motions to the controlled device. We discuss the relevance of the results to the development of adaptive human machine interfaces and optimal control. PMID:19665553
Vera, José Fernando; de Rooij, Mark; Heiser, Willem J
2014-11-01
In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented. © 2014 The British Psychological Society.
Similarity analysis of spectra obtained via reflectance spectrometry in legal medicine.
Belenki, Liudmila; Sterzik, Vera; Bohnert, Michael
2014-02-01
In the present study, a series of reflectance spectra of postmortem lividity, pallor, and putrefaction-affected skin for 195 investigated cases in the course of cooling down the corpse has been collected. The reflectance spectrometric measurements were stored together with their respective metadata in a MySQL database. The latter has been managed via a scientific information repository. We propose similarity measures and a criterion of similarity that capture similar spectra recorded on corpse skin. We systematically clustered reflectance spectra from the database as well as their metadata, such as case number, age, sex, skin temperature, duration of cooling, and postmortem time, with respect to the given criterion of similarity. Altogether, more than 500 reflectance spectra have been compared pairwise. The measures that have been used to compare a pair of reflectance curve samples include the Euclidean distance between curves and the Euclidean distance between derivatives of the functions represented by the reflectance curves at the same wavelengths in the spectral range of visible light between 380 and 750 nm. For each case, using the recorded reflectance curves and the similarity criterion, the postmortem time interval during which a characteristic change in the shape of reflectance spectrum takes place is estimated. The latter is carried out via a software package composed of Java, Python, and MatLab scripts that query the MySQL database. We show that in legal medicine, matching and clustering of reflectance curves obtained by means of reflectance spectrometry with respect to a given criterion of similarity can be used to estimate the postmortem interval.
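The two distance measures can be sketched as follows for reflectance curves sampled at common wavelengths; an illustrative implementation, with a first-difference approximation standing in for the derivative:

```python
import numpy as np

def spectral_distances(r1, r2):
    """Euclidean distances between two reflectance curves and their derivatives.

    r1, r2: reflectance values sampled at the same wavelengths
    (e.g. across the visible range 380-750 nm).
    """
    d_curve = np.linalg.norm(r1 - r2)
    # first differences approximate the derivatives on a uniform grid
    d_deriv = np.linalg.norm(np.diff(r1) - np.diff(r2))
    return d_curve, d_deriv
```

Note that two curves differing only by a constant offset are far in the first measure but close in the second, which is why both are useful for shape comparison.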
Modified multidimensional scaling approach to analyze financial markets.
Yin, Yi; Shang, Pengjian
2014-06-01
Detrended cross-correlation coefficient (σDCCA) and dynamic time warping (DTW) are introduced as dissimilarity measures, while multidimensional scaling (MDS) is employed to translate the dissimilarities between daily price returns of 24 stock markets into a low-dimensional map. We first propose MDS based on σDCCA dissimilarity and MDS based on DTW dissimilarity, while MDS based on Euclidean dissimilarity is also employed to provide a reference for comparison. We apply these methods in order to further visualize the clustering between stock markets. Moreover, we confront MDS with an alternative visualization method, the "Unweighted Average" clustering method; the MDS analysis and the "Unweighted Average" clustering method are employed on the same dissimilarity. Through the results, we find that MDS gives a more intuitive mapping for observing stable or emerging clusters of stock markets with similar behavior, and that the MDS analysis based on σDCCA dissimilarity provides clearer, more detailed, and more accurate information on the classification of the stock markets than the MDS analysis based on Euclidean dissimilarity. The MDS analysis based on DTW dissimilarity is particularly informative about the correlations between stock markets; it reflects richer results on the clustering of stock markets and is much more intensive than the MDS analysis based on Euclidean dissimilarity. In addition, the maps obtained from applying MDS based on σDCCA dissimilarity and DTW dissimilarity may also guide the construction of multivariate econometric models.
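For reference, classical (Torgerson) MDS, which embeds a dissimilarity matrix into a low-dimensional Euclidean space by double centering and eigendecomposition, can be sketched as below; this is an illustrative baseline and the paper may use a different MDS variant:

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed an (n x n) symmetric dissimilarity matrix D into k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # double-centered Gram matrix
    w, V = np.linalg.eigh(B)             # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]        # keep the k largest eigenvalues
    L = np.sqrt(np.clip(w[idx], 0, None))
    return V[:, idx] * L                 # (n x k) configuration of points
```

Feeding a σDCCA- or DTW-based dissimilarity matrix into such a routine produces the kind of 2-D market map discussed above.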
Description of 3D digital curves using the theory of free groups
NASA Astrophysics Data System (ADS)
Imiya, Atsushi; Oosawa, Muneaki
1999-09-01
In this paper, we propose a new descriptor for two- and three-dimensional digital curves using the theory of free groups. A spatial digital curve is expressed as a word in the free group on three generators, where the three symbols correspond to the directions of the orthogonal coordinate axes. Since a digital curve is treated as a word, i.e., a sequence of alphabetical symbols, this expression permits us to describe any geometric operation as rewriting rules for words. Furthermore, the symbolic derivative of words yields geometric invariants of digital curves under digital Euclidean motion. These invariants enable us to design algorithms for matching and searching partial structures of digital curves. Moreover, these symbolic descriptors define global and local distances for digital curves via an editing distance.
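The free-group reduction underlying such word descriptors can be sketched as below, with lowercase letters standing for the three coordinate directions and uppercase letters for their inverses (an assumed encoding convention, not necessarily the paper's notation):

```python
def reduce_word(word):
    """Free-group normal form of a chain-code word.

    Repeatedly cancels adjacent inverse pairs such as 'x' followed by 'X'
    (a step in one direction followed by a step straight back).
    """
    out = []
    for s in word:
        if out and out[-1] == s.swapcase():
            out.pop()        # cancel the adjacent inverse pair
        else:
            out.append(s)
    return out
```

A rewriting rule for a geometric operation then acts on such reduced words rather than on coordinate lists.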
Canonical Drude Weight for Non-integrable Quantum Spin Chains
NASA Astrophysics Data System (ADS)
Mastropietro, Vieri; Porta, Marcello
2018-03-01
The Drude weight is a central quantity for the transport properties of quantum spin chains. The canonical definition of Drude weight is directly related to Kubo formula of conductivity. However, the difficulty in the evaluation of such expression has led to several alternative formulations, accessible to different methods. In particular, the Euclidean, or imaginary-time, Drude weight can be studied via rigorous renormalization group. As a result, in the past years several universality results have been proven for such quantity at zero temperature; remarkably, the proofs work for both integrable and non-integrable quantum spin chains. Here we establish the equivalence of Euclidean and canonical Drude weights at zero temperature. Our proof is based on rigorous renormalization group methods, Ward identities, and complex analytic ideas.
2016-03-02
We look to develop a structure for the tiling of frequency spaces in both Euclidean and non-Euclidean domains. In particular, we establish Nyquist tiles and sampling groups in Euclidean geometry, and discuss the extension of these concepts to hyperbolic and spherical geometry and to sampling in hyperbolic or spherical spaces.
Reconstructing spatial organizations of chromosomes through manifold learning
Deng, Wenxuan; Hu, Hailin; Ma, Rui; Zhang, Sai; Yang, Jinglin; Peng, Jian; Kaplan, Tommy; Zeng, Jianyang
2018-01-01
Decoding the spatial organizations of chromosomes has crucial implications for studying eukaryotic gene regulation. Recently, chromosomal conformation capture based technologies, such as Hi-C, have been widely used to uncover the interaction frequencies of genomic loci in a high-throughput and genome-wide manner and provide new insights into the folding of three-dimensional (3D) genome structure. In this paper, we develop a novel manifold learning based framework, called GEM (Genomic organization reconstructor based on conformational Energy and Manifold learning), to reconstruct the three-dimensional organizations of chromosomes by integrating Hi-C data with biophysical feasibility. Unlike previous methods, which explicitly assume specific relationships between Hi-C interaction frequencies and spatial distances, our model directly embeds the neighboring affinities from Hi-C space into 3D Euclidean space. Extensive validations demonstrated that GEM not only greatly outperformed other state-of-the-art modeling methods but also provided a physically and physiologically valid 3D representation of the organization of chromosomes. Furthermore, we apply, for the first time, the modeled chromatin structures to recover long-range genomic interactions missing from the original Hi-C data. PMID:29408992
Reconstructing spatial organizations of chromosomes through manifold learning.
Zhu, Guangxiang; Deng, Wenxuan; Hu, Hailin; Ma, Rui; Zhang, Sai; Yang, Jinglin; Peng, Jian; Kaplan, Tommy; Zeng, Jianyang
2018-05-04
Decoding the spatial organizations of chromosomes has crucial implications for studying eukaryotic gene regulation. Recently, chromosomal conformation capture based technologies, such as Hi-C, have been widely used to uncover the interaction frequencies of genomic loci in a high-throughput and genome-wide manner and provide new insights into the folding of three-dimensional (3D) genome structure. In this paper, we develop a novel manifold learning based framework, called GEM (Genomic organization reconstructor based on conformational Energy and Manifold learning), to reconstruct the three-dimensional organizations of chromosomes by integrating Hi-C data with biophysical feasibility. Unlike previous methods, which explicitly assume specific relationships between Hi-C interaction frequencies and spatial distances, our model directly embeds the neighboring affinities from Hi-C space into 3D Euclidean space. Extensive validations demonstrated that GEM not only greatly outperformed other state-of-the-art modeling methods but also provided a physically and physiologically valid 3D representation of the organization of chromosomes. Furthermore, we apply, for the first time, the modeled chromatin structures to recover long-range genomic interactions missing from the original Hi-C data.
The Formalism of Quantum Mechanics Specified by Covariance Properties
NASA Astrophysics Data System (ADS)
Nisticò, G.
2009-03-01
The known methods, due for instance to G.W. Mackey and T.F. Jordan, which exploit the transformation properties with respect to the Euclidean and Galilean groups to determine the formalism of the quantum theory of a localizable particle, fail when the considered transformations are not symmetries of the physical system. In the present work we show that the formalism of standard Quantum Mechanics for a particle without spin can be completely recovered by exploiting the covariance properties with respect to the group of Euclidean transformations, without requiring that these transformations be symmetries of the physical system.
An empirical analysis of the Ebola outbreak in West Africa
NASA Astrophysics Data System (ADS)
Khaleque, Abdul; Sen, Parongama
2017-02-01
The data for the Ebola outbreak that occurred in 2014-2016 in three countries of West Africa are analysed within a common framework. The analysis is made using the results of an agent-based Susceptible-Infected-Removed (SIR) model on a Euclidean network, where nodes at a distance l are connected with probability P(l) ∝ l^-δ, with δ determining the range of the interaction, in addition to nearest neighbors. The cumulative (total) density of the infected population here takes a functional form whose parameters depend on δ and the infection probability q, and this form is seen to fit the data well. Using the best-fitting parameters, the time at which the peak is reached is estimated and shown to be consistent with the data. We also show that in the Euclidean model one can choose δ and q values which reproduce the data for the three countries qualitatively; these choices are correlated with population density, control schemes and other factors. Comparing the real data and the results from the model, one can also estimate the size of the actual population susceptible to the disease. Rescaling the real data, a reasonably good quantitative agreement with the simulation results is obtained.
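An illustrative simulation of such an agent-based SIR model on a Euclidean (ring) network, with one long-range link per node drawn with probability P(l) ∝ l^-δ in addition to nearest neighbors, might look as follows; the parameters and the one-step recovery rule are simplifying assumptions, not the paper's exact setup:

```python
import numpy as np

def sir_euclidean_network(n=500, delta=2.0, q=0.3, t_max=60, seed=1):
    """Cumulative infected count over time for SIR spreading on a ring
    with power-law-distributed long-range shortcuts."""
    rng = np.random.default_rng(seed)
    ls = np.arange(2, n // 2)
    p = ls.astype(float) ** -delta
    p /= p.sum()                         # P(l) ~ l^-delta, normalized
    neigh = [set() for _ in range(n)]
    for i in range(n):
        neigh[i].update({(i - 1) % n, (i + 1) % n})   # nearest neighbors
        l = rng.choice(ls, p=p)                       # one long-range link
        j = (i + l) % n
        neigh[i].add(j)
        neigh[j].add(i)
    state = np.zeros(n, dtype=int)       # 0 = S, 1 = I, 2 = R
    state[0] = 1
    cumulative = [1]
    for _ in range(t_max):
        newly = []
        for i in np.where(state == 1)[0]:
            for j in neigh[i]:
                if state[j] == 0 and rng.random() < q:
                    newly.append(j)
            state[i] = 2                 # infected recover after one step
        for j in newly:
            state[j] = 1
        cumulative.append(int(np.sum(state > 0)))
    return cumulative
```

Sweeping δ and q in such a toy model reproduces the qualitative behavior discussed above: smaller δ (longer-range links) and larger q give faster, earlier-peaking epidemics.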
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1990-01-01
An expurgated upper bound on the event error probability of trellis-coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.
Gravitational decoupling and the Picard-Lefschetz approach
NASA Astrophysics Data System (ADS)
Brown, Jon; Cole, Alex; Shiu, Gary; Cottrell, William
2018-01-01
In this work, we consider tunneling between nonmetastable states in gravitational theories. Such processes arise in various contexts, e.g., in inflationary scenarios where the inflaton potential involves multiple fields or multiple branches. They are also relevant for bubble wall nucleation in some cosmological settings. However, we show that the transition amplitudes computed using the Euclidean method generally do not approach the corresponding field theory limit as M_p → ∞. This implies that in the Euclidean framework, there is no systematic expansion in powers of G_N for such processes. Such considerations also carry over directly to no-boundary scenarios involving Hawking-Turok instantons. In this note, we illustrate this failure of decoupling in the Euclidean approach with a simple model of axion monodromy and then argue that the situation can be remedied with a Lorentzian prescription such as the Picard-Lefschetz theory. As a proof of concept, we illustrate with a simple model how tunneling transition amplitudes can be calculated using the Picard-Lefschetz approach.
Modeling of Dissipation Element Statistics in Turbulent Non-Premixed Jet Flames
NASA Astrophysics Data System (ADS)
Denker, Dominik; Attili, Antonio; Boschung, Jonas; Hennig, Fabian; Pitsch, Heinz
2017-11-01
The dissipation element (DE) analysis is a method for analyzing and compartmentalizing turbulent scalar fields. DEs can be described by two parameters, namely the Euclidean distance l between their extremal points and the scalar difference Δϕ between the respective points. The joint probability density function (jPDF) of these two parameters P(Δϕ, l) is expected to suffice for a statistical reconstruction of the scalar field. In addition, reacting scalars show a strong correlation with these DE parameters in both premixed and non-premixed flames. Normalized DE statistics show a remarkable invariance towards changes in Reynolds numbers. This feature of DE statistics was exploited in a Boltzmann-type evolution equation based model for the probability density function (PDF) of the distance between the extremal points P(l) in isotropic turbulence. Later, this model was extended for the jPDF P(Δϕ, l) and then adapted for use in free shear flows. The effect of heat release on the scalar scales and DE statistics is investigated and an extended model for non-premixed jet flames is introduced, which accounts for the presence of chemical reactions. This new model is validated against a series of DNS of temporally evolving jet flames. European Research Council Project "Milestone".
Multispectral Palmprint Recognition Using a Quaternion Matrix
Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng
2012-01-01
Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix, then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively on the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest neighborhood classifier were employed for recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%. PMID:22666049
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in the equivalent rather than the original parameter space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters have a linear scale on the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
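The equivalent-dimension transform described above can be sketched in a few lines. This is a minimal illustration with a hypothetical catalogue of magnitudes and depths, using a plain empirical CDF in place of the paper's kernel estimator.

```python
import numpy as np

def equivalent_dimension(values, catalog):
    """Map parameter values to [0, 1] via the empirical CDF of the catalog.
    (The paper uses a non-parametric kernel estimate; the plain ECDF here
    is a simplified stand-in.)"""
    sorted_cat = np.sort(catalog)
    return np.searchsorted(sorted_cat, values, side="right") / len(sorted_cat)

# Hypothetical catalogue: magnitudes and depths of 500 recorded events
rng = np.random.default_rng(0)
mags = rng.exponential(1.0, 500) + 2.0
depths = rng.uniform(0.0, 30.0, 500)

# Two events mapped into the 2-D equivalent-dimension space
e1 = np.array([equivalent_dimension(3.1, mags), equivalent_dimension(12.0, depths)])
e2 = np.array([equivalent_dimension(4.0, mags), equivalent_dimension(25.0, depths)])
dist = np.linalg.norm(e1 - e2)   # Euclidean distance is now meaningful
```

Because every transformed coordinate lies in [0, 1], distances between events with very different native units (magnitude, kilometres, seconds) become directly comparable.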
Bercovich, A; Edan, Y; Alchanatis, V; Moallem, U; Parmet, Y; Honig, H; Maltz, E; Antler, A; Halachmi, I
2013-01-01
Body condition evaluation is a common tool to assess energy reserves of dairy cows and to estimate their fatness or thinness. This study presents a computer-vision tool that automatically estimates a cow's body condition score. Top-view images of 151 cows were collected on an Israeli research dairy farm using a digital still camera located at the entrance to the milking parlor. The cow's tailhead area and its contour were segmented and extracted automatically. Two types of features of the tailhead contour were extracted: (1) the angles and distances between 5 anatomical points; and (2) the cow signature, which is a 1-dimensional vector of the Euclidean distances from each point in the normalized tailhead contour to the shape center. Two methods were applied to describe the cow's signature and to reduce its dimension: (1) partial least squares regression, and (2) Fourier descriptors of the cow signature. Three prediction models were compared with manual scores of an expert. Results indicate that (1) it is possible to automatically extract and predict body condition from color images without any manual interference; and (2) Fourier descriptors of the cow's signature result in improved performance (R² = 0.77). Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
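The "cow signature" described above (Euclidean distances from contour points to the shape centre) is straightforward to sketch. The toy elliptical contour and the `shape_signature` helper below are illustrative, not the study's actual segmentation pipeline.

```python
import numpy as np

def shape_signature(contour):
    """1-D 'signature': Euclidean distance from each contour point to the
    shape centre, normalised by the maximum for scale invariance
    (illustrative sketch of the idea)."""
    contour = np.asarray(contour, dtype=float)
    centre = contour.mean(axis=0)
    d = np.linalg.norm(contour - centre, axis=1)
    return d / d.max()

# Toy contour: 64 points on an ellipse with semi-axes 3 and 2
t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
contour = np.c_[3.0 * np.cos(t), 2.0 * np.sin(t)]
sig = shape_signature(contour)
```

The resulting vector could then feed a dimension-reduction step such as partial least squares or Fourier descriptors, as in the study.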
Tensor manifold-based extreme learning machine for 2.5-D face recognition
NASA Astrophysics Data System (ADS)
Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin
2018-01-01
We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM, since it is a special instance of a symmetric positive definite (SPD) matrix that resides in a non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with the existing vector-based classifiers and distance matchers. Therefore, we bridge the gap between the GRCM and the extreme learning machine (ELM), a vector-based classifier, for the 2.5-D face recognition problem. We put forward a tensor manifold-compliant ELM and its two variants by embedding the SPD matrix randomly into reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pair-wise distance of the embedded data, we orthogonalize the random-embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.
Generating subtour elimination constraints for the TSP from pure integer solutions.
Pferschy, Ulrich; Staněk, Rostislav
2017-01-01
The traveling salesman problem (TSP) is one of the most prominent combinatorial optimization problems. Given a complete graph [Formula: see text] and non-negative distances d for every edge, the TSP asks for a shortest tour through all vertices with respect to the distances d. The method of choice for solving the TSP to optimality is a branch-and-cut approach. Usually the integrality constraints are relaxed first and all separation processes to identify violated inequalities are done on fractional solutions. In our approach we try to exploit the impressive performance of current ILP-solvers and work only with integer solutions, without ever interfering with fractional solutions. We stick to a very simple ILP-model and relax the subtour elimination constraints only. The resulting problem is solved to integer optimality, violated constraints (which are trivial to find) are added, and the process is repeated until a feasible solution is found. In order to speed up the algorithm we pursue several attempts to find as many relevant subtours as possible. These attempts are based on the clustering of vertices, with additional insights gained from empirical observations and random graph theory. Computational experiments are performed on test instances taken from the TSPLIB95 and on random Euclidean graphs.
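The "trivial to find" separation step can be sketched as follows: given an integer solution in which every vertex has degree two, the violated subtours are simply the connected components of the chosen edge set. This is a minimal sketch of that one step; the ILP solving itself is omitted.

```python
def find_subtours(edges, n):
    """Given the edges of an integer degree-2 solution, return the vertex
    sets of its cycles; each set of size < n yields one subtour-elimination
    cut to add before re-solving."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, tours = set(), []
    for s in range(n):
        if s in seen:
            continue
        comp, stack = set(), [s]      # plain DFS over the component
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v])
        seen |= comp
        tours.append(comp)
    return tours

# Two disjoint 3-cycles on 6 vertices: both are violated subtours
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
print(find_subtours(edges, 6))  # [{0, 1, 2}, {3, 4, 5}]
```

In the iterative scheme described above, one cut per component would be added and the ILP re-solved until a single tour covering all n vertices remains.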
Zhang, Ying-Ying; Yang, Cai; Zhang, Ping
2017-08-01
In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on the Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods (highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
Su, Mingzhe; Ma, Yan; Zhang, Xiangfen; Wang, Yan; Zhang, Yuping
2017-01-01
The traditional scale invariant feature transform (SIFT) method can extract distinctive features for image matching. However, it is extremely time-consuming in SIFT matching because of the use of the Euclidean distance measure. Recently, many binary SIFT (BSIFT) methods have been developed to improve matching efficiency; however, none of them is invariant to mirror reflection. To address these problems, in this paper, we present a horizontal or vertical mirror reflection invariant binary descriptor named MBR-SIFT, in addition to a novel image matching approach. First, 16 cells in the local region around the SIFT keypoint are reorganized, and then the 128-dimensional vector of the SIFT descriptor is transformed into a reconstructed vector according to eight directions. Finally, the MBR-SIFT descriptor is obtained after binarization and reverse coding. To improve the matching speed and accuracy, a fast matching algorithm that includes a coarse-to-fine two-step matching strategy in addition to two similarity measures for the MBR-SIFT descriptor are proposed. Experimental results on the UKBench dataset show that the proposed method not only solves the problem of mirror reflection, but also ensures desirable matching accuracy and speed. PMID:28542537
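The speed advantage of binary descriptors over Euclidean SIFT matching comes from replacing floating-point distance computations with XOR-plus-popcount Hamming distances. A minimal sketch with hypothetical 128-bit descriptors (not the MBR-SIFT layout itself):

```python
import numpy as np

# Hypothetical database of 1000 binary descriptors, each 128 bits packed
# into 16 bytes (the MBR-SIFT bit layout itself is not reproduced here)
rng = np.random.default_rng(1)
db = rng.integers(0, 256, size=(1000, 16), dtype=np.uint8)
query = db[42].copy()

# Hamming distance via XOR + popcount: a handful of integer ops per
# descriptor, far cheaper than a 128-D float Euclidean distance
popcount = np.unpackbits(db ^ query, axis=1).sum(axis=1)
best = int(popcount.argmin())
print(best)  # 42: the query matches its own database entry at distance 0
```

A coarse-to-fine scheme like the one in the paper would first rank candidates by such a cheap measure and only then apply a finer similarity check to the survivors.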
Talwar, Sameer; Roopwani, Rahul; Anderson, Carl A; Buckner, Ira S; Drennen, James K
2017-08-01
Near-infrared chemical imaging (NIR-CI) combines spectroscopy with digital imaging, enabling spatially resolved analysis and characterization of pharmaceutical samples. Hardness and relative density are critical quality attributes (CQA) that affect tablet performance. Intra-sample density or hardness variability can reveal deficiencies in formulation design or the tableting process. This study was designed to develop NIR-CI methods to predict spatially resolved tablet density and hardness. The method was implemented using a two-step procedure. First, NIR-CI was used to develop a relative density/solid fraction (SF) prediction method for pure microcrystalline cellulose (MCC) compacts only. A partial least squares (PLS) model for predicting SF was generated by regressing the spectra of certain representative pixels selected from each image against the compact SF. Pixel selection was accomplished with a threshold based on the Euclidean distance from the median tablet spectrum. Second, micro-indentation was performed on the calibration compacts to obtain hardness values. A univariate model was developed by relating the empirical hardness values to the NIR-CI predicted SF at the micro-indented pixel locations: this model generated spatially resolved hardness predictions for the entire tablet surface.
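The pixel-selection step above (thresholding on the Euclidean distance from the median tablet spectrum) can be sketched as follows; the image cube and the 50th-percentile threshold are hypothetical stand-ins for the study's data and tuning.

```python
import numpy as np

# Hypothetical NIR image cube: 20 x 20 pixels, 50 spectral channels
rng = np.random.default_rng(3)
cube = rng.normal(1.0, 0.05, size=(20, 20, 50))
spectra = cube.reshape(-1, 50)

# Representative-pixel selection: keep pixels whose spectrum lies close
# (in Euclidean distance) to the median tablet spectrum
median_spec = np.median(spectra, axis=0)
d = np.linalg.norm(spectra - median_spec, axis=1)
keep = d <= np.percentile(d, 50)   # hypothetical threshold choice
```

The spectra flagged by `keep` would then be regressed against the compact solid fraction in a PLS model, as the study describes.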
A PDE approach for quantifying and visualizing tumor progression and regression
NASA Astrophysics Data System (ADS)
Sintay, Benjamin J.; Bourland, J. Daniel
2009-02-01
Quantification of changes in tumor shape and size allows physicians the ability to determine the effectiveness of various treatment options, adapt treatment, predict outcome, and map potential problem sites. Conventional methods are often based on metrics such as volume, diameter, or maximum cross sectional area. This work seeks to improve the visualization and analysis of tumor changes by simultaneously analyzing changes in the entire tumor volume. This method utilizes an elliptic partial differential equation (PDE) to provide a roadmap of boundary displacement that does not suffer from the discontinuities associated with other measures such as Euclidean distance. Streamline pathways defined by Laplace's equation (a commonly used PDE) are used to track tumor progression and regression at the tumor boundary. Laplace's equation is particularly useful because it provides a smooth, continuous solution that can be evaluated with sub-pixel precision on variable grid sizes. Several metrics are demonstrated including maximum, average, and total regression and progression. This method provides many advantages over conventional means of quantifying change in tumor shape because it is observer independent, stable for highly unusual geometries, and provides an analysis of the entire three-dimensional tumor volume.
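A minimal numerical sketch of the underlying idea: solve Laplace's equation between two boundaries by Jacobi iteration, so that streamlines of the smooth potential connect the boundaries without the discontinuities of a raw Euclidean distance map. The strip geometry and grid size below are illustrative, not the paper's tumor geometry.

```python
import numpy as np

# Potential 0 on the left boundary (old tumor surface), 1 on the right
# (new surface); insulating (Neumann) top and bottom edges
n = 32
u = np.zeros((n, n))
u[:, -1] = 1.0
for _ in range(2000):
    # Jacobi update of the interior: average of the four neighbours
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    u[:, 0], u[:, -1] = 0.0, 1.0          # re-impose Dirichlet boundaries
    u[0, :], u[-1, :] = u[1, :], u[-2, :]  # Neumann top/bottom

# Streamlines follow this gradient field; boundary displacement is the
# arc length accumulated along each streamline
gy, gx = np.gradient(u)
```

For this strip the converged potential is the linear ramp between the boundaries; on an irregular tumor geometry the same iteration yields a smooth, sub-pixel-evaluable field, which is the property the method exploits.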
NASA Astrophysics Data System (ADS)
Mallast, U.; Gloaguen, R.; Geyer, S.; Rödiger, T.; Siebert, C.
2011-08-01
In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially suitable in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxiliary information and finally evaluated in terms of hydro-geological significance. Using the example of the western catchment of the Dead Sea (Israel/Palestine), the orientation and location of the differentiated lineaments are compared to characteristics of known structural features. We demonstrate that a strong correlation between lineaments and structural features exists. Using Euclidean distances between lineaments and wells provides an assessment criterion to evaluate the hydraulic significance of detected lineaments. Based on this analysis, we suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus provides significant information on groundwater movements. To validate the flow-paths we compare them to existing results of groundwater models that are based on well data.
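The well-to-lineament assessment reduces to a nearest-neighbour Euclidean distance query; a minimal sketch with hypothetical coordinates:

```python
import numpy as np

# Hypothetical coordinates (km): sampled points along lineament traces,
# and well locations to be scored
lineament_pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
wells = np.array([[1.0, 0.5], [2.0, 3.0]])

# Euclidean distance from each well to its nearest lineament point;
# wells close to a lineament score as more hydraulically significant
d = np.linalg.norm(wells[:, None, :] - lineament_pts[None, :, :], axis=2)
nearest = d.min(axis=1)
```

For large catalogues a spatial index (e.g. a k-d tree) would replace the brute-force distance matrix, but the criterion is the same.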
A Hyperspectral Image Classification Method Using ISOMAP and RVM
NASA Astrophysics Data System (ADS)
Chang, H.; Wang, T.; Fang, H.; Su, Y.
2018-04-01
Classification is one of the most significant applications of hyperspectral image processing and even remote sensing. Though various algorithms have been proposed to implement and improve this application, there are still drawbacks in traditional classification methods. Thus further investigations of some aspects, such as dimension reduction, data mining, and rational use of spatial information, should be developed. In this paper, we used a widely utilized global manifold learning approach, isometric feature mapping (ISOMAP), to address the intrinsic nonlinearities of hyperspectral images for dimension reduction. Considering the impropriety of Euclidean distance in spectral measurement, we applied the spectral angle (SA) as a substitute when constructing the neighbourhood graph. Then, the relevance vector machine (RVM) was introduced to implement classification instead of the support vector machine (SVM), for simplicity, generalization and sparsity. Therefore, a probability result could be obtained rather than a less convincing binary result. Moreover, taking into account the spatial information of the hyperspectral image, we employ a spatial vector formed by the ratios of different classes around the pixel. Finally, we combined the probability results and spatial factors with a criterion to decide the final classification result. To verify the proposed method, we implemented multiple experiments on standard hyperspectral images and compared the results with those of some other methods. The results and different evaluation indexes illustrate the effectiveness of our method.
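The substitution of spectral angle for Euclidean distance when building the ISOMAP neighbourhood graph can be illustrated directly; the toy spectra below are hypothetical.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra: an illumination-
    invariant substitute for Euclidean distance when constructing the
    ISOMAP neighbourhood graph."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

s1 = np.array([0.2, 0.4, 0.6])
s2 = 3.0 * s1                   # same material, brighter illumination
print(spectral_angle(s1, s2))   # ~0: pure scaling leaves the angle unchanged
```

Under Euclidean distance, `s1` and `s2` would appear far apart despite representing the same spectral shape; the angle measure removes that brightness dependence.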
A new method to cluster genomes based on cumulative Fourier power spectrum.
Dong, Rui; Zhu, Ziyue; Yin, Changchuan; He, Rong L; Yau, Stephen S-T
2018-06-20
Analyzing phylogenetic relationships using mathematical methods has always been of importance in bioinformatics. Quantitative research may interpret the raw biological data in a precise way. Multiple Sequence Alignment (MSA) is used frequently to analyze biological evolution, but is very time-consuming. When the scale of the data is large, alignment methods cannot finish the calculation in reasonable time. Therefore, we present a new method using moments of the cumulative Fourier power spectrum in clustering DNA sequences. Each sequence is translated into a vector in Euclidean space. Distances between the vectors can reflect the relationships between sequences. The mapping between the spectra and moment vectors is one-to-one, which means that no information is lost in the power spectra during the calculation. We cluster and classify several datasets, including Influenza A, primates, and human rhinovirus (HRV) datasets, to build up the phylogenetic trees. Results show that the new proposed cumulative Fourier power spectrum is much faster and more accurate than MSA and another alignment-free method known as k-mer. The research provides new insights into the study of phylogeny, evolution, and efficient DNA comparison algorithms for large genomes. The computer programs of the cumulative Fourier power spectrum are available at GitHub (https://github.com/YaulabTsinghua/cumulative-Fourier-power-spectrum). Copyright © 2018. Published by Elsevier B.V.
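A simplified sketch of the pipeline: per-base indicator sequences, FFT power spectrum, cumulative sum, then a few moments as a Euclidean feature vector. The moment definitions here are illustrative and may differ from the paper's exact formulas.

```python
import numpy as np

def cfps_vector(seq, n_moments=3):
    """Moments of the normalised cumulative Fourier power spectrum of the
    four base-indicator sequences (simplified sketch of the idea)."""
    vec = []
    for base in "ACGT":
        ind = np.array([1.0 if c == base else 0.0 for c in seq])
        power = np.abs(np.fft.fft(ind))[1:] ** 2   # drop the DC component
        cum = np.cumsum(power)
        if cum[-1] == 0.0:                         # base absent from seq
            vec.extend([0.0] * n_moments)
            continue
        cum = cum / cum[-1]                        # normalise to [0, 1]
        vec.extend(float((cum ** k).mean()) for k in range(1, n_moments + 1))
    return np.array(vec)

v1 = cfps_vector("ACGTACGTAACCGGTT")
v2 = cfps_vector("ACGTACGTAACCGGTA")
d = np.linalg.norm(v1 - v2)   # small Euclidean distance: similar sequences
```

Because each genome collapses to one short vector, pairwise comparison is a cheap Euclidean distance rather than a full alignment, which is where the speed advantage over MSA comes from.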
Classification of microscopic images of breast tissue
NASA Astrophysics Data System (ADS)
Ballerini, Lucia; Franzen, Lennart
2004-05-01
Breast cancer is the most common form of cancer among women. The diagnosis is usually performed by a pathologist, who subjectively evaluates tissue samples. The aim of our research is to develop techniques for the automatic classification of cancerous tissue, by analyzing histological samples of intact tissue taken with a biopsy. In our study, we considered 200 images presenting four different conditions: normal tissue, fibroadenosis, ductal cancer and lobular cancer. Methods to extract features have been investigated and described. One method is based on granulometries, which are size-shape descriptors widely used in mathematical morphology. Applications of granulometries lead to distribution functions whose moments are used as features. A second method is based on fractal geometry, which seems very suitable to quantify biological structures. The fractal dimension of binary images has been computed using the Euclidean distance mapping. Image classification has then been performed using the extracted features as input of a back-propagation neural network. A new method that combines genetic algorithms and morphological filters has also been investigated. In this case, the classification is based on a correlation measure. Very encouraging results have been obtained in pilot experiments using a small subset of images as the training set. Experimental results indicate the effectiveness of the proposed methods. Cancerous tissue was correctly classified in 92.5% of the cases.
Analysis of facial expressions in parkinson's disease through video-based automatic methods.
Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia
2017-04-01
The automatic analysis of facial expressions is an evolving field that finds several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), which is a major motor sign of this neurodegenerative illness. Facial bradykinesia consists of the reduction/loss of facial movements and emotional facial expressions, called hypomimia. In this work we propose an automatic method for studying facial expressions in PD patients relying on video-based methods. Seventeen Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after the imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects reported on average higher distances than PD patients across the tasks. This confirms that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could get a definite advantage from a real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.
Dynamic Hyperbolic Geometry: Building Intuition and Understanding Mediated by a Euclidean Model
ERIC Educational Resources Information Center
Moreno-Armella, Luis; Brady, Corey; Elizondo-Ramirez, Rubén
2018-01-01
This paper explores a deep transformation in mathematical epistemology and its consequences for teaching and learning. With the advent of non-Euclidean geometries, direct, iconic correspondences between physical space and the deductive structures of mathematical inquiry were broken. For non-Euclidean ideas even to become "thinkable" the…
Can A "Hyperspace" Really Exist?
NASA Technical Reports Server (NTRS)
Zampino, Edward J.
1999-01-01
The idea of "hyperspace" is suggested as a possible approach to faster-than-light (FTL) motion. A brief summary of a 1986 study on the Euclidean representation of space-time by the author is presented. Some new calculations on the relativistic momentum and energy of a free particle in Euclidean "hyperspace" are added and discussed. The superimposed energy-momentum curves for subluminal particles, tachyons, and particles in Euclidean "hyperspace" are presented. It is shown that in Euclidean "hyperspace", instead of a relativistic time dilation there is a time "compression" effect. Some fundamental questions are presented.
Spectral asymptotics of Euclidean quantum gravity with diff-invariant boundary conditions
NASA Astrophysics Data System (ADS)
Esposito, Giampiero; Fucci, Guglielmo; Kamenshchik, Alexander Yu; Kirsten, Klaus
2005-03-01
A general method is known to exist for studying Abelian and non-Abelian gauge theories, as well as Euclidean quantum gravity, at 1-loop level on manifolds with boundary. In the latter case, boundary conditions on metric perturbations h can be chosen to be completely invariant under infinitesimal diffeomorphisms, to preserve the invariance group of the theory and BRST symmetry. In the de Donder gauge, however, the resulting boundary-value problem for the Laplace-type operator acting on h is known to be self-adjoint but not strongly elliptic. The latter is a technical condition ensuring that a unique smooth solution of the boundary-value problem exists, which implies, in turn, that the global heat-kernel asymptotics yielding 1-loop divergences and 1-loop effective action actually exists. The present paper shows that, on the Euclidean 4-ball, only the scalar part of perturbative modes for quantum gravity is affected by the lack of strong ellipticity. Further evidence for lack of strong ellipticity, from an analytic point of view, is therefore obtained. Interestingly, three sectors of the scalar-perturbation problem remain elliptic, while lack of strong ellipticity is 'confined' to the remaining fourth sector. The integral representation of the resulting ζ-function asymptotics on the Euclidean 4-ball is also obtained; this remains regular at the origin by virtue of a spectral identity here obtained for the first time.
On chemical distances and shape theorems in percolation models with long-range correlations
NASA Astrophysics Data System (ADS)
Drewitz, Alexander; Ráth, Balázs; Sapozhnikov, Artëm
2014-08-01
In this paper, we provide general conditions on a one parameter family of random infinite subsets of {{Z}}^d to contain a unique infinite connected component for which the chemical distances are comparable to the Euclidean distance. In addition, we show that these conditions also imply a shape theorem for the corresponding infinite connected component. By verifying these conditions for specific models, we obtain novel results about the structure of the infinite connected component of the vacant set of random interlacements and the level sets of the Gaussian free field. As a byproduct, we obtain alternative proofs to the corresponding results for random interlacements in the work of Černý and Popov ["On the internal distance in the interlacement set," Electron. J. Probab. 17(29), 1-25 (2012)], and while our main interest is in percolation models with long-range correlations, we also recover results in the spirit of the work of Antal and Pisztora ["On the chemical distance for supercritical Bernoulli percolation," Ann Probab. 24(2), 1036-1048 (1996)] for Bernoulli percolation. Finally, as a corollary, we derive new results about the (chemical) diameter of the largest connected component in the complement of the trace of the random walk on the torus.
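The chemical distance discussed above is the graph (shortest-path) distance within the open cluster, which can exceed the Euclidean distance when paths must detour around closed sites. A minimal breadth-first-search sketch on a tiny 2-D site-percolation configuration:

```python
from collections import deque

def chemical_distance(open_sites, a, b):
    """Shortest-path (chemical) distance between sites a and b through the
    open sites of a 2-D lattice, or None if they are not connected."""
    if a not in open_sites or b not in open_sites:
        return None
    dist, frontier = {a: 0}, deque([a])
    while frontier:
        x, y = v = frontier.popleft()
        if v == b:
            return dist[v]
        for w in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if w in open_sites and w not in dist:
                dist[w] = dist[v] + 1
                frontier.append(w)
    return None  # b lies in a different cluster

# A 3x3 block with the centre site closed: the path must go around
sites = {(i, j) for i in range(3) for j in range(3)} - {(1, 1)}
print(chemical_distance(sites, (0, 1), (2, 1)))  # 4, vs Euclidean distance 2
```

The results summarized in the abstract concern exactly this ratio: conditions under which chemical distances stay comparable to Euclidean ones across the whole infinite cluster.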
Analysis of Uncertainty in a Middle-Cost Device for 3D Measurements in BIM Perspective
Sánchez, Alonso; Naranjo, José-Manuel; Jiménez, Antonio; González, Alfonso
2016-01-01
Medium-cost devices equipped with sensors are being developed to get 3D measurements. Some allow for generating geometric models and point clouds. Nevertheless, the accuracy of these measurements should be evaluated, taking into account the requirements of the Building Information Model (BIM). This paper analyzes the uncertainty in outdoor/indoor three-dimensional coordinate measures and point clouds (using Spherical Accuracy Standard (SAS) methods) for Eyes Map, a medium-cost tablet manufactured by e-Capture Research & Development Company, Mérida, Spain. To achieve it, in outdoor tests, by means of this device, the coordinates of targets were measured from 1 to 6 m and point clouds were obtained. Subsequently, these were compared to the coordinates of the same targets measured by a Total Station. The Euclidean average distance error was 0.005–0.027 m for measurements by Photogrammetry and 0.013–0.021 m for the point clouds. All of them satisfy the tolerance for point cloud acquisition (0.051 m) according to the BIM Guide for 3D Imaging (General Services Administration); similar results are obtained in the indoor tests, with values of 0.022 m. In this paper, we establish the optimal distances for the observations in both Photogrammetry and 3D Photomodeling modes (outdoor) and point out some working conditions to avoid in indoor environments. Finally, the authors discuss some recommendations for improving the performance and working methods of the device. PMID:27669245
Understanding multi-scale structural evolution in granular systems through gMEMS
NASA Astrophysics Data System (ADS)
Walker, David M.; Tordesillas, Antoinette
2013-06-01
We show how the rheological response of a material to applied loads can be systematically coded, analyzed and succinctly summarized, according to an individual grain's property (e.g. kinematics). Individual grains are considered as their own smart sensor akin to microelectromechanical systems (e.g. gyroscopes, accelerometers), each capable of recognizing their evolving role within self-organizing building block structures (e.g. contact cycles and force chains). A symbolic time series is used to represent their participation in such self-assembled building blocks and a complex network summarizing their interrelationship with other grains is constructed. In particular, relationships between grain time series are determined according to the information theory Hamming distance or the metric Euclidean distance. We then use topological distance to find network communities enabling groups of grains at remote physical metric distances in the material to share a classification. In essence grains with similar structural and functional roles at different scales are identified together. This taxonomy distills the dissipative structural rearrangements of grains down to its essential features and thus provides pointers for objective physics-based internal variable formalisms used in the construction of robust predictive continuum models.
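The Hamming distance between two grains' symbolic time series simply counts the time steps at which their structural roles differ; a toy sketch (the 0/1 force-chain coding here is hypothetical):

```python
import numpy as np

# Hypothetical symbolic series for two grains: 1 if the grain participates
# in a force chain at time t, 0 otherwise
g1 = np.array([0, 1, 1, 1, 0, 0, 1, 0])
g2 = np.array([0, 1, 0, 1, 0, 1, 1, 0])

# Hamming distance: number of time steps with differing roles
hamming = int((g1 != g2).sum())
```

Grains linked by small Hamming (or metric Euclidean) distances form the edges of the complex network whose communities the analysis then extracts.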
Ghosh, Payel; Chandler, Adam G; Altinmakas, Emre; Rong, John; Ng, Chaan S
2016-01-01
The aim of this study was to investigate the feasibility of shuttle-mode computed tomography (CT) technology for body perfusion applications by quantitatively assessing and correcting motion artifacts. Noncontrast shuttle-mode CT scans (10 phases, 2 nonoverlapping bed locations) were acquired from 4 patients on a GE 750HD CT scanner. Shuttling effects were quantified using Euclidean distances (between-phase and between-bed locations) of corresponding fiducial points on the shuttle and reference phase scans (prior to shuttle mode). Motion correction with nonrigid registration was evaluated using sum-of-squares differences and distances between centers of segmented volumes of interest on shuttle and reference images. Fiducial point analysis showed average shuttling motions of 0.85 ± 1.05 mm (between-bed) and 1.18 ± 1.46 mm (between-phase). The volume-of-interest analysis of the nonrigid registration results showed that, averaged over all cases, the sum-of-squares difference improved from 2950 to 597, the between-bed distance from 1.64 to 1.20 mm, and the between-phase distance from 2.64 to 1.33 mm. Shuttling effects introduced during shuttle-mode CT acquisitions can be computationally corrected for body perfusion applications.
Novel Perceptually Uniform Chromatic Space.
da Fonseca, María; Samengo, Inés
2018-06-01
Chromatically perceptive observers are endowed with a sense of similarity between colors. For example, two shades of green that are only slightly discriminable are perceived as similar, whereas other pairs of colors, for example, blue and yellow, typically elicit markedly different sensations. The notion of similarity need not be shared by different observers. Dichromat and trichromat subjects perceive colors differently, and two dichromats (or two trichromats, for that matter) may judge chromatic differences inconsistently. Moreover, there is ample evidence that different animal species sense colors diversely. To capture the subjective metric of color perception, here we construct a notion of distance in color space that is based on the physiology of the retina and is thereby individually tailored to different observers. By applying the Fisher metric to an analytical model of color representation, we construct a notion of distance that reproduces behavioral experiments of classical discrimination tasks. We then derive a coordinate transformation that defines a new chromatic space in which the Euclidean distance between any two colors is equal to the perceptual distance, as seen by one individual subject, endowed with an arbitrary number of color-sensitive photoreceptors, each with arbitrary absorption probability curves and appearing in arbitrary proportions.
Assignment of EC Numbers to Enzymatic Reactions with Reaction Difference Fingerprints
Hu, Qian-Nan; Zhu, Hui; Li, Xiaobing; Zhang, Manman; Deng, Zhe; Yang, Xiaoyan; Deng, Zixin
2012-01-01
The EC numbers represent enzymes and enzyme genes (genomic information), but they are also utilized as identifiers of enzymatic reactions (chemical information). In the present work (ECAssigner), our newly proposed reaction difference fingerprints (RDF) are applied to assign EC numbers to enzymatic reactions. Subtracting the fingerprints of the product molecules from the fingerprints of the reactant molecules generates the reaction difference fingerprint, which is then used to calculate the reaction Euclidean distance, a reaction similarity measure, between two reactions. The EC number of the most similar training reaction is assigned to an input reaction. For 5120 balanced enzymatic reactions, the RDF with a fingerprint length of 3 achieved cross-validation accuracies of 83.1%, 86.7%, and 92.6% at the sub-subclass, subclass, and main class levels, respectively. In contrast to three previously published methods, ECAssigner is the first fully automatic server for EC number assignment. The EC assignment system (ECAssigner) is freely available via: http://cadd.whu.edu.cn/ecassigner/. PMID:23285222
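The pipeline described above (reactant fingerprints minus product fingerprints, then nearest-neighbour assignment by reaction Euclidean distance) can be sketched as follows; the fingerprint vectors and EC labels below are toy values, not the paper's molecular descriptors:

```python
import numpy as np

def reaction_difference_fingerprint(reactant_fps, product_fps):
    """Sum of reactant fingerprints minus sum of product fingerprints."""
    return np.sum(reactant_fps, axis=0) - np.sum(product_fps, axis=0)

def assign_ec(query_rdf, training_rdfs, training_ecs):
    """Nearest-neighbour EC assignment by reaction Euclidean distance."""
    dists = np.linalg.norm(training_rdfs - query_rdf, axis=1)
    return training_ecs[int(np.argmin(dists))]

# Hypothetical training RDFs (real ones are built from substructure counts).
train = np.array([[2.0, -1.0, 0.0], [0.0, 3.0, -2.0]])
ecs = ["1.1.1", "2.3.1"]
query = np.array([2.0, -1.0, 1.0])
print(assign_ec(query, train, ecs))  # 1.1.1
```

The signed difference lets the fingerprint capture what the reaction consumes versus produces.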
Descriptive Statistics and Cluster Analysis for Extreme Rainfall in Java Island
NASA Astrophysics Data System (ADS)
E Komalasari, K.; Pawitan, H.; Faqih, A.
2017-03-01
This study aims to describe regional patterns of extreme rainfall based on maximum daily rainfall for the period 1983 to 2012 in Java Island. Descriptive statistical analysis was performed to obtain the centralization, variation and distribution of the maximum precipitation data. The mean and median are used to measure the central tendency of the data, while the interquartile range (IQR) and standard deviation are used to measure its variation. In addition, skewness and kurtosis are used to describe the shape of the distribution of the rainfall data. Cluster analysis using squared Euclidean distance and Ward's method is applied to perform regional grouping. Results of this study show that the mean of maximum daily rainfall in the Java region during the period 1983-2012 is around 80-181 mm, with medians between 75-160 mm and standard deviations between 17 and 82 mm. Cluster analysis produces four clusters and shows that the western area of Java tends to have higher annual maxima of daily rainfall than the northern area, and greater variability in annual maximum values.
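The grouping step above can be sketched with SciPy, whose Ward linkage minimises the increase in within-cluster sum of squared Euclidean distances at each merge; the station features below are hypothetical, not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical annual-maximum daily rainfall features per station (mm): [mean, std].
stations = np.array([
    [170.0, 70.0],   # wetter, high-variability stations
    [165.0, 75.0],
    [ 85.0, 20.0],   # drier, low-variability stations
    [ 90.0, 18.0],
])

# Ward's method: each merge minimises the growth of within-cluster variance.
Z = linkage(stations, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # the two wet stations share one label, the two dry ones the other
```

Cutting the dendrogram at a larger `t` would reproduce the study's four-cluster regionalisation.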
Knock detection system to improve petrol engine performance, using microphone sensor
NASA Astrophysics Data System (ADS)
Sujono, Agus; Santoso, Budi; Juwana, Wibawa Endra
2017-01-01
Increasing the power and efficiency of spark ignition (petrol) engines always runs up against the problem of knock; indeed, the characteristics of the engine itself are determined by the onset of knock. To date, the knocking problem has not been solved completely. Knock is governed by principal factors including engine rotation speed, load (throttle opening) and spark advance (ignition timing). In this research, the engine is mounted on an engine test bed (ETB) equipped with the necessary sensors. A new knock detection method based on pattern recognition is used: the knock sound is captured by a microphone sensor, conditioned by an active filter, summarized by regression of the normalized envelope function, and identified by calculating the Euclidean distance to reference patterns. The system is implemented with a microcontroller running a fuzzy logic ignition controller (FLIC), which sets the proper spark advance in accordance with operating conditions. This system can improve engine performance by approximately 15%.
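The Euclidean-distance identification step can be sketched as nearest-template classification of the normalized sound envelope; the templates and sample below are hypothetical, not the paper's measured envelopes:

```python
import numpy as np

def classify_knock(envelope, knock_template, normal_template):
    """Label a normalized sound envelope by its nearest template (Euclidean distance)."""
    d_knock = np.linalg.norm(envelope - knock_template)
    d_normal = np.linalg.norm(envelope - normal_template)
    return "knock" if d_knock < d_normal else "normal"

# Hypothetical normalized envelopes: knock shows a sharp early peak that decays fast.
knock_tpl = np.array([1.0, 0.6, 0.3, 0.1, 0.05])
normal_tpl = np.array([0.3, 0.35, 0.3, 0.25, 0.2])
sample = np.array([0.9, 0.55, 0.35, 0.12, 0.06])
print(classify_knock(sample, knock_tpl, normal_tpl))  # knock
```

A controller can then retard the spark advance whenever the knock label fires.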
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Giampaolo, Salvatore M.; Illuminati, Fabrizio
2007-10-01
We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed adapting to continuous variables a formalism based on single subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1×M bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.
Boundary conformal anomalies on hyperbolic spaces and Euclidean balls
NASA Astrophysics Data System (ADS)
Rodriguez-Gomez, Diego; Russo, Jorge G.
2017-12-01
We compute conformal anomalies for conformal field theories with free conformal scalars and massless spin 1/2 fields in hyperbolic space ℍ^d and in the ball B^d, for 2≤d≤7. These spaces are related by a conformal transformation. In even dimensional spaces, the conformal anomalies on ℍ^{2n} and B^{2n} are shown to be identical. In odd dimensional spaces, the conformal anomaly on B^{2n+1} comes from a boundary contribution, which exactly coincides with that of ℍ^{2n+1} provided one identifies the UV short-distance cutoff on B^{2n+1} with the inverse large-distance IR cutoff on ℍ^{2n+1}, just as prescribed by the conformal map. As an application, we determine, for the first time, the conformal anomaly coefficients multiplying the Euler characteristic of the boundary for scalars and half-spin fields with various boundary conditions in d = 5 and d = 7.
On the mixing time of geographical threshold graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan
In this paper, we study the mixing time of random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). We specifically study the mixing times of random walks on 2-dimensional GTGs near the connectivity threshold. We provide a set of criteria on the distribution of vertex weights that guarantees that the mixing time is Θ(n log n).
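A GTG instance can be sketched as follows. The additive threshold function `(w_i + w_j) / dist(i, j) >= theta` is one common choice in the GTG literature and is assumed here for illustration; the abstract does not fix a particular form:

```python
import numpy as np

def geographical_threshold_graph(positions, weights, theta):
    """Connect nodes i, j when (w_i + w_j) / dist(i, j) >= theta.

    This additive form is one common threshold-function choice; the GTG
    model admits other combinations of weights and distance.
    """
    n = len(positions)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            if d > 0 and (weights[i] + weights[j]) / d >= theta:
                edges.append((i, j))
    return edges

rng = np.random.default_rng(0)
pos = rng.random((50, 2))          # nodes scattered in the unit square
w = rng.exponential(1.0, 50)       # random node weights
edges = geographical_threshold_graph(pos, w, theta=8.0)
print(len(edges))
```

Heavier-weighted nodes reach further, which is what makes the model richer than a plain RGG.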
Bayes classification of terrain cover using normalized polarimetric data
NASA Technical Reports Server (NTRS)
Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.
1988-01-01
The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.
Faithful Squashed Entanglement
NASA Astrophysics Data System (ADS)
Brandão, Fernando G. S. L.; Christandl, Matthias; Yard, Jon
2011-09-01
Squashed entanglement is a measure for the entanglement of bipartite quantum states. In this paper we present a lower bound for squashed entanglement in terms of a distance to the set of separable states. This implies that squashed entanglement is faithful, that is, it is strictly positive if and only if the state is entangled. We derive the lower bound on squashed entanglement from a lower bound on the quantum conditional mutual information which is used to define squashed entanglement. The quantum conditional mutual information corresponds to the amount by which strong subadditivity of von Neumann entropy fails to be saturated. Our result therefore sheds light on the structure of states that almost satisfy strong subadditivity with equality. The proof is based on two recent results from quantum information theory: the operational interpretation of the quantum mutual information as the optimal rate for state redistribution and the interpretation of the regularised relative entropy of entanglement as an error exponent in hypothesis testing. The distance to the set of separable states is measured in terms of the LOCC norm, an operationally motivated norm giving the optimal probability of distinguishing two bipartite quantum states, each shared by two parties, using any protocol formed by local quantum operations and classical communication (LOCC) between the parties. A similar result for the Frobenius or Euclidean norm follows as an immediate consequence. The result has two applications in complexity theory. The first application is a quasipolynomial-time algorithm solving the weak membership problem for the set of separable states in LOCC or Euclidean norm. The second application concerns quantum Merlin-Arthur games. Here we show that multiple provers are not more powerful than a single prover when the verifier is restricted to LOCC operations thereby providing a new characterisation of the complexity class QMA.
San Segundo, Eugenia; Tsanas, Athanasios; Gómez-Vilda, Pedro
2017-01-01
There is a growing consensus that hybrid approaches are necessary for successful speaker characterization in Forensic Speaker Comparison (FSC); hence this study explores the forensic potential of voice features combining source and filter characteristics. The former relate to the action of the vocal folds while the latter reflect the geometry of the speaker's vocal tract. This set of features has been extracted from pause fillers, which are long enough for robust feature estimation while spontaneous enough to be extracted from voice samples in real forensic casework. Speaker similarity was measured using standardized Euclidean distances (ED) between pairs of speakers: 54 different-speaker (DS) comparisons, 54 same-speaker (SS) comparisons and 12 comparisons between monozygotic twins (MZ). Results revealed that the differences between DS and SS comparisons were significant in both high-quality and telephone-filtered recordings, with no false rejections and limited false acceptances; this finding suggests that this set of voice features is highly speaker-dependent and therefore forensically useful. Mean ED for MZ pairs lies between the average ED for SS comparisons and DS comparisons, as expected according to the literature on twin voices. Specific cases of MZ speakers with very high ED (i.e. strong dissimilarity) are discussed in the context of sociophonetic and twin studies. A preliminary simplification of the Vocal Profile Analysis (VPA) Scheme is proposed, which enables the quantification of voice quality features in the perceptual assessment of speaker similarity, and allows for the calculation of perceptual-acoustic correlations. The adequacy of z-score normalization for this study is also discussed, as well as the relevance of heat maps for detecting the so-called phantoms in recent approaches to the biometric menagerie. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
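The standardized Euclidean distance used above divides each squared feature difference by that feature's variance before summing, so features on large scales do not dominate. A minimal sketch with SciPy's `seuclidean`; the feature vectors and variances below are hypothetical stand-ins for acoustic source/filter features:

```python
import numpy as np
from scipy.spatial.distance import seuclidean

# Hypothetical acoustic feature vectors for two speakers.
speaker_a = np.array([120.0, 0.8, 1500.0])
speaker_b = np.array([135.0, 0.6, 1620.0])

# Per-feature variance over the whole speaker set; standardization stops
# large-valued features (e.g. formant frequencies) from dominating.
feature_var = np.array([100.0, 0.01, 10000.0])

d = seuclidean(speaker_a, speaker_b, feature_var)
print(d)  # sqrt(2.25 + 4.0 + 1.44)
```

Without the variance weighting, the third feature alone would account for almost the entire distance.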
Xu, Erqi; Zhang, Hongqi; Li, Mengxian
2013-08-01
The processes of karst rocky desertification (KRD) have been found to cause the most severe environmental degradation in southwestern China. Understanding the driving forces that cause KRD is essential for managing and restoring the areas that it impacts. Studies of the human driving forces of KRD are limited to the county level, a specific administrative unit in China; census data are acquired at this scale, which can lead to scale biases. Changshun County is studied here as a representative area, and anthropogenic influences in the county are accounted for by using Euclidean distances for the proximity to roads and settlements. We propose a standard coefficient of human influence (SOI) that standardizes the Euclidean distances for different KRD transformations to compare the effects of human activities in different areas. In Changshun County, the individual influences of roads and settlements share similar characteristics. The SOIs of improved KRD transformation types are almost all negative, while the SOIs of deteriorated types are almost all positive, except for one type of KRD that transitions to extremely severe KRD. The results indicated that the distribution and evolution of the KRD areas from 2000 to 2010 in Changshun were affected by human activities both positively (e.g., KRD restoration projects) and negatively (e.g., through intense and irrational land use). Our results demonstrate that the spatial techniques and SOI used in this study can effectively incorporate information concerning human influences and internal KRD transformations. This provides a suitable approach for studying the relationships between human activities and KRD processes at fine scales. Copyright © 2013 Elsevier B.V. All rights reserved.
The role of river drainages in shaping the genetic structure of capybara populations.
Byrne, María Soledad; Quintana, Rubén Darío; Bolkovic, María Luisa; Cassini, Marcelo H; Túnez, Juan Ignacio
2015-12-01
The capybara, Hydrochoerus hydrochaeris, is an herbivorous rodent widely distributed throughout most South American wetlands that lives closely associated with aquatic environments. In this work, we studied the genetic structure of the capybara throughout part of its geographic range in Argentina using a DNA fragment of the mitochondrial control region. Haplotypes obtained were compared with those available for populations from Paraguay and Venezuela. We found 22 haplotypes in 303 individuals. Hierarchical AMOVAs were performed to evaluate the role of river drainages in shaping the genetic structure of capybara populations at the regional and basin scales. In addition, two landscape genetic models, isolation by distance and isolation by resistance, were used to test whether genetic distance was associated with Euclidean distance (i.e. isolation by distance) or river corridor distance (i.e. isolation by resistance) at the basin scale. At the regional scale, the results of the AMOVA grouping populations by major river basins showed significant differences between them. At the basin scale, we also found significant differences between sub-basins in Paraguay, together with a significant correlation between genetic and river corridor distance. For Argentina and Venezuela, results were not significant. These results suggest that in Paraguay, the current genetic structure of capybaras is associated with the lack of dispersal corridors through permanent rivers. In contrast, the limited structuring in Argentina and Venezuela is likely the result of periodic flooding facilitating dispersal.
NASA Astrophysics Data System (ADS)
Arimbi, Mentari Dian; Bustamam, Alhadi; Lestari, Dian
2017-03-01
Data clustering can be executed through partition or hierarchical methods for many types of data, including DNA sequences. Both clustering methods can be combined by processing a partition algorithm in the first level and a hierarchical one in the second level, called hybrid clustering. In the partition phase some popular methods such as PAM, K-means, or Fuzzy c-means could be applied. In this study we selected partitioning around medoids (PAM) for our partition stage. Following the partition algorithm, in the hierarchical stage we applied the divisive analysis algorithm (DIANA) in order to obtain more specific cluster and sub-cluster structures. The number of main clusters is determined using the Davies-Bouldin Index (DBI): we choose the number of clusters that minimizes the DBI value. In this work, we conduct the clustering on 1252 HPV DNA sequences from GenBank. Feature extraction is performed first, followed by normalization and genetic distance calculation using Euclidean distance. In our implementation, we used the hybrid PAM and DIANA approach in the R open source programming tool. We obtained 3 main clusters with an average DBI value of 0.979 using PAM in the first stage. After executing DIANA in the second stage, we obtained 4 sub-clusters for Cluster-1, 9 sub-clusters for Cluster-2 and 2 sub-clusters for Cluster-3, with DBI values of 0.972, 0.771, and 0.768 for each main cluster, respectively. Since the second stage produces lower DBI values than the first stage, we conclude that this hybrid approach can improve the accuracy of our clustering results.
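The DBI-based choice of the number of main clusters can be sketched as follows. PAM is not in scikit-learn's core, so this sketch substitutes `KMeans` as the partitioner (an assumption, not the study's method) and scores each candidate cluster count with `davies_bouldin_score` on synthetic data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(1)
# Hypothetical normalized sequence features with three well-separated latent groups.
X = np.vstack([rng.normal(c, 0.3, size=(40, 4)) for c in (0.0, 3.0, 6.0)])

# Choose the number of main clusters by minimizing the Davies-Bouldin index,
# as the study does: lower DBI means tighter, better-separated clusters.
scores = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)
best_k = min(scores, key=scores.get)
print(best_k)  # 3 for this well-separated toy data
```

A second, divisive pass within each main cluster would then mimic the DIANA refinement stage.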
Spacetime and Euclidean geometry
NASA Astrophysics Data System (ADS)
Brill, Dieter; Jacobson, Ted
2006-04-01
Using only the principle of relativity and Euclidean geometry we show in this pedagogical article that the square of proper time or length in a two-dimensional spacetime diagram is proportional to the Euclidean area of the corresponding causal domain. We use this relation to derive the Minkowski line element by two geometric proofs of the spacetime Pythagoras theorem.
Students Discovering Spherical Geometry Using Dynamic Geometry Software
ERIC Educational Resources Information Center
Guven, Bulent; Karatas, Ilhan
2009-01-01
Dynamic geometry software (DGS) such as Cabri and Geometers' Sketchpad has been regularly used worldwide for teaching and learning Euclidean geometry for a long time. The DGS with its inductive nature allows students to learn Euclidean geometry via explorations. However, with respect to non-Euclidean geometries, do we need to introduce them to…
A Case Example of Insect Gymnastics: How Is Non-Euclidean Geometry Learned?
ERIC Educational Resources Information Center
Junius, Premalatha
2008-01-01
The focus of the article is on the complex cognitive process involved in learning the concept of "straightness" in Non-Euclidean geometry. Learning new material is viewed through a conflict resolution framework, as a student questions familiar assumptions understood in Euclidean geometry. A case study reveals how mathematization of the straight…
Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei
2012-12-01
Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
Cordova, James S; Schreibmann, Eduard; Hadjipanayis, Costas G; Guo, Ying; Shu, Hui-Kuo G; Shim, Hyunsuk; Holder, Chad A
2014-01-01
Standard-of-care therapy for glioblastomas, the most common and aggressive primary adult brain neoplasm, is maximal safe resection, followed by radiation and chemotherapy. Because maximizing resection may be beneficial for these patients, improving tumor extent of resection (EOR) with methods such as intraoperative 5-aminolevulinic acid fluorescence-guided surgery (FGS) is currently under evaluation. However, it is difficult to reproducibly judge EOR in these studies due to the lack of reliable tumor segmentation methods, especially for postoperative magnetic resonance imaging (MRI) scans. Therefore, a reliable, easily distributable segmentation method is needed to permit valid comparison, especially across multiple sites. We report a segmentation method that combines versatile region-of-interest blob generation with automated clustering methods. We applied this to glioblastoma cases undergoing FGS and matched controls to illustrate the method's reliability and accuracy. Agreement and interrater variability between segmentations were assessed using the concordance correlation coefficient, and spatial accuracy was determined using the Dice similarity index and mean Euclidean distance. Fuzzy C-means clustering with three classes was the best performing method, generating volumes with high agreement with manual contouring and high interrater agreement preoperatively and postoperatively. The proposed segmentation method allows tumor volume measurements of contrast-enhanced T1-weighted images in the unbiased, reproducible fashion necessary for quantifying EOR in multicenter trials. PMID:24772206
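The Dice similarity index used above to assess spatial accuracy is straightforward to compute; the masks below are hypothetical 1D stand-ins for binary segmentation volumes:

```python
import numpy as np

def dice_index(mask_a, mask_b):
    """Dice similarity between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical stand-ins for a manual and an automated tumour segmentation.
manual = [1, 1, 1, 0, 0]
auto = [1, 1, 0, 0, 0]
print(dice_index(manual, auto))  # 0.8
```

A Dice of 1 means perfect overlap; the mean Euclidean distance between volume centers complements it by capturing localization error.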
Yadav, Bechu K V; Nandy, S
2015-05-01
Mapping forest biomass is fundamental for estimating CO₂ emissions, and for planning and monitoring forests and ecosystem productivity. The present study attempted to map aboveground woody biomass (AGWB) by integrating forest inventory, remote sensing and geostatistical techniques, viz., direct radiometric relationships (DRR), k-nearest neighbours (k-NN) and cokriging (CoK), and to evaluate their accuracy. A part of the Timli Forest Range of Kalsi Soil and Water Conservation Division, Uttarakhand, India was selected for the present study. Stratified random sampling was used to collect biophysical data from 36 sample plots of 0.1 ha (31.62 m × 31.62 m) size. Species-specific volumetric equations were used for calculating volume, which was multiplied by specific gravity to get biomass. Three forest-type density classes, viz. 10-40, 40-70 and >70% of Shorea robusta forest, and four non-forest classes were delineated using on-screen visual interpretation of IRS P6 LISS-III data of December 2012. The volume in different strata of forest-type density ranged from 189.84 to 484.36 m³ ha⁻¹. The total growing stock of the forest was found to be 2,024,652.88 m³. The AGWB ranged from 143 to 421 Mg ha⁻¹. Spectral bands and vegetation indices were used as independent variables and biomass as the dependent variable for DRR, k-NN and CoK. After validation and comparison, the k-NN method with Mahalanobis distance (root mean square error (RMSE) = 42.25 Mg ha⁻¹) was found to be the best method, followed by fuzzy distance and Euclidean distance with RMSE of 44.23 and 45.13 Mg ha⁻¹ respectively. DRR was found to be the least accurate method, with an RMSE of 67.17 Mg ha⁻¹. The study highlighted the potential of integrating forest inventory, remote sensing and geostatistical techniques for forest biomass mapping.
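The k-NN estimator described above averages the biomass of the spectrally nearest training plots, and RMSE compares the candidate distance metrics. This sketch uses plain Euclidean distance and hypothetical plot data (the study's best result used Mahalanobis distance):

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """k-NN estimate: mean biomass of the k spectrally nearest training plots
    (Euclidean distance on spectral features)."""
    d = np.linalg.norm(train_X - query, axis=1)
    idx = np.argsort(d)[:k]
    return float(np.mean(train_y[idx]))

def rmse(pred, obs):
    """Root mean square error between predicted and observed biomass."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# Hypothetical plots: [spectral band, vegetation index] -> biomass (Mg/ha).
X = np.array([[0.20, 0.50], [0.25, 0.55], [0.60, 0.90], [0.65, 0.85]])
y = np.array([150.0, 160.0, 400.0, 420.0])
print(knn_predict(X, y, np.array([0.22, 0.52]), k=2))  # 155.0
```

Swapping the distance in `knn_predict` (e.g. for a Mahalanobis form) is what the study's metric comparison amounts to.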
Overlapping communities detection based on spectral analysis of line graphs
NASA Astrophysics Data System (ADS)
Gui, Chun; Zhang, Ruisheng; Hu, Rongjing; Huang, Guoming; Wei, Jiaxuan
2018-05-01
Communities in networks often overlap, with one vertex belonging to several clusters. Meanwhile, many networks show hierarchical structure, such that communities are recursively grouped into a hierarchical organization. In order to obtain overlapping communities from a global hierarchy of vertices, a new algorithm (named SAoLG) is proposed to build the hierarchical organization along with detecting the overlap of community structure. SAoLG applies spectral analysis to line graphs to unify the overlap and hierarchical structure of the communities. In order to avoid the limitations of absolute distances such as the Euclidean distance, SAoLG employs the angular distance to compute the similarity between vertices. Furthermore, we introduce a slightly improved partition density to evaluate the quality of community structure and use it to obtain more reasonable and sensible community numbers. The proposed SAoLG algorithm achieves a balance between overlap and hierarchy by applying spectral analysis to edge community detection. The experimental results on one standard network and six real-world networks show that the SAoLG algorithm achieves higher modularity and more reasonable community numbers than Ahn's algorithm and the classical CPM and GN algorithms.
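The preference above for angular over absolute (Euclidean) distance can be illustrated directly: the angle between two vectors ignores their magnitudes. The vectors below are hypothetical vertex feature vectors, e.g. rows of a spectral embedding:

```python
import numpy as np

def angular_distance(u, v):
    """Angle between two feature vectors; unlike Euclidean distance,
    it is insensitive to vector magnitude."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

a = [1.0, 0.0]
b = [10.0, 0.0]   # same direction, very different magnitude
c = [0.0, 1.0]    # orthogonal direction
print(angular_distance(a, b))  # 0.0
print(angular_distance(a, c))  # pi/2
```

Under Euclidean distance `a` and `b` would look far apart (distance 9) despite pointing the same way.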
Yang, Yao Ming; Jia, Ruo; Xun, Hui; Yang, Jie; Chen, Qiang; Zeng, Xiang Guang; Yang, Ming
2018-02-21
Simulium quinquestriatum Shiraki (Diptera: Simuliidae), a human-biting fly that is distributed widely across Asia, is a vector for multiple pathogens. However, the larval development of this species is poorly understood. In this study, we determined the number of instars in this pest using three batches of field-collected larvae from Guiyang, Guizhou, China. The postgenal length, head capsule width, mandibular phragma length, and body length of 773 individuals were measured, and k-means clustering was used for instar grouping. Four distance measures (Manhattan, Euclidean, Chebyshev, and Canberra) were evaluated. The reported instar numbers, ranging from 4 to 11, were set as initial cluster centers for k-means clustering. The Canberra distance yielded reliable instar grouping, consistent with the first instar as characterized by egg bursters and with prepupae as characterized by dark histoblasts. Females and males of the last cluster of larvae were identified using Feulgen-stained gonads. Morphometric differences between the two sexes were not significant. Validation was performed using the Brooks-Dyar and Crosby rules, revealing that the larval stage of S. quinquestriatum comprises eight instars.
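The Canberra distance that yielded reliable grouping, and the assignment step of k-means under it, can be sketched as follows; the cluster centers and larval measurements below are hypothetical, not the study's data:

```python
import numpy as np

def canberra(u, v):
    """Canberra distance: sum of |u_i - v_i| / (|u_i| + |v_i|); each feature
    contributes a relative, scale-free term."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    denom = np.abs(u) + np.abs(v)
    return float(np.where(denom > 0, np.abs(u - v) / np.where(denom > 0, denom, 1.0), 0.0).sum())

def assign_instars(larvae, centers):
    """Assign each larva to the nearest cluster center under Canberra distance,
    as in the assignment step of the k-means grouping."""
    return [int(np.argmin([canberra(x, c) for c in centers])) for x in larvae]

# Hypothetical measurements (mm): [head capsule width, body length].
centers = np.array([[0.2, 1.0], [0.5, 3.0]])
larvae = np.array([[0.21, 1.10], [0.48, 2.90], [0.19, 0.95]])
print(assign_instars(larvae, centers))  # [0, 1, 0]
```

Because each Canberra term is normalized by magnitude, the small head-width feature is not swamped by the much larger body-length feature.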
Hilbert-Schmidt quantum coherence in multi-qudit systems
NASA Astrophysics Data System (ADS)
Maziero, Jonas
2017-11-01
Using Bloch's parametrization for qudits (d-level quantum systems), we write the Hilbert-Schmidt distance (HSD) between two generic n-qudit states as a Euclidean distance between two vectors of observable mean values in ℝ^{∏_{s=1}^{n} d_s² − 1}, where d_s is the dimension of qudit s. Then, applying the generalized Gell-Mann matrices to generate SU(d_s), we use that result to obtain the Hilbert-Schmidt quantum coherence (HSC) of n-qudit systems. As examples, we consider in detail one-qubit, one-qutrit, two-qubit, and two copies of one-qubit states. In this last case, the possibility of controlling local and non-local coherences by tuning local populations is studied, and the contrasting behaviors of the HSC, the l1-norm coherence, and the relative entropy of coherence in this regard are noted. We also investigate the decoherent dynamics of these coherence functions under the action of qutrit dephasing and dissipation channels. Finally, we analyze the non-monotonicity of the HSD under tensor products and report the first instance of a consequence (for coherence quantification) of this kind of property of a quantum distance measure.
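For a single qubit the relation above can be checked numerically: writing ρ = (I + r·σ)/2 in Bloch form, the HSD √Tr[(ρ − σ)²] equals the Euclidean distance |r₁ − r₂| of the Bloch vectors up to a constant factor 1/√2. A minimal sketch:

```python
import numpy as np

def hilbert_schmidt_distance(rho, sigma):
    """HSD: sqrt(Tr[(rho - sigma)^2]) for Hermitian density matrices."""
    delta = rho - sigma
    return float(np.sqrt(np.real(np.trace(delta @ delta))))

# Pauli matrices and the Bloch parametrization rho = (I + r . sigma) / 2.
pauli = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def bloch_state(r):
    return 0.5 * (np.eye(2, dtype=complex) + sum(ri * s for ri, s in zip(r, pauli)))

r1, r2 = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
hsd = hilbert_schmidt_distance(bloch_state(r1), bloch_state(r2))
# For qubits, HSD = |r1 - r2| / sqrt(2): a rescaled Euclidean Bloch distance.
print(hsd, np.linalg.norm(r1 - r2) / np.sqrt(2))  # both equal 1.0 here
```

The same identification, with generalized Gell-Mann matrices in place of the Paulis, underlies the multi-qudit result.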
NASA Astrophysics Data System (ADS)
Bachrudin, A.; Mohamed, N. B.; Supian, S.; Sukono; Hidayat, Y.
2018-03-01
Application of existing geostatistical theory to stream networks provides a number of interesting and challenging problems. Most statistical tools in traditional geostatistics, such as autocovariance functions, are based on Euclidean distance, but this is not permissible for stream data, which involve stream distance. To overcome this, an autocovariance model based on stream distance is developed using a convolution kernel approach (moving average construction). Spatial models for stream networks are widely used for environmental monitoring on river networks. In a case study of a river in the province of West Java, the objective of this paper is to analyze the predictive capability of ordinary kriging for two environmental variables, potential of hydrogen (pH) and temperature. The empirical results show: (1) the best-fit autocovariance function for both temperature and pH of the Citarik River is linear, which also yields the smallest root mean squared prediction error (RMSPE); (2) the spatial correlation values between upstream and downstream locations of the Citarik River decrease with distance.
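The RMSPE used above to rank candidate autocovariance models can be sketched as follows; the observations and leave-one-out kriging predictions below are hypothetical, not the study's measurements:

```python
import numpy as np

def rmspe(predicted, observed):
    """Root mean squared prediction error, used to compare autocovariance fits."""
    predicted = np.asarray(predicted, float)
    observed = np.asarray(observed, float)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

# Hypothetical leave-one-out kriging predictions of pH at monitoring sites.
obs = [7.1, 6.8, 7.4, 7.0]
pred_linear = [7.0, 6.9, 7.3, 7.1]     # linear autocovariance model
pred_spherical = [6.8, 7.1, 7.1, 7.3]  # alternative model, worse fit here
print(rmspe(pred_linear, obs) < rmspe(pred_spherical, obs))  # True
```

The model with the smaller cross-validated RMSPE is the one the study selects as best fit.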
Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories
NASA Astrophysics Data System (ADS)
Burnier, Yannis; Rothkopf, Alexander
2013-11-01
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.
New approaches to spatially analyse primary health care usage patterns in rural South Africa.
Tanser, F; Hosegood, V; Benzler, J; Solarsh, G
2001-10-01
To develop indices to quantitatively assess and understand the spatial usage patterns of health facilities in the Hlabisa district of South Africa. We mapped and interviewed more than 23 000 homesteads (approximately 200 000 people) in Hlabisa district, South Africa and spatially analysed their modal primary health usage patterns using a geographical information system. We generated contour maps of health service use and quantified the relationship between clinic catchments and distance-defined catchments using inclusion and exclusion error. We propose the distance usage index (DUI) as an overall spatial measure of clinic usage. This index is the sum of the distances from clinic to all client homesteads divided by the sum of the distances from clinic to all homesteads within its distance-defined catchment. The index encompasses inclusion, exclusion, and strength of patient attraction for each clinic. Eighty-seven per cent of homesteads use the nearest clinic. Residents of homesteads travel an average Euclidean distance of 4.72 km to attend clinics. There is a significant logarithmic relationship between distance from clinic and clinic use by homesteads (r² = 0.774, P < 0.0001). The DUI values range between 31 and 198% (mean = 110%, SD = 43.7) for 12 clinics and highlight clinic usage patterns across the district. The DUI is a powerful and informative composite measure of clinic usage. The results of the study have important implications for health care provision in developing countries.
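The DUI defined above follows directly from its definition as a ratio of summed distances; the clinic and homestead coordinates below are invented for illustration:

```python
import math

def euclid(a, b):
    """Planar Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def distance_usage_index(clinic, client_homesteads, catchment_homesteads):
    """DUI = (sum of distances clinic -> client homesteads)
           / (sum of distances clinic -> homesteads in the distance-defined catchment),
    expressed as a percentage as in the study."""
    used = sum(euclid(clinic, h) for h in client_homesteads)
    expected = sum(euclid(clinic, h) for h in catchment_homesteads)
    return 100.0 * used / expected

clinic = (0.0, 0.0)
catchment = [(1.0, 0.0), (0.0, 2.0), (3.0, 4.0)]             # homesteads nearest this clinic
clients = [(1.0, 0.0), (0.0, 2.0), (3.0, 4.0), (6.0, 8.0)]   # includes one distant client
dui = distance_usage_index(clinic, clients, catchment)
```

A DUI above 100% (as here) indicates a clinic drawing clients from beyond its distance-defined catchment, i.e. strong patient attraction.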
Using a motion capture system for spatial localization of EEG electrodes
Reis, Pedro M. R.; Lochmann, Matthias
2015-01-01
Electroencephalography (EEG) is often used in source analysis studies, in which the locations of cortex regions responsible for a signal are determined. For this to be possible, accurate positions of the electrodes at the scalp surface must be determined, otherwise errors in the source estimation will occur. Today, several methods for acquiring these positions exist, but they are often not satisfactorily accurate or take a long time to perform. Therefore, in this paper we describe a method capable of determining the positions accurately and quickly. This method uses an infrared motion capture system (IR-MOCAP) with 8 cameras arranged around a human participant. It acquires 3D coordinates of each electrode and automatically labels them. Each electrode has a small reflector on top of it, thus allowing its detection by the cameras. We tested the accuracy of the presented method by acquiring the electrode positions on a rigid sphere model and comparing these with measurements from computed tomography (CT). The average Euclidean distance between the sphere model CT measurements and the presented method was 1.23 mm with an average standard deviation of 0.51 mm. We also tested the method with a human participant. The measurement was quickly performed and all positions were captured. These results indicate that, with this method, it is possible to acquire electrode positions with minimal error and little time effort for the study participants and investigators. PMID:25941468
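The accuracy figure reported above reduces to averaging paired 3D Euclidean distances between the two measurement modalities; a minimal sketch with made-up coordinates (in mm):

```python
import math

def mean_euclidean_error(measured, reference):
    """Mean and sample SD of the 3D Euclidean distances between paired positions."""
    dists = [math.dist(m, r) for m, r in zip(measured, reference)]
    mean = sum(dists) / len(dists)
    var = sum((d - mean) ** 2 for d in dists) / (len(dists) - 1)
    return mean, math.sqrt(var)

# hypothetical CT reference positions vs. motion-capture positions for 3 electrodes
ct = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
mocap = [(1.0, 0.0, 0.0), (10.0, 1.0, 0.0), (0.0, 10.0, 2.0)]
mean_err, sd_err = mean_euclidean_error(mocap, ct)
```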
Renormalized vacuum polarization of rotating black holes
NASA Astrophysics Data System (ADS)
Ferreira, Hugo R. C.
2015-04-01
Quantum field theory on rotating black hole spacetimes is plagued with technical difficulties. Here, we describe a general method to renormalize and compute the vacuum polarization of a quantum field in the Hartle-Hawking state on rotating black holes. We exemplify the technique with a massive scalar field on the warped AdS3 black hole solution to topologically massive gravity, a deformation of (2 + 1)-dimensional Einstein gravity. We use a "quasi-Euclidean" technique, which generalizes the Euclidean techniques used for static spacetimes, and we subtract the divergences by matching to a sum over mode solutions on Minkowski spacetime. This allows us, for the first time, to have a general method to compute the renormalized vacuum polarization, for a given quantum state, on a rotating black hole, such as the physically relevant case of the Kerr black hole in four dimensions.
The Common Evolution of Geometry and Architecture from a Geodetic Point of View
NASA Astrophysics Data System (ADS)
Bellone, T.; Fiermonte, F.; Mussio, L.
2017-05-01
Throughout history the link between geometry and architecture has been strong, and while architects have used mathematics to construct their buildings, geometry has always been the essential tool allowing them to choose spatial shapes which are aesthetically appropriate. Sometimes it is geometry which drives architectural choices, but at other times it is architectural innovation which facilitates the emergence of new ideas in geometry. Among the best known types of geometry (Euclidean, projective, analytic, topological, descriptive, fractal, …), those most frequently employed in architectural design are Euclidean geometry, projective geometry and the non-Euclidean geometries. Entire architectural periods are linked to specific types of geometry. Euclidean geometry, for example, was the basis for architectural styles from Antiquity through to the Romanesque period. Perspective and projective geometry, for their part, were important from the Gothic period through the Renaissance and into the Baroque and Neo-classical eras, while non-Euclidean geometries characterize modern architecture.
A novel profit-allocation strategy for SDN enterprises
NASA Astrophysics Data System (ADS)
Hu, Wei; Hou, Ye; Tian, Longwei; Li, Yuan
2017-01-01
Aiming to solve the problem that profit allocation for supply and demand network (SDN) enterprises ignores risk factors and generates low satisfaction, a novel profit-allocation model based on cooperative game theory and TOPSIS is proposed. The new model avoids the defects of single-method profit allocation by introducing risk factors, compromise coefficients and high negotiation points. By measuring the Euclidean distance of each allocation from the ideal solution vector and the negative ideal solution vector, every node's satisfaction problem in the SDN is resolved and the "mess" phenomenon is avoided. Finally, the rationality and effectiveness of the proposed model are verified using a numerical example.
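The TOPSIS step described above, ranking alternatives by Euclidean closeness to the ideal and negative-ideal solution vectors, can be sketched as follows; the criteria (profit as a benefit, risk as a cost) and the numbers are illustrative assumptions, not the paper's data:

```python
import math

def topsis(scores, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution (TOPSIS).
    scores[i][j]: score of alternative i on criterion j;
    benefit[j]: True if criterion j is to be maximized, False if minimized."""
    m, n = len(scores), len(scores[0])
    # vector-normalize each criterion column, then apply the criterion weight
    norms = [math.sqrt(sum(scores[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * scores[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    closeness = []
    for row in v:
        d_pos = math.dist(row, ideal)  # Euclidean distance to the ideal solution
        d_neg = math.dist(row, anti)   # Euclidean distance to the negative-ideal solution
        closeness.append(d_neg / (d_pos + d_neg))
    return closeness

# three hypothetical SDN enterprises scored on (profit, risk)
scores = [[8.0, 2.0], [6.0, 4.0], [9.0, 1.0]]
closeness = topsis(scores, weights=[0.6, 0.4], benefit=[True, False])
```

Alternative 2 dominates on both criteria, so it attains closeness 1.0, while the dominated alternative 1 attains 0.0.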
Pardo-Montero, Juan; Fenwick, John D
2010-06-01
The purpose of this work is twofold: To further develop an approach to multiobjective optimization of rotational therapy treatments recently introduced by the authors [J. Pardo-Montero and J. D. Fenwick, "An approach to multiobjective optimization of rotational therapy," Med. Phys. 36, 3292-3303 (2009)], especially regarding its application to realistic geometries, and to study the quality (Pareto optimality) of plans obtained using such an approach by comparing them with Pareto optimal plans obtained through inverse planning. In the previous work of the authors, a methodology is proposed for constructing a large number of plans, with different compromises between the objectives involved, from a small number of geometrically based arcs, each arc prioritizing different objectives. Here, this method has been further developed and studied. Two different techniques for constructing these arcs are investigated, one based on image-reconstruction algorithms and the other based on more common gradient-descent algorithms. The difficulty of dealing with organs abutting the target, briefly reported in previous work of the authors, has been investigated using partial OAR unblocking. Optimality of the solutions has been investigated by comparison with a Pareto front obtained from inverse planning. A relative Euclidean distance has been used to measure the distance of these plans to the Pareto front, and dose volume histogram comparisons have been used to gauge the clinical impact of these distances. A prostate geometry has been used for the study. For geometries where a blocked OAR abuts the target, moderate OAR unblocking can substantially improve target dose distribution and minimize hot spots while not overly compromising dose sparing of the organ. Image-reconstruction type and gradient-descent blocked-arc computations generate similar results. 
The Pareto front for the prostate geometry, reconstructed using a large number of inverse plans, presents a hockey-stick shape comprising two regions: One where the dose to the target is close to prescription and trade-offs can be made between doses to the organs at risk and (small) changes in target dose, and one where very substantial rectal sparing is achieved at the cost of large target underdosage. Plans computed following the approach using a conformal arc and four blocked arcs generally lie close to the Pareto front, although distances of some plans from high gradient regions of the Pareto front can be greater. Only around 12% of plans lie a relative Euclidean distance of 0.15 or greater from the Pareto front. Using the alternative distance measure of Craft ["Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization," Phys. Medica (to be published)], around 2/5 of plans lie more than 0.05 from the front. Computation of blocked arcs is quite fast, the algorithms requiring 35%-80% of the running time per iteration needed for conventional inverse plan computation. The geometry-based arc approach to multicriteria optimization of rotational therapy allows solutions to be obtained that lie close to the Pareto front. Both the image-reconstruction type and gradient-descent algorithms produce similar modulated arcs, the latter one perhaps being preferred because it is more easily implementable in standard treatment planning systems. Moderate unblocking provides a good way of dealing with OARs which abut the PTV. Optimization of geometry-based arcs is faster than usual inverse optimization of treatment plans, making this approach more rapid than an inverse-based Pareto front reconstruction.
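The abstract does not spell out exactly how the relative Euclidean distance to the Pareto front is computed; one plausible sketch takes the smallest distance from a plan's objective vector to a discretized front after scaling each objective by its range over the front (the dose numbers are invented):

```python
import math

def relative_distance_to_front(plan, front):
    """Smallest Euclidean distance from a plan's objective vector to any point of a
    discretized Pareto front, with each objective scaled by its range over the front."""
    n = len(plan)
    lo = [min(p[j] for p in front) for j in range(n)]
    span = [(max(p[j] for p in front) - lo[j]) or 1.0 for j in range(n)]
    def scale(p):
        return [(p[j] - lo[j]) / span[j] for j in range(n)]
    sp = scale(plan)
    return min(math.dist(sp, scale(q)) for q in front)

# hypothetical (rectum dose, target dose) trade-off points along a Pareto front
front = [(60.0, 78.0), (50.0, 79.0), (40.0, 80.0)]
d = relative_distance_to_front((50.0, 78.5), front)
```

Under this convention, the 0.15 threshold quoted above would correspond to a plan lying 15% of the normalized objective range away from the nearest front point.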
Hydropathic self-organized criticality: a magic wand for protein physics.
Phillips, J C
2012-10-01
Self-organized criticality (SOC) is a popular concept that has been the subject of more than 3000 articles in the last 25 years. The characteristic signature of SOC is the appearance of self-similarity (power-law scaling) in observable properties. A characteristic observable protein property that describes protein-water interactions is the water-accessible (hydropathic) interfacial area of compacted globular protein networks. Here we show that hydropathic power-law (size- or length-scale-dependent) exponents derived from SOC enable theory to connect standard Web-based (BLAST) short-range amino acid (aa) sequence similarities to long-range aa sequence hydropathic roughening form factors that hierarchically describe evolutionary trends in water - membrane protein interactions. Our method utilizes hydropathic aa exponents that define a non-Euclidean metric realistically rooted in the atomic coordinates of 5526 protein segments. These hydropathic aa exponents thereby encapsulate universal (but previously only implicit) non-Euclidean long-range differential geometrical features of the Protein Data Bank. These hydropathic aa exponents easily organize small mutated aa sequence differences between human and proximate species proteins. For rhodopsin, the most studied transmembrane signaling protein associated with night vision, analysis shows that this approach separates Euclidean short- and non-Euclidean long-range aa sequence properties, and shows that they correlate with 96% success for humans, monkeys, cats, mice and rabbits. Proper application of SOC using hydropathic aa exponents promises unprecedented simplifications of exponentially complex protein sequence-structure-function problems, both conceptual and practical.
Kim, Hanvit; Minh Phuong Nguyen; Se Young Chun
2017-07-01
Biometrics such as ECG provides a convenient and powerful security tool to verify or identify an individual. However, one important drawback of biometrics is that it is irrevocable. In other words, biometrics cannot be re-used practically once it is compromised. Cancelable biometrics has been investigated to overcome this drawback. In this paper, we propose a cancelable ECG biometrics by deriving a generalized likelihood ratio test (GLRT) detector from a composite hypothesis testing in randomly projected domain. Since it is common to observe performance degradation for cancelable biometrics, we also propose a guided filtering (GF) with irreversible guide signal that is a non-invertibly transformed signal of ECG authentication template. We evaluated our proposed method using ECG-ID database with 89 subjects. Conventional Euclidean detector with original ECG template yielded 93.9% PD1 (detection probability at 1% FAR) while Euclidean detector with 10% compressed ECG (1/10 of the original data size) yielded 90.8% PD1. Our proposed GLRT detector with 10% compressed ECG yielded 91.4%, which is better than Euclidean with the same compressed ECG. GF with our proposed irreversible ECG template further improved the performance of our GLRT with 10% compressed ECG up to 94.3%, which is higher than Euclidean detector with original ECG. Lastly, we showed that our proposed cancelable ECG biometrics practically met cancelable biometrics criteria such as efficiency, re-usability, diversity and non-invertibility.
NASA Astrophysics Data System (ADS)
Kobylkin, Konstantin
2016-10-01
Computational complexity and approximability are studied for the problem of intersecting a set of straight line segments with a smallest-cardinality set of disks of fixed radius r > 0, where the set of segments forms a straight-line embedding of a possibly non-planar geometric graph. This problem arises in physical network security analysis for telecommunication, wireless and road networks represented by specific geometric graphs defined by Euclidean distances between their vertices (proximity graphs). It can be formulated as the well-known Hitting Set problem over a set of Euclidean r-neighbourhoods of segments. Although of interest, the computational complexity and approximability of Hitting Set over such structured sets of geometric objects have not received much attention in the literature. Strong NP-hardness of the problem is established over special classes of proximity graphs, namely Delaunay triangulations, some of their connected subgraphs, half-θ6 graphs and non-planar unit disk graphs, and APX-hardness is shown for non-planar geometric graphs at different scales of r with respect to the longest graph edge length. A simple constant-factor approximation algorithm is presented for the case where r is at the same scale as the longest edge length.
Neural network modeling of a dolphin's sonar discrimination capabilities.
Au, W W; Andersen, L N; Rasmussen, A R; Roitblat, H L; Nachtigall, P E
1995-07-01
The capability of an echolocating dolphin to discriminate differences in the wall thickness of cylinders was previously modeled by a counterpropagation neural network using only spectral information from the echoes. In this study, both time and frequency information were used to model the dolphin discrimination capabilities. Echoes from the same cylinders were digitized using a broadband simulated dolphin sonar signal with the transducer mounted on the dolphin's pen. The echoes were filtered by a bank of continuous constant-Q digital filters and the energy from each filter was computed in time increments of 1/bandwidth. Echo features of the standard and each comparison target were analyzed in pairs by a counterpropagation neural network, a backpropagation neural network, and a model using Euclidean distance measures. The backpropagation network performed better than both the counterpropagation network, and the Euclidean model, using either spectral-only features or combined temporal and spectral features. All models performed better using features containing both temporal and spectral information. The backpropagation network was able to perform better than the dolphins for noise-free echoes with Q values as low as 2 and 3. For a Q of 2, only temporal information was available. However, with noisy data, the network required a Q of 8 in order to perform as well as the dolphin.
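The Euclidean-distance model used as a baseline above can be sketched as a nearest-template classifier over echo feature vectors; the feature values and target labels below are invented:

```python
import math

def euclidean_classify(echo_features, templates):
    """Assign an echo to the target whose stored feature template is nearest
    in Euclidean distance; returns (label, distance)."""
    best_label, best_d = None, float("inf")
    for label, tmpl in templates.items():
        d = math.dist(echo_features, tmpl)
        if d < best_d:
            best_label, best_d = label, d
    return best_label, best_d

# hypothetical combined temporal/spectral energy features per target
templates = {"standard": [0.9, 0.2, 0.1], "comparison": [0.3, 0.7, 0.5]}
label, d = euclidean_classify([0.8, 0.25, 0.15], templates)
```

Unlike the trained backpropagation network, this model has no adjustable weights, which is consistent with its weaker performance reported above.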
Baker, Mariwan; Cooper, David T; Behrens, Claus F
2016-10-01
In cervical radiotherapy, it is essential that the uterine position is correctly determined prior to treatment delivery. The aim of this study was to evaluate an autoscan ultrasound (A-US) probe, a motorized transducer creating three-dimensional (3D) images by sweeping, by comparing it with a conventional ultrasound (C-US) probe, where manual scanning is required to acquire 3D images. Nine healthy volunteers were scanned by seven operators, using the Clarity(®) system (Elekta, Stockholm, Sweden). In total, 72 scans, 36 scans from the C-US and 36 scans from the A-US probes, were acquired. Two observers delineated the uterine structure, using the software-assisted segmentation in the Clarity workstation. The data of uterine volume, uterine centre of mass (COM) and maximum uterine lengths, in three orthogonal directions, were analyzed. In 53% of the C-US scans, the whole uterus was captured, compared with 89% using the A-US. F-test on 36 scans demonstrated statistically significant differences in interobserver COM standard deviation (SD) when comparing the C-US with the A-US probe for the inferior-superior (p < 0.006), left-right (p < 0.012) and anteroposterior directions (p < 0.001). The median of the interobserver COM distance (Euclidean distance for 36 scans) was reduced from 8.5 (C-US) to 6.0 mm (A-US). An F-test on the 36 scans showed strong significant differences (p < 0.001) in the SD of the Euclidean interobserver distance when comparing the C-US with the A-US scans. The average Dice coefficient when comparing the two observers was 0.67 (C-US) and 0.75 (A-US). The predictive interval demonstrated better interobserver delineation concordance using the A-US probe. The A-US probe imaging might be a better choice of image-guided radiotherapy system for correcting for daily uterine positional changes in cervical radiotherapy. Using a novel A-US probe might reduce the uncertainty in interoperator variability during ultrasound scanning.
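The two agreement measures used above, the Dice coefficient and the interobserver Euclidean COM distance, can be sketched on toy voxel sets (the delineations below are invented):

```python
import math

def dice(voxels_a, voxels_b):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) between two delineated structures
    given as sets of voxel indices."""
    a, b = set(voxels_a), set(voxels_b)
    return 2.0 * len(a & b) / (len(a) + len(b))

def com(voxels):
    """Centre of mass of a voxel set (unweighted mean position)."""
    n = len(voxels)
    return tuple(sum(v[k] for v in voxels) / n for k in range(3))

# two observers' delineations of the same structure, shifted by one voxel in x
obs1 = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]
obs2 = [(x, y, z) for x in range(1, 5) for y in range(4) for z in range(4)]
overlap = dice(obs1, obs2)
com_dist = math.dist(com(obs1), com(obs2))  # interobserver Euclidean COM distance
```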
Some constructions of biharmonic maps and Chen’s conjecture on biharmonic hypersurfaces
NASA Astrophysics Data System (ADS)
Ou, Ye-Lin
2012-04-01
We give several construction methods and use them to produce many examples of proper biharmonic maps including biharmonic tori of any dimension in Euclidean spheres (Theorem 2.2, Corollaries 2.3, 2.4 and 2.6), biharmonic maps between spheres (Theorem 2.9) and into spheres (Theorem 2.10) via orthogonal multiplications and eigenmaps. We also study biharmonic graphs of maps, derive the equation for a function whose graph is a biharmonic hypersurface in a Euclidean space, and give an equivalent formulation of Chen's conjecture on biharmonic hypersurfaces by using the biharmonic graph equation (Theorem 4.1) which paves a way for the analytic study of the conjecture.
Identification of vegetable diseases using neural network
NASA Astrophysics Data System (ADS)
Zhang, Jiacai; Tang, Jianjun; Li, Yao
2007-02-01
Vegetables are widely planted all over China, but they often suffer from some diseases. A method of major technical and economic importance is introduced in this paper, which explores the feasibility of fast and reliable automatic identification of vegetable diseases and their infection grades from color and morphological features of leaves. Firstly, leaves are plucked from the clustered plant and pictures of the leaves are taken with a CCD digital color camera. Secondly, color and morphological characteristics are obtained by standard image processing techniques: for example, the Otsu thresholding method segments the region of interest, an image opening-following-closing algorithm removes noise, and Principal Components Analysis reduces the dimension of the original features. Then, a recently proposed boosting algorithm, AdaBoost.M2, is applied to RBF networks for disease classification based on the above features, where the kernel function of the RBF networks is a Gaussian whose argument is the Euclidean distance of the input vector from a center. Our experiment is performed on a database collected by the Chinese Academy of Agricultural Sciences, and the results show that boosted RBF networks classify the 230 cucumber leaves into 2 different diseases (downy mildew and angular leaf spot) and identify the infection grades of each disease according to the infection degrees.
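A Gaussian RBF unit of the kind described, whose argument is the Euclidean distance of the input from a center, can be sketched as follows; the prototype coordinates and weights are invented for illustration:

```python
import math

def rbf_activation(x, center, width):
    """Gaussian RBF unit: exp(-d^2 / (2 * width^2)), where d is the
    Euclidean distance of the input vector from the center."""
    d = math.dist(x, center)
    return math.exp(-d * d / (2.0 * width * width))

def rbf_network(x, centers, weights, width):
    """Single-output RBF network: weighted sum of Gaussian units."""
    return sum(w * rbf_activation(x, c, width) for c, w in zip(centers, weights))

# two hypothetical disease prototypes in a 2D (PCA-reduced) feature space:
# positive weight for downy mildew, negative for angular leaf spot
centers = [(0.2, 0.8), (0.7, 0.3)]
score = rbf_network((0.25, 0.75), centers, weights=[1.0, -1.0], width=0.3)
```

A positive score indicates the input lies nearer the first prototype; boosting would train an ensemble of such networks rather than a single one.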
Graph-based layout analysis for PDF documents
NASA Astrophysics Data System (ADS)
Xu, Canhui; Tang, Zhi; Tao, Xin; Li, Yun; Shi, Cao
2013-03-01
To increase the flexibility and enrich the reading experience of e-books on small portable screens, a graph-based method is proposed to perform layout analysis on Portable Document Format (PDF) documents. A digitally born document has inherent advantages, such as representing text and fractional images in explicit form, which can be straightforwardly exploited. To integrate traditional image-based document analysis with the inherent metadata provided by the PDF parser, the page primitives, including text, image and path elements, are processed to produce text and non-text layers for separate analysis. The graph-based method operates at the superpixel representation level: page text elements corresponding to vertices are used to construct an undirected graph. Euclidean distance between adjacent vertices is applied in a top-down manner to cut the graph tree formed by Kruskal's algorithm, and edge orientation is then used in a bottom-up manner to extract text lines from each subtree. Non-textual objects, on the other hand, are segmented by connected component analysis. For each segmented text and non-text composite, a 13-dimensional feature vector is extracted for labelling purposes. Experimental results on selected pages from PDF books are presented.
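The MST construction and top-down distance cut can be sketched as follows, assuming text elements are reduced to 2D centroids (coordinates invented):

```python
import math
from itertools import combinations

def kruskal_mst(points):
    """Kruskal's algorithm on the complete Euclidean graph over the points;
    returns MST edges as (i, j, distance) tuples."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    mst = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((i, j, d))
    return mst

def cut_tree(points, max_edge):
    """Top-down cut: drop MST edges longer than max_edge; the surviving
    connected components are candidate text groups."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j, d in kruskal_mst(points):
        if d <= max_edge:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# centroids of text elements forming two visually separated lines
pts = [(0, 0), (1, 0), (2, 0), (0, 10), (1, 10)]
groups = cut_tree(pts, max_edge=2.0)
```

Edge-orientation filtering within each group (the bottom-up step) would then separate elements into individual text lines.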
Dai, Yanyan; Kim, YoonGu; Wee, SungGil; Lee, DongHa; Lee, SukGyu
2016-01-01
In this paper, the problem of object caging and transporting is considered for multiple mobile robots. With the consideration of minimizing the number of robots and decreasing the rotation of the object, the proper points are calculated and assigned to the multiple mobile robots to allow them to form a symmetric caging formation. The caging formation guarantees that all of the Euclidean distances between any two adjacent robots are smaller than the minimal width of the polygonal object, so that the object cannot escape. In order to avoid collision among robots, the robot radius parameter is utilized to design the caging formation, and the A* algorithm is used so that mobile robots can move to the proper points. In order to avoid obstacles, the robots and the object are regarded as a rigid body so as to apply the artificial potential field method. The fuzzy sliding mode control method is applied for tracking control of the nonholonomic mobile robots. Finally, the simulation and experimental results show that multiple mobile robots are able to cage and transport the polygonal object to the goal position while avoiding obstacles. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
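The caging condition, every pair of adjacent robots closer than the object's minimal width with the robot radius taken into account, can be checked with a short sketch (the exact inequality used in the paper may differ; the radius correction here is our assumption):

```python
import math

def caging_valid(robot_positions, min_object_width, robot_radius):
    """Check the caging condition on a closed formation polygon: the free gap
    between every pair of adjacent robots must be smaller than the minimal
    width of the polygonal object, so the object cannot slip out."""
    n = len(robot_positions)
    for k in range(n):
        centre_dist = math.dist(robot_positions[k], robot_positions[(k + 1) % n])
        gap = centre_dist - 2.0 * robot_radius  # free space between robot bodies
        if gap >= min_object_width:
            return False
    return True

# four robots on a square formation around a hypothetical object
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
ok = caging_valid(square, min_object_width=2.5, robot_radius=0.2)
```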
Statistical 3D shape analysis of gender differences in lateral ventricles
NASA Astrophysics Data System (ADS)
He, Qing; Karpman, Dmitriy; Duan, Ye
2010-03-01
This paper aims at analyzing gender differences in the 3D shapes of lateral ventricles, which will provide reference for the analysis of brain abnormalities related to neurological disorders. Previous studies mostly focused on volume analysis, and the main challenge in shape analysis is the required step of establishing shape correspondence among individual shapes. We developed a simple and efficient method based on anatomical landmarks. 14 females and 10 males with matching ages participated in this study. 3D ventricle models were segmented from MR images by a semiautomatic method. Six anatomically meaningful landmarks were identified by detecting the maximum curvature point in a small neighborhood of a manually clicked point on the 3D model. Thin-plate spline was used to transform a randomly selected template shape to each of the remaining shape instances, and the point correspondence was established according to Euclidean distance and surface normal. All shapes were spatially aligned by Generalized Procrustes Analysis. The Hotelling T² two-sample metric was used to compare the ventricle shapes between males and females, and False Discovery Rate estimation was used to correct for multiple comparisons. The results revealed significant differences in the anterior horn of the right ventricle.
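The correspondence rule, nearest target vertex under a combined Euclidean-distance and surface-normal criterion, can be sketched as follows; the relative weighting of the two terms is our assumption, since the abstract does not specify one:

```python
import math

def match_points(template, targets, template_normals, target_normals, w_normal=1.0):
    """For each template vertex, choose the target vertex minimizing Euclidean
    distance plus a penalty (1 - cos angle) for disagreement of unit surface normals."""
    def cost(p, n, q, m):
        dot = sum(a * b for a, b in zip(n, m))  # cosine of angle between unit normals
        return math.dist(p, q) + w_normal * (1.0 - dot)
    return [min(range(len(targets)),
                key=lambda k: cost(p, n, targets[k], target_normals[k]))
            for p, n in zip(template, template_normals)]

template = [(0.0, 0.0, 1.0)]
template_n = [(0.0, 0.0, 1.0)]
# two candidate vertices equally near in space but with opposite surface normals
targets = [(0.0, 0.1, 1.0), (0.0, -0.1, 1.0)]
targets_n = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
matches = match_points(template, targets, template_n, targets_n)
```

The normal term breaks ties that pure closest-point matching cannot, e.g. between the inner and outer walls of a thin structure.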
Essential oil variation among natural populations of Lavandula multifida L. (Lamiaceae).
Chograni, Hnia; Zaouali, Yosr; Rajeb, Chayma; Boussaid, Mohamed
2010-04-01
Volatiles from twelve wild Tunisian populations of Lavandula multifida L. growing in different bioclimatic zones were assessed by GC (RI) and GC/MS. Thirty-six constituents, representing 83.48% of the total oil were identified. The major components at the species level were carvacrol (31.81%), beta-bisabolene (14.89%), and acrylic acid dodecyl ester (11.43%). These volatiles, together with alpha-pinene, were also the main compounds discriminating the populations. According to these dominant compounds, one chemotype was revealed, a carvacrol/beta-bisabolene/acrylic acid dodecyl ester chemotype. However, a significant variation among the populations was observed for the majority of the constituents. A high chemical-population structure, estimated both by principal component analysis (PCA) and unweighted pair group method with averaging (UPGMA) cluster analysis based on Euclidean distances, was observed. Both methods allowed separation of the populations in three groups defined rather by minor than by major compounds. The population groups were not strictly concordant with their bioclimatic or geographic location. Conservation strategies should concern all populations, because of their low size and their high level of destruction. Populations exhibiting particular compounds other than the major ones should be protected first.
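The UPGMA step on Euclidean distances can be sketched with a naive average-linkage implementation; the two-compound population profiles below are invented for illustration:

```python
import math

def upgma(points):
    """Naive UPGMA (average-linkage) clustering on Euclidean distances.
    Returns the merge history as (cluster_a, cluster_b, average_distance) tuples;
    merged clusters receive new ids len(points), len(points)+1, ..."""
    clusters = {i: [i] for i in range(len(points))}
    dist = lambda a, b: math.dist(points[a], points[b])
    merges = []
    next_id = len(points)
    while len(clusters) > 1:
        # pick the pair of clusters with the smallest average inter-point distance
        (a, b), d = min(
            (((i, j), sum(dist(p, q) for p in clusters[i] for q in clusters[j])
                      / (len(clusters[i]) * len(clusters[j])))
             for i in clusters for j in clusters if i < j),
            key=lambda t: t[1])
        merges.append((a, b, d))
        clusters[next_id] = clusters.pop(a) + clusters.pop(b)
        next_id += 1
    return merges

# four hypothetical populations described by two compound percentages each
profiles = [(30.0, 15.0), (31.0, 14.0), (10.0, 40.0), (11.0, 41.0)]
history = upgma(profiles)
```

The merge history here recovers the two chemically similar pairs before joining them, mirroring how population groups emerge in the dendrogram.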
Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction
NASA Astrophysics Data System (ADS)
Rizal Isnanto, R.
2015-06-01
The human iris has a highly distinctive pattern that can be used for biometric recognition. Texture analysis methods can identify such texture in an image; one such method is the wavelet transform, which extracts image features based on energy. The wavelet transforms used are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five mentioned wavelets was performed and a comparative analysis was conducted. Several steps are involved. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The features obtained are energy values. The next step is recognition using the normalized Euclidean distance. Comparative analysis is based on the recognition rate percentage, with two samples stored in the database as reference images. After finding the recognition rate, further tests are conducted using energy compaction for all five wavelet types. As a result, the highest recognition rate is achieved using Haar; moreover, when cutting coefficients with C(i) < 0.1, the Haar wavelet has the highest percentage, so the retention rate, i.e. the significant coefficients retained, for Haar is lower than for the other wavelet types (db5, coif3, sym4, and bior2.4)
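The recognition step, a normalized Euclidean distance between a probe's energy features and the stored references, can be sketched as follows; the normalization by vector norm is our assumption, since the abstract does not define it:

```python
import math

def normalized_euclidean(a, b):
    """Euclidean distance between feature vectors after scaling each by its own
    norm, so that energy features of different overall magnitude are comparable."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.dist([x / na for x in a], [x / nb for x in b])

def recognize(probe, database):
    """Return the identity whose stored energy features are nearest to the probe."""
    return min(database, key=lambda person: normalized_euclidean(probe, database[person]))

# hypothetical wavelet-energy feature vectors for two enrolled irises
db = {"iris_A": [4.0, 1.0, 0.5], "iris_B": [0.5, 3.0, 2.0]}
who = recognize([3.8, 1.1, 0.6], db)
```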
Yan, Fang; Xu, Kaili; Li, Deshun; Cui, Zhikai
2017-01-01
Biomass gasification stations face many hazard factors; therefore, it is necessary to perform hazard assessments of them. In this study, a novel hazard assessment method called extended set pair analysis (ESPA) is proposed based on set pair analysis (SPA). In SPA, however, the calculation of the connection degree (CD) requires the classification of hazard grades and their corresponding thresholds. For hazard assessment using ESPA, a novel algorithm for calculating the CD is worked out for the case where hazard grades and their corresponding thresholds are unknown. The CD can then be converted into a Euclidean distance (ED) by a simple and concise calculation, and the hazard of each sample is ranked based on the value of the ED. In this paper, six biomass gasification stations are assessed using ESPA and general set pair analysis (GSPA), respectively. Comparison of the hazard assessment results obtained from ESPA and GSPA demonstrates the availability and validity of ESPA for hazard assessment of biomass gasification stations. The reasonability of ESPA is also justified by a sensitivity analysis of the hazard assessment results obtained by ESPA and GSPA. PMID:28938011
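The abstract does not give the exact CD-to-ED conversion; one plausible sketch treats a connection degree (a, b, c), with identity, discrepancy and contrary components summing to 1, as a point and takes its Euclidean distance from the ideal degree (1, 0, 0). Both this conversion and the sample values below are illustrative assumptions:

```python
import math

def connection_to_ed(cd):
    """Hypothetical conversion: Euclidean distance of a connection degree
    (a, b, c), a + b + c = 1, from the ideal 'full identity' degree (1, 0, 0)."""
    return math.dist(cd, (1.0, 0.0, 0.0))

def rank_by_hazard(cds):
    """Order sample indices from most to least hazardous:
    larger distance from the ideal degree means higher hazard."""
    return sorted(range(len(cds)), key=lambda i: connection_to_ed(cds[i]), reverse=True)

stations = [(0.7, 0.2, 0.1), (0.4, 0.3, 0.3), (0.9, 0.1, 0.0)]
ranking = rank_by_hazard(stations)
```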
García-Jacas, C R; Marrero-Ponce, Y; Barigye, S J; Hernández-Ortega, T; Cabrera-Leyva, L; Fernández-Castillo, A
2016-12-01
Novel N-tuple topological/geometric cutoffs to consider specific inter-atomic relations in the QuBiLS-MIDAS framework are introduced in this manuscript. These molecular cutoffs permit the taking into account of relations between more than two atoms by using (dis-)similarity multi-metrics and the concepts related with topological and Euclidean-geometric distances. To this end, the kth two-, three- and four-tuple topological and geometric neighbourhood quotient (NQ) total (or local-fragment) spatial-(dis)similarity matrices are defined, to represent 3D information corresponding to the relations between two, three and four atoms of the molecular structures that satisfy certain cutoff criteria. First, an analysis of a diverse chemical space for the most common values of topological/Euclidean-geometric distances, bond/dihedral angles, triangle/quadrilateral perimeters, triangle area and volume was performed in order to determine the intervals to take into account in the cutoff procedures. A variability analysis based on Shannon's entropy reveals that better distribution patterns are attained with the descriptors based on the cutoffs proposed (QuBiLS-MIDAS NQ-MDs) with regard to the results obtained when all inter-atomic relations are considered (QuBiLS-MIDAS KA-MDs - 'Keep All'). A principal component analysis shows that the novel molecular cutoffs codify chemical information captured by the respective QuBiLS-MIDAS KA-MDs, as well as information not captured by the latter. Lastly, a QSAR study to obtain deeper knowledge of the contribution of the proposed methods was carried out, using four molecular datasets (steroids (STER), angiotensin converting enzyme (ACE), thermolysin inhibitors (THER) and thrombin inhibitors (THR)) widely used as benchmarks in the evaluation of several methodologies. One to four variable QSAR models based on multiple linear regression were developed for each compound dataset following the original division into training and test sets. 
The results obtained reveal that the novel cutoff procedures yield superior performance relative to that of the QuBiLS-MIDAS KA-MDs in predicting the biological activities considered. From the results achieved, it can be suggested that the proposed N-tuple topological/geometric cutoffs constitute a relevant criterion for generating MDs that codify particular atomic relations, ultimately useful in enhancing the modelling capacity of the QuBiLS-MIDAS 3D-MDs.
Dong, Wei-Feng; Canil, Sarah; Lai, Raymond; Morel, Didier; Swanson, Paul E.; Izevbaye, Iyare
2018-01-01
A new automated MYC IHC classifier based on bivariate logistic regression is presented. The predictor relies on image analysis developed with the open-source ImageJ platform. From a histologic section immunostained for MYC protein, 2 dimensionless quantitative variables are extracted: (a) relative distance between nuclei positive for MYC IHC based on the Euclidean minimum spanning tree graph and (b) coefficient of variation of the MYC IHC stain intensity among MYC IHC-positive nuclei. Distance between positive nuclei is suggested to correlate inversely with MYC gene rearrangement status, whereas the coefficient of variation is suggested to correlate inversely with physiological regulation of MYC protein expression. The bivariate classifier was compared with 2 other MYC IHC classifiers (based on percentage of MYC IHC-positive nuclei), all tested on 113 lymphomas including mostly diffuse large B-cell lymphomas with known MYC fluorescent in situ hybridization (FISH) status. The bivariate classifier strongly outperformed the “percentage of MYC IHC-positive nuclei” methods to predict MYC+ FISH status with 100% sensitivity (95% confidence interval, 94-100) associated with 80% specificity. The test is rapidly performed and might at a minimum provide primary IHC screening for MYC gene rearrangement status in diffuse large B-cell lymphomas. Furthermore, as this bivariate classifier actually predicts “permanent overexpressed MYC protein status,” it might identify nontranslocation-related chromosomal anomalies missed by FISH. PMID:27093450
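The two extracted variables can be sketched with plain NumPy. This is an illustrative reconstruction, not the authors' ImageJ implementation; the function names and the use of the MST's mean edge length as the "relative distance" statistic are assumptions.

```python
import numpy as np

def mst_mean_edge(points):
    """Mean edge length of the Euclidean minimum spanning tree (Prim's algorithm)
    over the centroids of IHC-positive nuclei."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    total = 0.0
    for _ in range(n - 1):
        # cheapest edge connecting a tree vertex to a non-tree vertex
        masked = np.where(in_tree[:, None] & ~in_tree[None, :], d, np.inf)
        i, j = np.unravel_index(np.argmin(masked), masked.shape)
        total += d[i, j]
        in_tree[j] = True
    return total / (n - 1)

def coeff_variation(intensities):
    """Coefficient of variation of stain intensity among positive nuclei."""
    x = np.asarray(intensities, float)
    return x.std() / x.mean()
```

The two numbers would then feed a bivariate logistic regression fit on cases with known FISH status.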
Buildings Change Detection Based on Shape Matching for Multi-Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Abdessetar, M.; Zhong, Y.
2017-09-01
Building change detection can quantify temporal effects on urban areas for urban-evolution studies or for damage assessment in disaster cases. In this context, change analysis may involve the available satellite images at different resolutions for quick responses. In this paper, to avoid the resampling artefacts and salt-and-pepper effects of traditional methods, building change detection based on shape matching is proposed for multi-resolution remote sensing images. Since an object's shape can be extracted from remote sensing imagery and the shapes of corresponding objects in multi-scale images are similar, it is practical to detect building changes in multi-scale imagery using shape analysis. The proposed methodology can therefore handle different pixel sizes when identifying new and demolished buildings in urban areas using the geometric properties of the objects of interest. After rectifying the desired multi-date, multi-resolution images by image-to-image registration with an optimal RMS value, object-based image classification is performed to extract building shapes from the images. Next, Centroid-Coincident Matching is conducted on the extracted building shapes, based on the Euclidean distance between shape centroids (from shape T0 to shape T1 and vice versa), in order to define corresponding building objects. New and demolished buildings are then identified from the obtained distances that are greater than the RMS value (i.e., no match at the same location).
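Centroid-coincident matching can be sketched as a bidirectional nearest-centroid search. This is an illustrative sketch under assumed inputs (arrays of extracted shape centroids); the mutual-nearest rule and a fixed distance threshold stand in for the paper's RMS-based criterion.

```python
import numpy as np

def match_centroids(c0, c1, threshold):
    """Bidirectional nearest-centroid matching between two building sets.

    c0, c1: (n, 2) arrays of building centroids at times T0 and T1.
    Returns (matched index pairs, demolished indices in c0, new indices in c1).
    """
    d = np.linalg.norm(c0[:, None, :] - c1[None, :, :], axis=-1)
    fwd = d.argmin(axis=1)   # nearest T1 building for each T0 building
    bwd = d.argmin(axis=0)   # nearest T0 building for each T1 building
    matched = [(i, fwd[i]) for i in range(len(c0))
               if bwd[fwd[i]] == i and d[i, fwd[i]] <= threshold]
    m0 = {i for i, _ in matched}
    m1 = {j for _, j in matched}
    demolished = [i for i in range(len(c0)) if i not in m0]
    new = [j for j in range(len(c1)) if j not in m1]
    return matched, demolished, new
```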
Circuity analyses of HSR network and high-speed train paths in China
Zhao, Shuo; Huang, Jie; Shan, Xinghua
2017-01-01
Circuity, defined as the ratio of the shortest network distance to the Euclidean distance between an origin–destination (O-D) pair, can be adopted as a helpful measure of the indirectness of train paths. In this paper, the maximum circuity of the paths of operated trains is set as the threshold value for the circuity of high-speed train paths. For the shortest paths of any node pairs, if their circuity is not higher than the threshold value, the paths can be regarded as reasonable paths. Allowing a certain relative or absolute error, we cluster the reasonable paths on the basis of their inclusion relationship, and the center path of each class represents a passenger transit corridor. We take the high-speed rail (HSR) network in China at the end of 2014 as an example and obtain 51 passenger transit corridors, which are alternative sets of train paths. Furthermore, we analyze the circuity distribution of the paths of all node pairs in the network. We find that the high circuity of train paths can be decreased with the construction of a high-speed railway line, which indicates that the structure of the HSR network in China tends to be more complete and that the HSR network can make the Chinese railway network more efficient. PMID:28945757
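The circuity measure is straightforward to compute once network shortest distances are available. A minimal sketch using a stdlib Dijkstra; the graph and coordinates are illustrative, not taken from the paper's network data.

```python
import heapq
import math

def dijkstra(adj, src):
    """Shortest network distances from src over a weighted adjacency dict."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def circuity(coords, adj, o, d):
    """Ratio of shortest network distance to Euclidean distance for one O-D pair."""
    net = dijkstra(adj, o)[d]
    (x0, y0), (x1, y1) = coords[o], coords[d]
    return net / math.hypot(x1 - x0, y1 - y0)
```

A path would then be kept as "reasonable" when its circuity does not exceed the threshold observed among operated trains.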
Stochastic Spectral Descent for Discrete Graphical Models
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...
2015-12-14
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten-∞ norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
Euclidean chemical spaces from molecular fingerprints: Hamming distance and Hempel's ravens.
Martin, Eric; Cao, Eddie
2015-05-01
Molecules are often characterized by sparse binary fingerprints, where 1s represent the presence of substructures and 0s represent their absence. Fingerprints are especially useful for similarity calculations, such as database searching or clustering, generally measuring similarity as the Tanimoto coefficient. In other cases, such as visualization, design of experiments, or latent variable regression, a low-dimensional Euclidean "chemical space" is more useful, where proximity between points reflects chemical similarity. A temptation is to apply principal components analysis (PCA) directly to these fingerprints to obtain a low dimensional continuous chemical space. However, Gower has shown that distances from PCA on bit vectors are proportional to the square root of Hamming distance. Unlike Tanimoto similarity, Hamming similarity (HS) gives equal weight to shared 0s as to shared 1s, that is, HS gives as much weight to substructures that neither molecule contains, as to substructures which both molecules contain. Illustrative examples show that proximity in the corresponding chemical space reflects mainly similar size and complexity rather than shared chemical substructures. These spaces are ill-suited for visualizing and optimizing coverage of chemical space, or as latent variables for regression. A more suitable alternative is shown to be multi-dimensional scaling on the Tanimoto distance matrix, which produces a space where proximity does reflect structural similarity.
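The recommended alternative can be sketched as classical (Torgerson) MDS on a Tanimoto distance matrix; the paper does not prescribe this particular MDS variant, so treating classical MDS as the embedding step is an assumption of this sketch.

```python
import numpy as np

def tanimoto_distance(fp):
    """Pairwise Tanimoto distance (1 - Tanimoto similarity) for binary fingerprints."""
    fp = np.asarray(fp, dtype=float)
    inter = fp @ fp.T                                  # shared 1s only
    counts = fp.sum(axis=1)
    union = counts[:, None] + counts[None, :] - inter  # 1s present in either
    return 1.0 - inter / union

def classical_mds(D, k=2):
    """Torgerson's classical MDS on a distance matrix D."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J            # double-centred squared distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]          # top-k eigenvalues
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))
```

Note that, unlike PCA on raw bits, the Tanimoto distance never counts substructures absent from both molecules.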
NASA Astrophysics Data System (ADS)
Adeniyi, D. A.; Wei, Z.; Yang, Y.
2017-10-01
Recommendation problems have been extensively studied by researchers in the fields of data mining, databases and information retrieval. This study presents the design and realisation of an automated, personalised news recommendation system based on a Chi-square statistics-based K-nearest neighbour (χ2SB-KNN) model. The proposed χ2SB-KNN model has the potential to overcome computational complexity and information-overloading problems, and to reduce runtime and speed up execution through the use of the critical value of the χ2 distribution. The proposed recommendation engine can alleviate scalability challenges through combined online pattern discovery and pattern matching for real-time recommendations. This work also showcases the development of a novel feature selection method, referred to as the Data Discretisation-Based feature selection method, which is used to select the best features for the proposed χ2SB-KNN algorithm at the preprocessing stage of the classification procedures. The proposed χ2SB-KNN model is implemented in an in-house Java program on an experimental website called the OUC newsreaders' website. Finally, we compared the performance of our system with two baseline methods, the traditional Euclidean distance K-nearest neighbour and Naive Bayesian techniques. The results show a significant improvement of our method over the baseline methods studied.
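The full χ2SB-KNN model uses critical values of the χ2 distribution to prune candidate comparisons; the sketch below shows only the underlying step of KNN classification with a chi-square distance between feature histograms, with hypothetical data.

```python
import numpy as np

def chi2_distance(x, y, eps=1e-12):
    """Chi-square distance between non-negative feature histograms."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

def knn_predict(train_X, train_y, query, k=3, metric=chi2_distance):
    """Majority vote among the k training items nearest to the query."""
    d = np.array([metric(query, t) for t in train_X])
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(np.take(train_y, nearest), return_counts=True)
    return labels[np.argmax(counts)]
```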
Prediction model for the return to work of workers with injuries in Hong Kong.
Xu, Yanwen; Chan, Chetwyn C H; Lo, Karen Hui Yu-Ling; Tang, Dan
2008-01-01
This study attempts to formulate a prediction model of return to work for a group of workers in Hong Kong who had been suffering from chronic pain and physical injury while also being out of work. The study used the Case-based Reasoning (CBR) method and compared the result with the statistical method of logistic regression modelling. The database for the CBR algorithm comprised 67 cases, which were also used in the logistic regression model. The testing cases were 32 participants with a background and characteristics similar to those in the database. Constraint setting and a Euclidean distance metric were used in CBR to find the cases in the database closest to each trial case. The usefulness of the algorithm was tested on the 32 new participants, and the accuracy of predicting return-to-work outcomes was 62.5%, which was no better than the 71.2% accuracy derived from the logistic regression model. The results of the study enable a better understanding of CBR as applied in the field of occupational rehabilitation, by comparison with conventional regression analysis. The findings also shed light on the development of relevant interventions for the return-to-work process of these workers.
Automated color classification of urine dipstick image in urine examination
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Royananda; Muchtar, M. A.; Taqiuddin, R.; Adnan, S.; Anugrahwaty, R.; Budiarto, R.
2018-03-01
Urine examination using a urine dipstick has long been used to determine the health status of a person. Economy and convenience are among the reasons the urine dipstick is still used to check health status. In practice, urine dipsticks are generally read manually, by visual comparison with a reference color chart, which leads to perceptual differences in reading the examination results. In this research, the authors used a scanner to obtain the urine dipstick color image. A scanner is a useful solution for reading urine dipstick results because the light it produces is consistent. A method is then required to match the colors on the urine dipstick to the test reference colors, a task previously performed manually. The method proposed by the authors combines Euclidean distance and Otsu thresholding with RGB color feature extraction to match the colors on the urine dipstick with the standard reference colors of urine examination. The results show that the proposed approach was able to classify the colors on a urine dipstick with an accuracy of 95.45%. The accuracy of color classification on a urine dipstick against the standard reference colors is influenced by the resolution of the scanner used: the higher the scanner resolution, the higher the accuracy.
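The core matching step is a nearest-reference-color lookup in RGB space. A minimal sketch; the reference chart values below are invented for illustration and are not the chart used in the study.

```python
import numpy as np

# Hypothetical reference chart: mean RGB for each result level (illustrative values only).
REFERENCE = {
    "negative": (230, 220, 120),
    "trace":    (200, 200, 110),
    "positive": (150, 120, 160),
}

def classify_pad(rgb):
    """Assign a dipstick pad's mean RGB value to the nearest reference color
    by Euclidean distance in RGB space."""
    rgb = np.asarray(rgb, float)
    return min(REFERENCE,
               key=lambda k: np.linalg.norm(rgb - np.array(REFERENCE[k], float)))
```

In the full pipeline, Otsu thresholding would first isolate each pad region before its mean RGB is computed.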
ERIC Educational Resources Information Center
Vaughan, Herbert E.; Szabo, Steven
This is the teacher's edition of a text for the second year of a two-year high school geometry course. The course bases plane and solid geometry and trigonometry on the fact that the translations of a Euclidean space constitute a vector space which has an inner product. Congruence is a geometric topic reserved for Volume 2. Volume 2 opens with an…
Anomalously soft non-Euclidean spring
NASA Astrophysics Data System (ADS)
Levin, Ido; Sharon, Eran
In this work we study the mechanical properties of a frustrated elastic ribbon spring - the non-Euclidean minimal spring. This spring belongs to the family of non-Euclidean plates: it has no spontaneous curvature, but its lateral intrinsic geometry is described by a non-Euclidean reference metric. The reference metric of the minimal spring is hyperbolic, and can be embedded as a minimal surface. We argue that the existence of a continuous set of such isometric minimal surfaces with different extensions leads to a complete degeneracy of the bulk elastic energy of the minimal spring under elongation. This degeneracy is removed only by boundary layer effects. As a result, the mechanical properties of the minimal spring are unusual: the spring is ultra-soft, with rigidity that depends on the thickness, t, as t^{7/2}.
Koike, Narihiko; Ii, Satoshi; Yoshinaga, Tsukasa; Nozaki, Kazunori; Wada, Shigeo
2017-11-07
This paper presents a novel inverse estimation approach for the active contraction stresses of tongue muscles during speech. The proposed method is based on variational data assimilation using a mechanical tongue model and 3D tongue surface shapes for speech production. The mechanical tongue model considers nonlinear hyperelasticity, finite deformation, actual geometry from computed tomography (CT) images, and anisotropic active contraction by muscle fibers, the orientations of which are ideally determined using anatomical drawings. The tongue deformation is obtained by solving a stationary force-equilibrium equation using a finite element method. An inverse problem is established to find the combination of muscle contraction stresses that minimizes the Euclidean distance of the tongue surfaces between the mechanical analysis and CT results of speech production, where a signed-distance function represents the tongue surface. Our approach is validated through an ideal numerical example and extended to the real-world case of two Japanese vowels, /ʉ/ and /ɯ/. The results capture the target shape completely and provide an excellent estimation of the active contraction stresses in the ideal case, and exhibit similar tendencies as in previous observations and simulations for the actual vowel cases. The present approach can reveal the relative relationship among the muscle contraction stresses in similar utterances with different tongue shapes, and enables the investigation of the coordination of tongue muscles during speech using only the deformed tongue shape obtained from medical images. This will enhance our understanding of speech motor control. Copyright © 2017 Elsevier Ltd. All rights reserved.
Big data and computational biology strategy for personalized prognosis.
Ow, Ghim Siong; Tang, Zhiqun; Kuznetsov, Vladimir A
2016-06-28
The era of big data and precision medicine has led to the accumulation of massive datasets of gene expression data and clinical information on patients. For a new patient, we propose that identification of a highly similar reference patient from an existing patient database, via similarity matching of both clinical and expression data, could be useful for predicting prognostic risk or therapeutic efficacy. Here, we propose a novel methodology to predict disease/treatment outcome via analysis of the similarity between any pair of patients, each characterized by a certain set of pre-defined biological variables (biomarkers or clinical features) represented initially as a prognostic binary variable vector (PBVV) and subsequently transformed to a prognostic signature vector (PSV). Our analyses revealed that the Euclidean distance, rather than a correlation distance measure, was effective in defining an unbiased similarity measure calculated between two PSVs. We applied our methods to high-grade serous ovarian cancer (HGSC) based on a 36-mRNA predictor that was previously shown to stratify patients into 3 distinct prognostic subgroups. We found that the patient's age, when converted into a binary variable, was positively correlated with the overall risk of succumbing to the disease. When applied to an independent testing dataset, the inclusion of age in the molecular predictor provided a more robust personalized prognosis of overall survival, correlated with the therapeutic response of HGSC, and provided benefit for treatment targeting of the tumors in HGSC patients. Finally, our method can be generalized and implemented in many other diseases to accurately predict personalized patient outcomes.
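The matching step reduces to finding the reference patient whose signature vector is nearest in Euclidean distance. A minimal sketch with hypothetical vectors; the PBVV-to-PSV transformation itself is not reproduced here.

```python
import numpy as np

def nearest_reference_patient(query_psv, reference_psvs):
    """Index of the reference patient whose prognostic signature vector (PSV)
    is closest to the query PSV in Euclidean distance."""
    refs = np.asarray(reference_psvs, float)
    d = np.linalg.norm(refs - np.asarray(query_psv, float), axis=1)
    return int(np.argmin(d))
```

The matched reference patient's known outcome would then serve as the personalized prognosis for the query patient.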
Measuring geographic access to health care: raster and network-based methods
2012-01-01
Background Inequalities in geographic access to health care result from the configuration of facilities, population distribution, and the transportation infrastructure. In recent accessibility studies, the traditional distance measure (Euclidean) has been replaced with more plausible measures such as travel distance or time. Both network and raster-based methods are often utilized for estimating travel time in a Geographic Information System. Therefore, exploring the differences in the underlying data models and associated methods and their impact on geographic accessibility estimates is warranted. Methods We examine the assumptions present in population-based travel time models. Conceptual and practical differences between raster and network data models are reviewed, along with methodological implications for service area estimates. Our case study investigates Limited Access Areas defined by Michigan’s Certificate of Need (CON) Program. Geographic accessibility is calculated by identifying the number of people residing more than 30 minutes from an acute care hospital. Both network and raster-based methods are implemented and their results are compared. We also examine sensitivity to changes in travel speed settings and population assignment. Results In both methods, the areas identified as having limited accessibility were similar in their location, configuration, and shape. However, the number of people identified as having limited accessibility varied substantially between methods. Over all permutations, the raster-based method identified more area and people with limited accessibility. The raster-based method was more sensitive to travel speed settings, while the network-based method was more sensitive to the specific population assignment method employed in Michigan. Conclusions Differences between the underlying data models help to explain the variation in results between raster and network-based methods. 
Considering that the choice of data model/method may substantially alter the outcomes of a geographic accessibility analysis, we advise researchers to use caution in model selection. For policy, we recommend that Michigan adopt the network-based method or reevaluate the travel speed assignment rule in the raster-based method. Additionally, we recommend that the state revisit the population assignment method. PMID:22587023
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, X; Gao, H; Sharp, G
Purpose: Accurate image segmentation is a crucial step during image guided radiation therapy. This work proposes a multi-atlas machine learning (MAML) algorithm for automated segmentation of head-and-neck CT images. Methods: As the first step, the algorithm utilizes normalized mutual information as the similarity metric and affine registration combined with multiresolution B-Spline registration, and then fuses the atlas labels using the label fusion strategy in Plastimatch. As the second step, the following feature selection strategy is proposed to extract five feature components from reference or atlas images: intensity (I), distance map (D), box (B), center of gravity (C) and stable point (S). The box feature B is novel. It describes the relative position from each point to the minimum inscribed rectangle of the ROI. The center-of-gravity feature C is the 3D Euclidean distance from a sample point to the ROI center of gravity, and S is the distance of the sample point to the landmarks. Then, we adopt random forest (RF) in Scikit-learn, a Python module integrating a wide range of state-of-the-art machine learning algorithms, as the classifier. Different feature and atlas strategies are used for different ROIs for improved performance, such as a multi-atlas strategy with reference box for the brainstem, and a single-atlas strategy with reference landmark for the optic chiasm. Results: The algorithm was validated on a set of 33 CT images with manual contours using a leave-one-out cross-validation strategy. Dice similarity coefficients between manual and automated contours were calculated: the proposed MAML method improved from 0.79 to 0.83 for the brainstem and from 0.11 to 0.52 for the optic chiasm with respect to the multi-atlas segmentation method (MA). Conclusion: A MAML method has been proposed for automated segmentation of head-and-neck CT images with improved performance. It provides a comparable result for the brainstem and an improved result for the optic chiasm compared with MA. 
Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
NASA Astrophysics Data System (ADS)
Jung, Richard; Ehlers, Manfred
2016-10-01
The spectral features of intertidal sediments are all influenced by the same biophysical properties, such as water, salinity, grain size or vegetation, and they are therefore hard to separate using only multispectral sensors, as shown by a previous study of Jung et al. (2015). A more detailed analysis of their characteristic spectral features has to be carried out to understand the differences and similarities. Spectrometry data (i.e., hyperspectral sensors), for instance, make it possible to measure the reflection of the landscape as a continuous spectral pattern for each pixel of an image built from dozens to hundreds of narrow spectral bands. This reveals a high potential for measuring unique spectral responses of different ecological conditions (Hennig et al., 2007). In this context, this study uses spectrometric datasets to distinguish between 14 different sediment classes obtained from a study area in the German Wadden Sea. A new feature selection method is proposed (Jeffries-Matusita distance based feature selection; JMDFS), which uses the Euclidean distance to eliminate the wavelengths with the most similar reflectance values in an iterative process. Subsequent to each iteration, the separation capability is estimated by the Jeffries-Matusita distance (JMD). Two classes can be separated if the JMD is greater than 1.9; if fewer than four wavelengths remain, no separation can be assumed. The results of the JMDFS are compared with a state-of-the-art feature selection method called ReliefF. Both methods showed the ability to improve the separation, achieving overall accuracies greater than 82%. The accuracies are 4%-13% better than the results with all wavelengths applied. The number of remaining wavelengths is very diverse and ranges from 14 to 213 of 703. The advantage of JMDFS compared with ReliefF is clearly the processing time: ReliefF needs 30 min for one temporary result, and the process must be repeated several times and all temporary results averaged to achieve a final result; the 50 iterations carried out in this study amount to four days of processing. In contrast, JMDFS needs only 30 min for a final result.
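The separability check can be sketched for univariate Gaussian class models; the 2(1 - e^(-B)) form with Bhattacharyya distance B is the standard remote-sensing definition of the JM distance (range 0 to 2), which this paper appears to use. Treating each class as a univariate Gaussian per wavelength is a simplifying assumption of this sketch.

```python
import numpy as np

def jeffries_matusita(mu1, var1, mu2, var2):
    """JM distance between two univariate Gaussian class models (range 0..2).

    Computed as 2*(1 - exp(-B)), where B is the Bhattacharyya distance.
    Values above ~1.9 indicate the two classes are separable.
    """
    v = 0.5 * (var1 + var2)
    b = ((mu1 - mu2) ** 2) / (8.0 * v) + 0.5 * np.log(v / np.sqrt(var1 * var2))
    return 2.0 * (1.0 - np.exp(-b))
```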
High-Order Local Pooling and Encoding Gaussians Over a Dictionary of Gaussians.
Li, Peihua; Zeng, Hui; Wang, Qilong; Shiu, Simon C K; Zhang, Lei
2017-07-01
Local pooling (LP) in configuration (feature) space proposed by Boureau et al. explicitly restricts similar features to be aggregated, which can preserve as much discriminative information as possible. At the time it appeared, this method combined with sparse coding achieved competitive classification results with only a small dictionary. However, its performance lags far behind the state-of-the-art results as only the zero-order information is exploited. Inspired by the success of high-order statistical information in existing advanced feature coding or pooling methods, we make an attempt to address the limitation of LP. To this end, we present a novel method called high-order LP (HO-LP) to leverage the information higher than the zero-order one. Our idea is intuitively simple: we compute the first- and second-order statistics per configuration bin and model them as a Gaussian. Accordingly, we employ a collection of Gaussians as visual words to represent the universal probability distribution of features from all classes. Our problem is naturally formulated as encoding Gaussians over a dictionary of Gaussians as visual words. This problem, however, is challenging since the space of Gaussians is not a Euclidean space but forms a Riemannian manifold. We address this challenge by mapping Gaussians into the Euclidean space, which enables us to perform coding with common Euclidean operations rather than complex and often expensive Riemannian operations. Our HO-LP preserves the advantages of the original LP: pooling only similar features and using a small dictionary. Meanwhile, it achieves very promising performance on standard benchmarks, with either conventional, hand-engineered features or deep learning-based features.
Nonstandard Methods in Lie Theory
ERIC Educational Resources Information Center
Goldbring, Isaac Martin
2009-01-01
In this thesis, we apply model theory to Lie theory and geometric group theory. These applications of model theory come via nonstandard analysis. In Lie theory, we use nonstandard methods to prove two results. First, we give a positive solution to the local form of Hilbert's Fifth Problem, which asks whether every locally Euclidean local…
ROPE: Recoverable Order-Preserving Embedding of Natural Language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widemann, David P.; Wang, Eric X.; Thiagarajan, Jayaraman J.
We present a novel Recoverable Order-Preserving Embedding (ROPE) of natural language. ROPE maps natural language passages from sparse concatenated one-hot representations to distributed vector representations of predetermined fixed length. We use Euclidean distance to return search results that are both grammatically and semantically similar. ROPE is based on a series of random projections of distributed word embeddings. We show that our technique typically forms a dictionary with sufficient incoherence such that sparse recovery of the original text is possible. We then show how our embedding allows for efficient and meaningful natural search and retrieval on Microsoft’s COCO dataset and the IMDB Movie Review dataset.
Machine learning with quantum relative entropy
NASA Astrophysics Data System (ADS)
Tsuda, Koji
2009-12-01
Density matrices are a central tool in quantum physics, but they are also used in machine learning. A positive definite matrix called the kernel matrix is used to represent the similarities between examples. Positive definiteness assures that the examples are embedded in a Euclidean space. When a positive definite matrix is learned from data, one has to design an update rule that maintains positive definiteness. Our update rule, called the matrix exponentiated gradient update, is motivated by the quantum relative entropy. Notably, the relative entropy is an instance of the Bregman divergences, which are asymmetric distance measures specifying theoretical properties of machine learning algorithms. Using the calculus commonly used in quantum physics, we prove an upper bound on the generalization error of online learning.
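A matrix exponentiated gradient step can be sketched for symmetric positive definite matrices using an eigendecomposition in place of general matrix functions. This is a minimal sketch of the update's form (with trace normalization, as for a density matrix); the paper's exact step-size handling may differ.

```python
import numpy as np

def _sym_logm(A):
    """Matrix logarithm of a symmetric positive definite matrix via eigh."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def _sym_expm(A):
    """Matrix exponential of a symmetric matrix via eigh."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def meg_update(W, grad, eta):
    """One matrix exponentiated gradient step.

    Moves in log-space, so the result stays positive definite,
    then renormalizes to unit trace."""
    sym_grad = (grad + grad.T) / 2  # symmetrize the gradient
    M = _sym_expm(_sym_logm(W) - eta * sym_grad)
    return M / np.trace(M)
```

Because the update acts on the logarithm of W, positive definiteness is preserved automatically, which is the point of using the quantum relative entropy as the regularizer.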
Iris recognition using image moments and k-means algorithm.
Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed
2014-01-01
This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Described moments are extracted from the grayscale image which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of Euclidean distance computed. The described model exhibits an accuracy of 98.5%. PMID:24977221
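The clustering and assignment steps can be sketched with a plain k-means; this is not the paper's full pipeline (segmentation and moment extraction are omitted), and the feature vectors below are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def assign(x, centroids):
    """Cluster of the centroid nearest to feature vector x (Euclidean distance)."""
    return int(np.linalg.norm(centroids - x, axis=1).argmin())
```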
The performance of trellis coded multilevel DPSK on a fading mobile satellite channel
NASA Technical Reports Server (NTRS)
Simon, Marvin K.; Divsalar, Dariush
1987-01-01
The performance of trellis coded multilevel differential phase-shift-keying (MDPSK) over Rician and Rayleigh fading channels is discussed. For operation at L-Band, this signalling technique leads to a more robust system than the coherent system with dual pilot tone calibration previously proposed for UHF. The results are obtained using a combination of analysis and simulation. The analysis shows that the design criterion for trellis codes to be operated on fading channels with interleaving/deinterleaving is no longer free Euclidean distance. The correct design criterion for optimizing bit error probability of trellis coded MDPSK over fading channels will be presented along with examples illustrating its application.
NASA Astrophysics Data System (ADS)
Nutku, Y.; Sheftel, M. B.
2014-02-01
This is a corrected and essentially extended version of the unpublished manuscript by Y Nutku and M Sheftel which contains new results. It is proposed to be published in honour of Y Nutku’s memory. All corrections and new results in sections 1, 2 and 4 are due to M Sheftel. We present new anti-self-dual exact solutions of the Einstein field equations with Euclidean and neutral (ultra-hyperbolic) signatures that admit only one rotational Killing vector. Such solutions of the Einstein field equations are determined by non-invariant solutions of Boyer-Finley (BF) equation. For the case of Euclidean signature such a solution of the BF equation was first constructed by Calderbank and Tod. Two years later, Martina, Sheftel and Winternitz applied the method of group foliation to the BF equation and reproduced the Calderbank-Tod solution together with new solutions for the neutral signature. In the case of Euclidean signature we obtain new metrics which asymptotically locally look like a flat space and have a non-removable singular point at the origin. In the case of ultra-hyperbolic signature there exist three inequivalent forms of metric. Only one of these can be obtained by analytic continuation from the Calderbank-Tod solution whereas the other two are new.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adesso, Gerardo; CNR-INFM Coherentia, Naples; CNISM, Unita di Salerno, Salerno
2007-10-15
We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed adapting to continuous variables a formalism based on single subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1xM bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.
Carbon, Claus-Christian
2010-07-01
Participants with and without personal experience of the Earth as a sphere estimated large-scale distances between six cities located on different continents. Cognitive distances were submitted to a specific multidimensional scaling algorithm in 3D Euclidean space with the constraint that all cities had to lie on the same sphere. A simulation was run that calculated the respective 3D configurations of the city positions for a wide range of radii of the proposed sphere. People who had personally experienced the Earth as a sphere, at least once in their lifetime, showed a clear optimal solution of the multidimensional scaling (MDS) routine, with a mean radius deviating only 8% from the actual radius of the Earth. In contrast, the calculated configurations for people without any personal experience of the Earth as a sphere were compatible with a cognitive concept of a flat Earth.
Anomalously Soft Non-Euclidean Springs
NASA Astrophysics Data System (ADS)
Levin, Ido; Sharon, Eran
2016-01-01
In this work we study the mechanical properties of a frustrated elastic ribbon spring—the non-Euclidean minimal spring. This spring belongs to the family of non-Euclidean plates: it has no spontaneous curvature, but its lateral intrinsic geometry is described by a non-Euclidean reference metric. The reference metric of the minimal spring is hyperbolic, and can be embedded as a minimal surface. We argue that the existence of a continuous set of such isometric minimal surfaces with different extensions leads to a complete degeneracy of the bulk elastic energy of the minimal spring under elongation. This degeneracy is removed only by boundary layer effects. As a result, the mechanical properties of the minimal spring are unusual: the spring is ultrasoft with a rigidity that depends on the thickness t as t^(7/2) and does not explicitly depend on the ribbon's width. Moreover, we show that as the ribbon is widened, the rigidity may even decrease. These predictions are confirmed by a numerical study of a constrained spring. This work is the first to address the unusual mechanical properties of constrained non-Euclidean elastic objects.
A fast estimation of shock wave pressure based on trend identification
NASA Astrophysics Data System (ADS)
Yao, Zhenjian; Wang, Zhongyu; Wang, Chenchen; Lv, Jing
2018-04-01
In this paper, a fast method based on trend identification is proposed to accurately estimate the shock wave pressure in a dynamic measurement. Firstly, the collected output signal of the pressure sensor is reconstructed by discrete cosine transform (DCT) to reduce the computational complexity for the subsequent steps. Secondly, the empirical mode decomposition (EMD) is applied to decompose the reconstructed signal into several components with different frequency-bands, and the last few low-frequency components are chosen to recover the trend of the reconstructed signal. In the meantime, the optimal component number is determined based on the correlation coefficient and the normalized Euclidean distance between the trend and the reconstructed signal. Thirdly, with the areas under the gradient curve of the trend signal, the stable interval that produces the minimum can be easily identified. As a result, the stable value of the output signal is achieved in this interval. Finally, the shock wave pressure can be estimated according to the stable value of the output signal and the sensitivity of the sensor in the dynamic measurement. A series of shock wave pressure measurements are carried out with a shock tube system to validate the performance of this method. The experimental results show that the proposed method works well in shock wave pressure estimation. Furthermore, comparative experiments also demonstrate the superiority of the proposed method over the existing approaches in both estimation accuracy and computational efficiency.
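The trend-recovery step above can be sketched as follows. The thresholds `r_min` and `d_max`, and the exact way the correlation coefficient and the normalized Euclidean distance are combined, are assumptions here; the paper only states that both criteria determine the optimal component number.

```python
import numpy as np

def select_trend(components, signal, r_min=0.95, d_max=0.2):
    """components: EMD components ordered high- to low-frequency.
    Sum the last k (low-frequency) components as the trend, choosing the
    smallest k for which the trend is close enough to the signal by both
    the correlation and normalized-distance criteria (thresholds assumed)."""
    signal = np.asarray(signal, float)
    for k in range(1, len(components) + 1):
        trend = np.sum(components[-k:], axis=0)
        r = np.corrcoef(trend, signal)[0, 1]
        d = np.linalg.norm(trend - signal) / np.linalg.norm(signal)
        if r >= r_min and d <= d_max:
            return trend, k
    return trend, k  # fall back to all components
```

With a dominant slow component and a small oscillation, a single low-frequency component already satisfies both criteria, so the trend is recovered without re-adding the high-frequency part.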
Li, Yushuang; Yang, Jiasheng; Zhang, Yi
2016-01-01
In this paper, we have proposed a novel alignment-free method for comparing the similarity of protein sequences. We first encode a protein sequence into a 440 dimensional feature vector consisting of a 400 dimensional Pseudo-Markov transition probability vector among the 20 amino acids, a 20 dimensional content ratio vector, and a 20 dimensional position ratio vector of the amino acids in the sequence. By evaluating the Euclidean distances among the representing vectors, we compare the similarity of protein sequences. We then apply this method to the ND5 dataset consisting of the ND5 protein sequences of 9 species, and the F10 and G11 datasets representing two of the xylanase-containing glycoside hydrolase families, i.e., families 10 and 11. As a result, our method achieves a correlation coefficient of 0.962 with the canonical protein sequence aligner ClustalW in the ND5 dataset, much higher than those of 5 other popular alignment-free methods. In addition, we successfully separate the xylanase sequences in the F10 family and the G11 family and illustrate that the F10 family is more heat stable than the G11 family, consistent with a few previous studies. Moreover, we prove mathematically an identity equation involving the Pseudo-Markov transition probability vector and the amino acid content ratio vector. PMID:27918587
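A rough sketch of the 440-dimensional encoding described above. The exact pseudo-Markov normalization and the definition of the position ratio are assumptions here (row-normalized transition counts and mean relative position are used), and `encode`/`seq_distance` are illustrative names:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(AMINO_ACIDS)}

def encode(seq):
    """440-dim vector: 400 transition probabilities among the 20 amino
    acids, 20 content ratios, 20 mean relative positions."""
    n = len(seq)
    trans = np.zeros((20, 20))
    for a, b in zip(seq, seq[1:]):
        trans[IDX[a], IDX[b]] += 1.0
    row = trans.sum(axis=1, keepdims=True)
    trans = np.divide(trans, row, out=np.zeros_like(trans), where=row > 0)
    content = np.zeros(20)
    pos = np.zeros(20)
    for i, a in enumerate(seq, start=1):
        content[IDX[a]] += 1.0
        pos[IDX[a]] += i
    pos = np.where(content > 0, pos / np.maximum(content, 1) / n, 0.0)
    content /= n
    return np.concatenate([trans.ravel(), content, pos])

def seq_distance(s1, s2):
    """Alignment-free distance: Euclidean distance of feature vectors."""
    return float(np.linalg.norm(encode(s1) - encode(s2)))
```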
Geometry-based populated chessboard recognition
NASA Astrophysics Data System (ADS)
Xie, Youye; Tang, Gongguo; Hoff, William
2018-04-01
Chessboards are commonly used to calibrate cameras, and many robust methods have been developed to recognize the unoccupied boards. However, when the chessboard is populated with chess pieces, such as during an actual game, the problem of recognizing the board is much harder. Challenges include occlusion caused by the chess pieces, the presence of outlier lines and low viewing angles of the chessboard. In this paper, we present a novel approach to address the above challenges and recognize the chessboard. The Canny edge detector and Hough transform are used to capture all possible lines in the scene. The k-means clustering and a k-nearest-neighbors inspired algorithm are applied to cluster and reject the outlier lines based on their Euclidean distances to the nearest neighbors in a scaled Hough transform space. Finally, based on prior knowledge of the chessboard structure, a geometric constraint is used to find the correspondences between image lines and the lines on the chessboard through the homography transformation. The proposed algorithm works for a wide range of the operating angles and achieves high accuracy in experiments.
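The outlier-rejection idea, scoring each line by its mean Euclidean distance to the k nearest neighbours in a scaled Hough (rho, theta) space, might be sketched as below. The min-max scaling and the median-based threshold are assumptions, not the paper's exact procedure:

```python
import numpy as np

def reject_outlier_lines(lines, k=3, thresh=2.0):
    """lines: (N, 2) array of (rho, theta) Hough parameters.
    Scale both axes to [0, 1], then drop lines whose mean Euclidean
    distance to their k nearest neighbours is unusually large."""
    pts = np.asarray(lines, float)
    rng = pts.max(axis=0) - pts.min(axis=0)
    scaled = (pts - pts.min(axis=0)) / (rng + 1e-12)
    d = np.linalg.norm(scaled[:, None, :] - scaled[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distance
    knn = np.sort(d, axis=1)[:, :k].mean(axis=1)
    keep = knn < thresh * np.median(knn)
    return pts[keep]
```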
Centered reduced moments and associate density functions applied to alkaline comet assay.
Castaneda, Roman; Pelaez, Alejandro; Marquez, Maria-Elena; Abad, Pablo
2005-01-01
The single cell gel electrophoresis assay is a sensitive, rapid, and visual technique for deoxyribonucleic acid (DNA) strand-break detection in individual mammalian cells, whose application has significantly increased in the past few years. The cells are embedded in agarose on glass slides, followed by lysis of the cell membrane. Thereafter, damaged DNA strands are electrophoresed away from the nucleus towards the anode, giving the appearance of a comet tail. Nowadays, charge coupled device cameras are attached to optical microscopes for recording the images of the cells, and digital image processing is applied for obtaining quantitative descriptors. However, the conventional software is usually expensive, inflexible and, in many cases, can only provide low-order descriptors based on image segmentation, determination of centers of mass, and Euclidean distances. Associated density functions and centered reduced moments offer an effective and flexible alternative for quantitative analysis of the comet cells. We will show how the position of the center of mass, the lengths and orientation of the main semiaxes, and the eccentricity of such images can be accurately determined by this method.
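A minimal sketch of the moment-based descriptors named above (centre of mass, principal semiaxes, orientation, eccentricity) for a grey-level comet image, assuming standard second-order central intensity moments:

```python
import numpy as np

def comet_descriptors(img):
    """Centre of mass, principal semiaxis lengths, orientation and
    eccentricity from the second-order central moments of a grey image."""
    img = np.asarray(img, float)
    total = img.sum()
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    cx, cy = (x * img).sum() / total, (y * img).sum() / total
    mu20 = ((x - cx) ** 2 * img).sum() / total
    mu02 = ((y - cy) ** 2 * img).sum() / total
    mu11 = ((x - cx) * (y - cy) * img).sum() / total
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    evals, evecs = np.linalg.eigh(cov)            # ascending eigenvalues
    minor, major = np.sqrt(np.maximum(evals, 0))
    theta = np.arctan2(evecs[1, 1], evecs[0, 1])  # major-axis orientation
    ecc = np.sqrt(1 - (minor / major) ** 2) if major > 0 else 0.0
    return (cx, cy), (major, minor), theta, ecc
```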
Remote monitoring of LED lighting system performance
NASA Astrophysics Data System (ADS)
Thotagamuwa, Dinusha R.; Perera, Indika U.; Narendran, Nadarajah
2016-09-01
The concept of connected lighting systems using LED lighting for the creation of intelligent buildings is becoming attractive to building owners and managers. In this application, the two most important parameters are power demand and the remaining useful life of the LED fixtures. The first enables energy-efficient buildings and the second helps building managers schedule maintenance services. The failure of an LED lighting system can be parametric (such as lumen depreciation) or catastrophic (such as complete cessation of light). Catastrophic failures in LED lighting systems can create serious consequences in safety critical and emergency applications. Therefore, both failure mechanisms must be considered and the shorter of the two must be used as the failure time. Furthermore, because of significant variation between the useful lives of similar products, it is difficult to accurately predict the life of LED systems. Real-time data gathering and analysis of key operating parameters of LED systems can enable the accurate estimation of the useful life of a lighting system. This paper demonstrates the use of a data-driven method (Euclidean distance) to monitor the performance of an LED lighting system and predict its time to failure.
An Unsupervised Deep Hyperspectral Anomaly Detector
Ma, Ning; Peng, Yu; Wang, Shaojun
2018-01-01
Hyperspectral image (HSI) based detection has attracted considerable attention recently in agriculture, environmental protection and military applications as different wavelengths of light can be advantageously used to discriminate different types of objects. Unfortunately, estimating the background distribution and the detection of interesting local objects is not straightforward, and anomaly detectors may give false alarms. In this paper, a Deep Belief Network (DBN) based anomaly detector is proposed. The high-level features and reconstruction errors are learned through the network in a manner which is not affected by previous background distribution assumption. To reduce contamination by local anomalies, adaptive weights are constructed from reconstruction errors and statistical information. By using the code image which is generated during the inference of DBN and modified by adaptively updated weights, a local Euclidean distance between the pixels under test and their neighboring pixels is used to determine the anomaly targets. Experimental results on synthetic and recorded HSI datasets show that the proposed method outperforms the classic global Reed-Xiaoli detector (RXD), local RX detector (LRXD) and the state-of-the-art Collaborative Representation detector (CRD). PMID:29495410
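The local-distance step can be illustrated as follows; the window size and the exclusion of the centre pixel from the neighbourhood mean are assumptions, and the DBN code image is simply taken as a given (H, W, D) array:

```python
import numpy as np

def local_anomaly_scores(code_img, win=3):
    """Score each pixel by the Euclidean distance between its feature
    vector and the mean vector of its win x win neighbourhood (centre
    pixel excluded) - a simplified stand-in for the detector's
    local-distance step."""
    H, W, D = code_img.shape
    r = win // 2
    scores = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - r), min(H, i + r + 1)
            j0, j1 = max(0, j - r), min(W, j + r + 1)
            patch = code_img[i0:i1, j0:j1].reshape(-1, D)
            neigh_mean = (patch.sum(0) - code_img[i, j]) / (patch.shape[0] - 1)
            scores[i, j] = np.linalg.norm(code_img[i, j] - neigh_mean)
    return scores
```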
NASA Technical Reports Server (NTRS)
Scholz, D.; Fuhs, N.; Hixson, M.; Akiyama, T. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Data sets for corn, soybeans, winter wheat, and spring wheat were used to evaluate the following schemes for crop identification: (1) per point Gaussian maximum classifier; (2) per point sum of normal densities classifiers; (3) per point linear classifier; (4) per point Gaussian maximum likelihood decision tree classifiers; and (5) texture sensitive per field Gaussian maximum likelihood classifier. Test site location and classifier both had significant effects on classification accuracy of small grains; classifiers did not differ significantly in overall accuracy, with the majority of the difference among classifiers being attributed to training method rather than to the classification algorithm applied. The complexity of use and computer costs for the classifiers varied significantly. A linear classification rule which assigns each pixel to the class whose mean is closest in Euclidean distance was the easiest for the analyst and cost the least per classification.
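The linear rule described last, assigning each pixel to the class whose mean is closest in Euclidean distance, is a nearest-centroid classifier; a minimal sketch (names and array shapes are illustrative):

```python
import numpy as np

def nearest_mean_classify(pixels, class_means):
    """Assign each pixel to the class whose mean is closest in
    Euclidean distance."""
    X = np.asarray(pixels, float)        # (N, bands)
    M = np.asarray(class_means, float)   # (C, bands)
    d2 = ((X[:, None, :] - M[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)
```

The rule is linear in the pixel vector x because argmin_c ||x - m_c||^2 equals argmax_c (m_c . x - ||m_c||^2 / 2), which explains its low computational cost relative to the maximum likelihood classifiers.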
Morales-Bayuelo, Alejandro
2016-07-01
Though QSAR was originally developed in the context of physical organic chemistry, it has been applied very extensively to chemicals (drugs) that act on biological systems; in this context, one of the most important QSAR methods is the 3D QSAR model. However, because its results are difficult to interpret, new methodologies are needed to highlight their physical-chemical meaning. In this sense, this work postulates new insights to understand the CoMFA results using molecular quantum similarity and chemical reactivity descriptors within the framework of density functional theory. To obtain these insights, a simple theoretical scheme involving quantum similarity (overlap and Coulomb operators and their Euclidean distances) and chemical reactivity descriptors such as chemical potential (μ), hardness (η), softness (S), electrophilicity (ω), and the Fukui functions was used to understand the substitution effect. This methodology can be applied to analyze the biological activity and the stabilization process in the non-covalent interactions of a particular molecular set taking a reference compound.
Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.
Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko
2017-12-01
Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the [Formula: see text]th principal component in Euclidean space: the locus of the weighted Fréchet mean of [Formula: see text] vertex trees when the weights vary over the [Formula: see text]-simplex. We establish some basic properties of these objects, in particular showing that they have dimension [Formula: see text], and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.
Euclideanization of Maxwell-Chern-Simons theory
NASA Astrophysics Data System (ADS)
Bowman, Daniel Alan
We quantize the theory of electromagnetism in (2+1)-dimensional spacetime with the addition of the topological Chern-Simons term using an indefinite metric formalism. In the process, we also quantize the Proca and pure Maxwell theories, which are shown to be related to the Maxwell-Chern-Simons theory. Next, we Euclideanize these three theories, obtaining path space formulae and investigating Osterwalder-Schrader positivity in each case. Finally, we obtain a characterization of those Euclidean states that correspond to physical states in the relativistic theories.
Variational method for lattice spectroscopy with ghosts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burch, Tommy; Hagen, Christian; Gattringer, Christof
2006-01-01
We discuss the variational method used in lattice spectroscopy calculations. In particular we address the role of ghost contributions which appear in quenched or partially quenched simulations and have a nonstandard Euclidean time dependence. We show that the ghosts can be separated from the physical states. Our result is illustrated with numerical data for the scalar meson.
Vacuum polarization in the field of a multidimensional global monopole
NASA Astrophysics Data System (ADS)
Grats, Yu. V.; Spirin, P. A.
2016-11-01
An approximate expression for the Euclidean Green function of a massless scalar field in the spacetime of a multidimensional global monopole has been derived. Expressions for the vacuum expectation values ⟨φ^2⟩_ren and ⟨T^0_0⟩_ren have been obtained by the dimensional regularization method. Comparison with the results obtained by alternative regularization methods is made.
Han, Lianghao; Dong, Hua; McClelland, Jamie R; Han, Liangxiu; Hawkes, David J; Barratt, Dean C
2017-07-01
This paper presents a new hybrid biomechanical model-based non-rigid image registration method for lung motion estimation. In the proposed method, a patient-specific biomechanical modelling process captures major physically realistic deformations with explicit physical modelling of sliding motion, whilst a subsequent non-rigid image registration process compensates for small residuals. The proposed algorithm was evaluated with 10 4D CT datasets of lung cancer patients. The target registration error (TRE), defined as the Euclidean distance of landmark pairs, was significantly lower with the proposed method (TRE = 1.37 mm) than with biomechanical modelling (TRE = 3.81 mm) and intensity-based image registration without specific considerations for sliding motion (TRE = 4.57 mm). The proposed method achieved a comparable accuracy as several recently developed intensity-based registration algorithms with sliding handling on the same datasets. A detailed comparison on the distributions of TREs with three non-rigid intensity-based algorithms showed that the proposed method performed especially well on estimating the displacement field of lung surface regions (mean TRE = 1.33 mm, maximum TRE = 5.3 mm). The effects of biomechanical model parameters (such as Poisson's ratio, friction and tissue heterogeneity) on displacement estimation were investigated. The potential of the algorithm in optimising biomechanical models of lungs through analysing the pattern of displacement compensation from the image registration process has also been demonstrated.
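The TRE as defined above, the Euclidean distance between corresponding landmark pairs, reduces to a few lines; the (N, 3) array shapes and millimetre units are assumptions:

```python
import numpy as np

def target_registration_error(landmarks_ref, landmarks_est):
    """Per-landmark Euclidean distances between corresponding points;
    returns (mean TRE, max TRE)."""
    diff = np.asarray(landmarks_ref, float) - np.asarray(landmarks_est, float)
    tre = np.linalg.norm(diff, axis=1)
    return float(tre.mean()), float(tre.max())
```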
Vickers, Douglas; Bovet, Pierre; Lee, Michael D; Hughes, Peter
2003-01-01
The planar Euclidean version of the travelling salesperson problem (TSP) requires finding a tour of minimal length through a two-dimensional set of nodes. Despite the computational intractability of the TSP, people can produce rapid, near-optimal solutions to visually presented versions of such problems. To explain this, MacGregor et al (1999, Perception 28 1417-1428) have suggested that people use a global-to-local process, based on a perceptual tendency to organise stimuli into convex figures. We review the evidence for this idea and propose an alternative, local-to-global hypothesis, based on the detection of least distances between the nodes in an array. We present the results of an experiment in which we examined the relationships between three objective measures and performance measures of optimality and response uncertainty in tasks requiring participants to construct a closed tour or an open path. The data are not well accounted for by a process based on the convex hull. In contrast, results are generally consistent with a locally focused process based initially on the detection of nearest-neighbour clusters. Individual differences are interpreted in terms of a hierarchical process of constructing solutions, and the findings are related to a more general analysis of the role of nearest neighbours in the perception of structure and motion.
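A nearest-neighbour tour illustrates the kind of local-to-global, least-distance process discussed above; this is the standard greedy heuristic, not the authors' full hierarchical model:

```python
import math

def nearest_neighbour_tour(points):
    """Greedy tour: start at node 0 and repeatedly step to the nearest
    unvisited node."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Length of the closed tour through the points in tour order."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```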
Gastl, Mareike; Brünner, Yvonne F; Wiesmann, Martin; Freiherr, Jessica
2014-09-01
The nose is important for breathing, filtering air, and perceiving olfactory stimuli. Although the face and hands have been mapped, the representation of the internal and external surface of the nose on the primary somatosensory cortex (SI) is still poorly understood. To fill this gap, functional magnetic resonance imaging (fMRI) was used to localize the nose and the nasal mucosa in the Brodmann areas (BAs) 3b, 1, and 2 of the human postcentral gyrus (PG). Tactile stimulation during fMRI was applied via a customized pneumatically driven device to six stimulation sites: the alar wing of the nose, the lateral nasal mucosa, and the hand (serving as a reference area) on the left and right side of the body. Individual representations could be discriminated for the left and right hand, for the left nasal mucosa and left alar wing of the nose in BA 3b and BA 1 by comparing mean activation maxima and Euclidean distances. Right-sided nasal conditions and conditions in BA 2 could further be separated by different Euclidean distances. Regarding the alar wing of the nose, the results concurred with the classic sensory homunculus proposed by Penfield and colleagues. The nasal mucosa not only has an individual and bilateral representation; its position on the somatosensory cortex is also closer to the caudal end of the PG than that of the alar wing of the nose and the hand. As SI is commonly activated during the perception of odors, these findings underscore the importance of the knowledge of the representation of the nasal mucosa on the primary somatosensory cortex, especially for interpretation of results of functional imaging studies about the sense of smell.
Groundwater similarity across a watershed derived from time-warped and flow-corrected time series
NASA Astrophysics Data System (ADS)
Rinderer, M.; McGlynn, B. L.; van Meerveld, H. J.
2017-05-01
Information about catchment-scale groundwater dynamics is necessary to understand how catchments store and release water and why water quantity and quality varies in streams. However, groundwater level monitoring is often restricted to a limited number of sites. Knowledge of the factors that determine similarity between monitoring sites can be used to predict catchment-scale groundwater storage and connectivity of different runoff source areas. We used distance-based and correlation-based similarity measures to quantify the spatial and temporal differences in shallow groundwater similarity for 51 monitoring sites in a Swiss prealpine catchment. The 41 months long time series were preprocessed using Dynamic Time-Warping and a Flow-corrected Time Transformation to account for small timing differences and bias toward low-flow periods. The mean distance-based groundwater similarity was correlated to topographic indices, such as upslope contributing area, topographic wetness index, and local slope. Correlation-based similarity was less related to landscape position but instead revealed differences between seasons. Analysis of variance and partial Mantel tests showed that landscape position, represented by the topographic wetness index, explained 52% of the variability in mean distance-based groundwater similarity, while spatial distance, represented by the Euclidean distance, explained only 5%. The variability in distance-based similarity and correlation-based similarity between groundwater and streamflow time series was significantly larger for midslope locations than for other landscape positions. This suggests that groundwater dynamics at these midslope sites, which are important to understand runoff source areas and hydrological connectivity at the catchment scale, are most difficult to predict.
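The time-warped comparison above rests on dynamic time warping; a textbook DTW distance for two 1-D series is sketched below (the flow-corrected time transformation and the correlation-based measures are omitted):

```python
import numpy as np

def dtw_distance(a, b):
    """Plain dynamic-time-warping distance between two 1-D series:
    cumulative cost of the cheapest monotone alignment path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

Because the alignment may repeat samples, two series that differ only by small timing shifts get a near-zero distance, which is exactly why DTW preprocessing helps before computing groundwater similarity.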
Equivalence Testing of Complex Particle Size Distribution Profiles Based on Earth Mover's Distance.
Hu, Meng; Jiang, Xiaohui; Absar, Mohammad; Choi, Stephanie; Kozak, Darby; Shen, Meiyu; Weng, Yu-Ting; Zhao, Liang; Lionberger, Robert
2018-04-12
Particle size distribution (PSD) is an important property of particulates in drug products. In the evaluation of generic drug products formulated as suspensions, emulsions, and liposomes, the PSD comparisons between a test product and the branded product can provide useful information regarding in vitro and in vivo performance. Historically, the FDA has recommended the population bioequivalence (PBE) statistical approach to compare the PSD descriptors D50 and SPAN from test and reference products to support product equivalence. In this study, the earth mover's distance (EMD) is proposed as a new metric for comparing PSD particularly when the PSD profile exhibits complex distribution (e.g., multiple peaks) that is not accurately described by the D50 and SPAN descriptors. EMD is a statistical metric that measures the discrepancy (distance) between size distribution profiles without a prior assumption of the distribution. PBE is then adopted to perform a statistical test to establish equivalence based on the calculated EMD distances. Simulations show that the proposed EMD-based approach is effective in comparing test and reference profiles for equivalence testing and is superior compared to commonly used distance measures, e.g., Euclidean and Kolmogorov-Smirnov distances. The proposed approach was demonstrated by evaluating equivalence of cyclosporine ophthalmic emulsion PSDs that were manufactured under different conditions. Our results show that the proposed approach can effectively pass an equivalent product (e.g., reference product against itself) and reject an inequivalent product (e.g., reference product against negative control), thus suggesting its usefulness in supporting bioequivalence determination of a test product to the reference product when both possess multimodal PSDs.
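For 1-D distributions on a common size grid, the EMD equals the area between the two cumulative distribution functions, which makes the metric easy to sketch (the shared-grid assumption and normalization are mine):

```python
import numpy as np

def emd_1d(sizes, p, q):
    """Earth mover's distance between two particle size distributions
    defined on the same increasing size grid: the integral of the
    absolute difference of the CDFs."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()          # normalize to unit mass
    cdf_gap = np.abs(np.cumsum(p) - np.cumsum(q))
    widths = np.diff(np.asarray(sizes, float))
    return float((cdf_gap[:-1] * widths).sum())
```

Unlike D50 or SPAN, this distance registers a shifted second peak directly, since all mass movement contributes to the CDF gap.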
Ghadie, Mohamed A; Japkowicz, Nathalie; Perkins, Theodore J
2015-08-15
Stem cell differentiation is largely guided by master transcriptional regulators, but it also depends on the expression of other types of genes, such as cell cycle genes, signaling genes, metabolic genes, trafficking genes, etc. Traditional approaches to understanding gene expression patterns across multiple conditions, such as principal components analysis or K-means clustering, can group cell types based on gene expression, but they do so without knowledge of the differentiation hierarchy. Hierarchical clustering can organize cell types into a tree, but in general this tree is different from the differentiation hierarchy itself. Given the differentiation hierarchy and gene expression data at each node, we construct a weighted Euclidean distance metric such that the minimum spanning tree with respect to that metric is precisely the given differentiation hierarchy. We provide a set of linear constraints that are provably sufficient for the desired construction and a linear programming approach to identify sparse sets of weights, effectively identifying genes that are most relevant for discriminating different parts of the tree. We apply our method to microarray gene expression data describing 38 cell types in the hematopoiesis hierarchy, constructing a weighted Euclidean metric that uses just 175 genes. However, we find that there are many alternative sets of weights that satisfy the linear constraints. Thus, in the style of random-forest training, we also construct metrics based on random subsets of the genes and compare them to the metric of 175 genes. We then report on the selected genes and their biological functions. Our approach offers a new way to identify genes that may have important roles in stem cell differentiation.
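The core construction pairs a weighted Euclidean metric with a minimum spanning tree; the small sketch below uses Prim's algorithm on a dense distance matrix, and omits the paper's linear-programming selection of the weights:

```python
import numpy as np

def weighted_sq_dist(x, y, w):
    """Weighted squared Euclidean distance sum_g w_g * (x_g - y_g)^2."""
    return float((np.asarray(w) * (np.asarray(x) - np.asarray(y)) ** 2).sum())

def prim_mst(D):
    """Minimum spanning tree (Prim) of a dense symmetric distance
    matrix; returns the list of tree edges."""
    n = D.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: D[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

Given expression vectors for each cell type and candidate gene weights, one can build the distance matrix with `weighted_sq_dist` and check whether `prim_mst` reproduces the known differentiation hierarchy.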
Automated interpretation of 3D laserscanned point clouds for plant organ segmentation.
Wahabzada, Mirwaes; Paulus, Stefan; Kersting, Kristian; Mahlein, Anne-Katrin
2015-08-08
Plant organ segmentation from 3D point clouds is a relevant task for plant phenotyping and plant growth observation. Automated solutions are required to increase the efficiency of recent high-throughput plant phenotyping pipelines. However, plant geometrical properties vary with time, among observation scales and different plant types. The main objective of the present research is to develop a fully automated, fast and reliable data driven approach for plant organ segmentation. The automated segmentation of plant organs using unsupervised, clustering methods is crucial in cases where the goal is to get fast insights into the data, or where no labelled data is available or is costly to obtain. For this we propose and compare data driven approaches that are easy to realize and make the use of standard algorithms possible. Since normalized histograms, acquired from 3D point clouds, can be seen as samples from a probability simplex, we propose to map the data from the simplex space into Euclidean space using Aitchison's log-ratio transformation, or into the positive quadrant of the unit sphere using the square root transformation. This, in turn, paves the way to a wide range of commonly used analysis techniques that are based on measuring the similarities between data points using Euclidean distance. We investigate the performance of the resulting approaches in the practical context of grouping 3D point clouds and demonstrate empirically that they lead to clustering results with high accuracy for monocotyledonous and dicotyledonous plant species with diverse shoot architecture. An automated segmentation of 3D point clouds is demonstrated in the present work. Within seconds first insights into plant data can be derived, even from non-labelled data. This approach is applicable to different plant species with high accuracy.
The analysis cascade can be implemented in future high-throughput phenotyping scenarios and will support the evaluation of the performance of different plant genotypes exposed to stress or in different environmental scenarios.
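The two simplex-to-Euclidean maps mentioned above can be sketched as follows; using the centred log-ratio form of Aitchison's transformation and the `eps` zero-guard are assumptions on my part:

```python
import numpy as np

def sqrt_transform(h):
    """Map a histogram onto the positive quadrant of the unit sphere:
    the square roots of the normalized bin masses have unit 2-norm."""
    h = np.asarray(h, float)
    return np.sqrt(h / h.sum())

def log_ratio_transform(h, eps=1e-9):
    """Centred log-ratio map from the simplex into Euclidean space
    (one common form of Aitchison's transformation; eps guards zeros)."""
    h = np.asarray(h, float) + eps
    h = h / h.sum()
    g = np.log(h)
    return g - g.mean()
```

After either map, ordinary Euclidean distance (and hence standard clustering algorithms such as k-means) can be applied to the transformed histograms.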
ERIC Educational Resources Information Center
Rogers, Pat
1972-01-01
Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)
NASA Astrophysics Data System (ADS)
de Wit, Bernard; Reys, Valentin
2017-12-01
Supergravity with eight supercharges in a four-dimensional Euclidean space is constructed at the full non-linear level by performing an off-shell time-like reduction of five-dimensional supergravity. The resulting four-dimensional theory is realized off-shell with the Weyl, vector and tensor supermultiplets and a corresponding multiplet calculus. Hypermultiplets are included as well, but they are themselves only realized with on-shell supersymmetry. We also briefly discuss the non-linear supermultiplet. The off-shell reduction leads to a full understanding of the Euclidean theory. A complete multiplet calculus is presented along the lines of the Minkowskian theory. Unlike in Minkowski space, chiral and anti-chiral multiplets are real and supersymmetric actions are generally unbounded from below. Precisely as in the Minkowski case, where one has different formulations of Poincaré supergravity upon introducing different compensating supermultiplets, one can also obtain different versions of Euclidean supergravity.
Flexible intuitions of Euclidean geometry in an Amazonian indigene group
Izard, Véronique; Pica, Pierre; Spelke, Elizabeth S.; Dehaene, Stanislas
2011-01-01
Kant argued that Euclidean geometry is synthesized on the basis of an a priori intuition of space. This proposal inspired much behavioral research probing whether spatial navigation in humans and animals conforms to the predictions of Euclidean geometry. However, Euclidean geometry also includes concepts that transcend the perceptible, such as objects that are infinitely small or infinitely large, or statements of necessity and impossibility. We tested the hypothesis that certain aspects of nonperceptible Euclidean geometry map onto intuitions of space that are present in all humans, even in the absence of formal mathematical education. Our tests probed intuitions of points, lines, and surfaces in participants from an indigene group in the Amazon, the Mundurucu, as well as adults and age-matched child controls from the United States and France and younger US children without education in geometry. The responses of Mundurucu adults and children converged with those of mathematically educated adults and children and revealed an intuitive understanding of essential properties of Euclidean geometry. For instance, on a surface described to them as perfectly planar, the Mundurucu's estimates of the internal angles of triangles added up to ∼180 degrees, and when asked explicitly, they stated that there exists one single parallel line to any given line through a given point. These intuitions were also partially in place in the group of younger US participants. We conclude that, during childhood, humans develop geometrical intuitions that spontaneously accord with the principles of Euclidean geometry, even in the absence of training in mathematics. PMID:21606377
Surnames in Albania: a study of the population of Albania through isonymy.
Mikerezi, Ilia; Xhina, Endrit; Scapoli, Chiara; Barbujani, Guido; Mamolini, Elisabetta; Sandri, Massimo; Carrieri, Alberto; Rodriguez-Larralde, Alvaro; Barrai, Italo
2013-05-01
In order to describe the isonymic structure of Albania, the distribution of 3,068,447 surnames was studied in the 12 prefectures and their administrative subdivisions: the 36 districts and 321 communes. The number of different surnames found was 37,184. The effective surname number for the entire country was 1327; the average for prefectures was 653.3 ± 84.3, for districts 365.9 ± 42.0 and for communes 122.6 ± 8.7. These values display a variation of inbreeding between administrative levels in the Albanian population, which can be attributed to the previously published "Prefecture effect". Matrices of isonymic distances between units within administrative levels were tested for correlation with geographic distances. The correlations were highest for prefectures (r = 0.71 ± 0.06 for Euclidean distance) and lowest for communes (r = 0.37 ± 0.011 for Nei's distance). The multivariate analyses (Principal component analysis and Multidimensional Scaling) of prefectures identify three main clusters, one toward the North, the second in Central Albania, and the third in the South. This pattern is consistent with important subclusters from districts and communes, which point out that the country may have been colonised by diffusion of groups in the North-South direction, and from Macedonia in the East, over a pre-existing Illyrian population. © 2013 Blackwell Publishing Ltd/University College London.
Gray, Brian R.; Bushek, David; Drane, J. Wanzer; Porter, Dwayne
2009-01-01
Infection levels of eastern oysters by the unicellular pathogen Perkinsus marinus have been associated with anthropogenic influences in laboratory studies. However, these relationships have been difficult to investigate in the field because anthropogenic inputs are often associated with natural influences such as freshwater inflow, which can also affect infection levels. We addressed P. marinus-land use associations using field-collected data from Murrells Inlet, South Carolina, USA, a developed, coastal estuary with relatively minor freshwater inputs. Ten oysters from each of 30 reefs were sampled quarterly in each of 2 years. Distances to nearest urbanized land class and to nearest stormwater outfall were measured via both tidal creeks and an elaboration of Euclidean distance. As the forms of any associations between oyster infection and distance to urbanization were unknown a priori, we used data from the first and second years of the study as exploratory and confirmatory datasets, respectively. With one exception, quarterly land use associations identified using the exploratory dataset were not confirmed using the confirmatory dataset. The exception was an association between the prevalence of moderate to high infection levels in winter and decreasing distance to nearest urban land use. Given that the study design appeared adequate to detect effects inferred from the exploratory dataset, these results suggest that effects of land use gradients were largely insubstantial or were ephemeral with duration less than 3 months.
Barr, Kelly R.; Kus, Barbara E.; Preston, Kristine; Howell, Scarlett; Perkins, Emily; Vandergast, Amy
2015-01-01
Achieving long-term persistence of species in urbanized landscapes requires characterizing population genetic structure to understand and manage the effects of anthropogenic disturbance on connectivity. Urbanization over the past century in coastal southern California has caused both precipitous loss of coastal sage scrub habitat and declines in populations of the cactus wren (Campylorhynchus brunneicapillus). Using 22 microsatellite loci, we found that remnant cactus wren aggregations in coastal southern California comprised 20 populations based on strict exact tests for population differentiation, and 12 genetic clusters with hierarchical Bayesian clustering analyses. Genetic structure patterns largely mirrored underlying habitat availability, with cluster and population boundaries coinciding with fragmentation caused primarily by urbanization. Using a habitat model we developed, we detected stronger associations between habitat-based distances and genetic distances than with Euclidean geographic distance. Within populations, we detected a positive association between available local habitat and allelic richness and a negative association with relatedness. Isolation-by-distance patterns varied over the study area, which we attribute to temporal differences in anthropogenic landscape development. We also found that genetic bottleneck signals were associated with wildfire frequency. These results indicate that habitat fragmentation and alterations have reduced genetic connectivity and diversity of cactus wren populations in coastal southern California. Management efforts focused on improving connectivity among remaining populations may help to ensure population persistence.
Euclid and Descartes: A Partnership.
ERIC Educational Resources Information Center
Wasdovich, Dorothy Hoy
1991-01-01
Presented is a method of reorganizing a high school geometry course to integrate coordinate geometry with Euclidean geometry at an earlier stage in the course, thus enabling students to prove subsequent theorems from either perspective. Several examples contrasting different proofs from both perspectives are provided. (MDH)
NASA Technical Reports Server (NTRS)
Dowker, Fay; Gregory, Ruth; Traschen, Jennie
1991-01-01
We argue for the existence of solutions of the Euclidean Einstein equations that correspond to a vortex sitting at the horizon of a black hole. We find the asymptotic behaviors, at the horizon and at infinity, of vortex solutions for the gauge and scalar fields in an abelian Higgs model on a Euclidean Schwarzschild background and interpolate between them by integrating the equations numerically. Calculating the backreaction shows that the effect of the vortex is to cut a slice out of the Schwarzschild geometry. Consequences of these solutions for black hole thermodynamics are discussed.
Stephens, William Edryd
2016-09-01
Illicit tobacco products have a disproportionately negative effect on public health. Counterfeits and cheap whites, as well as legal brands smuggled from countries not adopting track-and-trace technologies, will require novel forensic tools to aid the disruption of their supply chains. Data sets of trace element concentrations in tobacco were obtained using X-ray fluorescence spectrometry on samples of legal and illicit products mainly from Europe. Authentic and counterfeit products were discriminated by identifying outliers from data sets of legal products using Mahalanobis distance and graphical profiling methods. Identical and closely similar counterfeits were picked out using Euclidean distance, and counterfeit provenance was addressed using chemometric methods to identify geographical affinities. Taking Marlboro as an exemplar, the major brands are shown to be remarkably consistent in composition, in marked contrast to counterfeits bearing the same brand name. Analysis of 35 illicit products seized in the European Union (EU) indicates that 18 are indistinguishable or closely similar to Marlboro legally sold in the EU, while 17 are sufficiently different to be deemed counterfeit, among them being 2 counterfeits so closely similar that their tobaccos are likely to come from the same source. The tobacco in the large majority of counterfeits in this data set appears to originate in Asia. Multivariate and graphical analysis of trace elements in tobacco can effectively authenticate brands, cross-match illicit products across jurisdictions and may identify their geographical sources. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
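The outlier-screening step described above, flagging a suspect sample by its Mahalanobis distance from a reference set of legal products, can be sketched as follows. The synthetic two-element "concentration" data and the fixed cutoff of 3 are illustrative assumptions, not values from the study:

```python
import numpy as np

def mahalanobis(x, X):
    """Mahalanobis distance of sample x from a reference set X
    (rows = samples, columns = element concentrations)."""
    mu = X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    diff = np.asarray(x, dtype=float) - mu
    return float(np.sqrt(diff @ inv_cov @ diff))

# synthetic stand-in: a tight cluster of "legal brand" measurements
rng = np.random.default_rng(0)
genuine = rng.normal([10.0, 5.0], [0.2, 0.1], size=(50, 2))
suspect = np.array([12.0, 6.0])  # hypothetical seized sample
d = mahalanobis(suspect, genuine)
is_outlier = d > 3.0             # illustrative cutoff, not the study's
```

Unlike plain Euclidean distance, the Mahalanobis form accounts for the covariance between element concentrations, so a sample is flagged relative to the natural spread of the genuine brand.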
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouchard, Chris; Chang, Chia Cheng; Kurth, Thorsten
We show that the Feynman-Hellmann theorem can be derived from the long Euclidean-time limit of correlation functions determined with functional derivatives of the partition function. Using this insight, we fully develop an improved method for computing matrix elements of external currents utilizing only two-point correlation functions. Our method applies to matrix elements of any external bilinear current, including nonzero momentum transfer, flavor-changing, and two or more current insertion matrix elements. The ability to identify and control all the systematic uncertainties in the analysis of the correlation functions stems from the unique time dependence of the ground-state matrix elements and the fact that all excited states and contact terms are Euclidean-time dependent. We demonstrate the utility of our method with a calculation of the nucleon axial charge using gradient-flowed domain-wall valence quarks on the $$N_f=2+1+1$$ MILC highly improved staggered quark ensemble with lattice spacing and pion mass of approximately 0.15 fm and 310 MeV, respectively. We show full control over excited-state systematics with the new method and obtain a value of $$g_A = 1.213(26)$$ with a quark-mass-dependent renormalization coefficient.
Entanglement Entropy of Black Holes.
Solodukhin, Sergey N
2011-01-01
The entanglement entropy is a fundamental quantity, which characterizes the correlations between sub-systems in a larger quantum-mechanical system. For two sub-systems separated by a surface the entanglement entropy is proportional to the area of the surface and depends on the UV cutoff, which regulates the short-distance correlations. The geometrical nature of entanglement-entropy calculation is particularly intriguing when applied to black holes when the entangling surface is the black-hole horizon. I review a variety of aspects of this calculation: the useful mathematical tools such as the geometry of spaces with conical singularities and the heat kernel method, the UV divergences in the entropy and their renormalization, the logarithmic terms in the entanglement entropy in four and six dimensions and their relation to the conformal anomalies. The focus in the review is on the systematic use of the conical singularity method. The relations to other known approaches such as 't Hooft's brick-wall model and the Euclidean path integral in the optical metric are discussed in detail. The puzzling behavior of the entanglement entropy due to fields, which non-minimally couple to gravity, is emphasized. The holographic description of the entanglement entropy of the blackhole horizon is illustrated on the two- and four-dimensional examples. Finally, I examine the possibility to interpret the Bekenstein-Hawking entropy entirely as the entanglement entropy.
Human action recognition based on spatial-temporal descriptors using key poses
NASA Astrophysics Data System (ADS)
Hu, Shuo; Chen, Yuxin; Wang, Huaibao; Zuo, Yaqing
2014-11-01
Human action recognition is an important area of pattern recognition today due to its direct application and need in various settings such as surveillance and virtual reality. In this paper, a simple and effective human action recognition method is presented based on the key poses of the human silhouette and a spatio-temporal feature. Firstly, the contour points of the human silhouette are extracted, and the key poses are learned by K-means clustering based on the Euclidean distance between each contour point and the centre point of the silhouette; the type of each action is then labeled for further matching. Secondly, we obtain the trajectory of the centre point across frames and create a spatio-temporal feature value, denoted W, to describe the motion direction and speed of each action. The value W contains the information of location and temporal order of each point on the trajectory. Finally, the matching stage compares the key poses and W between training and test sequences; the nearest-neighbor sequence is found and its label supplies the final result. Experiments on the publicly available Weizmann dataset show that the proposed method improves accuracy by distinguishing ambiguous poses and increases suitability for real-time applications by reducing the computational cost.
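A minimal sketch of the key-pose learning step, a radial contour descriptor followed by K-means, assuming a contour given as (x, y) points. The descriptor length, the normalization, and the plain Lloyd's iteration are our simplifications, not the authors' exact pipeline:

```python
import numpy as np

def radial_descriptor(contour, n_bins=16):
    """Pose descriptor: distances from each contour point to the
    silhouette centre, resampled to a fixed length and scaled so
    that poses of different sizes are comparable."""
    contour = np.asarray(contour, dtype=float)
    centre = contour.mean(axis=0)
    d = np.linalg.norm(contour - centre, axis=1)
    idx = np.linspace(0, len(d) - 1, n_bins).astype(int)
    return d[idx] / d.max()

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; the k centroids are the learned key poses."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return C, labels

# two synthetic, well-separated groups of descriptors standing in for poses
rng = np.random.default_rng(1)
A = rng.normal(0.0, 0.05, (20, 8))
B = rng.normal(2.0, 0.05, (20, 8))
C, labels = kmeans(np.vstack([A, B]), k=2)
```

In the paper's pipeline the cluster centroids would then be labeled by action type and matched against test sequences.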
Indexing and retrieving motions of characters in close contact.
Ho, Edmond S L; Komura, Taku
2009-01-01
Human motion indexing and retrieval are important for animators due to the need to search for motions in a database which can be blended and concatenated. Most previous research on human motion indexing and retrieval computes the Euclidean distance of joint angles or joint positions. Such approaches are difficult to apply in cases where multiple characters are closely interacting with each other, as the relationships between the characters are not encoded in the representation. In this research, we propose a topology-based approach to index the motions of two human characters in close contact. We compute and encode how the two bodies are tangled based on the concept of rational tangles. The encoded relationships, which we define as TangleList, are used to determine the similarity of pairs of postures. Using our method, we can index and retrieve motions such as one person piggy-backing another, one person assisting another in walking, and two persons dancing or wrestling. Our method is useful for managing a motion database of multiple characters. We can also produce motion graph structures of two characters closely interacting with each other by interpolating and concatenating topologically similar postures and motion clips, which are applicable to 3D computer games and computer animation.
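The baseline that this work contrasts with, plain Euclidean distance between joint configurations, can be sketched as below; the topology-based TangleList encoding itself is not reproduced here, and the array shapes are assumptions:

```python
import numpy as np

def pose_distance(p, q):
    """Euclidean distance between two poses, each an (n_joints, 3)
    array of joint positions (joint angles would work the same way)."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def motion_distance(m1, m2):
    """Frame-wise average pose distance; assumes equal-length clips
    (unequal clips would need time warping first)."""
    return float(np.mean([pose_distance(a, b) for a, b in zip(m1, m2)]))
```

The limitation the paper targets is visible here: the distance treats each character independently, so it carries no information about how two interacting bodies are tangled.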
An improved spatial contour tree constructed method
NASA Astrophysics Data System (ADS)
Zheng, Yi; Zhang, Ling; Guilbert, Eric; Long, Yi
2018-05-01
Contours are important data for delineating landforms on a map. A contour tree provides an object-oriented description of landforms and can be used to enrich topological information. The traditional contour tree stores topological relationships between contours in a hierarchical structure and allows for the identification of eminences and depressions as sets of nested contours. This research proposes an improved contour tree, the so-called spatial contour tree, that contains not only topological but also geometric information. It can be regarded as a terrain skeleton in three dimensions, and it is established from the spatial nodes of contours, which carry latitude, longitude and elevation information. The spatial contour tree is built by connecting spatial nodes from low to high elevation for a positive landform, and from high to low elevation for a negative landform, to form a hierarchical structure. The connection between two spatial nodes provides the real distance and direction as a Euclidean vector in three dimensions. In this paper, the construction method is tested experimentally and the results are discussed. The proposed hierarchical structure is three-dimensional and can show the skeleton inside a terrain. The structure, in which all nodes carry geo-information, can be used to distinguish different landforms and applied to contour generalization with consideration of geographic characteristics.
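The edge between two spatial nodes, the "real distance and direction as a Euclidean vector", amounts to the following; treating the coordinates as already projected (x, y, elevation) values is our assumption, since the paper stores latitude and longitude:

```python
import numpy as np

def node_vector(a, b):
    """Euclidean vector between two spatial contour nodes given as
    (x, y, elevation) coordinates: returns (distance, unit direction).
    Assumes coordinates are already projected to a metric system."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    v = b - a
    dist = float(np.linalg.norm(v))
    return dist, v / dist
```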
COSMIC monthly progress report
NASA Technical Reports Server (NTRS)
1993-01-01
Activities of the Computer Software Management and Information Center (COSMIC) are summarized for the month of August, 1993. Tables showing the current inventory of programs available from COSMIC are presented and program processing and evaluation activities are discussed. Ten articles were prepared for publication in the NASA Tech Brief Journal. These articles (included in this report) describe the following software items: (1) MOM3D - A Method of Moments Code for Electromagnetic Scattering (UNIX Version); (2) EM-Animate - Computer Program for Displaying and Animating the Steady-State Time-Harmonic Electromagnetic Near Field and Surface-Current Solutions; (3) MOM3D - A Method of Moments Code for Electromagnetic Scattering (IBM PC Version); (4) M414 - MIL-STD-414 Variable Sampling Procedures Computer Program; (5) MEDOF - Minimum Euclidean Distance Optimal Filter; (6) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (Macintosh Version); (7) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (IBM PC Version); (8) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (UNIX Version); (9) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (DEC VAX VMS Version); and (10) TFSSRA - Thick Frequency Selective Surface with Rectangular Apertures. Activities in the areas of marketing, customer service, benefits identification, maintenance and support, and dissemination are also described along with a budget summary.
Ries, Kernell G.; Eng, Ken
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. 
This method can be used with as few as one pair of streamflow measurements made on a single streamflow recession at the low-flow partial-record station, although additional pairs of measurements will increase the accuracy of the estimates. Errors associated with the two correlation methods generally were lower than the errors associated with the flow-ratio method, but the advantages of the flow-ratio method are that it can produce reasonably accurate estimates from streamflow measurements much faster and at lower cost than estimates obtained using the correlation methods. The two index-streamgage selection methods were (1) selection based on the highest correlation coefficient between the low-flow partial-record station and the index streamgages, and (2) selection based on Euclidean distance, where the Euclidean distance was computed as a function of geographic proximity and the basin characteristics: drainage area, percentage of forested area, percentage of impervious area, and the base-flow recession time constant, t. Method 1 generally selected index streamgages that were significantly closer to the low-flow partial-record stations than method 2. The errors associated with the estimated streamflow statistics generally were lower for method 1 than for method 2, but the differences were not statistically significant. The flow-ratio method for estimating streamflow statistics at low-flow partial-record stations was shown to be independent from the two correlation-based estimation methods. As a result, final estimates were determined for eight low-flow partial-record stations by weighting estimates from the flow-ratio method with estimates from one of the two correlation methods according to the respective variances of the estimates. Average standard errors of estimate for the final estimates ranged from 90.0 to 7.0 percent, with an average value of 26.5 percent. 
Average standard errors of estimate for the weighted estimates were, on average, 4.3 percent less than the best average standard errors of estimate.
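Index-streamgage selection by Euclidean distance (method 2) can be sketched as follows. The equal weighting of z-scored characteristics and the example values are assumptions, since the report's exact distance formulation may differ:

```python
import numpy as np

def select_index_gage(target, candidates):
    """Method 2 sketch: choose the index streamgage nearest to the
    partial-record station in standardized characteristic space.
    Columns here are assumed to be [geographic proximity, drainage
    area, % forest, % impervious, recession constant]; equal weights
    after z-scoring are our assumption, not the report's formula."""
    X = np.vstack([candidates, target]).astype(float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score each characteristic
    d = np.linalg.norm(Z[:-1] - Z[-1], axis=1)
    return int(np.argmin(d)), d

# hypothetical characteristics for three candidate streamgages
candidates = np.array([[1.0, 100.0, 60.0, 5.0, 10.0],
                       [50.0, 500.0, 20.0, 40.0, 3.0],
                       [2.0, 110.0, 55.0, 6.0, 9.0]])
target = np.array([1.5, 105.0, 58.0, 5.5, 9.5])
best, dists = select_index_gage(target, candidates)
```

Standardizing first keeps any one characteristic (for example, drainage area, with its much larger numeric range) from dominating the distance.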
NASA Astrophysics Data System (ADS)
Tisdell, Christopher C.
2017-11-01
For over 50 years, the learning and teaching of a priori bounds on solutions to linear differential equations has involved a Euclidean approach to measuring the size of a solution. While the Euclidean approach to a priori bounds on solutions is somewhat manageable in the learning and teaching of the proofs involving second-order, linear problems with constant coefficients, we believe it is not pedagogically optimal. Moreover, the Euclidean method becomes pedagogically unwieldy in the proofs involving higher-order cases. The purpose of this work is to propose a simpler pedagogical approach to establish a priori bounds on solutions by considering a different way of measuring the size of a solution to linear problems, which we refer to as the Uber size. The Uber form enables a simplification of pedagogy from the literature, and the ideas are accessible to learners who have an understanding of the Fundamental Theorem of Calculus and the exponential function, both usually seen in a first course in calculus. We believe that this work will be of mathematical and pedagogical interest to those who are learning and teaching in the area of differential equations or in any of the numerous disciplines where linear differential equations are used.
Kim, Won Hwa; Singh, Vikas; Chung, Moo K.; Hinrichs, Chris; Pachauri, Deepti; Okonkwo, Ozioma C.; Johnson, Sterling C.
2014-01-01
Statistical analysis on arbitrary surface meshes such as the cortical surface is an important approach to understanding brain diseases such as Alzheimer's disease (AD). Surface analysis may be able to identify specific cortical patterns that relate to certain disease characteristics or exhibit differences between groups. Our goal in this paper is to make group analysis of signals on surfaces more sensitive. To do this, we derive multi-scale shape descriptors that characterize the signal around each mesh vertex, i.e., its local context, at varying levels of resolution. In order to define such a shape descriptor, we make use of recent results from harmonic analysis that extend traditional continuous wavelet theory from the Euclidean to a non-Euclidean setting (i.e., a graph, mesh or network). Using this descriptor, we conduct experiments on two different datasets, the Alzheimer's Disease NeuroImaging Initiative (ADNI) data and images acquired at the Wisconsin Alzheimer's Disease Research Center (W-ADRC), focusing on individuals labeled as having Alzheimer's disease (AD), mild cognitive impairment (MCI) and healthy controls. In particular, we contrast traditional univariate methods with our multi-resolution approach, which shows increased sensitivity and improved statistical power to detect group-level effects. We also provide an open source implementation. PMID:24614060
Some Metric Properties of Planar Gaussian Free Field
NASA Astrophysics Data System (ADS)
Goswami, Subhajit
In this thesis we study the properties of some metrics arising from the two-dimensional Gaussian free field (GFF), namely the Liouville first-passage percolation (Liouville FPP), the Liouville graph distance and an effective resistance metric. In Chapter 1, we define these metrics and discuss the motivations for studying them. Roughly speaking, Liouville FPP is the shortest-path metric in a planar domain D where the length of a path P is given by ∫_P e^{γh(z)} |dz|, where h is the GFF on D and γ > 0. In Chapter 2, we present an upper bound on the expected Liouville FPP distance between two typical points for small values of γ (the near-Euclidean regime). A similar upper bound is derived in Chapter 3 for the Liouville graph distance, which is, roughly, the minimal number of Euclidean balls with comparable Liouville quantum gravity (LQG) measure whose union contains a continuous path between two endpoints. Our bounds seem to be in disagreement with Watabiki's prediction (1993) on the random metric of Liouville quantum gravity in this regime. The contents of these two chapters are based on joint work with Jian Ding. In Chapter 4, we derive some asymptotic estimates for effective resistances on a random network defined as follows. Given any γ > 0, and with η = {η_v}_{v∈Z^2} denoting a sample of the two-dimensional discrete Gaussian free field on Z^2 pinned at the origin, we equip the edge (u, v) with conductance e^{γ(η_u + η_v)}. The metric structure of effective resistance plays a crucial role in our proof of the main result in Chapter 4. The primary motivation behind this metric is to understand the random walk on Z^2 where the edge (u, v) has weight e^{γ(η_u + η_v)}. Using the estimates from Chapter 4, we show in Chapter 5 that for almost every η this random walk is recurrent and that, with probability tending to 1 as T → ∞, the return probability at time 2T decays as T^{-1+o(1)}.
In addition, we prove a version of subdiffusive behavior by showing that the expected exit time from a ball of radius N scales as N^{ψ(γ)+o(1)} with ψ(γ) > 2 for all γ > 0. The contents of these chapters are based on joint work with Marek Biskup and Jian Ding.