Sample records for minimum squared Euclidean

  1. DENBRAN: A basic program for a significance test for multivariate normality of clusters from branching patterns in dendrograms

    NASA Astrophysics Data System (ADS)

    Sneath, P. H. A.

    A BASIC program is presented for significance tests to determine whether a dendrogram is derived from clustering of points that belong to a single multivariate normal distribution. The significance tests are based on statistics of the Kolmogorov-Smirnov type, obtained by comparing the observed cumulative graph of branch levels with a graph for the hypothesis of multivariate normality. The program also permits testing whether the dendrogram could be from a cluster of lower dimensionality due to character correlations. The program makes provision for three similarity coefficients: (1) Euclidean distances, (2) squared Euclidean distances, and (3) Simple Matching Coefficients; and for five cluster methods: (1) WPGMA, (2) UPGMA, (3) Single Linkage (or Minimum Spanning Trees), (4) Complete Linkage, and (5) Ward's Increase in Sums of Squares. The program is entitled DENBRAN.
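
    As a rough illustration of the test's ingredients (a hypothetical Python sketch, not the original BASIC program), the code below clusters a sample with UPGMA on squared Euclidean distances, two of the options listed above, and compares the branch-level distribution against a multivariate normal reference using a two-sample Kolmogorov-Smirnov statistic; the toy data and the use of ks_2samp are assumptions of this sketch.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    def branch_levels(points):
        # UPGMA ('average' linkage) on squared Euclidean distances,
        # two of the options the program provides
        Z = linkage(points, method='average', metric='sqeuclidean')
        return Z[:, 2]                      # merge heights = branch levels

    data = rng.normal(size=(50, 4))         # toy stand-in for the observed sample
    null = rng.multivariate_normal(np.zeros(4), np.cov(data.T), size=50)

    stat, p = ks_2samp(branch_levels(data), branch_levels(null))
    print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
    ```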

  2. Characterization of separability and entanglement in (2xD)- and (3xD)-dimensional systems by single-qubit and single-qutrit unitary transformations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giampaolo, Salvatore M.; CNR-INFM Coherentia, Naples; CNISM Unita di Salerno and INFN Sezione di Napoli, Gruppo collegato di Salerno, Baronissi

    2007-10-15

    We investigate the geometric characterization of pure state bipartite entanglement of (2xD)- and (3xD)-dimensional composite quantum systems. To this aim, we analyze the relationship between states and their images under the action of particular classes of local unitary operations. We find that invariance of states under the action of single-qubit and single-qutrit transformations is a necessary and sufficient condition for separability. We demonstrate that in the (2xD)-dimensional case the von Neumann entropy of entanglement is a monotonic function of the minimum squared Euclidean distance between states and their images over the set of single qubit unitary transformations. Moreover, both in the (2xD)- and in the (3xD)-dimensional cases the minimum squared Euclidean distance exactly coincides with the linear entropy [and thus as well with the tangle measure of entanglement in the (2xD)-dimensional case]. These results provide a geometric characterization of entanglement measures originally established in informational frameworks. Consequences and applications of the formalism to quantum critical phenomena in spin systems are discussed.
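
    The monotonicity claim can be checked numerically in the smallest case (D = 2). The sketch below minimizes || |psi> - (U x I)|psi> ||^2 over single-qubit spin flips U = n.sigma, a common choice of local unitary class; whether this matches the paper's exact class and normalization is an assumption here, and the family cos t|00> + sin t|11> is purely illustrative.

    ```python
    import numpy as np

    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    I2 = np.eye(2)

    def min_sq_dist(psi, samples=4000, seed=1):
        # minimize || |psi> - (U x I)|psi> ||^2 over spin flips U = n.sigma
        rng = np.random.default_rng(seed)
        n = rng.normal(size=(samples, 3))
        n /= np.linalg.norm(n, axis=1, keepdims=True)
        best = np.inf
        for nx, ny, nz in n:
            U = nx * sx + ny * sy + nz * sz
            best = min(best, np.linalg.norm(psi - np.kron(U, I2) @ psi) ** 2)
        return best

    def entropy(psi):
        rho = psi.reshape(2, 2) @ psi.reshape(2, 2).conj().T  # reduced state
        lam = np.linalg.eigvalsh(rho)
        lam = lam[lam > 1e-12]
        return float(-np.sum(lam * np.log2(lam)))

    for t in np.linspace(0.05, np.pi / 4, 6):
        psi = np.zeros(4, complex)
        psi[0], psi[3] = np.cos(t), np.sin(t)   # cos t |00> + sin t |11>
        print(f"t={t:.2f}  d2_min={min_sq_dist(psi):.4f}  S={entropy(psi):.4f}")
    ```

    Both columns increase together along the family, consistent with the stated monotonic relation between the minimum squared distance and the entropy of entanglement.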

  3. Towards a PTAS for the generalized TSP in grid clusters

    NASA Astrophysics Data System (ADS)

    Khachay, Michael; Neznakhina, Katherine

    2016-10-01

    The Generalized Traveling Salesman Problem (GTSP) is a combinatorial optimization problem that asks for a minimum cost cycle visiting exactly one point (city) from each cluster. We consider a geometric case of this problem, where n nodes are given inside the integer grid (in the Euclidean plane) and each grid cell is a unit square. Clusters are induced by cells 'populated' by nodes of the given instance. Even in this special setting, the GTSP remains intractable, since it encloses the classic Euclidean TSP in the plane. Recently, it was shown that the problem admits a (1.5+8√2+ɛ)-approximation algorithm with a complexity bound depending polynomially on n and k, where k is the number of clusters. In this paper, we propose two approximation algorithms for the Euclidean GTSP on grid clusters. For any fixed k, both algorithms are PTAS. The time complexity of the first remains polynomial for k = O(log n), while the second is a PTAS when k = n - O(log n).

  4. Squared Euclidean distance: a statistical test to evaluate plant community change

    Treesearch

    Raymond D. Ratliff; Sylvia R. Mori

    1993-01-01

    The concepts and a procedure for evaluating plant community change using the squared Euclidean distance (SED) resemblance function are described. Analyses are based on the concept that Euclidean distances constitute a sample from a population of distances between sampling units (SUs) for a specific number of times and SUs. With different times, the distances will be...
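
    For concreteness, the SED resemblance function between one sampling unit recorded at two times reduces to a one-liner; the species cover values below are invented for illustration.

    ```python
    import numpy as np

    su_t0 = np.array([12.0, 3.0, 40.0, 0.0])   # species covers at time 0
    su_t1 = np.array([9.0, 5.0, 31.0, 4.0])    # same sampling unit at time 1

    sed = np.sum((su_t0 - su_t1) ** 2)          # squared Euclidean distance
    print(f"SED = {sed:.1f}")
    ```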

  5. On the Partitioning of Squared Euclidean Distance and Its Applications in Cluster Analysis.

    ERIC Educational Resources Information Center

    Carter, Randy L.; And Others

    1989-01-01

    The partitioning of squared Euclidean (E²) distance between two vectors in M-dimensional space into the sum of squared lengths of vectors in mutually orthogonal subspaces is discussed. Applications to specific cluster analysis problems are provided (i.e., to design Monte Carlo studies for performance comparisons of several clustering methods…
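
    The partitioning property itself is easy to verify numerically: for any family of mutually orthogonal projectors summing to the identity, the squared distance splits exactly across the subspaces. A minimal sketch, with an arbitrary random split of R^6 into two subspaces:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    M = 6
    x, y = rng.normal(size=M), rng.normal(size=M)

    # orthonormal basis from QR, split into two orthogonal subspaces
    Q, _ = np.linalg.qr(rng.normal(size=(M, M)))
    P1 = Q[:, :2] @ Q[:, :2].T    # projector onto a 2-dim subspace
    P2 = Q[:, 2:] @ Q[:, 2:].T    # projector onto its orthogonal complement

    d2 = np.sum((x - y) ** 2)
    parts = np.sum((P1 @ (x - y)) ** 2) + np.sum((P2 @ (x - y)) ** 2)
    print(np.isclose(d2, parts))  # True: the squared distance partitions
    ```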

  6. Multi-level bandwidth efficient block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1989-01-01

    The multilevel technique is investigated for combining block coding and modulation. There are four parts. In the first part, a formulation is presented for signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution, and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C′ which has the same rate as C, a minimum squared Euclidean distance not less than that of C, a trellis diagram with the same number of states as that of C, and a smaller number of nearest-neighbor codewords than C. In the last part, the error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.
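
    To make the key quantity concrete, the sketch below computes the minimum squared Euclidean distance (MSED) of a small block code mapped to unit-energy 8-PSK; the codeword set is an arbitrary illustrative choice, not one of the constructions from the abstract.

    ```python
    import numpy as np
    from itertools import combinations

    psk8 = np.exp(2j * np.pi * np.arange(8) / 8)   # unit-energy 8-PSK points
    code = [(0, 0, 0), (0, 4, 4), (4, 0, 4), (4, 4, 0),
            (2, 2, 2), (2, 6, 6), (6, 2, 6), (6, 6, 2)]

    def msed(code):
        best = np.inf
        for u, v in combinations(code, 2):
            d2 = np.sum(np.abs(psk8[list(u)] - psk8[list(v)]) ** 2)
            best = min(best, d2)
        return best

    print(f"MSED = {msed(code):.3f}")
    ```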

  7. Estimating gene function with least squares nonnegative matrix factorization.

    PubMed

    Wang, Guoli; Ochs, Michael F

    2007-01-01

    Nonnegative matrix factorization is a machine learning algorithm that has extracted information from data in a number of fields, including imaging and spectral analysis, text mining, and microarray data analysis. One limitation of the method for linking genes through microarray data in order to estimate gene function is the high variance observed in transcription levels between different genes. Least squares nonnegative matrix factorization uses estimates of the uncertainties on the mRNA levels for each gene in each condition to guide the algorithm to a local minimum in normalized χ², rather than a Euclidean distance or divergence between the reconstructed data and the data itself. Herein, application of this method to microarray data is demonstrated in order to predict gene function.
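
    A minimal sketch of the idea, assuming the standard uncertainty-weighted multiplicative update pattern (the published algorithm's details may differ): the factorization is driven by the normalized χ² objective rather than the plain Euclidean one, with all data below mocked up.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    V = np.abs(rng.normal(5.0, 1.0, size=(30, 20)))   # mock expression matrix
    sigma = 0.1 + 0.05 * np.sqrt(V)                   # mock per-entry uncertainties
    wts = 1.0 / sigma ** 2                            # chi-squared weights
    k, eps = 3, 1e-9

    W = np.abs(rng.normal(size=(30, k)))
    H = np.abs(rng.normal(size=(k, 20)))

    for _ in range(200):
        WH = W @ H
        W *= ((wts * V) @ H.T) / ((wts * WH) @ H.T + eps)   # weighted update for W
        WH = W @ H
        H *= (W.T @ (wts * V)) / (W.T @ (wts * WH) + eps)   # weighted update for H

    chi2 = np.sum(wts * (V - W @ H) ** 2) / V.size
    print(f"normalized chi2 per entry = {chi2:.3f}")
    ```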

  8. Spacetime and Euclidean geometry

    NASA Astrophysics Data System (ADS)

    Brill, Dieter; Jacobson, Ted

    2006-04-01

    Using only the principle of relativity and Euclidean geometry we show in this pedagogical article that the square of proper time or length in a two-dimensional spacetime diagram is proportional to the Euclidean area of the corresponding causal domain. We use this relation to derive the Minkowski line element by two geometric proofs of the spacetime Pythagoras theorem.
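
    In light-cone coordinates the area relation is a one-line computation; the sketch below states it under the usual conventions, whose overall constant may differ from the article's.

    ```latex
    % Hedged sketch: with null coordinates u = t - x and v = t + x, the causal
    % domain of a timelike interval is a Euclidean rectangle with sides
    % u/\sqrt{2} and v/\sqrt{2}, so the squared proper time is proportional
    % to the Euclidean area of the causal domain.
    \[
      \mathrm{Area} = \frac{u}{\sqrt{2}}\cdot\frac{v}{\sqrt{2}} = \frac{uv}{2}
      = \frac{t^{2}-x^{2}}{2} = \frac{\tau^{2}}{2},
      \qquad\text{so}\qquad \tau^{2} = 2\,\mathrm{Area}.
    \]
    ```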

  9. Euclidean sections of protein conformation space and their implications in dimensionality reduction

    PubMed Central

    Duan, Mojie; Li, Minghai; Han, Li; Huo, Shuanghong

    2014-01-01

    Dimensionality reduction is widely used in searching for the intrinsic reaction coordinates for protein conformational changes. We find that dimensionality-reduction methods using the pairwise root-mean-square deviation as the local distance metric face a challenge. We use Isomap as an example to illustrate the problem. We believe that there is an implied assumption for the dimensionality-reduction approaches that aim to preserve the geometric relations between the objects: both the original space and the reduced space have the same kind of geometry, such as Euclidean geometry vs. Euclidean geometry or spherical geometry vs. spherical geometry. When the protein free energy landscape is mapped onto a 2D plane or 3D space, the reduced space is Euclidean, thus the original space should also be Euclidean. For a protein with N atoms, its conformation space is a subset of the 3N-dimensional Euclidean space R3N. We formally define the protein conformation space as the quotient space of R3N by the equivalence relation of rigid motions. Whether the quotient space is Euclidean or not depends on how it is parameterized. When the pairwise root-mean-square deviation is employed as the local distance metric, implicit representations are used for the protein conformation space, leading to no direct correspondence to a Euclidean set. We have demonstrated that an explicit Euclidean-based representation of protein conformation space and the local distance metric associated with it improve the quality of dimensionality reduction in the tetra-peptide and β-hairpin systems. PMID:24913095

  10. Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms

    NASA Astrophysics Data System (ADS)

    Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.

    2017-09-01

    Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed for problems concerning the development of methods and algorithms for solving reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.
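
    For reference, the Euclidean-norm case that is being generalized has the familiar closed form, sketched below; the polyhedral-norm variants discussed above lead instead to mathematical programming problems.

    ```python
    # Tikhonov-regularized least squares with the Euclidean norm:
    #   min_x ||Ax - b||_2^2 + mu ||x||_2^2  =>  x = (A^T A + mu I)^{-1} A^T b
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 8))
    b = rng.normal(size=20)
    mu = 0.5

    x = np.linalg.solve(A.T @ A + mu * np.eye(8), A.T @ b)
    print(np.round(x, 3))
    ```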

  11. Prediction of acoustic feature parameters using myoelectric signals.

    PubMed

    Lee, Ki-Seung

    2010-07-01

    It is well-known that a clear relationship exists between human voices and myoelectric signals (MESs) from the area of the speaker's mouth. In this study, we utilized this information to implement a speech synthesis scheme in which MES alone was used to predict the parameters characterizing the vocal-tract transfer function of specific speech signals. Several feature parameters derived from MES were investigated to find the optimal feature for maximization of the mutual information between the acoustic and the MES features. After the optimal feature was determined, an estimation rule for the acoustic parameters was proposed, based on a minimum mean square error (MMSE) criterion. In a preliminary study, 60 isolated words were used for both objective and subjective evaluations. The results showed that the average Euclidean distance between the original and predicted acoustic parameters was reduced by about 30% compared with the average Euclidean distance of the original parameters. The intelligibility of the synthesized speech signals using the predicted features was also evaluated. A word-level identification ratio of 65.5% and a syllable-level identification ratio of 73% were obtained through a listening test.

  12. Geometric characterization of separability and entanglement in pure Gaussian states by single-mode unitary operations

    NASA Astrophysics Data System (ADS)

    Adesso, Gerardo; Giampaolo, Salvatore M.; Illuminati, Fabrizio

    2007-10-01

    We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed adapting to continuous variables a formalism based on single subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1×M bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.

  13. Riemannian geometric approach to human arm dynamics, movement optimization, and invariance

    NASA Astrophysics Data System (ADS)

    Biess, Armin; Flash, Tamar; Liebermann, Dario G.

    2011-03-01

    We present a generally covariant formulation of human arm dynamics and optimization principles in Riemannian configuration space. We extend the one-parameter family of mean-squared-derivative (MSD) cost functionals from Euclidean to Riemannian space, and we show that they are mathematically identical to the corresponding dynamic costs when formulated in a Riemannian space equipped with the kinetic energy metric. In particular, we derive the equivalence of the minimum-jerk and minimum-torque change models in this metric space. Solutions of the one-parameter family of MSD variational problems in Riemannian space are given by (reparametrized) geodesic paths, which correspond to movements with least muscular effort. Finally, movement invariants are derived from symmetries of the Riemannian manifold. We argue that the geometrical structure imposed on the arm’s configuration space may provide insights into the emerging properties of the movements generated by the motor system.
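
    The Euclidean baseline that the paper generalizes is the classical minimum-jerk solution: for rest-to-rest point-to-point movements it is the quintic 10-15-6 polynomial, sketched below.

    ```python
    import numpy as np

    def min_jerk(x0, x1, T, t):
        # classical Euclidean minimum-jerk profile (MSD cost, derivative
        # order 3) with zero velocity and acceleration at both endpoints
        s = t / T
        return x0 + (x1 - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

    t = np.linspace(0.0, 1.0, 5)
    print(np.round(min_jerk(0.0, 1.0, 1.0, t), 3))
    ```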

  14. Consistent and powerful non-Euclidean graph-based change-point test with applications to segmenting random interfered video data.

    PubMed

    Shi, Xiaoping; Wu, Yuehua; Rao, Calyampudi Radhakrishna

    2018-06-05

    Change-point detection has been carried out in terms of the Euclidean minimum spanning tree (MST) and the shortest Hamiltonian path (SHP), with successful applications in determining the authorship of a classic novel, detecting change in a network over time, detecting cell divisions, etc. However, these Euclidean graph-based tests may fail if a dataset contains random interferences. To solve this problem, we present a powerful non-Euclidean SHP-based test, which is consistent and distribution-free. Simulation shows that the test is more powerful than both the Euclidean MST- and SHP-based tests and the non-Euclidean MST-based test. Its applicability in detecting both landing and departure times in video data of bees' flower visits is illustrated.
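
    The flavor of such graph-based statistics can be sketched with the Euclidean MST variant (the paper's non-Euclidean SHP test differs in both the graph used and the standardization): count MST edges connecting observations from before and after a candidate split, and look for the split with the fewest cross-edges. The data below are synthetic.

    ```python
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    rng = np.random.default_rng(0)
    x = np.vstack([rng.normal(0, 1, (30, 2)),    # regime 1
                   rng.normal(3, 1, (30, 2))])   # regime 2 after the change

    mst = minimum_spanning_tree(squareform(pdist(x))).tocoo()
    edges = list(zip(mst.row, mst.col))

    def cross_edges(k):
        # edges of the Euclidean MST joining the 'before' and 'after' groups
        return sum((i < k) != (j < k) for i, j in edges)

    best = min(range(10, 50), key=cross_edges)
    print(f"estimated change point near index {best}")   # true change at 30
    ```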

  15. Geometric characterization of separability and entanglement in pure Gaussian states by single-mode unitary operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; CNR-INFM Coherentia, Naples; CNISM, Unita di Salerno, Salerno

    2007-10-15

    We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed adapting to continuous variables a formalism based on single subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1xM bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.

  16. Energy-efficient constellations design and fast decoding for space-collaborative MIMO visible light communications

    NASA Astrophysics Data System (ADS)

    Zhu, Yi-Jun; Liang, Wang-Feng; Wang, Chao; Wang, Wen-Ya

    2017-01-01

    In this paper, space-collaborative constellations (SCCs) for indoor multiple-input multiple-output (MIMO) visible light communication (VLC) systems are considered. Compared with traditional VLC MIMO techniques, such as repetition coding (RC), spatial modulation (SM) and spatial multiplexing (SMP), SCC achieves the minimum average optical power for a fixed minimum Euclidean distance. We have presented a unified SCC structure for 2×2 MIMO VLC systems and extended it to larger MIMO VLC systems with more transceivers. Specifically for 2×2 MIMO VLC, a fast decoding algorithm is developed with decoding complexity almost linear in the square root of the cardinality of the SCC, and expressions for the symbol error rate of SCC are presented. In addition, bit mappings similar to Gray mapping are proposed for SCC. Computer simulations are performed to verify the fast decoding algorithm and the performance of SCC, and the results demonstrate that the performance of SCC is better than those of RC, SM and SMP for indoor channels in general.

  17. Trellis coding with multidimensional QAM signal sets

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J.

    1993-01-01

    Trellis coding using multidimensional QAM signal sets is investigated. Finite-size 2D signal sets are presented that have minimum average energy, are 90-deg rotationally symmetric, and have from 16 to 1024 points. The best trellis codes using the finite 16-QAM signal set with two, four, six, and eight dimensions are found by computer search (the multidimensional signal set is constructed from the 2D signal set). The best moderate complexity trellis codes for infinite lattices with two, four, six, and eight dimensions are also found. The minimum free squared Euclidean distance and number of nearest neighbors for these codes were used as the selection criteria. Many of the multidimensional codes are fully rotationally invariant and give asymptotic coding gains up to 6.0 dB. From the infinite lattice codes, the best codes for transmitting J, J + 1/4, J + 1/3, J + 1/2, J + 2/3, and J + 3/4 bit/sym (J an integer) are presented.

  18. Polynomial-Time Approximation Algorithm for the Problem of Cardinality-Weighted Variance-Based 2-Clustering with a Given Center

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Motkova, A. V.

    2018-01-01

    A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.

  19. Gender classification in children based on speech characteristics: using fundamental and formant frequencies of Malay vowels.

    PubMed

    Zourmand, Alireza; Ting, Hua-Nong; Mirhassani, Seyed Mostafa

    2013-03-01

    Speech is one of the prevalent communication mediums for humans. Identifying the gender of a child speaker based on his/her speech is crucial in telecommunication and speech therapy. This article investigates the use of fundamental and formant frequencies from sustained vowel phonation to distinguish the gender of Malay children aged between 7 and 12 years. The Euclidean minimum distance and multilayer perceptron were used to classify the gender of 360 Malay children based on different combinations of fundamental and formant frequencies (F0, F1, F2, and F3). The Euclidean minimum distance with normalized frequency data achieved a classification accuracy of 79.44%, which was higher than that of the nonnormalized frequency data. Age-dependent modeling was used to improve the accuracy of gender classification. The Euclidean distance method obtained 84.17% based on the optimal classification accuracy for all age groups. The accuracy was further increased to 99.81% using multilayer perceptron based on mel-frequency cepstral coefficients.
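
    The Euclidean minimum-distance classifier used here is simple enough to sketch directly: each class is summarized by the centroid of its feature vectors, and a sample takes the label of the nearest centroid. The frequency values below are invented for illustration.

    ```python
    import numpy as np

    train = {
        "boy":  np.array([[230.0, 700, 1900, 2800], [240, 720, 1950, 2850]]),
        "girl": np.array([[260.0, 760, 2050, 2950], [270, 780, 2100, 3000]]),
    }
    centroids = {c: f.mean(axis=0) for c, f in train.items()}

    def classify(x):
        # label of the centroid at minimum squared Euclidean distance
        return min(centroids, key=lambda c: np.sum((x - centroids[c]) ** 2))

    print(classify(np.array([235.0, 710, 1920, 2820])))   # -> 'boy'
    ```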

  20. The distance function effect on k-nearest neighbor classification for medical datasets.

    PubMed

    Hu, Li-Yu; Huang, Min-Wei; Ke, Shih-Wen; Tsai, Chih-Fong

    2016-01-01

    K-nearest neighbor (k-NN) classification is a conventional non-parametric classifier that has been used as the baseline classifier in many pattern classification problems. It is based on measuring the distances between the test data and each of the training data to decide the final classification output. Although the Euclidean distance function is the most widely used distance metric in k-NN, few studies have examined the classification performance of k-NN under different distance functions, especially for various medical domain problems. Therefore, the aim of this paper is to investigate whether the distance function can affect k-NN performance over different medical datasets. Our experiments are based on three different types of medical datasets containing categorical, numerical, and mixed types of data, and four different distance functions, including Euclidean, cosine, Chi square, and Minkowsky, are used during k-NN classification individually. The experimental results show that using the Chi square distance function is the best choice for the three different types of datasets. However, the cosine and Euclidean (and Minkowsky) distance functions perform the worst over the mixed type of datasets. In this paper, we demonstrate that the chosen distance function can affect the classification accuracy of the k-NN classifier. For the medical domain datasets including the categorical, numerical, and mixed types of data, k-NN based on the Chi square distance function performs the best.
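
    A sketch of k-NN with pluggable distance functions, mirroring the four compared in the paper; the chi-square form below is one common variant (the paper's exact definition may differ), and the toy data are illustrative.

    ```python
    import numpy as np
    from collections import Counter

    def euclidean(a, b):      return np.sqrt(np.sum((a - b) ** 2))
    def cosine(a, b):         return 1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    def chi_square(a, b):     return np.sum((a - b) ** 2 / (a + b + 1e-12))
    def minkowski(a, b, p=3): return np.sum(np.abs(a - b) ** p) ** (1 / p)

    def knn_predict(X, y, x, k=3, dist=euclidean):
        # majority vote among the k training points nearest to x
        idx = np.argsort([dist(x, row) for row in X])[:k]
        return Counter(y[i] for i in idx).most_common(1)[0][0]

    X = np.array([[1.0, 2], [1, 3], [6, 5], [7, 6]])
    y = np.array([0, 0, 1, 1])
    for d in (euclidean, cosine, chi_square, minkowski):
        print(d.__name__, knn_predict(X, y, np.array([2.0, 2]), dist=d))
    ```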

  1. Numerical methods for comparing fresh and weathered oils by their FTIR spectra.

    PubMed

    Li, Jianfeng; Hibbert, D Brynn; Fuller, Stephen

    2007-08-01

    Four comparison statistics ('similarity indices') for identifying the source of a petroleum oil spill based on the ASTM standard test method D3414 were investigated: (1) first-difference correlation coefficient squared, (2) correlation coefficient squared, (3) first-difference Euclidean cosine squared, and (4) Euclidean cosine squared. For numerical comparison, an FTIR spectrum is divided into three regions, described as fingerprint (900-700 cm⁻¹), generic (1350-900 cm⁻¹), and supplementary (1770-1685 cm⁻¹), which are the same as the three major regions recommended by the ASTM standard. For fresh oil samples, each similarity index was able to distinguish between replicate independent spectra of the same sample and between different samples. In general, the two first difference-based indices worked better than their parent indices. To provide samples to reveal relationships between weathered and fresh oils, a simple artificial weathering procedure was carried out. Euclidean cosine and correlation coefficients both worked well to maintain identification of a match in the fingerprint region, and the two first-difference indices were better in the generic region. Receiver operating characteristic curves (true positive rate versus false positive rate) for decisions on matching using the fingerprint region showed that two samples could be matched when the difference in weathering time was up to 7 days. Beyond this time the true positive rate falls and samples cannot be reliably matched. However, artificial weathering of a fresh source sample can aid the matching of a weathered sample to its real source from a pool of very similar candidates.
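
    Two of the four indices are easy to sketch (the exact ASTM D3414 pre-processing used in the paper may differ in detail); each can be applied to the raw spectra or to their first differences. The spectra below are mocked up.

    ```python
    import numpy as np

    def corr_sq(a, b):
        # correlation coefficient squared
        return np.corrcoef(a, b)[0, 1] ** 2

    def cos_sq(a, b):
        # Euclidean cosine squared
        return (a @ b / (np.linalg.norm(a) * np.linalg.norm(b))) ** 2

    rng = np.random.default_rng(0)
    s1 = rng.normal(size=200).cumsum()            # mock absorbance spectrum
    s2 = s1 + rng.normal(scale=0.3, size=200)     # mock weathered counterpart

    for name, f in [("corr^2", corr_sq), ("cos^2", cos_sq)]:
        print(f"{name}: raw={f(s1, s2):.3f}  first-diff={f(np.diff(s1), np.diff(s2)):.3f}")
    ```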

  2. Tessellating the Sphere with Regular Polygons

    ERIC Educational Resources Information Center

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares, and pentagons.

  3. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

    Background Computational genomics of Alzheimer's disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher quality gene clustering patterns than most other clustering methods. However, there are few available gene order computing methods, such as Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Further, their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performances of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods Using different distance formulas (Pearson distance, Euclidean distance, and squared Euclidean distance) and other conditions, gene orders were calculated by ACO and GA (including standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated different qualities of gene order, and the commonly used Pearson distance was not the best distance formula when used with both GA and ACO methods for AD microarray data. Conclusion Compared with Pearson distance and Euclidean distance, the squared Euclidean distance generated the best quality gene order computed by GA and ACO methods. PMID:23369541

  4. Phylogenetic trees and Euclidean embeddings.

    PubMed

    Layer, Mark; Rhodes, John A

    2017-01-01

    It was recently observed by de Vienne et al. (Syst Biol 60(6):826-832, 2011) that a simple square root transformation of distances between taxa on a phylogenetic tree allowed for an embedding of the taxa into Euclidean space. While the justification for this was based on a diffusion model of continuous character evolution along the tree, here we give a direct and elementary explanation for it that provides substantial additional insight. We use this embedding to reinterpret the differences between the NJ and BIONJ tree building algorithms, providing one illustration of how this embedding reflects tree structures in data.
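
    The observation is easy to reproduce: take an additive (tree) distance matrix, apply the square-root transformation, and check Euclidean embeddability with classical multidimensional scaling, i.e., the doubly centered Gram matrix must be positive semidefinite. The four-taxon tree below is a made-up example.

    ```python
    import numpy as np

    # pairwise path-length distances on a small 4-taxon tree
    D = np.array([[0, 3, 7, 8],
                  [3, 0, 8, 9],
                  [7, 8, 0, 5],
                  [8, 9, 5, 0]], float)

    Ds = np.sqrt(D)                       # square-root transform
    n = len(Ds)
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (Ds ** 2) @ J          # Gram matrix of a centered embedding
    print(np.round(np.linalg.eigvalsh(G), 6))   # all >= 0 => Euclidean
    ```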

  5. Lazy orbits: An optimization problem on the sphere

    NASA Astrophysics Data System (ADS)

    Vincze, Csaba

    2018-01-01

    Non-transitive subgroups of the orthogonal group play an important role in non-Euclidean geometry. If G is a closed subgroup of the orthogonal group such that the orbit of a single Euclidean unit vector does not cover the (Euclidean) unit sphere centered at the origin, then there always exists a non-Euclidean Minkowski functional such that the elements of G preserve the Minkowskian length of vectors. In other words, Minkowski geometry is an alternative to Euclidean geometry for the subgroup G. It is rich in isometries if G is "close enough" to the orthogonal group or at least to one of its transitive subgroups. The measure of non-transitivity is related to the Hausdorff distances of the orbits under the elements of G to the Euclidean sphere. Its maximum/minimum belongs to the so-called lazy/busy orbits, i.e., they are the solutions of an optimization problem on the Euclidean sphere. The extremal distances allow us to characterize the reducible/irreducible subgroups. We also formulate an upper and a lower bound for the ratio of the extremal distances. As another application of the analytic tools, we introduce the rank of a closed non-transitive group G. We shall see that if G is of maximal rank, then it is finite or reducible. Since the reducible and the finite subgroups form two natural prototypes of non-transitive subgroups, the rank seems to be a fundamental notion in their characterization. Closed, non-transitive groups of rank n - 1 are also characterized. Using the general results, we classify all their possible types in the lower dimensional cases n = 2, 3, and 4. Finally, we present some applications of the results to the holonomy group of a metric linear connection on a connected Riemannian manifold.

  6. Combinatorial quantisation of the Euclidean torus universe

    NASA Astrophysics Data System (ADS)

    Meusburger, C.; Noui, K.

    2010-12-01

    We quantise the Euclidean torus universe via a combinatorial quantisation formalism based on its formulation as a Chern-Simons gauge theory and on the representation theory of the Drinfel'd double DSU(2). The resulting quantum algebra of observables is given by two commuting copies of the Heisenberg algebra, and the associated Hilbert space can be identified with the space of square integrable functions on the torus. We show that this Hilbert space carries a unitary representation of the modular group and discuss the role of modular invariance in the theory. We derive the classical limit of the theory and relate the quantum observables to the geometry of the torus universe.

  7. Establishing Benchmarks for Outcome Indicators: A Statistical Approach to Developing Performance Standards.

    ERIC Educational Resources Information Center

    Henry, Gary T.; And Others

    1992-01-01

    A statistical technique is presented for developing performance standards based on benchmark groups. The benchmark groups are selected using a multivariate technique that relies on a squared Euclidean distance method. For each observation unit (a school district in the example), a unique comparison group is selected. (SLD)
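
    A minimal sketch of the selection step, with synthetic data: each district's comparison group is the set of nearest districts in squared Euclidean distance over standardized covariates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(12, 4))            # 12 districts, 4 covariates
    Xs = (X - X.mean(0)) / X.std(0)         # standardize before distancing

    def comparison_group(i, m=3):
        d2 = np.sum((Xs - Xs[i]) ** 2, axis=1)   # squared Euclidean distances
        d2[i] = np.inf                           # exclude the district itself
        return np.argsort(d2)[:m]

    print(comparison_group(0))              # indices of the benchmark group
    ```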

  8. Sexual dimorphism in the human face assessed by euclidean distance matrix analysis.

    PubMed Central

    Ferrario, V F; Sforza, C; Pizzini, G; Vogel, G; Miani, A

    1993-01-01

    The form of any object can be viewed as a combination of size and shape. A recently proposed method (euclidean distance matrix analysis) can differentiate between size and shape differences. It has been applied to analyse the sexual dimorphism in facial form in a sample of 108 healthy young adults (57 men, 51 women). The face was wider and longer in men than in women. A global shape difference was demonstrated, the male face being more rectangular and the female face more square. Gender variations involved especially the lower third of the face and, in particular, the position of the pogonion relative to the other structures. PMID:8300436

  9. Euclidean bridge to the relativistic constituent quark model

    NASA Astrophysics Data System (ADS)

    Hobbs, T. J.; Alberg, Mary; Miller, Gerald A.

    2017-03-01

    Background: Knowledge of nucleon structure is today ever more of a precision science, with heightened theoretical and experimental activity expected in coming years. At the same time, a persistent gap lingers between theoretical approaches grounded in Euclidean methods (e.g., lattice QCD, Dyson-Schwinger equations [DSEs]) and traditional Minkowski field theories (such as light-front constituent quark models). Purpose: Seeking to bridge these complementary world views, we explore the potential of a Euclidean constituent quark model (ECQM). This formalism enables us to study the gluonic dressing of the quark-level axial-vector vertex, which we undertake as a test of the framework. Method: To access its indispensable elements with a minimum of inessential detail, we develop our ECQM using the simplified quark + scalar diquark picture of the nucleon. We construct a hyperspherical formalism involving polynomial expansions of diquark propagators to marry our ECQM with the results of Bethe-Salpeter equation (BSE) analyses, and constrain model parameters by fitting electromagnetic form factor data. Results: From this formalism, we define and compute a new quantity—the Euclidean density function (EDF)—an object that characterizes the nucleon's various charge distributions as functions of the quark's Euclidean momentum. Applying this technology and incorporating information from BSE analyses, we find the quenched dressing effect on the proton's axial-singlet charge to be small in magnitude and consistent with zero, while use of recent determinations of unquenched BSEs results in a large suppression. Conclusions: The quark + scalar diquark ECQM is a step toward a realistic quark model in Euclidean space, and needs additional refinements. The substantial effect we obtain for the impact on the axial-singlet charge of the unquenched dressed vertex compared to the quenched demands further investigation.

  10. Understanding Our Understanding of Strategic Scenarios: What Role Do Chunks Play?

    ERIC Educational Resources Information Center

    Linhares, Alexandre; Brum, Paulo

    2007-01-01

    There is a crucial debate concerning the nature of chess chunks: One current possibility states that chunks are built by encoding particular combinations of pieces-on-squares (POSs), and that chunks are formed mostly by "close" pieces (in a "Euclidean" sense). A complementary hypothesis is that chunks are encoded by abstract,…

  11. Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT

    PubMed Central

    Nguyen, Thu L. N.; Shin, Yoan

    2016-01-01

    Localization in wireless sensor networks (WSNs) is one of the primary functions of the intelligent Internet of Things (IoT) that offers automatically discoverable services, and localization accuracy is a key issue in evaluating the quality of those services. In this paper, we develop a framework to solve the Euclidean distance matrix completion problem, which is an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-rank Euclidean distance matrix completion problem with known nodes. The task is to find the sensor locations through recovery of missing entries of a squared distance matrix when the dimension of the data is small compared to the number of data points. We solve a relaxation optimization problem using a modification of Newton's method, where the cost function depends on the squared distance matrix. The solution obtained by our scheme achieves lower complexity and can perform better if used as an initial guess for an iterative local search in another, higher-precision localization scheme. Simulation results show the effectiveness of our approach. PMID:27213378
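
    The classical-MDS step underlying such distance-based localization is sketched below for a complete squared-distance matrix; the paper's actual contribution, recovering the missing entries first via the relaxed Newton scheme, is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true = rng.uniform(0, 10, size=(15, 2))                 # true sensor positions
    D2 = np.sum((true[:, None] - true[None]) ** 2, axis=-1) # squared EDM

    n = len(D2)
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ D2 @ J                        # doubly centered Gram matrix
    w, V = np.linalg.eigh(G)
    X = V[:, -2:] * np.sqrt(w[-2:])              # top-2 eigenpairs -> 2D coords

    # reconstructed pairwise distances match (positions up to rigid motion)
    D2_hat = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
    print(np.allclose(D2, D2_hat))               # True
    ```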

  12. On the complexity of some quadratic Euclidean 2-clustering problems

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Pyatkin, A. V.

    2016-03-01

    Some problems of partitioning a finite set of points of Euclidean space into two clusters are considered. In these problems, the following criteria are minimized: (1) the sum over both clusters of the sums of squared pairwise distances between the elements of the cluster and (2) the sum of the sums (each multiplied by the cardinality of the corresponding cluster) of squared distances from the elements of the cluster to its geometric center, where the geometric center (or centroid) of a cluster is defined as the mean value of the elements in that cluster. Additionally, another problem close to (2) is considered, where the desired center of one of the clusters is given as input, while the center of the other cluster is unknown (is the variable to be optimized), as in problem (2). Two variants of the problems are analyzed, in which the cardinalities of the clusters are (1) part of the input or (2) optimization variables. It is proved that all the considered problems are strongly NP-hard and that, in general, there is no fully polynomial-time approximation scheme for them (unless P = NP).

  13. Human Performance on Hard Non-Euclidean Graph Problems: Vertex Cover

    ERIC Educational Resources Information Center

    Carruthers, Sarah; Masson, Michael E. J.; Stege, Ulrike

    2012-01-01

    Recent studies on a computationally hard visual optimization problem, the Traveling Salesperson Problem (TSP), indicate that humans are capable of finding close to optimal solutions in near-linear time. The current study is a preliminary step in investigating human performance on another hard problem, the Minimum Vertex Cover Problem, in which…

  14. On decoding of multi-level MPSK modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD reduces the decoding complexity drastically and is suboptimum. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  15. Using optimal transport theory to estimate transition probabilities in metapopulation dynamics

    USGS Publications Warehouse

    Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.

    2017-01-01

    This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
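
    The estimation idea can be sketched as the linear program of discrete optimal transport with squared Euclidean cost; the site coordinates and counts below are invented, and scipy.optimize.linprog stands in for whatever solver the authors used.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    sites = np.array([[0.0, 0], [1, 0], [0, 1]])   # site coordinates
    n1 = np.array([50.0, 30, 20])                  # counts at time 1
    n2 = np.array([30.0, 45, 25])                  # counts at time 2

    k = len(sites)
    cost = np.sum((sites[:, None] - sites[None]) ** 2, -1).ravel()  # squared cost

    A_eq, b_eq = [], []
    for i in range(k):              # all individuals at site i move somewhere
        row = np.zeros(k * k); row[i * k:(i + 1) * k] = 1
        A_eq.append(row); b_eq.append(n1[i])
    for j in range(k):              # arrivals at site j must match time-2 count
        col = np.zeros(k * k); col[j::k] = 1
        A_eq.append(col); b_eq.append(n2[j])

    res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq))
    P = res.x.reshape(k, k) / n1[:, None]   # flows -> transition probabilities
    print(np.round(P, 3))
    ```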

  16. From Alpha To Omega

    NASA Astrophysics Data System (ADS)

    Castellano, Doc

    2002-08-01

    Galileo, the Father of Modern Science, put forth the first significant Modern Scientific Era/Philosophy, best represented by x′ = x ± vt, locating/defining the dynamic x′ in a Euclidean, fixed-frame Universe. Einstein, the popularized relativist, utilized Lorentz's transformation equations, x′ = (x − vt)/√(1 − v²/c²), with c the velocity of light, and arbitrarily decreed that c must be the ultimate, universal velocity. Thus, reporters, the general public, and scientists consider/considered Einstein's OPINION of our Universe 'The Omega Concept'. Castellano, since 1954, has PROVEN the "C Transformation Equations", X′ = (X − vt)/√(1 − v²/C²), with capital C equal to or greater than c, IS THE OMEGA CONCEPT. And "MAPHICS", combining the Philosophy of Mathematics with the Philosophy of Physics, is "THE OMEGA PHILOSOPHY". Sufficient PROOFS & details are at: http://hometown.aol.com/phdco/myhomepage/index/html ----- Thank you for your interest. My sincere appreciation for deserved acknowledgements.

  17. Fully polynomial-time approximation scheme for a special case of a quadratic Euclidean 2-clustering problem

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Khandeev, V. I.

    2016-02-01

    The strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters of given sizes (cardinalities) minimizing the sum (over both clusters) of the intracluster sums of squared distances from the elements of the clusters to their centers is considered. It is assumed that the center of one of the sought clusters is specified at the desired (arbitrary) point of space (without loss of generality, at the origin), while the center of the other one is unknown and determined as the mean value over all elements of this cluster. It is shown that unless P = NP, there is no fully polynomial-time approximation scheme for this problem, and such a scheme is substantiated in the case of a fixed space dimension.

  18. From SL(5, ℝ) Yang-Mills theory to induced gravity

    NASA Astrophysics Data System (ADS)

    Assimos, T. S.; Pereira, A. D.; Santos, T. R. S.; Sobreiro, R. F.; Tomaz, A. A.; Otoya, V. J. Vasquez

    From the pure Yang-Mills action for the SL(5, ℝ) group in four Euclidean dimensions we obtain a gravity theory in the first order formalism. Besides the Einstein-Hilbert term, the effective gravity has a cosmological constant term, a curvature squared term, a torsion squared term, and a matter sector. To obtain such a geometrodynamical theory, asymptotic freedom and the Gribov parameter (soft BRST symmetry breaking) are crucial. In particular, the Newton and cosmological constants are related to these parameters, and they also run as functions of the energy scale. One-loop computations are performed and the results are interpreted.

  19. Solution for a bipartite Euclidean traveling-salesman problem in one dimension

    NASA Astrophysics Data System (ADS)

    Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.

    2018-05-01

    The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
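
    The assignment bound is easy to check numerically for a small instance: with a convex increasing cost in one dimension the optimal assignment matches sorted red points to sorted blue points, and brute force over alternating cycles confirms the factor-two bound. The sketch uses the squared Euclidean cost.

    ```python
    import numpy as np
    from itertools import permutations

    rng = np.random.default_rng(0)
    red = np.sort(rng.random(4))
    blue = np.sort(rng.random(4))

    assign = np.sum((red - blue) ** 2)   # sorted matching is optimal in 1D

    def cycle_cost(reds, blues):
        # cost of the alternating cycle r0 b0 r1 b1 ... with squared cost
        c = 0.0
        for i in range(len(reds)):
            c += (reds[i] - blues[i]) ** 2
            c += (blues[i] - reds[(i + 1) % len(reds)]) ** 2
        return c

    best = min(cycle_cost([red[0], *rest], list(b))
               for rest in permutations(red[1:])
               for b in permutations(blue))
    print(best >= 2 * assign, round(best, 4), round(2 * assign, 4))
    ```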

  20. Solution for a bipartite Euclidean traveling-salesman problem in one dimension.

    PubMed

    Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M

    2018-05-01

    The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.

  1. Complex networks in the Euclidean space of communicability distances

    NASA Astrophysics Data System (ADS)

    Estrada, Ernesto

    2012-06-01

    We study the properties of complex networks embedded in a Euclidean space of communicability distances. The communicability distance between two nodes is defined as the difference between the weighted sum of walks self-returning to the nodes and the weighted sum of walks going from one node to the other. We give some indications that the communicability distance identifies the least crowded routes in networks where simultaneous submission of packages is taking place. We define an index Q based on communicability and shortest path distances, which allows reinterpreting the "small-world" phenomenon as the region of minimum Q in the Watts-Strogatz model. It also allows the classification and analysis of networks with different efficiencies of spatial use. Consequently, the communicability distance displays unique features for the analysis of complex networks in different scenarios.
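
    Taking G = exp(A) as the walk-counting matrix, the communicability distance described above has the closed form ξ(p,q)² = G_pp + G_qq - 2G_pq, sketched here on a small made-up graph.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # a 4-cycle plus one chord
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 1],
                  [0, 1, 0, 1],
                  [1, 1, 1, 0]], float)

    G = expm(A)                                  # weighted walk counts
    xi2 = np.add.outer(np.diag(G), np.diag(G)) - 2 * G
    print(np.round(np.sqrt(np.maximum(xi2, 0)), 3))   # communicability distances
    ```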

  2. Optimal Control of Fully Routed Air Traffic in the Presence of Uncertainty and Kinodynamic Constraints

    DTIC Science & Technology

    2014-09-18

    [Only front matter of this report was captured: contents entries (e.g., Operations and Developing Issues; Next-Generation Air Transportation System, NextGen) and an acronym list including ATM (Air Traffic Management), ESP (Euclidean Shortest Path), FAA (Federal Aviation Administration), FCFS (First-Come-First-Served), HCS (Hybrid Control System), KKT (Karush-Kuhn-Tucker), LGR (Legendre-Gauss-Radau), MLD (Minimum Lateral Distance), NAS (National Airspace System), and NASA (National Aeronautics and Space Administration).]

  3. Geometric comparison of popular mixture-model distances.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Scott A.

    2010-09-01

    Statistical Latent Dirichlet Analysis produces mixture model data that are geometrically equivalent to points lying on a regular simplex in moderate to high dimensions. Numerous other statistical models and techniques also produce data in this geometric category, even though the meaning of the axes and coordinate values differs significantly. A distance function is used to further analyze these points, for example to cluster them. Several different distance functions are popular amongst statisticians; which distance function is chosen is usually driven by the historical preference of the application domain, information-theoretic considerations, or by the desirability of the clustering results. Relatively little consideration is usually given to how distance functions geometrically transform data, or to the distances' algebraic properties. Here we take a look at these issues, in the hope of providing complementary insight and inspiring further geometric thought. Several popular distances, χ², Jensen-Shannon divergence, and the square of the Hellinger distance, are shown to be nearly equivalent: in terms of functional forms after transformations, factorizations, and series expansions, and in terms of the shape and proximity of constant-value contours. This is somewhat surprising given that their original functional forms look quite different. Cosine similarity is the square of the Euclidean distance, and a similar geometric relationship is shown with Hellinger and another cosine. We suggest a geodesic variation of Hellinger. The square-root projection that arises in Hellinger distance is briefly compared to standard normalization for Euclidean distance. We include detailed derivations of some ratio and difference bounds for illustrative purposes. We provide some constructions that nearly achieve the worst-case ratios, relevant for contours.
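
    The near-equivalence is easy to probe numerically: for two nearby probability vectors, the symmetrized χ² (scaled by 1/4), the squared Hellinger distance, and the Jensen-Shannon divergence approximately agree. A small sketch with made-up vectors:

    ```python
    import numpy as np

    def chi2(p, q):       return np.sum((p - q) ** 2 / (p + q))   # symmetrized form
    def hellinger2(p, q): return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)
    def js(p, q):
        m = 0.5 * (p + q)
        kl = lambda a, b: np.sum(a * np.log(a / b))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    rng = np.random.default_rng(0)
    p = rng.dirichlet(np.ones(10))
    q = np.abs(p + rng.normal(scale=0.005, size=10)); q /= q.sum()

    print(f"chi2 / 4      = {chi2(p, q) / 4:.6f}")
    print(f"hellinger^2   = {hellinger2(p, q):.6f}")
    print(f"jensen-shannon = {js(p, q):.6f}")
    ```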

  4. Dictionary Learning on the Manifold of Square Root Densities and Application to Reconstruction of Diffusion Propagator Fields*

    PubMed Central

    Sun, Jiaqi; Xie, Yuchen; Ye, Wenxing; Ho, Jeffrey; Entezari, Alireza; Blackband, Stephen J.

    2013-01-01

    In this paper, we present a novel dictionary learning framework for data lying on the manifold of square root densities and apply it to the reconstruction of diffusion propagator (DP) fields given a multi-shell diffusion MRI data set. Unlike most of the existing dictionary learning algorithms which rely on the assumption that the data points are vectors in some Euclidean space, our dictionary learning algorithm is designed to incorporate the intrinsic geometric structure of manifolds and performs better than traditional dictionary learning approaches when applied to data lying on the manifold of square root densities. Non-negativity as well as smoothness across the whole field of the reconstructed DPs is guaranteed in our approach. We demonstrate the advantage of our approach by comparing it with an existing dictionary based reconstruction method on synthetic and real multi-shell MRI data. PMID:24684004

  5. Multivariate Spectral Analysis to Extract Materials from Multispectral Data

    DTIC Science & Technology

    1993-09-01

    Euclidean minimum distance and conventional Bayesian classifiers suggest some fundamental instabilities. Two candidate sources are (1) inadequate... [the remainder of the captured abstract is an unreadable confusion-matrix table of material classes (e.g., concrete, water) and counts]

  6. Evaluation of procedures for prediction of unconventional gas in the presence of geologic trends

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.

    2009-01-01

    This study extends the application of local spatial nonparametric prediction models to the estimation of recoverable gas volumes in continuous-type gas plays to regimes where there is a single geologic trend. A transformation is presented, originally proposed by Tomczak, that offsets the distortions caused by the trend. This article reports on numerical experiments that compare the predictive and classification performance of local nonparametric prediction models based on the transformation with models based on Euclidean distance. The transformation offers improvement in average root mean square error when the trend is not severely misspecified. Because of the local nature of the models, even those based on Euclidean distance in the presence of trends are reasonably robust. Tests based on other model performance metrics, such as the prediction error associated with the high-grade tracts and the ability of the models to identify sites with the largest gas volumes, also demonstrate the robustness of both local modeling approaches.

  7. Buckling transition and boundary layer in non-Euclidean plates.

    PubMed

    Efrati, Efi; Sharon, Eran; Kupferman, Raz

    2009-07-01

    Non-Euclidean plates are thin elastic bodies having no stress-free configuration, hence exhibiting residual stresses in the absence of external constraints. These bodies are endowed with a three-dimensional reference metric, which may not necessarily be immersible in physical space. Here, based on a recently developed theory for such bodies, we characterize the transition from flat to buckled equilibrium configurations at a critical value of the plate thickness. Depending on the reference metric, the buckling transition may be either continuous or discontinuous. In the infinitely thin plate limit, under the assumption that a limiting configuration exists, we show that the limit is a configuration that minimizes the bending content, among all configurations with zero stretching content (isometric immersions of the midsurface). For small but finite plate thickness, we show the formation of a boundary layer, whose size scales with the square root of the plate thickness and whose shape is determined by a balance between stretching and bending energies.

  8. Scheduling quality of precise form sets which consist of tasks of circular type in GRID systems

    NASA Astrophysics Data System (ADS)

    Saak, A. E.; Kureichik, V. V.; Kravchenko, Y. A.

    2018-05-01

    Users' demand for computing power and the rise of technology favour the arrival of Grid systems. The quality of Grid systems' performance depends on the scheduling of computer and time resources. Grid systems with a centralized scheduling structure and user tasks are modeled by a resource quadrant and resource rectangles, respectively. A non-Euclidean heuristic measure, which takes into consideration both the area and the form of an occupied resource region, is used to estimate the scheduling quality of heuristic algorithms. The authors use sets induced by the elements of a squared square as an example for studying the adaptability of a level polynomial algorithm with an excess and of one with minimal deviation.

  9. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated into the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L1 or L∞ norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
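
    The linear programming reformulation mentioned above is straightforward to sketch for the L∞ norm: minimize t subject to -t <= x - y <= t with x and y constrained to their polyhedra. The two boxes below are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    A_box = np.vstack([np.eye(2), -np.eye(2)])   # generic box inequalities
    b1 = np.array([1.0, 1, 1, 1])                # box [-1,1] x [-1,1]
    b2 = np.array([4.0, 1, -2, 1])               # box [2,4] x [-1,1]

    # variables z = (x1, x2, y1, y2, t); minimize t
    rows, rhs = [], []
    rows.append(np.hstack([A_box, np.zeros((4, 3))])); rhs.extend(b1)              # x in P1
    rows.append(np.hstack([np.zeros((4, 2)), A_box, np.zeros((4, 1))])); rhs.extend(b2)  # y in P2
    for i in range(2):
        e = np.zeros(5); e[i], e[2 + i], e[4] = 1, -1, -1   # x_i - y_i <= t
        f = np.zeros(5); f[i], f[2 + i], f[4] = -1, 1, -1   # y_i - x_i <= t
        rows += [e[None, :], f[None, :]]; rhs += [0.0, 0.0]

    res = linprog(c=[0, 0, 0, 0, 1], A_ub=np.vstack(rows), b_ub=rhs,
                  bounds=[(None, None)] * 5)
    print(f"minimum L-infinity distance = {res.fun:.3f}")   # 1.0 for these boxes
    ```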

  10. On the complexity and approximability of some Euclidean optimal summing problems

    NASA Astrophysics Data System (ADS)

    Eremeev, A. V.; Kel'manov, A. V.; Pyatkin, A. V.

    2016-10-01

    The complexity status of several well-known discrete optimization problems with the direction of optimization switching from maximum to minimum is analyzed. The task is to find a subset of a finite set of Euclidean points (vectors). In these problems, the objective functions depend either only on the norm of the sum of the elements from the subset or on this norm and the cardinality of the subset. It is proved that, if the dimension of the space is a part of the input, then all these problems are strongly NP-hard. Additionally, it is shown that, if the space dimension is fixed, then all the problems are NP-hard even for dimension 2 (on a plane) and there are no approximation algorithms with a guaranteed accuracy bound for them unless P = NP. It is shown that, if the coordinates of the input points are integer, then all the problems can be solved in pseudopolynomial time in the case of a fixed space dimension.

  11. The Development of Euclidean and Non-Euclidean Cosmologies

    ERIC Educational Resources Information Center

    Norman, P. D.

    1975-01-01

    Discusses early Euclidean cosmologies, inadequacies in classical Euclidean cosmology, and the development of non-Euclidean cosmologies. Explains the present state of the theory of cosmology including the work of Dirac, Sandage, and Gott. (CP)

  12. Euclidean commute time distance embedding and its application to spectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Albano, James A.; Messinger, David W.

    2012-06-01

    Spectral image analysis problems often begin with a preprocessing step in which a transformation is applied to generate an alternative representation of the spectral data. In this paper, a transformation based on a Markov-chain model of a random walk on a graph is introduced. More precisely, we quantify the random walk using a quantity known as the average commute time distance and find a nonlinear transformation that embeds the nodes of the graph in a Euclidean space where the separation between them is equal to the square root of this quantity. This has been referred to as the Commute Time Distance (CTD) transformation, and it has the important characteristic of increasing when the number of paths between two nodes decreases and/or the lengths of those paths increase. Remarkably, a closed-form solution exists for computing the average commute time distance that avoids running an iterative process: it is found by simply performing an eigendecomposition on the graph Laplacian matrix. This paper discusses the particular graph constructed on the spectral data from which the commute time distance is calculated, introduces some important properties of the graph Laplacian matrix, and presents a subspace projection that approximately preserves the maximal variance of the square-root commute time distance. Finally, the RX anomaly detection and Topological Anomaly Detection (TAD) algorithms are applied to the CTD subspace, followed by a discussion of their results.
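
    The closed-form computation mentioned above is compact enough to sketch. The following generic illustration (toy graph; NumPy's pseudoinverse standing in for an explicit eigendecomposition of the Laplacian) computes the commute time distance matrix and the square-root separations used for the embedding.

      import numpy as np

      def commute_time_distance(W):
          # C[i, j] = vol(G) * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]), where Lp is
          # the pseudoinverse of the graph Laplacian and vol(G) the sum of degrees.
          deg = W.sum(axis=1)
          Lp = np.linalg.pinv(np.diag(deg) - W)
          diag = np.diag(Lp)
          return deg.sum() * (diag[:, None] + diag[None, :] - 2 * Lp)

      W = np.array([[0., 1., 0.],   # toy path graph 0 - 1 - 2
                    [1., 0., 1.],
                    [0., 1., 0.]])
      # Node separations in the CTD embedding equal the square root of the
      # commute time distances:
      print(np.sqrt(commute_time_distance(W)))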

  13. Collective charge excitations and the metal-insulator transition in the square lattice Hubbard-Coulomb model

    DOE PAGES

    Ulybyshev, Maksim; Winterowd, Christopher; Zafeiropoulos, Savvas

    2017-11-09

    In this article, we discuss the nontrivial collective charge excitations (plasmons) of the extended square lattice Hubbard model. Using a fully nonperturbative approach, we employ the hybrid Monte Carlo algorithm to simulate the system at half-filling. A modified Backus-Gilbert method is introduced to obtain the spectral functions via numerical analytic continuation. We directly compute the single-particle density of states, which demonstrates the formation of Hubbard bands in the strongly correlated phase. The momentum-resolved charge susceptibility is also computed on the basis of the Euclidean charge-density-density correlator. In agreement with previous extended dynamical mean-field theory studies, we find that, at high strength of the electron-electron interaction, the plasmon dispersion develops two branches.

  14. Collective charge excitations and the metal-insulator transition in the square lattice Hubbard-Coulomb model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulybyshev, Maksim; Winterowd, Christopher; Zafeiropoulos, Savvas

    In this article, we discuss the nontrivial collective charge excitations (plasmons) of the extended square lattice Hubbard model. Using a fully nonperturbative approach, we employ the hybrid Monte Carlo algorithm to simulate the system at half-filling. A modified Backus-Gilbert method is introduced to obtain the spectral functions via numerical analytic continuation. We directly compute the single-particle density of states, which demonstrates the formation of Hubbard bands in the strongly correlated phase. The momentum-resolved charge susceptibility is also computed on the basis of the Euclidean charge-density-density correlator. In agreement with previous extended dynamical mean-field theory studies, we find that, at high strength of the electron-electron interaction, the plasmon dispersion develops two branches.

  15. Hyperspectral feature mapping classification based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Li, Junwei; Wang, Guangping; Wu, Jingli

    2016-03-01

    This paper proposes a hyperspectral feature mapping classification algorithm based on mathematical morphology. Without prior information such as a spectral library, the spectral and spatial information can be used to realize hyperspectral feature mapping classification. Mathematical morphological erosion and dilation operations are performed to extract endmembers, and a spectral feature mapping algorithm is then used to carry out hyperspectral image classification. A hyperspectral image collected by AVIRIS is used to evaluate the proposed algorithm, which is compared with the minimum Euclidean distance mapping algorithm, the minimum Mahalanobis distance mapping algorithm, the SAM algorithm and the binary encoding mapping algorithm. The experiments show that the proposed algorithm performs better than the other algorithms under the same conditions and has higher classification accuracy.
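
    For reference, the minimum Euclidean distance mapping baseline that the comparison includes is easy to sketch (toy data, NumPy assumed; this is not the proposed morphology-based method): each pixel spectrum is assigned to the endmember it is closest to.

      import numpy as np

      def min_euclidean_distance_map(cube, endmembers):
          # cube: (rows, cols, bands); endmembers: (k, bands).
          pixels = cube.reshape(-1, cube.shape[-1])
          # Squared Euclidean distances to every endmember, via broadcasting.
          d2 = ((pixels[:, None, :] - endmembers[None, :, :]) ** 2).sum(axis=2)
          return d2.argmin(axis=1).reshape(cube.shape[:2])

      rng = np.random.default_rng(0)
      cube = rng.random((4, 4, 10))      # toy 4x4 image with 10 bands
      ends = rng.random((3, 10))         # three candidate endmember spectra
      print(min_euclidean_distance_map(cube, ends))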

  16. An Inverse Square Law Variation for Hubble's Constant

    NASA Astrophysics Data System (ADS)

    Day, Orville W., Jr.

    1999-11-01

    The solution to Einstein's gravitational field equations is examined, using a Robertson-Walker metric with positive curvature, when Hubble's parameter, H_0, is taken to be a constant divided by R^2. R is the cosmic scale factor for the universe treated as a three-dimensional hypersphere in a four-dimensional Euclidean space. This solution produces a self-energy of the universe, W^(0)_self, proportional to the square of the total mass times the universal gravitational constant divided by the cosmic scale factor, R. This result is directly analogous to the self-energy of the electromagnetic field of a charged particle, W^(0)_self = ke^2/2r, where the total charge e is squared, k is the universal electric constant and r is the scale factor, usually identified as the radius of the particle. It is shown that this choice for H_0 leads to physically meaningful results for the average mass density and pressure, and a deceleration parameter q_0 = 1.

  17. Distance-Based Phylogenetic Methods Around a Polytomy.

    PubMed

    Davidson, Ruth; Sullivant, Seth

    2014-01-01

    Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.

  18. Multivariate Welch t-test on distances

    PubMed Central

    2016-01-01

    Motivation: Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that the pairwise distance matrix between observations is sufficient to compute the within- and between-group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. Results: We develop a solution in the form of a distance-based Welch t-test, TW2, for two-sample, potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology originated. Availability and Implementation: The source code for the methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author. Contact: alekseye@musc.edu PMID:27515741
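
    For context, the pseudo-F construction from a distance matrix that PERMANOVA uses (the statistic whose shortcomings motivate the proposed test) can be sketched as follows; the data and grouping are synthetic.

      import numpy as np

      def pseudo_f(D, groups):
          # PERMANOVA pseudo-F from a pairwise distance matrix (Anderson-style).
          D2, groups = np.asarray(D) ** 2, np.asarray(groups)
          N, a = len(groups), len(np.unique(groups))
          ss_total = D2[np.triu_indices(N, k=1)].sum() / N
          ss_within = 0.0
          for g in np.unique(groups):
              idx = np.flatnonzero(groups == g)
              ss_within += np.triu(D2[np.ix_(idx, idx)], k=1).sum() / len(idx)
          ss_between = ss_total - ss_within
          return (ss_between / (a - 1)) / (ss_within / (N - a))

      def permanova_p(D, groups, n_perm=999, seed=0):
          # Permutation p-value for the observed pseudo-F.
          rng = np.random.default_rng(seed)
          f_obs = pseudo_f(D, groups)
          hits = sum(pseudo_f(D, rng.permutation(groups)) >= f_obs
                     for _ in range(n_perm))
          return f_obs, (hits + 1) / (n_perm + 1)

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(1, 2, (10, 4))])
      D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
      print(permanova_p(D, np.repeat([0, 1], 10)))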

  19. Multivariate Welch t-test on distances.

    PubMed

    Alekseyenko, Alexander V

    2016-12-01

    Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that the pairwise distance matrix between observations is sufficient to compute the within- and between-group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. We develop a solution in the form of a distance-based Welch t-test, [Formula: see text], for two-sample, potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and [Formula: see text] in reanalysis of two existing microbiome datasets, where the methodology originated. The source code for the methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author (alekseye@musc.edu). © The Author 2016. Published by Oxford University Press.

  20. HLA-A, -B, -C, -DQB1, and -DRB1,3,4,5 allele and haplotype frequencies in the Costa Rica Central Valley Population and its relationship to worldwide populations.

    PubMed

    Arrieta-Bolaños, Esteban; Maldonado-Torres, Hazael; Dimitriu, Oana; Hoddinott, Michael A; Fowles, Finnuala; Shah, Anila; Orlich-Pérez, Priscilla; McWhinnie, Alasdair J; Alfaro-Bourrouet, Wilbert; Buján-Boza, Willem; Little, Ann-Margaret; Salazar-Sánchez, Lizbeth; Madrigal, J Alejandro

    2011-01-01

    The human leukocyte antigen (HLA) system is the most polymorphic in humans. Its allele, genotype, and haplotype frequencies vary significantly among different populations. Molecular typing data on HLA are necessary for the development of stem cell donor registries, cord blood banks, HLA-disease association studies, and anthropology studies. The Costa Rica Central Valley Population (CCVP) is the major population in this country. No previous study has characterized HLA frequencies in this population. Allele group and haplotype frequencies of HLA genes in the CCVP were determined by means of molecular typing in a sample of 130 unrelated blood donors from one of the country's major hospitals. A comparison between these frequencies and those of 126 populations worldwide was also carried out. A minimum variance dendrogram based on squared Euclidean distances was constructed to assess the relationship between the CCVP sample and populations from all over the world. Allele group and haplotype frequencies observed in this study are consistent with a profile of a dynamic and diverse population, with a hybrid ethnic origin, predominantly Caucasian-Amerindian. Results showed that populations genetically closest to the CCVP are a Mestizo urban population from Venezuela, and another one from Guadalajara, Mexico. Copyright © 2011 American Society for Histocompatibility and Immunogenetics. All rights reserved.
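
    The minimum variance dendrogram mentioned above corresponds to Ward's method, which agglomerates on squared Euclidean distances between profiles. A minimal sketch with hypothetical allele-frequency profiles, assuming SciPy:

      import numpy as np
      from scipy.cluster.hierarchy import linkage, dendrogram

      rng = np.random.default_rng(1)
      # Hypothetical rows = populations, columns = allele-group frequencies.
      freqs = rng.dirichlet(np.ones(8), size=6)
      labels = [f"pop{i}" for i in range(6)]

      # Ward's minimum-variance linkage; SciPy computes it from the raw
      # profiles, which is equivalent to clustering on squared Euclidean
      # distances between them.
      Z = linkage(freqs, method="ward")
      dendrogram(Z, labels=labels, no_plot=True)   # set no_plot=False to draw
      print(Z)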

  1. Robustness of mission plans for unmanned aircraft

    NASA Astrophysics Data System (ADS)

    Niendorf, Moritz

    This thesis studies the robustness of optimal mission plans for unmanned aircraft. Mission planning typically involves tactical planning and path planning. Tactical planning refers to task scheduling and, in multi-aircraft scenarios, also includes establishing a communication topology. Path planning refers to computing a feasible and collision-free trajectory. For a prototypical mission planning problem, the traveling salesman problem on a weighted graph, the robustness of an optimal tour is analyzed with respect to changes to the edge costs. Specifically, the stability region of an optimal tour is obtained, i.e., the set of all edge cost perturbations for which that tour is optimal. The exact stability region of solutions to variants of the traveling salesman problem is obtained from a linear programming relaxation of an auxiliary problem. Edge cost tolerances and edge criticalities are derived from the stability region. For Euclidean traveling salesman problems, robustness with respect to perturbations of vertex locations is considered, and safe radii and vertex criticalities are introduced. For weighted-sum multi-objective problems, stability regions with respect to changes in the objectives, weights, and simultaneous changes are given. Most critical weight perturbations are derived. Computing exact stability regions is intractable for large instances; therefore, tractable approximations are desirable. The stability regions of solutions to relaxations of the traveling salesman problem yield under-approximations, and sets of tours yield over-approximations. The application of these results to the two-neighborhood and the minimum 1-tree relaxation is discussed. Bounds on edge cost tolerances and approximate criticalities can likewise be obtained. A minimum spanning tree is an optimal communication topology for minimizing the cumulative transmission power in multi-aircraft missions. The stability region of a minimum spanning tree is given, and tolerances, stability balls, and criticalities are derived. This analysis is extended to Euclidean minimum spanning trees. This thesis aims at enabling increased mission performance by providing means of assessing the robustness and optimality of a mission and methods for identifying critical elements. Examples of the application to mission planning in contested environments, cargo aircraft mission planning, multi-objective mission planning, and planning optimal communication topologies for teams of unmanned aircraft are given.
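
    As one concrete illustration of the tolerance concept, the upper tolerance of a minimum spanning tree edge (how much its cost may rise before the tree stops being optimal) is the cost of the cheapest replacement edge across the cut that edge closes, minus the edge's own cost. A naive O(VE) sketch with networkx and a hypothetical graph, not the thesis's method:

      import networkx as nx

      def mst_edge_tolerances(G):
          T = nx.minimum_spanning_tree(G, weight="weight")
          tol = {}
          for u, v, data in list(T.edges(data=True)):
              T.remove_edge(u, v)                      # splits T into two components
              side = nx.node_connected_component(T, u)
              # Cheapest non-tree edge crossing the cut (infinite for bridges).
              best = min((d["weight"] for a, b, d in G.edges(data=True)
                          if ((a in side) != (b in side)) and {a, b} != {u, v}),
                         default=float("inf"))
              tol[(u, v)] = best - data["weight"]
              T.add_edge(u, v, **data)
          return tol

      G = nx.Graph()
      G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 2.0), (0, 2, 2.5),
                                 (2, 3, 1.5), (1, 3, 4.0)])
      print(mst_edge_tolerances(G))   # {(0, 1): 1.5, (1, 2): 0.5, (2, 3): 2.5}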

  2. An algorithm for calculating minimum Euclidean distance between two geographic features

    NASA Astrophysics Data System (ADS)

    Peuquet, Donna J.

    1992-09-01

    An efficient algorithm is presented for determining the shortest Euclidean distance between two features of arbitrary shape that are represented in quadtree form. These features may be disjoint point sets, lines, or polygons. It is assumed that the features do not overlap. Features may also be intertwined, and polygons may be complex (i.e. have holes). Utilizing the spatial divide-and-conquer approach inherent in the quadtree data model, the basic rationale is to narrow in quickly on the portions of each feature that are on a facing edge relative to the other feature, and to minimize the number of point-to-point Euclidean distance calculations that must be performed. Besides offering an efficient, grid-based alternative solution, another unique and useful aspect of the algorithm is that it can be used for rapidly calculating distance approximations at coarser levels of resolution. The overall process can be viewed as a top-down parallel search. Using one list of leafcode addresses for each of the two features as input, the algorithm is implemented by successively dividing these lists into four sublists for each descendant quadrant. The algorithm consists of two primary phases. The first determines facing adjacent quadrant pairs in which parts of the two features are separated between the two quadrants. The second phase then determines the closest pixel-level subquadrant pairs within each facing quadrant pair at the lowest level. The key element of the second phase is a quick-estimate distance heuristic for further elimination of locations that are not as near as neighboring locations.
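
    The quadtree algorithm itself is involved, but the underlying idea of letting a spatial decomposition prune point-to-point comparisons can be illustrated with a k-d tree (an analogous, not identical, structure), assuming SciPy:

      import numpy as np
      from scipy.spatial import cKDTree

      def min_feature_distance(pixels_a, pixels_b):
          # Shortest Euclidean distance between two rasterized features,
          # each an (n, 2) array of pixel coordinates.
          tree = cKDTree(pixels_b)
          d, _ = tree.query(pixels_a)   # nearest point of B for each pixel of A
          return d.min()

      a = np.array([[0, 0], [0, 1], [1, 0]])   # feature A pixels
      b = np.array([[4, 4], [5, 4], [4, 6]])   # feature B pixels
      print(min_feature_distance(a, b))        # -> 5.0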

  3. Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values

    DTIC Science & Technology

    2016-12-01

    MMSE (minimum mean-square error) target sample estimation using non-orthogonal basis functions is considered. Although the basis functions are non-orthogonal, they can still be used in a minimum mean-square error (MMSE) estimator that models the object echo as a weighted sum of the multi-aspect basis functions. MMSE estimation is applied to target imaging with synthetic aperture sonar.
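
    A generic linear MMSE weight estimator of the kind the fragment describes (object echo modeled as a weighted sum of non-orthogonal basis vectors, with an assumed zero-mean white Gaussian prior) can be sketched as follows; the data are synthetic:

      import numpy as np

      def linear_mmse(A, y, noise_var, prior_var=1.0):
          # w_hat = (A^H A + (noise_var / prior_var) I)^{-1} A^H y
          k = A.shape[1]
          reg = (noise_var / prior_var) * np.eye(k)
          return np.linalg.solve(A.conj().T @ A + reg, A.conj().T @ y)

      rng = np.random.default_rng(0)
      m, k = 64, 8
      A = rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))
      w = rng.standard_normal(k)                       # true weights
      n = 0.1 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
      y = A @ w + n                                    # noisy echo
      print(np.round(linear_mmse(A, y, noise_var=0.02).real, 2))
      print(np.round(w, 2))                            # compare with the truth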

  4. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1990-01-01

    An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.

  5. A Novel Quantitative Prediction Approach for Astringency Level of Herbs Based on an Electronic Tongue

    PubMed Central

    Han, Xue; Jiang, Hong; Zhang, Dingkun; Zhang, Yingying; Xiong, Xi; Jiao, Jiaojiao; Xu, Runchun; Yang, Ming; Han, Li; Lin, Junzhi

    2017-01-01

    Background: Current astringency evaluation methods for herbs no longer satisfy the requirements of pharmaceutical processing; a new method was needed to assess astringency accurately. Methods: First, quinine, sucrose, citric acid, sodium chloride, monosodium glutamate, and tannic acid (TA) were analyzed by an electronic tongue (e-tongue) to determine the approximate region of astringency in a partial least squares (PLS) map. Second, different concentrations of TA were measured to define the standard curve of astringency; the coordinate-concentration relationship was obtained by fitting the PLS abscissa of the standard curve against the corresponding concentration. Third, Chebulae Fructus (CF), Yuganzi throat tablets (YGZTT), and Sanlejiang oral liquid (SLJOL) were tested to locate their regions in the PLS map. Finally, the astringent intensities of the samples were calculated from the standard coordinate-concentration relationship and expressed as concentrations of TA. Euclidean distance (Ed) analysis and a human sensory test were then performed to verify the results. Results: The fitted equation between TA concentration and abscissa was Y = 0.00498 × exp(−X/0.51035) + 0.10905 (r = 0.999). The astringency of 1 and 0.1 mg/mL CF was predicted at 0.28 and 0.12 mg/mL TA; of 2 and 0.2 mg/mL YGZTT at 0.18 and 0.11 mg/mL TA; and of 0.002 and 0.0002 mg/mL SLJOL at 0.15 and 0.10 mg/mL TA. The validation showed that the astringency predicted by the e-tongue was consistent with human sensory results and more accurate than Ed analysis. Conclusion: The study indicates that the established method is objective and feasible, providing a new quantitative method for the astringency of herbs. SUMMARY: The astringency of Chebulae Fructus, Yuganzi throat tablets, and Sanlejiang oral liquid was predicted by electronic tongue. Euclidean distance analysis and a human sensory test verified the results. A new strategy, objective, simple, and sensitive, for comparing the astringent intensity of herbs and preparations is provided. Abbreviations used: CF: Chebulae Fructus; E-tongue: Electronic tongue; Ed: Euclidean distance; PLS: Partial least square; PCA: Principal component analysis; SLJOL: Sanlejiang oral liquid; TA: Tannic acid; VAS: Visual analog scale; YGZTT: Yuganzi throat tablets. PMID:28839378
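
    The standard-curve step can be reproduced generically with SciPy's curve_fit, using the same functional form as the reported equation; the calibration points below are synthetic stand-ins for the paper's data:

      import numpy as np
      from scipy.optimize import curve_fit

      def model(x, a, b, c):
          # Same form as the reported standard curve: Y = a * exp(-X / b) + c.
          return a * np.exp(-x / b) + c

      x = np.array([0.1, 0.3, 0.6, 1.0, 1.5])              # PLS abscissa values
      y = 0.00498 * np.exp(-x / 0.51035) + 0.10905         # synthetic, noise-free

      params, _ = curve_fit(model, x, y, p0=(0.005, 0.5, 0.1))
      # A sample's TA-equivalent astringency is then read off directly from
      # its PLS abscissa:
      print(model(0.8, *params))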

  6. The impact of Nordic walking training on the gait of the elderly.

    PubMed

    Ben Mansour, Khaireddine; Gorce, Philippe; Rezzoug, Nasser

    2018-03-27

    The purpose of the current study was to define the impact of regular Nordic walking practice on the gait of the elderly. Specifically, we aimed to determine whether the gait characteristics of active elderly persons practicing Nordic walking are more similar to those of healthy adults than to those of the sedentary elderly. Comparison was made on the basis of parameters computed from three inertial sensors during walking at a freely chosen velocity. Results showed differences in gait pattern in terms of the amplitude computed from acceleration and angular velocity at the lumbar region (root mean square), the distribution (skewness) quantified from the vertical component and Euclidean norm of the lumbar acceleration, the complexity (sample entropy) of the mediolateral component of lumbar angular velocity and of the Euclidean norm of the shank acceleration and angular velocity, the regularity of the lower limbs, the spatiotemporal parameters, and the variability (standard deviation) of stance and stride durations. These findings reveal that the gait pattern of the active elderly differs significantly from that of the sedentary elderly of the same age, while similarity was observed between the active elderly and healthy adults. These results suggest that regular physical activity such as Nordic walking may counteract the deterioration of gait quality that occurs with aging.
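
    Two of the descriptors listed above, per-axis root mean square and skewness of the vertical component and of the Euclidean norm, are straightforward to compute; the trace below is a crude synthetic stand-in for a lumbar accelerometer signal, and sample entropy and the remaining parameters are omitted:

      import numpy as np
      from scipy.stats import skew

      def gait_metrics(acc):
          # acc: (n_samples, 3) accelerometer trace; z assumed vertical.
          norm = np.linalg.norm(acc, axis=1)        # Euclidean norm per sample
          return {"rms_per_axis": np.sqrt((acc ** 2).mean(axis=0)),
                  "skew_vertical": skew(acc[:, 2]),
                  "skew_norm": skew(norm)}

      rng = np.random.default_rng(0)
      t = np.linspace(0, 10, 1000)
      acc = np.c_[0.1 * rng.standard_normal(1000),
                  0.1 * rng.standard_normal(1000),
                  9.81 + np.sin(2 * np.pi * 1.8 * t)]   # ~1.8 Hz step rhythm
      print(gait_metrics(acc))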

  7. Statistical analysis of multivariate atmospheric variables. [cloud cover

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.

    1979-01-01

    Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate data to near-normality; (5) a test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) a test of fit for continuous distributions based upon the generalized minimum chi-square; (7) the effect of correlated observations on confidence sets based upon chi-square statistics; and (8) the generation of random variates from specified distributions.

  8. Enjoyment of Euclidean Planar Triangles

    ERIC Educational Resources Information Center

    Srinivasan, V. K.

    2013-01-01

    This article adopts the following classification for a Euclidean planar [triangle]ABC, purely based on angles alone. A Euclidean planar triangle is said to be acute angled if all the three angles of the Euclidean planar [triangle]ABC are acute angles. It is said to be right angled at a specific vertex, say B, if the angle ?ABC is a right angle…

  9. Gifted Mathematicians Constructing Their Own Geometries--Changes in Knowledge and Attitude.

    ERIC Educational Resources Information Center

    Shillor, Irith

    1997-01-01

    Using Taxi-Cab Geometry (a non-Euclidean geometry program) as the starting point, 14 mathematically gifted British secondary students (ages 12-14) were asked to consider the differences between Euclidean and Non-Euclidean geometries, then to construct their own geometry and to consider the non-Euclidean elements within it. The positive effects of…

  10. Minimal Paths in the City Block: Human Performance on Euclidean and Non-Euclidean Traveling Salesperson Problems

    ERIC Educational Resources Information Center

    Walwyn, Amy L.; Navarro, Daniel J.

    2010-01-01

    An experiment is reported comparing human performance on two kinds of visually presented traveling salesperson problems (TSPs), those reliant on Euclidean geometry and those reliant on city block geometry. Across multiple array sizes, human performance was near-optimal in both geometries, but was slightly better in the Euclidean format. Even so,…

  11. Dynamic hyperbolic geometry: building intuition and understanding mediated by a Euclidean model

    NASA Astrophysics Data System (ADS)

    Moreno-Armella, Luis; Brady, Corey; Elizondo-Ramirez, Rubén

    2018-05-01

    This paper explores a deep transformation in mathematical epistemology and its consequences for teaching and learning. With the advent of non-Euclidean geometries, direct, iconic correspondences between physical space and the deductive structures of mathematical inquiry were broken. For non-Euclidean ideas even to become thinkable the mathematical community needed to accumulate over twenty centuries of reflection and effort: a precious instance of distributed intelligence at the cultural level. In geometry education after this crisis, relations between intuitions and geometrical reasoning must be established philosophically, rather than taken for granted. One approach seeks intuitive supports only for Euclidean explorations, viewing non-Euclidean inquiry as fundamentally non-intuitive in nature. We argue for moving beyond such an impoverished approach, using dynamic geometry environments to develop new intuitions even in the extremely challenging setting of hyperbolic geometry. Our efforts reverse the typical direction, using formal structures as a source for a new family of intuitions that emerge from exploring a digital model of hyperbolic geometry. This digital model is elaborated within a Euclidean dynamic geometry environment, enabling a conceptual dance that re-configures Euclidean knowledge as a support for building intuitions in hyperbolic space: intuitions based not directly on physical experience but on analogies extending Euclidean concepts.

  12. 49 CFR 172.315 - Limited quantities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... accordance with the white square-on-point limited quantity marking as follows: (1) The limited quantity... forming the square-on-point must be at least 2 mm and the minimum dimension of each side must be 100 mm... top and bottom portions of the square-on-point and the border forming the square-on-point must be...

  13. New Techniques in Time-Frequency Analysis: Adaptive Band, Ultra-Wide Band and Multi-Rate Signal Processing

    DTIC Science & Technology

    2016-03-02

    Nyquist tiles and sampling groups in Euclidean geometry, and discussed the extension of these concepts to hyperbolic and spherical geometry and...hyperbolic or spherical spaces. We look to develop a structure for the tiling of frequency spaces in both Euclidean and non-Euclidean domains. In particular...we establish Nyquist tiles and sampling groups in Euclidean geometry, and discuss the extension of these concepts to hyperbolic and spherical geometry

  14. 49 CFR 172.315 - Limited quantities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... and bottom portions of the square-on-point and the border forming the square-on-point must be black and the center white or of a suitable contrasting background as follows: ER30DE11.004 (2) The square... the square-on-point must be at least 2 mm and the minimum dimension of each side must be 100 mm unless...

  15. 49 CFR 172.315 - Limited quantities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... and bottom portions of the square-on-point and the border forming the square-on-point must be black and the center white or of a suitable contrasting background as follows: ER30DE11.004 (2) The square... the square-on-point must be at least 2 mm and the minimum dimension of each side must be 100 mm unless...

  16. 49 CFR 172.315 - Limited quantities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... and bottom portions of the square-on-point and the border forming the square-on-point must be black and the center white or of a suitable contrasting background as follows: ER30DE11.004 (2) The square... the square-on-point must be at least 2 mm and the minimum dimension of each side must be 100 mm unless...

  17. Minimum Wage Effects on Educational Enrollments in New Zealand

    ERIC Educational Resources Information Center

    Pacheco, Gail A.; Cruickshank, Amy A.

    2007-01-01

    This paper empirically examines the impact of minimum wages on educational enrollments in New Zealand. A significant reform to the youth minimum wage since 2000 has resulted in some age groups undergoing a 91% rise in their real minimum wage over the last 10 years. Three panel least squares multivariate models are estimated from a national sample…

  18. Complexity and approximability for a problem of intersecting of proximity graphs with minimum number of equal disks

    NASA Astrophysics Data System (ADS)

    Kobylkin, Konstantin

    2016-10-01

    Computational complexity and approximability are studied for the problem of intersecting a set of straight line segments with the smallest-cardinality set of disks of fixed radii r > 0, where the set of segments forms a straight line embedding of a possibly non-planar geometric graph. This problem arises in physical network security analysis for telecommunication, wireless and road networks represented by specific geometric graphs defined by Euclidean distances between their vertices (proximity graphs). It can be formulated as a form of the known Hitting Set problem over a set of Euclidean r-neighbourhoods of segments. Although of interest, the computational complexity and approximability of Hitting Set over such structured sets of geometric objects have not received much attention in the literature. Strong NP-hardness of the problem is reported over special classes of proximity graphs, namely Delaunay triangulations, some of their connected subgraphs, half-θ6 graphs and non-planar unit disk graphs, and APX-hardness is given for non-planar geometric graphs at different scales of r with respect to the longest graph edge length. A simple constant-factor approximation algorithm is presented for the case where r is at the same scale as the longest edge length.

  19. Shaping of arm configuration space by prescription of non-Euclidean metrics with applications to human motor control

    NASA Astrophysics Data System (ADS)

    Biess, Armin

    2013-01-01

    The study of the kinematic and dynamic features of human arm movements provides insights into the computational strategies underlying human motor control. In this paper a differential geometric approach to movement control is taken by endowing arm configuration space with different non-Euclidean metric structures to study the predictions of the generalized minimum-jerk (MJ) model in the resulting Riemannian manifold for different types of human arm movements. For each metric space, the solution of the generalized MJ model is given by reparametrized geodesic paths. This geodesic model is applied to a variety of motor tasks, ranging from three-dimensional unconstrained movements of a four-degree-of-freedom arm between pointlike targets to constrained movements where the hand location is confined to a surface (e.g., a sphere) or a curve (e.g., an ellipse). For the latter, speed-curvature relations are derived depending on the boundary conditions imposed (periodic or nonperiodic), and compatibility with the empirical one-third power law is shown. Based on these theoretical studies and recent experimental findings, I argue that geodesics may be an emergent property of the motor system and that the sensorimotor system may shape arm configuration space by learning metric structures through sensorimotor feedback.
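
    In the flat (Euclidean) special case, the generalized minimum-jerk solution reduces to the classical straight-line minimum-jerk profile, which takes only a few lines to generate (a sketch, not the paper's Riemannian computation):

      import numpy as np

      def minimum_jerk(x0, xf, T=1.0, n=101):
          # x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T.
          s = np.linspace(0.0, T, n) / T
          shape = 10 * s**3 - 15 * s**4 + 6 * s**5
          return x0 + np.outer(shape, xf - x0)

      traj = minimum_jerk(np.array([0.0, 0.0, 0.0]), np.array([0.2, 0.1, 0.3]))
      print(traj[[0, 50, -1]])   # start, midpoint, end of the hand path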

  20. Methods for computing color anaglyphs

    NASA Astrophysics Data System (ADS)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIEL*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.

  1. Dynamic Hyperbolic Geometry: Building Intuition and Understanding Mediated by a Euclidean Model

    ERIC Educational Resources Information Center

    Moreno-Armella, Luis; Brady, Corey; Elizondo-Ramirez, Rubén

    2018-01-01

    This paper explores a deep transformation in mathematical epistemology and its consequences for teaching and learning. With the advent of non-Euclidean geometries, direct, iconic correspondences between physical space and the deductive structures of mathematical inquiry were broken. For non-Euclidean ideas even to become "thinkable" the…

  2. Can A "Hyperspace" Really Exist?

    NASA Technical Reports Server (NTRS)

    Zampino, Edward J.

    1999-01-01

    The idea of "hyperspace" is suggested as a possible approach to faster-than-light (FTL) motion. A brief summary of a 1986 study by the author on the Euclidean representation of space-time is presented. Some new calculations on the relativistic momentum and energy of a free particle in Euclidean "hyperspace" are added and discussed. The superimposed energy-momentum curves for subluminal particles, tachyons, and particles in Euclidean "hyperspace" are presented. It is shown that in Euclidean "hyperspace", instead of a relativistic time dilation there is a time "compression" effect. Some fundamental questions are presented.

  3. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  4. A fast least-squares algorithm for population inference

    PubMed Central

    2013-01-01

    Background Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual’s genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. Results We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. Conclusions The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate. PMID:23343408

  5. A fast least-squares algorithm for population inference.

    PubMed

    Parry, R Mitchell; Wang, May D

    2013-01-23

    Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual's genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate.
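
    A toy version of the alternating least-squares idea (illustrative only, not the paper's exact algorithm) can be built on SciPy's non-negative least squares, renormalizing ancestry rows onto the simplex and clipping allele frequencies to [0, 1]:

      import numpy as np
      from scipy.optimize import nnls

      def als_admixture(G, K, n_iter=50, seed=0):
          # Factor normalized genotypes G/2 ~ Q @ P, with Q the (n x K)
          # ancestry fractions and P the (K x m) allele frequencies.
          rng = np.random.default_rng(seed)
          n, m = G.shape
          X = G / 2.0
          P = rng.random((K, m))
          Q = rng.dirichlet(np.ones(K), size=n)
          for _ in range(n_iter):
              for i in range(n):                     # update ancestry fractions
                  q, _ = nnls(P.T, X[i])
                  Q[i] = q / q.sum() if q.sum() > 0 else np.full(K, 1.0 / K)
              for j in range(m):                     # update allele frequencies
                  p, _ = nnls(Q, X[:, j])
                  P[:, j] = np.clip(p, 0.0, 1.0)
          return Q, P

      G = np.array([[0, 2, 2, 0],
                    [0, 2, 2, 0],
                    [2, 0, 0, 2],
                    [1, 1, 1, 1]])                   # toy genotypes in {0, 1, 2}
      Q, P = als_admixture(G, K=2)
      print(np.round(Q, 2))                          # last row ~ evenly admixed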

  6. A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Steinley, Douglas

    2007-01-01

    Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…
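
    k-means is the most widely used heuristic for this criterion; its objective (scikit-learn's inertia_) is exactly the within-cluster sum of squared deviations from the centroids. A minimal sketch with synthetic data:

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 0.3, (30, 2)),
                     rng.normal(3, 0.3, (30, 2))])

      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
      print("WCSS (inertia):", round(km.inertia_, 3))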

  7. Students Discovering Spherical Geometry Using Dynamic Geometry Software

    ERIC Educational Resources Information Center

    Guven, Bulent; Karatas, Ilhan

    2009-01-01

    Dynamic geometry software (DGS) such as Cabri and Geometers' Sketchpad has been regularly used worldwide for teaching and learning Euclidean geometry for a long time. The DGS with its inductive nature allows students to learn Euclidean geometry via explorations. However, with respect to non-Euclidean geometries, do we need to introduce them to…

  8. A Case Example of Insect Gymnastics: How Is Non-Euclidean Geometry Learned?

    ERIC Educational Resources Information Center

    Junius, Premalatha

    2008-01-01

    The focus of the article is on the complex cognitive process involved in learning the concept of "straightness" in Non-Euclidean geometry. Learning new material is viewed through a conflict resolution framework, as a student questions familiar assumptions understood in Euclidean geometry. A case study reveals how mathematization of the straight…

  9. Single and multiple object tracking using log-euclidean Riemannian subspace and block-division appearance model.

    PubMed

    Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei

    2012-12-01

    Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
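
    The core mapping used above, sending symmetric positive definite covariance matrices into a vector space via the matrix logarithm, can be sketched as follows (random SPD matrices and SciPy assumed; the subspace learning and tracking machinery are not shown):

      import numpy as np
      from scipy.linalg import logm

      def log_euclidean_distance(S1, S2):
          # Frobenius distance between matrix logarithms: in the log domain,
          # SPD matrices can be treated with ordinary Euclidean operations.
          return np.linalg.norm(np.real(logm(S1)) - np.real(logm(S2)), ord="fro")

      rng = np.random.default_rng(0)
      A = rng.standard_normal((5, 5)); S1 = A @ A.T + 5 * np.eye(5)
      B = rng.standard_normal((5, 5)); S2 = B @ B.T + 5 * np.eye(5)
      print(log_euclidean_distance(S1, S2))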

  10. A Kohonen-like decomposition method for the Euclidean traveling salesman problem: KNIES_DECOMPOSE.

    PubMed

    Aras, N; Altinel, I K; Oommen, J

    2003-01-01

    In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic that is based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean traveling salesman problem and the Euclidean Hamiltonian path problem. Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems, and then patching these solutions together. No such all-neural solution has ever been reported.

  11. Euclidean distance can identify the mannitol level that produces the most remarkable integral effect on sugarcane micropropagation in temporary immersion bioreactors.

    PubMed

    Gómez, Daviel; Hernández, L Ázaro; Yabor, Lourdes; Beemster, Gerrit T S; Tebbe, Christoph C; Papenbrock, Jutta; Lorenzo, José Carlos

    2018-03-15

    Plant scientists usually record several indicators in their abiotic factor experiments. The common statistical treatment involves univariate analyses. Such analyses generally create a split picture of the effects of experimental treatments, since each indicator is addressed independently. The Euclidean distance, combined with the information of the control treatment, has potential as an integrating indicator. The Euclidean distance has demonstrated its usefulness in many scientific fields but, as far as we know, it has not yet been employed for the analysis of plant experiments. To exemplify the use of the Euclidean distance in this field, we performed an experiment focused on the effects of mannitol on sugarcane micropropagation in temporary immersion bioreactors. Five mannitol concentrations were compared: 0, 50, 100, 150 and 200 mM. As dependent variables we recorded shoot multiplication rate, fresh weight, and levels of aldehydes, chlorophylls, carotenoids and phenolics. The statistical protocol then integrated all dependent variables to easily identify the mannitol concentration that produced the most remarkable integral effect. The Euclidean distances demonstrate a gradually increasing distance from the control as a function of increasing mannitol concentration. 200 mM mannitol caused the most significant alteration of sugarcane biochemistry and physiology under the experimental conditions described here: this treatment showed the longest statistically significant Euclidean distance to the control treatment (2.38). In contrast, 50 and 100 mM mannitol showed the lowest Euclidean distances (0.61 and 0.84, respectively) and thus weak integrated effects of mannitol. The analysis shown here indicates that the use of the Euclidean distance can contribute to a more integrated evaluation of contrasting mannitol treatments.
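
    Operationally, the integrating indicator reduces to standardizing each recorded variable across treatments and measuring every treatment's Euclidean distance to the control row. The numbers below are hypothetical, for illustration only:

      import numpy as np

      # Rows: 0 (control), 50, 100, 150, 200 mM mannitol; columns: made-up
      # values for multiplication rate, fresh weight, aldehydes, chlorophylls,
      # carotenoids and phenolics.
      data = np.array([[6.1, 210, 1.0, 12.0, 3.1, 0.8],
                       [5.8, 200, 1.2, 11.5, 3.0, 0.9],
                       [5.5, 190, 1.4, 11.0, 2.8, 1.0],
                       [4.9, 160, 2.0,  9.5, 2.4, 1.3],
                       [3.8, 120, 2.9,  7.0, 1.9, 1.8]])

      # Standardize so that no single indicator's units dominate the distance.
      z = (data - data.mean(axis=0)) / data.std(axis=0)
      print(np.round(np.linalg.norm(z - z[0], axis=1), 2))   # grows with dose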

  12. The Common Evolution of Geometry and Architecture from a Geodetic Point of View

    NASA Astrophysics Data System (ADS)

    Bellone, T.; Fiermonte, F.; Mussio, L.

    2017-05-01

    Throughout history the link between geometry and architecture has been strong, and while architects have used mathematics to construct their buildings, geometry has always been the essential tool allowing them to choose spatial shapes which are aesthetically appropriate. Sometimes it is geometry which drives architectural choices; at other times it is architectural innovation which facilitates the emergence of new ideas in geometry. Among the best-known types of geometry (Euclidean, projective, analytical, topological, descriptive, fractal, …), those most frequently employed in architectural design are Euclidean geometry, projective geometry, and the non-Euclidean geometries. Entire architectural periods are linked to specific types of geometry: Euclidean geometry, for example, was the basis for architectural styles from Antiquity through to the Romanesque period; perspective and projective geometry, for their part, were important from the Gothic period through the Renaissance and into the Baroque and Neo-classical eras, while non-Euclidean geometries characterize modern architecture.

  13. 49 CFR 172.527 - Background requirements for certain placards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... (a) Except for size and color, the square background required by § 172.510(a) for certain placards on... requirements of § 172.519 for minimum durability and strength, the square background must consist of a white square measuring 141/4 inches (362.0 mm.) on each side surrounded by a black border extending to 151/4...

  14. 49 CFR 172.527 - Background requirements for certain placards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... (a) Except for size and color, the square background required by § 172.510(a) for certain placards on... requirements of § 172.519 for minimum durability and strength, the square background must consist of a white square measuring 141/4 inches (362.0 mm.) on each side surrounded by a black border extending to 151/4...

  15. 49 CFR 172.527 - Background requirements for certain placards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... (a) Except for size and color, the square background required by § 172.510(a) for certain placards on... requirements of § 172.519 for minimum durability and strength, the square background must consist of a white square measuring 141/4 inches (362.0 mm.) on each side surrounded by a black border extending to 151/4...

  16. 49 CFR 172.527 - Background requirements for certain placards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... (a) Except for size and color, the square background required by § 172.510(a) for certain placards on... requirements of § 172.519 for minimum durability and strength, the square background must consist of a white square measuring 141/4 inches (362.0 mm.) on each side surrounded by a black border extending to 151/4...

  17. 49 CFR 172.527 - Background requirements for certain placards.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... (a) Except for size and color, the square background required by § 172.510(a) for certain placards on... requirements of § 172.519 for minimum durability and strength, the square background must consist of a white square measuring 141/4 inches (362.0 mm.) on each side surrounded by a black border extending to 151/4...

  18. Force and Directional Force Modulation Effects on Accuracy and Variability in Low-Level Pinch Force Tracking.

    PubMed

    Park, Sangsoo; Spirduso, Waneen; Eakin, Tim; Abraham, Lawrence

    2018-01-01

    The authors investigated how varying the required low-level forces and the direction of force change affect accuracy and variability of force production in a cyclic isometric pinch force tracking task. Eighteen healthy right-handed adult volunteers performed the tracking task over 3 different force ranges. Root mean square error and coefficient of variation were higher at lower force levels and during minimum reversals compared with maximum reversals. Overall, the thumb showed greater root mean square error and coefficient of variation scores than did the index finger during maximum reversals, but not during minimum reversals. The observed impaired performance during minimum reversals might originate from history-dependent mechanisms of force production and highly coupled 2-digit performance.

  19. 29 CFR 1917.121 - Spiral stairways.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... minimum dimensions of Figure F-1; EC21OC91.020 Spiral Stairway—Minimum Dimensions A (half-tread width) B... 26.67 cm) in height; (3) Minimum loading capability shall be 100 pounds per square foot (4.79kN), and... least 6 feet, 6 inches (1.98 m) above the top step. (c) Maintenance. Spiral stairways shall be...

  20. 29 CFR 1917.121 - Spiral stairways.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... minimum dimensions of Figure F-1; EC21OC91.020 Spiral Stairway—Minimum Dimensions A (half-tread width) B... 26.67 cm) in height; (3) Minimum loading capability shall be 100 pounds per square foot (4.79kN), and... least 6 feet, 6 inches (1.98 m) above the top step. (c) Maintenance. Spiral stairways shall be...

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias

    A Random Geometric Graph (RGG) is constructed by distributing n nodes uniformly at random in the unit square and connecting two nodes if their Euclidean distance is at most r, for some prescribed r. They analyze the following randomized broadcast algorithm on RGGs. At the beginning, there is only one informed node. Then, in each round, each informed node chooses a neighbor uniformly at random and informs it. They prove that this algorithm informs every node in the largest component of an RGG in O(√n/r) rounds with high probability. This holds for any value of r larger than the critical value for the emergence of a giant component. In particular, the result implies that the diameter of the giant component is Θ(√n/r).
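
    The model and the push broadcast protocol are simple to simulate. A sketch with NumPy/SciPy (parameters chosen arbitrarily, with r above the connectivity threshold so that the giant component is, with high probability, the whole graph):

      import numpy as np
      from scipy.spatial import cKDTree

      def rgg_push_broadcast(n=2000, r=0.05, seed=0):
          rng = np.random.default_rng(seed)
          pts = rng.random((n, 2))                   # n nodes in the unit square
          tree = cKDTree(pts)
          nbrs = [list(s - {i}) for i, s in
                  enumerate(map(set, tree.query_ball_point(pts, r)))]
          informed = np.zeros(n, dtype=bool)
          informed[0] = True                         # one informed node at start
          rounds = 0
          while not informed.all() and rounds < 10 * n:   # cap guards disconnection
              rounds += 1
              for i in np.flatnonzero(informed):
                  if nbrs[i]:                        # inform a random neighbour
                      informed[rng.choice(nbrs[i])] = True
          return rounds

      print(rgg_push_broadcast())   # typically on the order of sqrt(n)/r rounds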

  2. Orientation estimation of anatomical structures in medical images for object recognition

    NASA Astrophysics Data System (ADS)

    Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian

    2011-03-01

    Recognition of anatomical structures is an important step in model-based medical image segmentation. It provides pose estimation of objects and information about roughly "where" the objects are in the image, distinguishing them from other object-like entities. In [1], we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation parameters can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that the mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than the other Euclidean and non-Euclidean metrics.

  3. Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing

    2017-04-20

    The problem of finding the number and optimal positions of relay nodes for restoring the network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard) and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial time heuristic algorithm, namely, Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on Minimum Spanning Tree (MST), Euclidean Steiner Minimal Tree (ESMT) or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes, and then linear programming is applied for choosing the optimal relay nodes and computing their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes, by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals, as well as any density distribution of terminals. The performance and complexity of RPSNC are analyzed and its performance is validated through simulation experiments.
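
    Of the pipeline described, the candidate-generation step lends itself to a short sketch. One plausible reading (hypothetical, not the RPSNC algorithm itself) is to take the centroids of the Delaunay triangles over the terminal locations as candidate relay positions, leaving the optimal selection to the subsequent linear program:

      import numpy as np
      from scipy.spatial import Delaunay

      def candidate_relays(terminals):
          # Centroid of every Delaunay triangle spanned by the terminals.
          tri = Delaunay(terminals)
          return terminals[tri.simplices].mean(axis=1)

      terminals = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9],
                            [1.5, 0.8], [0.2, 1.4]])
      print(candidate_relays(terminals))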

  4. A QSAR study of integrase strand transfer inhibitors based on a large set of pyrimidine, pyrimidone, and pyridopyrazine carboxamide derivatives

    NASA Astrophysics Data System (ADS)

    de Campos, Luana Janaína; de Melo, Eduardo Borges

    2017-08-01

    In the present study, 199 compounds derived from pyrimidine, pyrimidone and pyridopyrazine carboxamides with inhibitory activity against HIV-1 integrase were modeled. Subsequently, a multivariate QSAR study was conducted on 54 molecules, employing Ordered Predictors Selection (OPS) and Partial Least Squares (PLS) for variable selection and model construction, respectively. Topological, electrotopological, geometric, and molecular descriptors were used. The selected model was robust and free from chance correlation; in addition, it demonstrated favorable internal and external statistical quality. Once statistically validated, the training model was used to predict the activity of a second data set (n = 145). The root mean square deviation (RMSD) between observed and predicted values was 0.698. Although this value is outside the usual standard, only 15 samples (10.34%) exhibited residuals larger than 1 log unit, a result considered acceptable. Results of the Williams and Euclidean applicability domains relative to the prediction showed that the predictions did not occur by extrapolation and that the model is representative of the chemical space of the test compounds.
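
    A generic PLS workflow of the kind described, with descriptors assumed already selected (the OPS step is omitted) and synthetic data standing in for the 54-molecule training set, looks like this in scikit-learn:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      X = rng.standard_normal((54, 12))      # 54 molecules x 12 descriptors
      y = X[:, :3].sum(axis=1) + 0.3 * rng.standard_normal(54)   # synthetic activity

      pls = PLSRegression(n_components=3)
      y_cv = cross_val_predict(pls, X, y, cv=7).ravel()
      rmsd = np.sqrt(np.mean((y - y_cv) ** 2))
      print(f"cross-validated RMSD: {rmsd:.3f}")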

  5. Preservice Mathematics Teachers' Perceptions of Using a Web 2.0 Technology as a Supportive Teaching-Learning Tool in a College Euclidean Geometry Course

    ERIC Educational Resources Information Center

    Hossain, Md. Mokter

    2012-01-01

    This mixed methods study examined preservice secondary mathematics teachers' perceptions of a blogging activity used as a supportive teaching-learning tool in a college Euclidean Geometry course. The effect of a 12-week blogging activity that was a standard component of a college Euclidean Geometry course offered for preservice secondary…

  6. 16 CFR Appendix to Part 460 - Exemptions

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... relationship between R-value and density or weight per square foot are exempted from the requirements in §§ 460.12(b)(2) and 460.13(c)(1) that they disclose minimum weight per square foot for R-values listed on... sheets of the maximum weight per square foot for each R-value required to be listed. 46 FR 22179 (1981...

  7. 16 CFR Appendix to Part 460 - Exemptions

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... relationship between R-value and density or weight per square foot are exempted from the requirements in §§ 460.12(b)(2) and 460.13(c)(1) that they disclose minimum weight per square foot for R-values listed on... sheets of the maximum weight per square foot for each R-value required to be listed. 46 FR 22179 (1981...

  8. 29 CFR 1917.121 - Spiral stairways.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 26.67 cm) in height; (3) Minimum loading capability shall be 100 pounds per square foot (4.79 kN/m.2), and... shall be a minimum of 1 1/4 inches (3.18 cm) in outside diameter; and (5) Vertical clearance shall be at...

  9. Multi-resolution analysis for ear recognition using wavelet features

    NASA Astrophysics Data System (ADS)

    Shoaib, M.; Basit, A.; Faye, I.

    2016-11-01

    Security is very important, and in order to avoid any physical contact, identification of humans while they are moving is necessary. Ear biometrics is one of the methods by which a person can be identified using surveillance cameras. Various techniques have been proposed to improve ear-based recognition systems. In this work, a feature extraction method for human ear recognition based on wavelet transforms is proposed. The proposed features are the approximation coefficients and specific details of level two after applying various types of wavelet transforms. Different wavelet transforms are applied to find the most suitable wavelet. Minimum Euclidean distance is used as the matching criterion. Results achieved by the proposed method are promising and can be used in a real-time ear recognition system.
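    A minimal sketch of the described pipeline, assuming the PyWavelets package and a hypothetical 'db2' wavelet: level-2 approximation coefficients serve as the feature vector, and the gallery entry at minimum Euclidean distance is the match:

    ```python
    import numpy as np
    import pywt

    def wavelet_features(img, wavelet="db2"):
        """Level-2 approximation coefficients (cA2) as a feature vector."""
        coeffs = pywt.wavedec2(img, wavelet, level=2)
        return coeffs[0].ravel()

    def match(probe, gallery):
        """Index of the gallery image at minimum Euclidean distance."""
        f = wavelet_features(probe)
        dists = [np.linalg.norm(f - wavelet_features(g)) for g in gallery]
        return int(np.argmin(dists))

    rng = np.random.default_rng(0)
    gallery = [rng.random((64, 48)) for _ in range(5)]  # hypothetical ear images
    print(match(gallery[2], gallery))                   # -> 2 (distance zero)
    ```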

  10. Code Samples Used for Complexity and Control

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * Mathematica® Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  11. Low Density Parity Check Codes: Bandwidth Efficient Channel Coding

    NASA Technical Reports Server (NTRS)

    Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu

    2003-01-01

    Low Density Parity Check (LDPC) Codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures, which allow for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure, which results in power and size benefits. These codes also have a large minimum distance, as large as d_min = 65, giving them powerful error-correcting capabilities and very low bit-error-rate (BER) error floors. This paper will present the development of the LDPC flight encoder and decoder, its applications, and status.

  12. A root-mean-square approach for predicting fatigue crack growth under random loading

    NASA Technical Reports Server (NTRS)

    Hudson, C. M.

    1981-01-01

    A method for predicting fatigue crack growth under random loading which employs the concept of Barsom (1976) is presented. In accordance with this method, the loading history for each specimen is analyzed to determine the root-mean-square maximum and minimum stresses, and the predictions are made by assuming the tests have been conducted under constant-amplitude loading at the root-mean-square maximum and minimum levels. The procedure requires a simple computer program and a desk-top computer. For the eleven predictions made, the ratios of the predicted lives to the test lives ranged from 2.13 to 0.82, which is a good result, considering that the normal scatter in the fatigue-crack-growth rates may range from a factor of two to four under identical loading conditions.
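    The reduction of a random history to constant-amplitude equivalents is a root-mean-square per stress sequence; a sketch with hypothetical stress values (peak/valley extraction from the raw history is omitted):

    ```python
    import numpy as np

    def rms(values):
        """Root mean square of a sequence of stresses."""
        v = np.asarray(values, dtype=float)
        return np.sqrt(np.mean(v ** 2))

    peaks = [180.0, 210.0, 165.0, 195.0]   # maximum stresses per cycle, MPa
    valleys = [20.0, 35.0, 15.0, 25.0]     # minimum stresses per cycle, MPa

    s_max_rms = rms(peaks)     # use as constant-amplitude maximum stress
    s_min_rms = rms(valleys)   # use as constant-amplitude minimum stress
    print(s_max_rms, s_min_rms)
    ```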

  13. A Vector Approach to Euclidean Geometry: Inner Product Spaces, Euclidean Geometry and Trigonometry, Volume 2. Teacher's Edition.

    ERIC Educational Resources Information Center

    Vaughan, Herbert E.; Szabo, Steven

    This is the teacher's edition of a text for the second year of a two-year high school geometry course. The course bases plane and solid geometry and trigonometry on the fact that the translations of a Euclidean space constitute a vector space which has an inner product. Congruence is a geometric topic reserved for Volume 2. Volume 2 opens with an…

  14. Collector Size or Range Independence of SNR in Fixed-Focus Remote Raman Spectrometry.

    PubMed

    Hirschfeld, T

    1974-07-01

    When sensitivity allows, remote Raman spectrometers can be operated at a fixed focus with purely electronic (easily multiplexable) range gating. To keep the background small, the system etendue must be minimized. For a maximum range larger than the hyperfocal one, this is done by focusing the system at roughly twice the minimum range at which etendue matching is still required. Under these conditions the etendue varies as the fourth power of the collector diameter, causing the background shot noise to vary as its square. As the signal also varies with the same power, and background noise is usually limiting in this type of instrument, the SNR becomes independent of the collector size. Below this minimum etendue-matched range, the transmission at the limiting aperture grows with the square of the range, canceling the inverse-square loss of signal with range. The SNR is thus range independent below the minimum etendue-matched range and collector-size independent above it, with the location of the transition being determined by the system etendue and collector diameter. The range of validity of these outrageous statements is discussed.

  15. Anomalously soft non-Euclidean spring

    NASA Astrophysics Data System (ADS)

    Levin, Ido; Sharon, Eran

    In this work we study the mechanical properties of a frustrated elastic ribbon spring - the non-Euclidean minimal spring. This spring belongs to the family of non-Euclidean plates: it has no spontaneous curvature, but its lateral intrinsic geometry is described by a non-Euclidean reference metric. The reference metric of the minimal spring is hyperbolic, and can be embedded as a minimal surface. We argue that the existence of a continuous set of such isometric minimal surfaces with different extensions leads to a complete degeneracy of the bulk elastic energy of the minimal spring under elongation. This degeneracy is removed only by boundary layer effects. As a result, the mechanical properties of the minimal spring are unusual: the spring is ultra-soft with rigidity that depends on the thickness, t, as t^(7/2), and does not explicitly depend on the ribbon's width. These predictions are confirmed by a numerical study of a constrained spring. This work is the first to address the unusual mechanical properties of constrained non-Euclidean elastic objects. We also present a novel experimental system that is capable of constructing such objects, along with many other non-Euclidean plates.

  16. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    PubMed

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of the symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed by training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstructed residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows its notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously which is more speedy and efficient than traditional seizure detection methods.
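    The kernel at the heart of the method is compact: covariance descriptors of EEG epochs are compared through a Gaussian kernel on the log-Euclidean distance. A minimal sketch assuming scipy; the regularization constant and epoch shape are assumptions:

    ```python
    import numpy as np
    from scipy.linalg import logm

    def covariance_descriptor(epoch):
        """SPD covariance matrix of a (channels x samples) EEG epoch."""
        return np.cov(epoch) + 1e-6 * np.eye(epoch.shape[0])  # keep SPD

    def log_euclidean_gaussian_kernel(X, Y, sigma=1.0):
        """k(X, Y) = exp(-||log X - log Y||_F^2 / (2 sigma^2))."""
        d = np.linalg.norm(np.real(logm(X)) - np.real(logm(Y)), ord="fro")
        return np.exp(-d ** 2 / (2.0 * sigma ** 2))

    rng = np.random.default_rng(0)
    X = covariance_descriptor(rng.normal(size=(8, 256)))  # 8-channel epoch
    Y = covariance_descriptor(rng.normal(size=(8, 256)))
    print(log_euclidean_gaussian_kernel(X, Y))
    ```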

  17. Current pulse amplifier transmits detector signals with minimum distortion and attenuation

    NASA Technical Reports Server (NTRS)

    Bush, N. E.

    1967-01-01

    Amplifier translates the square pulses generated by a boron-trifluoride neutron sensitive detector located adjacent to a nuclear reactor to slower, long exponential decay pulses. These pulses are transmitted over long coaxial cables with minimum distortion and loss of frequency.

  18. 42 CFR 84.148 - Type C supplied-air respirator, continuous flow class; minimum requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the hose connection shall not exceed 863 kN/m.2 (125 pounds per square inch gage). (c) Where the pressure at any point in the supply system exceeds 863 kN/m.2 (125 pounds per square inch gage), the... connection from exceeding 863 kN/m.2 (125 pounds per square inch gage) under any conditions. ...

  19. 42 CFR 84.148 - Type C supplied-air respirator, continuous flow class; minimum requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the hose connection shall not exceed 863 kN/m.2 (125 pounds per square inch gage). (c) Where the pressure at any point in the supply system exceeds 863 kN/m.2 (125 pounds per square inch gage), the... connection from exceeding 863 kN/m.2 (125 pounds per square inch gage) under any conditions. ...

  20. 42 CFR 84.149 - Type C supplied-air respirator, demand and pressure demand class; minimum requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... per square inch) with from 6 to 76 m. (15 to 250 feet) of air-supply hose. (c) The specified air... pounds per square inch gage). (d)(1) Where the pressure in the air-supply system exceeds 863 kN/m.2 (125 pounds per square inch gage), the respirator shall be equipped with a pressure-release mechanism that...

  1. 42 CFR 84.148 - Type C supplied-air respirator, continuous flow class; minimum requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the hose connection shall not exceed 863 kN/m.2 (125 pounds per square inch gage). (c) Where the pressure at any point in the supply system exceeds 863 kN/m.2 (125 pounds per square inch gage), the... connection from exceeding 863 kN/m.2 (125 pounds per square inch gage) under any conditions. ...

  2. 42 CFR 84.149 - Type C supplied-air respirator, demand and pressure demand class; minimum requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... per square inch) with from 6 to 76 m. (15 to 250 feet) of air-supply hose. (c) The specified air... pounds per square inch gage). (d)(1) Where the pressure in the air-supply system exceeds 863 kN/m.2 (125 pounds per square inch gage), the respirator shall be equipped with a pressure-release mechanism that...

  3. 42 CFR 84.148 - Type C supplied-air respirator, continuous flow class; minimum requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the hose connection shall not exceed 863 kN/m.2 (125 pounds per square inch gage). (c) Where the pressure at any point in the supply system exceeds 863 kN/m.2 (125 pounds per square inch gage), the... connection from exceeding 863 kN/m.2 (125 pounds per square inch gage) under any conditions. ...

  4. 42 CFR 84.149 - Type C supplied-air respirator, demand and pressure demand class; minimum requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... per square inch) with from 6 to 76 m. (15 to 250 feet) of air-supply hose. (c) The specified air... pounds per square inch gage). (d)(1) Where the pressure in the air-supply system exceeds 863 kN/m.2 (125 pounds per square inch gage), the respirator shall be equipped with a pressure-release mechanism that...

  5. 42 CFR 84.149 - Type C supplied-air respirator, demand and pressure demand class; minimum requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... per square inch) with from 6 to 76 m. (15 to 250 feet) of air-supply hose. (c) The specified air... pounds per square inch gage). (d)(1) Where the pressure in the air-supply system exceeds 863 kN/m.2 (125 pounds per square inch gage), the respirator shall be equipped with a pressure-release mechanism that...

  6. 42 CFR 84.148 - Type C supplied-air respirator, continuous flow class; minimum requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... the hose connection shall not exceed 863 kN/m.2 (125 pounds per square inch gage). (c) Where the pressure at any point in the supply system exceeds 863 kN/m.2 (125 pounds per square inch gage), the... connection from exceeding 863 kN/m.2 (125 pounds per square inch gage) under any conditions. ...

  7. 42 CFR 84.149 - Type C supplied-air respirator, demand and pressure demand class; minimum requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... per square inch) with from 6 to 76 m. (15 to 250 feet) of air-supply hose. (c) The specified air... pounds per square inch gage). (d)(1) Where the pressure in the air-supply system exceeds 863 kN/m.2 (125 pounds per square inch gage), the respirator shall be equipped with a pressure-release mechanism that...

  8. Improving Bandwidth Utilization in a 1 Tbps Airborne MIMO Communications Downlink

    DTIC Science & Technology

    2013-03-21

    number of transmitters). C = log2 | I_{N_r} + (E_s / (N_t N_0)) H H^H | (2.32). In the signal-to-noise ratio, E_s represents the total energy from all transmitters... The channel matrix pseudo-inverse is computed by (2.36) [6, p. 970]: H^+ = (H^H H)^{-1} H^H. (2.36) 2.6.5 Minimum Mean-Squared Error Detection... H† = (H^H H + (N_t / SNR) I)^{-1} H^H. (3.14) Equation (3.14) was defined in [2] as an implementation of an MMSE equalizer, and was applied to the received...
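    Equation (3.14) translates directly into a few lines of linear algebra; a minimal numpy sketch (channel, symbols, and noise level are hypothetical):

    ```python
    import numpy as np

    def mmse_equalizer(H, snr):
        """MMSE equalizer of Eq. (3.14): (H^H H + (Nt/SNR) I)^-1 H^H."""
        nt = H.shape[1]
        HH = H.conj().T
        return np.linalg.inv(HH @ H + (nt / snr) * np.eye(nt)) @ HH

    rng = np.random.default_rng(1)
    H = (rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))) / np.sqrt(2)
    x = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2)             # transmitted symbols
    y = H @ x + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))
    print(mmse_equalizer(H, snr=100.0) @ y)                  # approximately x
    ```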

  9. 78 FR 16661 - Determination Under the Textile and Apparel Commercial Availability Provision of the Dominican...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... fabric). Overall weight: 287-351 grams per square meter. Overall width: Selvedge: 150.4-154.4 cm; Minimum... per cm x 43-45 picks per cm Weight: 121.5-148.5 grams per square meter Width: Selvedge: 150.4-154.4 cm... yarns: filament Knitting gauge: 27-29 Weight: 140.4-171.6 grams per square meter Width: Selvedge: 150.4...

  10. MINIMUM AREAS FOR ELEMENTARY SCHOOL BUILDING FACILITIES.

    ERIC Educational Resources Information Center

    Pennsylvania State Dept. of Public Instruction, Harrisburg.

    MINIMUM AREA SPACE REQUIREMENTS IN SQUARE FOOTAGE FOR ELEMENTARY SCHOOL BUILDING FACILITIES ARE PRESENTED, INCLUDING FACILITIES FOR INSTRUCTIONAL USE, GENERAL USE, AND SERVICE USE. LIBRARY, CAFETERIA, KITCHEN, STORAGE, AND MULTIPURPOSE ROOMS SHOULD BE SIZED FOR THE PROJECTED ENROLLMENT OF THE BUILDING IN ACCORDANCE WITH THE PROJECTION UNDER THE…

  11. Role of the Euclidean signature in lattice calculations of quasidistributions and other nonlocal matrix elements

    NASA Astrophysics Data System (ADS)

    Briceño, Raúl A.; Hansen, Maxwell T.; Monahan, Christopher J.

    2017-07-01

    Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Finally, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.

  12. Role of the Euclidean signature in lattice calculations of quasidistributions and other nonlocal matrix elements

    DOE PAGES

    Briceno, Raul A.; Hansen, Maxwell T.; Monahan, Christopher J.

    2017-07-11

    Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Lastly, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.

  13. Anomalously Soft Non-Euclidean Springs

    NASA Astrophysics Data System (ADS)

    Levin, Ido; Sharon, Eran

    2016-01-01

    In this work we study the mechanical properties of a frustrated elastic ribbon spring—the non-Euclidean minimal spring. This spring belongs to the family of non-Euclidean plates: it has no spontaneous curvature, but its lateral intrinsic geometry is described by a non-Euclidean reference metric. The reference metric of the minimal spring is hyperbolic, and can be embedded as a minimal surface. We argue that the existence of a continuous set of such isometric minimal surfaces with different extensions leads to a complete degeneracy of the bulk elastic energy of the minimal spring under elongation. This degeneracy is removed only by boundary layer effects. As a result, the mechanical properties of the minimal spring are unusual: the spring is ultrasoft with a rigidity that depends on the thickness t as t^(7/2) and does not explicitly depend on the ribbon's width. Moreover, we show that as the ribbon is widened, the rigidity may even decrease. These predictions are confirmed by a numerical study of a constrained spring. This work is the first to address the unusual mechanical properties of constrained non-Euclidean elastic objects.

  14. Euclideanization of Maxwell-Chern-Simons theory

    NASA Astrophysics Data System (ADS)

    Bowman, Daniel Alan

    We quantize the theory of electromagnetism in 2 + 1-spacetime dimensions with the addition of the topological Chern-Simons term using an indefinite metric formalism. In the process, we also quantize the Proca and pure Maxwell theories, which are shown to be related to the Maxwell-Chern-Simons theory. Next, we Euclideanize these three theories, obtaining path space formulae and investigating Osterwalder-Schrader positivity in each case. Finally, we obtain a characterization of those Euclidean states that correspond to physical states in the relativistic theories.

  15. Optimal control of multiplicative control systems arising from cancer therapy

    NASA Technical Reports Server (NTRS)

    Bahrami, K.; Kim, M.

    1975-01-01

    This study deals with ways of curtailing the rapid growth of cancer cell populations. A performance functional that measures the size of the population at the terminal time as well as the control effort is devised. Using the discrete maximum principle, the Hamiltonian for this problem is determined and the conditions for optimal solutions are developed. The optimal strategy is shown to be a bang-bang control. It is shown that the optimal control for this problem must be on the vertices of an N-dimensional cube contained in the N-dimensional Euclidean space. An algorithm for obtaining a local minimum of the performance function in an orderly fashion is developed. Application of the algorithm to the design of antitumor drug and X-irradiation schedules is discussed.

  16. Effective diagnosis of Alzheimer’s disease by means of large margin-based methodology

    PubMed Central

    2012-01-01

    Background Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide the clinicians in the Alzheimer's Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) Systems. Methods A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the two latter also analysed with a LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. Results Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) linear transformation of the PLS or PCA reduced data, ii) feature reduction technique, and iii) classifier (with Euclidean, Mahalanobis or Energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when the NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. Conclusions All the proposed methods turned out to be a valid solution for the presented problem. One of the advances is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also (in combination with NMSE and PLS) makes this rate variation more stable. In addition, their generalization ability is another advance, since several experiments were performed on two image modalities (SPECT and PET). PMID:22849649

  17. Effective diagnosis of Alzheimer's disease by means of large margin-based methodology.

    PubMed

    Chaves, Rosa; Ramírez, Javier; Górriz, Juan M; Illán, Ignacio A; Gómez-Río, Manuel; Carnero, Cristobal

    2012-07-31

    Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide the clinicians in the Alzheimer's Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) Systems. A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the two latter also analysed with a LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) linear transformation of the PLS or PCA reduced data, ii) feature reduction technique, and iii) classifier (with Euclidean, Mahalanobis or Energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when the NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. All the proposed methods turned out to be a valid solution for the presented problem. One of the advances is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also (in combination with NMSE and PLS) makes this rate variation more stable. In addition, their generalization ability is another advance, since several experiments were performed on two image modalities (SPECT and PET).

  18. A New Computational Method to Fit the Weighted Euclidean Distance Model.

    ERIC Educational Resources Information Center

    De Leeuw, Jan; Pruzansky, Sandra

    1978-01-01

    A computational method for weighted euclidean distance scaling (a method of multidimensional scaling) which combines aspects of an "analytic" solution with an approach using loss functions is presented. (Author/JKS)

  19. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincaré's model for a non-Euclidean geometry is defined and analyzed. (LS)

  20. Euclidean supergravity

    NASA Astrophysics Data System (ADS)

    de Wit, Bernard; Reys, Valentin

    2017-12-01

    Supergravity with eight supercharges in a four-dimensional Euclidean space is constructed at the full non-linear level by performing an off-shell time-like reduction of five-dimensional supergravity. The resulting four-dimensional theory is realized off-shell with the Weyl, vector and tensor supermultiplets and a corresponding multiplet calculus. Hypermultiplets are included as well, but they are themselves only realized with on-shell supersymmetry. We also briefly discuss the non-linear supermultiplet. The off-shell reduction leads to a full understanding of the Euclidean theory. A complete multiplet calculus is presented along the lines of the Minkowskian theory. Unlike in Minkowski space, chiral and anti-chiral multiplets are real and supersymmetric actions are generally unbounded from below. Precisely as in the Minkowski case, where one has different formulations of Poincaré supergravity upon introducing different compensating supermultiplets, one can also obtain different versions of Euclidean supergravity.

  1. Flexible intuitions of Euclidean geometry in an Amazonian indigene group

    PubMed Central

    Izard, Véronique; Pica, Pierre; Spelke, Elizabeth S.; Dehaene, Stanislas

    2011-01-01

    Kant argued that Euclidean geometry is synthesized on the basis of an a priori intuition of space. This proposal inspired much behavioral research probing whether spatial navigation in humans and animals conforms to the predictions of Euclidean geometry. However, Euclidean geometry also includes concepts that transcend the perceptible, such as objects that are infinitely small or infinitely large, or statements of necessity and impossibility. We tested the hypothesis that certain aspects of nonperceptible Euclidean geometry map onto intuitions of space that are present in all humans, even in the absence of formal mathematical education. Our tests probed intuitions of points, lines, and surfaces in participants from an indigene group in the Amazon, the Mundurucu, as well as adults and age-matched control children from the United States and France and younger US children without education in geometry. The responses of Mundurucu adults and children converged with those of mathematically educated adults and children and revealed an intuitive understanding of essential properties of Euclidean geometry. For instance, on a surface described to them as perfectly planar, the Mundurucu's estimations of the internal angles of triangles added up to ∼180 degrees, and when asked explicitly, they stated that there exists one single parallel line to any given line through a given point. These intuitions were also partially in place in the group of younger US participants. We conclude that, during childhood, humans develop geometrical intuitions that spontaneously accord with the principles of Euclidean geometry, even in the absence of training in mathematics. PMID:21606377

  2. Optimization and Prediction of Angular Distortion and Weldment Characteristics of TIG Square Butt Joints

    NASA Astrophysics Data System (ADS)

    Narang, H. K.; Mahapatra, M. M.; Jha, P. K.; Biswas, P.

    2014-05-01

    Autogenous arc welds with minimum upper weld bead depression and lower weld bead bulging are desired, as such welds do not require a second welding pass for filling up the upper bead depressions (UBDs) and are characterized by minimum angular distortion. The present paper describes optimization and prediction of angular distortion and weldment characteristics such as upper weld bead depression and lower weld bead bulging of TIG-welded structural steel square butt joints. A full factorial design of experiments was utilized for selecting the combinations of welding process parameters to produce the square butts. A mathematical model was developed to establish the relationship between TIG welding process parameters and responses such as upper bead width, lower bead width, UBD, lower bead height (bulging), weld cross-sectional area, and angular distortion. The optimal welding condition to minimize UBD and lower bead bulging of the TIG butt joints was identified.

  3. Correction of Motion Artifacts From Shuttle Mode Computed Tomography Acquisitions for Body Perfusion Imaging Applications.

    PubMed

    Ghosh, Payel; Chandler, Adam G; Altinmakas, Emre; Rong, John; Ng, Chaan S

    2016-01-01

    The aim of this study was to investigate the feasibility of shuttle-mode computed tomography (CT) technology for body perfusion applications by quantitatively assessing and correcting motion artifacts. Noncontrast shuttle-mode CT scans (10 phases, 2 nonoverlapping bed locations) were acquired from 4 patients on a GE 750HD CT scanner. Shuttling effects were quantified using Euclidean distances (between-phase and between-bed locations) of corresponding fiducial points on the shuttle and reference phase scans (prior to shuttle mode). Motion correction with nonrigid registration was evaluated using sum-of-squares differences and distances between centers of segmented volumes of interest on shuttle and references images. Fiducial point analysis showed an average shuttling motion of 0.85 ± 1.05 mm (between-bed) and 1.18 ± 1.46 mm (between-phase), respectively. The volume-of-interest analysis of the nonrigid registration results showed improved sum-of-squares differences from 2950 to 597, between-bed distance from 1.64 to 1.20 mm, and between-phase distance from 2.64 to 1.33 mm, respectively, averaged over all cases. Shuttling effects introduced during shuttle-mode CT acquisitions can be computationally corrected for body perfusion applications.

  4. Clifford coherent state transforms on spheres

    NASA Astrophysics Data System (ADS)

    Dang, Pei; Mourão, José; Nunes, João P.; Qian, Tao

    2018-01-01

    We introduce a one-parameter family of transforms, U_t^{(m)}, t > 0, from the Hilbert space of Clifford-algebra-valued square-integrable functions on the m-dimensional sphere, L^2(S^m, dσ_m) ⊗ C_{m+1}, to the Hilbert spaces, ML^2(R^{m+1} \ {0}, dμ_t), of solutions of the Euclidean Dirac equation on R^{m+1} \ {0} which are square integrable with respect to appropriate measures, dμ_t. We prove that these transforms are unitary isomorphisms of the Hilbert spaces and are extensions of the Segal-Bargmann coherent state transform, U^{(1)}: L^2(S^1, dσ_1) → HL^2(C \ {0}, dμ), to higher-dimensional spheres in the context of Clifford analysis. In Clifford analysis it is natural to replace the analytic continuation from S^m to S_C^m, as in (Hall, 1994; Stenzel, 1999; Hall and Mitchell, 2002), by the Cauchy-Kowalewski extension from S^m to R^{m+1} \ {0}. One then obtains a unitary isomorphism from an L^2-Hilbert space to a Hilbert space of solutions of the Dirac equation, that is, to a Hilbert space of monogenic functions.

  5. 30 CFR 75.1107-7 - Water spray devices; capacity; water supply; minimum requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Water spray devices; capacity; water supply... Water spray devices; capacity; water supply; minimum requirements. (a) Where water spray devices are... square foot over the top surface area of the equipment and the supply of water shall be adequate to...

  6. Euclidean black hole vortices

    NASA Technical Reports Server (NTRS)

    Dowker, Fay; Gregory, Ruth; Traschen, Jennie

    1991-01-01

    We argue the existence of solutions of the Euclidean Einstein equations that correspond to a vortex sitting at the horizon of a black hole. We find the asymptotic behaviors, at the horizon and at infinity, of vortex solutions for the gauge and scalar fields in an abelian Higgs model on a Euclidean Schwarzschild background and interpolate between them by integrating the equations numerically. Calculating the backreaction shows that the effect of the vortex is to cut a slice out of the Schwarzschild geometry. Consequences of these solutions for black hole thermodynamics are discussed.

  7. Authenticating concealed private data while maintaining concealment

    DOEpatents

    Thomas, Edward V [Albuquerque, NM]; Draelos, Timothy J [Albuquerque, NM]

    2007-06-26

    A method of and system for authenticating concealed and statistically varying multi-dimensional data comprising: acquiring an initial measurement of an item, wherein the initial measurement is subject to measurement error; applying a transformation to the initial measurement to generate reference template data; acquiring a subsequent measurement of an item, wherein the subsequent measurement is subject to measurement error; applying the transformation to the subsequent measurement; and calculating a Euclidean distance metric between the transformed measurements; wherein the calculated Euclidean distance metric is identical to a Euclidean distance metric between the measurement prior to transformation.
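    The claimed property, that distances computed on the concealed (transformed) data equal distances on the raw data, holds for any isometry of Euclidean space. A sketch using a random orthogonal matrix as one standard distance-preserving choice (the patent does not specify this particular transformation):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal matrix

    a = rng.normal(size=8)   # reference measurement (with measurement error)
    b = rng.normal(size=8)   # subsequent measurement (with measurement error)

    d_plain = np.linalg.norm(a - b)
    d_concealed = np.linalg.norm(Q @ a - Q @ b)   # distance after concealment
    print(np.isclose(d_plain, d_concealed))       # True: metric is preserved
    ```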

  8. Multilevel assessment of fish species traits to evaluate habitat degradation in streams of the upper midwest

    USGS Publications Warehouse

    Goldstein, R.M.; Meador, M.R.

    2005-01-01

    We used species traits to examine the variation in fish assemblages for 21 streams in the Northern Lakes and Forests Ecoregion along a gradient of habitat disturbance. Fish species were classified based on five species trait-classes (trophic ecology, substrate preference, geomorphic preference, locomotion morphology, and reproductive strategy) and 29 categories within those classes. We used a habitat quality index to define a reference stream and then calculated Euclidean distances between the reference and each of the other sites for the five traits. Three levels of species trait analyses were conducted: (1) a composite measure (the sum of Euclidean distances across all five species traits), (2) Euclidean distances for the five individual species trait-classes, and (3) frequencies of occurrence of individual trait categories. The composite Euclidean distance was significantly correlated to the habitat index (r = -0.81; P = 0.001), as were the Euclidean distances for four of the five individual species traits (substrate preference: r = -0.70, P = 0.001; geomorphic preference: r = -0.69, P = 0.001; trophic ecology: r = -0.73, P = 0.001; and reproductive strategy: r = -0.64, P = 0.002). Although Euclidean distances for locomotion morphology were not significantly correlated to habitat index scores (r = -0.21; P = 0.368), analysis of variance and principal components analysis indicated that Euclidean distances for locomotion morphology contributed to significant variation in the fish assemblages among sites. Examination of trait categories indicated that low habitat index scores (degraded streams) were associated with changes in frequency of occurrence within the categories of all five of the species traits. Though the objectives and spatial scale of a study will dictate the level of species trait information required, our results suggest that species traits can provide critical information at multiple levels of data analysis. © Copyright by the American Fisheries Society 2005.

  9. Wavelet-based 3D reconstruction of microcalcification clusters from two mammographic views: new evidence that fractal tumors are malignant and Euclidean tumors are benign.

    PubMed

    Batchelder, Kendra A; Tanenbaum, Aaron B; Albert, Seth; Guimond, Lyne; Kestener, Pierre; Arneodo, Alain; Khalil, Andre

    2014-01-01

    The 2D Wavelet-Transform Modulus Maxima (WTMM) method was used to detect microcalcifications (MC) in human breast tissue seen in mammograms and to characterize the fractal geometry of benign and malignant MC clusters. This was done in the context of a preliminary analysis of a small dataset, via a novel way to partition the wavelet-transform space-scale skeleton. For the first time, the estimated 3D fractal structure of a breast lesion was inferred by pairing the information from two separate 2D projected mammographic views of the same breast, i.e. the cranial-caudal (CC) and mediolateral-oblique (MLO) views. As a novelty, we define the "CC-MLO fractal dimension plot", where a "fractal zone" and "Euclidean zones" (non-fractal) are defined. 118 images (59 cases, 25 malignant and 34 benign) obtained from a digital databank of mammograms with known radiologist diagnostics were analyzed to determine which cases would be plotted in the fractal zone and which cases would fall in the Euclidean zones. 92% of malignant breast lesions studied (23 out of 25 cases) were in the fractal zone while 88% of the benign lesions were in the Euclidean zones (30 out of 34 cases). Furthermore, a Bayesian statistical analysis shows that, with 95% credibility, the probability that fractal breast lesions are malignant is between 74% and 98%. Alternatively, with 95% credibility, the probability that Euclidean breast lesions are benign is between 76% and 96%. These results support the notion that the fractal structure of malignant tumors is more likely to be associated with an invasive behavior into the surrounding tissue compared to the less invasive, Euclidean structure of benign tumors. Finally, based on indirect 3D reconstructions from the 2D views, we conjecture that all breast tumors considered in this study, benign and malignant, fractal or Euclidean, restrict their growth to 2-dimensional manifolds within the breast tissue.

  10. Fuzzy Euclidean wormholes in de Sitter space

    NASA Astrophysics Data System (ADS)

    Chen, Pisin; Hu, Yao-Chieh; Yeom, Dong-han

    2017-07-01

    We investigate Euclidean wormholes in Einstein gravity with a massless scalar field in de Sitter space. Euclidean wormholes are possible due to the analytic continuation of the time as well as complexification of fields, where we need to impose the classicality after the Wick-rotation to the Lorentzian signatures. For some parameters, wormholes are preferred over Hawking-Moss instantons, and hence wormholes can be more fundamental than Hawking-Moss-type instantons. Euclidean wormholes can be interpreted in three ways: (1) classical big bounce, (2) either tunneling from a small to a large universe or a creation of a collapsing and an expanding universe from nothing, and (3) either a transition from a contracting to a bouncing phase or a creation of two expanding universes from nothing. These various interpretations shed some light on challenges of singularities. In addition, these will help to understand tensions between various kinds of quantum gravity theories.

  11. Fuzzy Euclidean wormholes in de Sitter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pisin; Hu, Yao-Chieh; Yeom, Dong-han, E-mail: pisinchen@phys.ntu.edu.tw, E-mail: r04244003@ntu.edu.tw, E-mail: innocent.yeom@gmail.com

    We investigate Euclidean wormholes in Einstein gravity with a massless scalar field in de Sitter space. Euclidean wormholes are possible due to the analytic continuation of the time as well as complexification of fields, where we need to impose the classicality after the Wick-rotation to the Lorentzian signatures. For some parameters, wormholes are preferred over Hawking-Moss instantons, and hence wormholes can be more fundamental than Hawking-Moss-type instantons. Euclidean wormholes can be interpreted in three ways: (1) classical big bounce, (2) either tunneling from a small to a large universe or a creation of a collapsing and an expanding universe from nothing, and (3) either a transition from a contracting to a bouncing phase or a creation of two expanding universes from nothing. These various interpretations shed some light on challenges of singularities. In addition, these will help to understand tensions between various kinds of quantum gravity theories.

  12. Fuzzy Euclidean wormholes in de Sitter space

    DOE PAGES

    Chen, Pisin; Hu, Yao-Chieh; Yeom, Dong-han

    2017-07-03

    Here, we investigate Euclidean wormholes in Einstein gravity with a massless scalar field in de Sitter space. Euclidean wormholes are possible due to the analytic continuation of the time as well as complexification of fields, where we need to impose the classicality after the Wick-rotation to the Lorentzian signatures. Furthermore, wormholes are preferred over Hawking-Moss instantons for some parameters, and hence wormholes can be more fundamental than Hawking-Moss-type instantons. Euclidean wormholes can be interpreted in three ways: (1) classical big bounce, (2) either tunneling from a small to a large universe or a creation of a collapsing and an expanding universe from nothing, and (3) either a transition from a contracting to a bouncing phase or a creation of two expanding universes from nothing. These various interpretations shed some light on challenges of singularities. In addition, these will help to understand tensions between various kinds of quantum gravity theories.

  13. Fuzzy Euclidean wormholes in de Sitter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pisin; Hu, Yao-Chieh; Yeom, Dong-han

    Here, we investigate Euclidean wormholes in Einstein gravity with a massless scalar field in de Sitter space. Euclidean wormholes are possible due to the analytic continuation of the time as well as complexification of fields, where we need to impose the classicality after the Wick-rotation to the Lorentzian signatures. Furthermore, wormholes are preferred over Hawking-Moss instantons for some parameters, and hence wormholes can be more fundamental than Hawking-Moss-type instantons. Euclidean wormholes can be interpreted in three ways: (1) classical big bounce, (2) either tunneling from a small to a large universe or a creation of a collapsing and an expanding universe from nothing, and (3) either a transition from a contracting to a bouncing phase or a creation of two expanding universes from nothing. These various interpretations shed some light on challenges of singularities. In addition, these will help to understand tensions between various kinds of quantum gravity theories.

  14. Contracted time and expanded space: The impact of circumnavigation on judgements of space and time.

    PubMed

    Brunec, Iva K; Javadi, Amir-Homayoun; Zisch, Fiona E L; Spiers, Hugo J

    2017-09-01

    The ability to estimate distance and time to spatial goals is fundamental for survival. In cases where a region of space must be navigated around to reach a location (circumnavigation), the distance along the path is greater than the straight-line Euclidean distance. To explore how such circumnavigation impacts on estimates of distance and time, we tested participants on their ability to estimate travel time and Euclidean distance to learned destinations in a virtual town. Estimates for approximately linear routes were compared with estimates for routes requiring circumnavigation. For all routes, travel times were significantly underestimated, and Euclidean distances overestimated. For routes requiring circumnavigation, travel time was further underestimated and the Euclidean distance further overestimated. Thus, circumnavigation appears to enhance existing biases in representations of travel time and distance. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  15. Topology Property and Dynamic Behavior of a Growing Spatial Network

    NASA Astrophysics Data System (ADS)

    Cao, Xian-Bin; Du, Wen-Bo; Hu, Mao-Bin; Rong, Zhi-Hai; Sun, Peng; Chen, Cai-Long

    In this paper, we propose a growing spatial network (GSN) model and investigate its topology properties and dynamical behaviors. The model is generated by adding one node i with m links into a square lattice at each time step, and the new node i is connected to the existing nodes with probabilities proportional to k_j^α / d_ij^2, where k_j is the degree of node j, α is a tunable parameter, and d_ij is the Euclidean distance between i and j. It is found that both the degree heterogeneity and the clustering coefficient monotonically increase as α increases, while the average shortest path length monotonically decreases. Moreover, the evolutionary game dynamics and network traffic dynamics are investigated. Simulation results show that the value of α can also greatly influence the dynamic behaviors.
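    The attachment rule is easy to state in code; a sketch with toy coordinates and degrees (the underlying square-lattice construction is omitted):

    ```python
    import numpy as np

    def attach_probabilities(new_xy, node_xy, degrees, alpha):
        """P(connect new node i to j) proportional to k_j^alpha / d_ij^2."""
        d2 = np.sum((node_xy - new_xy) ** 2, axis=1)   # squared distances d_ij^2
        w = degrees.astype(float) ** alpha / d2
        return w / w.sum()

    rng = np.random.default_rng(3)
    node_xy = rng.uniform(0.0, 10.0, size=(5, 2))  # existing node positions
    degrees = rng.integers(1, 6, size=5)           # existing node degrees
    print(attach_probabilities(np.array([5.0, 5.0]), node_xy, degrees, alpha=1.5))
    ```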

  16. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    PubMed

    Wang, Rong

    2015-01-01

    In real-world applications, the image of a face varies with illumination, facial expression, and pose. It seems that more training samples are able to reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We obtained the mirror faces generated from the original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method achieves high classification accuracy.
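    A minimal sketch of the idea, assuming a ridge-regularized least-squares formulation of MSEC (the paper's exact formulation may differ); mirroring each face doubles the training set:

    ```python
    import numpy as np

    def mirror(img):
        """Horizontally mirrored face as a virtual training sample."""
        return img[:, ::-1]

    def train_msec(samples, labels, n_classes, lam=1e-3):
        """Minimum squared error classifier: ridge solution of X W ~= Y."""
        X = np.stack([s.ravel() for s in samples])
        Y = np.eye(n_classes)[labels]                     # one-hot targets
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

    def predict(W, img):
        return int(np.argmax(img.ravel() @ W))

    rng = np.random.default_rng(0)
    faces = [rng.random((8, 8)) for _ in range(4)]        # hypothetical crops
    labels = [0, 0, 1, 1]
    aug = faces + [mirror(f) for f in faces]              # add mirror faces
    W = train_msec(aug, labels + labels, n_classes=2)
    print(predict(W, faces[3]))                           # expected: 1
    ```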

  17. Combined-probability space and certainty or uncertainty relations for a finite-level quantum system

    NASA Astrophysics Data System (ADS)

    Sehrawat, Arun

    2017-08-01

    The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d -level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.

  18. Exact and Approximate Stability of Solutions to Traveling Salesman Problems.

    PubMed

    Niendorf, Moritz; Girard, Anouck R

    2018-02-01

    This paper presents the stability analysis of an optimal tour for the symmetric traveling salesman problem (TSP) by obtaining stability regions. The stability region of an optimal tour is the set of all cost changes for which that solution remains optimal, and can be understood as the margin of optimality for a solution with respect to perturbations in the problem data. It is known that it is not possible to test in polynomial time whether an optimal tour remains optimal after the cost of an arbitrary set of edges changes. Therefore, this paper develops tractable methods to obtain under- and over-approximations of stability regions based on neighborhoods and relaxations. The application of the results to the two-neighborhood and the minimum 1-tree (M1T) relaxation is discussed in detail. For Euclidean TSPs, stability regions with respect to vertex location perturbations and the notions of safe radii and location criticalities are introduced. Benefits of this paper include insight into robustness properties of tours, minimum spanning trees, M1Ts, and fast methods to evaluate optimality after perturbations occur. Numerical examples are given to demonstrate the methods and the achievable approximation quality.

  19. A Simulation-Based Comparison of Several Stochastic Linear Regression Methods in the Presence of Outliers.

    ERIC Educational Resources Information Center

    Rule, David L.

    Several regression methods were examined within the framework of weighted structural regression (WSR), comparing their regression weight stability and score estimation accuracy in the presence of outlier contamination. The methods compared are: (1) ordinary least squares; (2) WSR ridge regression; (3) minimum risk regression; (4) minimum risk 2;…

  20. DIF Detection Using Multiple-Group Categorical CFA with Minimum Free Baseline Approach

    ERIC Educational Resources Information Center

    Chang, Yu-Wei; Huang, Wei-Kang; Tsai, Rung-Ching

    2015-01-01

    The aim of this study is to assess the efficiency of using the multiple-group categorical confirmatory factor analysis (MCCFA) and the robust chi-square difference test in differential item functioning (DIF) detection for polytomous items under the minimum free baseline strategy. While testing for DIF items, despite the strong assumption that all…

  1. 30 CFR 75.1107-4 - Automatic fire sensors and manual actuators; installation; minimum requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Automatic fire sensors and manual actuators... § 75.1107-4 Automatic fire sensors and manual actuators; installation; minimum requirements. (a)(1... sensors or equivalent shall be installed for each 50 square feet of top surface area, or fraction thereof...

  2. The Effective Dynamics of the Volume Preserving Mean Curvature Flow

    NASA Astrophysics Data System (ADS)

    Chenn, Ilias; Fournodavlos, G.; Sigal, I. M.

    2018-04-01

    We consider the dynamics of small closed submanifolds ('bubbles') under the volume preserving mean curvature flow. We construct a map from (n+1)-dimensional Euclidean space into a given (n+1)-dimensional Riemannian manifold which characterizes the existence, stability and dynamics of constant mean curvature submanifolds. This is done in terms of a reduced area function on the Euclidean space, which is given constructively and can be computed perturbatively. This allows us to derive adiabatic and effective dynamics of the bubbles. The results can be mapped by rescaling to the dynamics of fixed size bubbles in almost Euclidean Riemannian manifolds.

  3. How Are Mate Preferences Linked with Actual Mate Selection? Tests of Mate Preference Integration Algorithms Using Computer Simulations and Actual Mating Couples

    PubMed Central

    Conroy-Beam, Daniel; Buss, David M.

    2016-01-01

    Prior mate preference research has focused on the content of mate preferences. Yet in real life, people must select mates among potentials who vary along myriad dimensions. How do people incorporate information on many different mate preferences in order to choose which partner to pursue? Here, in Study 1, we compare seven candidate algorithms for integrating multiple mate preferences in a competitive agent-based model of human mate choice evolution. This model shows that a Euclidean algorithm is the most evolvable solution to the problem of selecting fitness-beneficial mates. Next, across three studies of actual couples (Study 2: n = 214; Study 3: n = 259; Study 4: n = 294) we apply the Euclidean algorithm toward predicting mate preference fulfillment overall and preference fulfillment as a function of mate value. Consistent with the hypothesis that mate preferences are integrated according to a Euclidean algorithm, we find that actual mates lie close in multidimensional preference space to the preferences of their partners. Moreover, this Euclidean preference fulfillment is greater for people who are higher in mate value, highlighting theoretically-predictable individual differences in who gets what they want. These new Euclidean tools have important implications for understanding real-world dynamics of mate selection. PMID:27276030

  4. How Are Mate Preferences Linked with Actual Mate Selection? Tests of Mate Preference Integration Algorithms Using Computer Simulations and Actual Mating Couples.

    PubMed

    Conroy-Beam, Daniel; Buss, David M

    2016-01-01

    Prior mate preference research has focused on the content of mate preferences. Yet in real life, people must select mates among potentials who vary along myriad dimensions. How do people incorporate information on many different mate preferences in order to choose which partner to pursue? Here, in Study 1, we compare seven candidate algorithms for integrating multiple mate preferences in a competitive agent-based model of human mate choice evolution. This model shows that a Euclidean algorithm is the most evolvable solution to the problem of selecting fitness-beneficial mates. Next, across three studies of actual couples (Study 2: n = 214; Study 3: n = 259; Study 4: n = 294) we apply the Euclidean algorithm toward predicting mate preference fulfillment overall and preference fulfillment as a function of mate value. Consistent with the hypothesis that mate preferences are integrated according to a Euclidean algorithm, we find that actual mates lie close in multidimensional preference space to the preferences of their partners. Moreover, this Euclidean preference fulfillment is greater for people who are higher in mate value, highlighting theoretically-predictable individual differences in who gets what they want. These new Euclidean tools have important implications for understanding real-world dynamics of mate selection.
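
    A minimal sketch of the Euclidean integration idea above, assuming preferences and partner traits are rated on a common numeric scale; the function name and the four-trait example values are hypothetical, not the authors' materials:

      import numpy as np

      def preference_fulfillment_distance(ideal, partner):
          """Euclidean distance between an ideal-partner vector and an actual
          partner's trait vector; a smaller distance means the partner lies
          closer to the stated preferences in multidimensional space."""
          return np.linalg.norm(np.asarray(ideal) - np.asarray(partner))

      # Hypothetical four-trait example (ratings on a 1-7 scale)
      print(preference_fulfillment_distance([6.5, 5.0, 4.5, 6.0],
                                            [6.0, 4.5, 5.0, 5.5]))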

  5. Orthogonal Array Testing for Transmit Precoding based Codebooks in Space Shift Keying Systems

    NASA Astrophysics Data System (ADS)

    Al-Ansi, Mohammed; Alwee Aljunid, Syed; Sourour, Essam; Mat Safar, Anuar; Rashidi, C. B. M.

    2018-03-01

    In Space Shift Keying (SSK) systems, transmit-precoding-based codebook approaches have been proposed to improve performance in limited-feedback channels. The receiver performs an exhaustive search in a predefined Full-Combination (FC) codebook to select the optimal codeword that maximizes the Minimum Euclidean Distance (MED) between the received constellations. This research aims to reduce the codebook size with the purpose of minimizing the selection time and the number of feedback bits. Therefore, we propose to construct the codebooks based on Orthogonal Array Testing (OAT) methods due to their powerful inherent properties. These methods make it possible to acquire a short codebook whose codewords are sufficient to cover almost all the possible effects included in the FC codebook. Numerical results show the effectiveness of the proposed OAT codebooks in terms of system performance and complexity.
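
    A hedged sketch of the exhaustive MED search that the OAT codebooks shorten; the codebook here is an arbitrary list of precoding matrices, and all names and shapes are illustrative assumptions:

      import numpy as np
      from itertools import combinations

      def med(H, P):
          """Minimum Euclidean distance among received SSK constellation points;
          the columns of H @ P are the points produced by activating each antenna."""
          pts = (H @ P).T
          return min(np.linalg.norm(a - b) for a, b in combinations(pts, 2))

      def select_codeword(H, codebook):
          """Exhaustive search: return the precoder with the largest MED."""
          return max(codebook, key=lambda P: med(H, P))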

  6. Door Security using Face Detection and Raspberry Pi

    NASA Astrophysics Data System (ADS)

    Bhutra, Venkatesh; Kumar, Harshav; Jangid, Santosh; Solanki, L.

    2018-03-01

    With the world moving towards advanced technologies, security forms a crucial part of daily life. Among the many techniques used for this purpose, face recognition stands as an effective means of authentication and security. This paper deals with the use of principal component analysis (PCA) for security: PCA is a statistical approach used to simplify a data set, and the minimum Euclidean distance found with the PCA technique is used to recognize the face. A Raspberry Pi, a low-cost ARM-based computer on a small circuit board, controls the servo motor and other sensors. The servo motor is in turn attached to the door of the home, which opens when the face is recognized. The proposed work has been done using a self-made training database of students from B.K. Birla Institute of Engineering and Technology, Pilani, Rajasthan, India.
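
    As a rough illustration of the recognition step described above (PCA projection followed by a minimum-Euclidean-distance match), here is a sketch in Python/NumPy. All names and shapes are assumptions for illustration; the paper's own pipeline and hardware integration are not reproduced:

      import numpy as np

      def pca_fit(X, k):
          """X: (n_samples, n_pixels) training faces. Returns the mean face
          and the top-k principal axes ("eigenfaces")."""
          mu = X.mean(axis=0)
          _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
          return mu, Vt[:k]

      def recognize(x, X_train, labels, mu, W):
          """Project a probe face into PCA space and return the label of the
          nearest training face by Euclidean distance."""
          z = (x - mu) @ W.T
          Z = (X_train - mu) @ W.T
          d = np.linalg.norm(Z - z, axis=1)
          return labels[int(np.argmin(d))]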

  7. Optimization of numerical weather/wave prediction models based on information geometry and computational techniques

    NASA Astrophysics Data System (ADS)

    Galanis, George; Famelis, Ioannis; Kalogeri, Christina

    2014-10-01

    In recent years, a highly demanding framework has been set for environmental sciences and applied mathematics by issues of interest not only to the scientific community but to society in general: global warming, renewable energy resources, and natural hazards can be listed among them. The research community today follows two main directions to address these problems: the utilization of environmental observations obtained from in situ or remote-sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, to reach credible local forecasts, the two previous data sources are combined by algorithms that are essentially based on optimization processes. Conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least-squares methods based on classical Euclidean geometry tools. In the present work, new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, Information Geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities such as Riemannian metrics, distances, curvature and affine connections are utilized to define the optimum distributions fitting the environmental data at specific areas and to form differential systems that describe the optimization procedures. The proposed methodology is illustrated by an application to wind speed forecasts on the island of Kefalonia, Greece.

  8. A Stochastic Total Least Squares Solution of Adaptive Filtering Problem

    PubMed Central

    Ahmad, Noor Atinah

    2014-01-01

    An efficient and computationally linear algorithm is derived for total least squares solution of adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of adaptive TLS problem by minimizing instantaneous value of weighted cost function. Convergence analysis of the algorithm is given to show the global convergence of the proposed algorithm, provided that the stepsize parameter is appropriately chosen. The TLMS algorithm is computationally simpler than the other TLS algorithms and demonstrates a better performance as compared with the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs. PMID:24688412
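
    For contrast with the proposed TLMS, the following is a hedged sketch of the standard LMS baseline it is compared against; the TLMS update itself, which accounts for noise on both input and output, is not reproduced here, and the signal names are illustrative assumptions:

      import numpy as np

      def lms(x, d, order=8, mu=0.01):
          """Standard LMS: adapt w so that the filter output tracks the desired d."""
          w = np.zeros(order)
          y = np.zeros(len(d))
          for n in range(order - 1, len(d)):
              u = x[n - order + 1:n + 1][::-1]   # newest input sample first
              y[n] = w @ u
              e = d[n] - y[n]                    # instantaneous a priori error
              w += mu * e * u                    # stochastic-gradient step on e**2
          return w, y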

  9. The depth estimation of 3D face from single 2D picture based on manifold learning constraints

    NASA Astrophysics Data System (ADS)

    Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia

    2018-04-01

    The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; selecting the optimal subset for reconstructing the 3D face depth information greatly reduces the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the corresponding feature-point information (before dimension reduction) of each cluster center is calculated, and the category of the image is judged according to the minimum Euclidean distance. Finally, the method of Kong D. is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, achieving the final depth estimation at greatly reduced computational complexity. Compared with the traditional traversal search estimation method, although the error rate of the proposed method is reduced by 0.49, the number of searches decreases with the change of the category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
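
    The subset-selection step described above reduces to a nearest-centroid test. A minimal sketch under assumed shapes (83 two-dimensional landmarks flattened into a 166-vector; the centroids are hypothetical stand-ins for the K-means cluster centers):

      import numpy as np

      def nearest_cluster(probe, centroids):
          """Return the index of the cluster whose centroid is closest to the
          probe's feature vector by Euclidean distance."""
          d = np.linalg.norm(centroids - probe, axis=1)
          return int(np.argmin(d))

      rng = np.random.default_rng(0)
      centroids = rng.random((5, 166))   # 5 hypothetical cluster centers
      probe = rng.random(166)            # flattened 83 x 2 landmark vector
      print(nearest_cluster(probe, centroids))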

  10. A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1987-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
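
    For illustration, here is a sketch of the core extended-Euclidean loop (Sugiyama's formulation of the key-equation solver), written over the prime field GF(929) purely for readability; practical RS decoders work over GF(2^m), and the simplified procedure above additionally seeds the recursion with the erasure locator polynomial and the Forney syndrome polynomial, which this sketch omits. Polynomials are coefficient lists, lowest degree first:

      P = 929  # prime field for illustration; RS codes normally use GF(2^m)

      def trim(a):
          while len(a) > 1 and a[-1] == 0:
              a = a[:-1]
          return a

      def deg(a):
          a = trim(a)
          return -1 if a == [0] else len(a) - 1

      def sub(a, b):
          n = max(len(a), len(b))
          return trim([((a[i] if i < len(a) else 0)
                        - (b[i] if i < len(b) else 0)) % P for i in range(n)])

      def mul(a, b):
          c = [0] * (len(a) + len(b) - 1)
          for i, ai in enumerate(a):
              for j, bj in enumerate(b):
                  c[i + j] = (c[i + j] + ai * bj) % P
          return trim(c)

      def divmod_poly(a, b):
          """Polynomial long division a = q*b + r over GF(P); b must be nonzero."""
          a, b = trim(a[:]), trim(b)
          q = [0] * max(1, len(a) - len(b) + 1)
          inv = pow(b[-1], P - 2, P)   # Fermat inverse of the lead coefficient
          while deg(a) >= deg(b):
              shift = deg(a) - deg(b)
              c = (a[-1] * inv) % P
              q[shift] = c
              a = sub(a, mul([0] * shift + [c], b))
          return trim(q), a

      def key_equation(syndrome, t):
          """Euclidean algorithm on (x^(2t), S(x)); stop when deg(remainder) < t.
          Returns (error locator, error evaluator), up to a common scale factor."""
          r_prev, r = [0] * (2 * t) + [1], trim(syndrome[:])
          u_prev, u = [0], [1]
          while deg(r) >= t:
              quot, rem = divmod_poly(r_prev, r)
              r_prev, r = r, rem
              u_prev, u = u, sub(u_prev, mul(quot, u))
          return u, r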

  11. Approximability of the d-dimensional Euclidean capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Khachay, Michael; Dubinin, Roman

    2016-10-01

    Capacitated Vehicle Routing Problem (CVRP) is a well-known intractable combinatorial optimization problem, which remains NP-hard even in the Euclidean plane. Since the introduction of this problem in the middle of the 20th century, many researchers have been involved in the study of its approximability. Most of the results obtained in this field are based on the well-known Iterated Tour Partition heuristic proposed by M. Haimovich and A. Rinnooy Kan in their celebrated paper, where they construct the first Polynomial Time Approximation Scheme (PTAS) for the single depot CVRP in ℝ2. For decades, this result was extended by many authors to numerous useful modifications of the problem taking into account multiple depots, pick-up and delivery options, time window restrictions, etc. But, to the best of our knowledge, almost none of these results go beyond the Euclidean plane. In this paper, we try to bridge this gap and propose an EPTAS for the Euclidean CVRP for any fixed dimension.

  12. Can rodents conceive hyperbolic spaces?

    PubMed Central

    Urdapilleta, Eugenio; Troiani, Francesca; Stella, Federico; Treves, Alessandro

    2015-01-01

    The grid cells discovered in the rodent medial entorhinal cortex have been proposed to provide a metric for Euclidean space, possibly even hardwired in the embryo. Yet, one class of models describing the formation of grid unit selectivity is entirely based on developmental self-organization, and as such it predicts that the metric it expresses should reflect the environment to which the animal has adapted. We show that, according to self-organizing models, if raised in a non-Euclidean hyperbolic cage rats should be able to form hyperbolic grids. For a given range of grid spacing relative to the radius of negative curvature of the hyperbolic surface, such grids are predicted to appear as multi-peaked firing maps, in which each peak has seven neighbours instead of the Euclidean six, a prediction that can be tested in experiments. We thus demonstrate that a useful universal neuronal metric, in the sense of a multi-scale ruler and compass that remain unaltered when changing environments, can be extended to other than the standard Euclidean plane. PMID:25948611

  13. Antipodal correlation on the meron wormhole and a bang-crunch universe

    NASA Astrophysics Data System (ADS)

    Betzios, Panagiotis; Gaddam, Nava; Papadoulaki, Olga

    2018-06-01

    We present a covariant Euclidean wormhole solution to the Einstein-Yang-Mills system and study scalar perturbations analytically. The fluctuation operator has a positive definite spectrum. We compute the Euclidean Green's function, which displays maximal antipodal correlation on the smallest three-sphere at the center of the throat. Upon analytic continuation, it corresponds to the Feynman propagator on a compact bang-crunch universe. We present the connection matrix that relates past and future modes. We thoroughly discuss the physical implications of the antipodal map in both the Euclidean and Lorentzian geometries and give arguments on how to assign a physical probability to such solutions.

  14. What if? Exploring the multiverse through Euclidean wormholes.

    PubMed

    Bouhmadi-López, Mariam; Krämer, Manuel; Morais, João; Robles-Pérez, Salvador

    2017-01-01

    We present Euclidean wormhole solutions describing possible bridges within the multiverse. The study is carried out in the framework of third quantisation. The matter content is modelled through a scalar field which supports the existence of a whole collection of universes. The instanton solutions describe Euclidean solutions that connect baby universes with asymptotically de Sitter universes. We compute the tunnelling probability of these processes. Considering the current bounds on the energy scale of inflation and assuming that all the baby universes are nucleated with the same probability, we draw some conclusions about which universes are more likely to tunnel and therefore undergo a standard inflationary era.

  15. What if? Exploring the multiverse through Euclidean wormholes

    NASA Astrophysics Data System (ADS)

    Bouhmadi-López, Mariam; Krämer, Manuel; Morais, João; Robles-Pérez, Salvador

    2017-10-01

    We present Euclidean wormhole solutions describing possible bridges within the multiverse. The study is carried out in the framework of third quantisation. The matter content is modelled through a scalar field which supports the existence of a whole collection of universes. The instanton solutions describe Euclidean solutions that connect baby universes with asymptotically de Sitter universes. We compute the tunnelling probability of these processes. Considering the current bounds on the energy scale of inflation and assuming that all the baby universes are nucleated with the same probability, we draw some conclusions about which universes are more likely to tunnel and therefore undergo a standard inflationary era.

  16. Using BMDP and SPSS for a Q factor analysis.

    PubMed

    Tanner, B A; Koning, S M

    1980-12-01

    While Euclidean distances and Q factor analysis may sometimes be preferred to correlation coefficients and cluster analysis for developing a typology, commercially available software does not always facilitate their use. Commands are provided for using BMDP and SPSS in a Q factor analysis with Euclidean distances.
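
    As a minimal illustration of the Q-mode idea (factoring associations between persons rather than between variables), the NumPy sketch below builds the subject-by-subject Euclidean distance matrix that such an analysis consumes; it is an assumption-laden stand-in, not the BMDP/SPSS command streams the note supplies:

      import numpy as np

      def q_mode_distances(X):
          """X: (subjects, variables). Returns the subjects-by-subjects matrix
          of pairwise Euclidean distances, the Q-mode association matrix fed
          to the factor-analysis routine in place of variable correlations."""
          diff = X[:, None, :] - X[None, :, :]
          return np.sqrt((diff ** 2).sum(axis=-1))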

  17. Exploring New Geometric Worlds

    ERIC Educational Resources Information Center

    Nirode, Wayne

    2015-01-01

    When students work with a non-Euclidean distance formula, geometric objects such as circles and segment bisectors can look very different from their Euclidean counterparts. Students and even teachers can experience the thrill of creative discovery when investigating these differences among geometric worlds. In this article, the author describes a…

  18. Euclidean, Spherical, and Hyperbolic Shadows

    ERIC Educational Resources Information Center

    Hoban, Ryan

    2013-01-01

    Many classical problems in elementary calculus use Euclidean geometry. This article takes such a problem and solves it in hyperbolic and in spherical geometry instead. The solution requires only the ability to compute distances and intersections of points in these geometries. The dramatically different results we obtain illustrate the effect…

  19. Laterally structured ripple and square phases with one and two dimensional thickness modulations in a model bilayer system.

    PubMed

    Debnath, Ananya; Thakkar, Foram M; Maiti, Prabal K; Kumaran, V; Ayappa, K G

    2014-10-14

    Molecular dynamics simulations of bilayers in a surfactant/co-surfactant/water system with explicit solvent molecules show formation of topologically distinct gel phases depending upon the bilayer composition. At low temperatures, the bilayers transform from the tilted gel phase, Lβ', to the one dimensional (1D) rippled, Pβ' phase as the surfactant concentration is increased. More interestingly, we observe a two dimensional (2D) square phase at higher surfactant concentration which, upon heating, transforms to the gel Lβ' phase. The thickness modulations in the 1D rippled and square phases are asymmetric in two surfactant leaflets and the bilayer thickness varies by a factor of ∼2 between maximum and minimum. The 1D ripple consists of a thinner interdigitated region of smaller extent alternating with a thicker non-interdigitated region. The 2D ripple phase is made up of two superimposed square lattices of maximum and minimum thicknesses with molecules of high tilt forming a square lattice translated from the lattice formed with the thickness minima. Using Voronoi diagrams we analyze the intricate interplay between the area-per-head-group, height modulations and chain tilt for the different ripple symmetries. Our simulations indicate that composition plays an important role in controlling the formation of low temperature gel phase symmetries and rippling accommodates the increased area-per-head-group of the surfactant molecules.

  20. Large-Scale Structure Studies with the REFLEX Cluster Survey

    NASA Astrophysics Data System (ADS)

    Schuecker, P.; Bohringer, H.; Guzzo, L.; Collins, C.; Neumann, D. M.; Schindler, S.; Voges, W.

    1998-12-01

    First preliminary results of the ROSAT ESO Flux-Limited X-Ray (REFLEX) Cluster Survey are described. The survey covers 13,924 square degrees of the southern hemisphere. The present sample consists of about 470 rich clusters (1/3 non Abell/ACO clusters) with X-ray fluxes S >= 3.0 times 10^{-12} erg s^{-1} cm^{-2} (0.1-2.4 keV) and redshifts z <= 0.3. In contrast to other low-redshift surveys, the cumulative flux-number counts have an almost Euclidean slope. Comoving cluster number densities are found to be almost redshift-independent throughout the total survey volume. The X-ray luminosity function is well described by a Schechter function. The power spectrum of the number density fluctuations could be measured on scales up to 400 h^{-1} Mpc. A deeper survey with about 800 galaxy clusters in the same area is in progress.

  1. Diffeomorphic Sulcal Shape Analysis on the Cortex

    PubMed Central

    Joshi, Shantanu H.; Cabeen, Ryan P.; Joshi, Anand A.; Sun, Bo; Dinov, Ivo; Narr, Katherine L.; Toga, Arthur W.; Woods, Roger P.

    2014-01-01

    We present a diffeomorphic approach for constructing intrinsic shape atlases of sulci on the human cortex. Sulci are represented as square-root velocity functions of continuous open curves in ℝ3, and their shapes are studied as functional representations of an infinite-dimensional sphere. This spherical manifold has some advantageous properties – it is equipped with a Riemannian metric on the tangent space and facilitates computational analyses and correspondences between sulcal shapes. Sulcal shape mapping is achieved by computing geodesics in the quotient space of shapes modulo scales, translations, rigid rotations and reparameterizations. The resulting sulcal shape atlas preserves important local geometry inherently present in the sample population. The sulcal shape atlas is integrated in a cortical registration framework and exhibits better geometric matching compared to the conventional Euclidean method. We demonstrate experimental results for sulcal shape mapping, cortical surface registration, and sulcal classification for two different surface extraction protocols for separate subject populations. PMID:22328177
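
    The square-root velocity representation mentioned above has a compact closed form, q(t) = c'(t) / sqrt(|c'(t)|). Below is a sketch under the assumption of a uniformly sampled open curve; the epsilon guard is an implementation convenience, not part of the formal definition:

      import numpy as np

      def srvf(curve, eps=1e-12):
          """curve: (n_points, 3) samples of an open curve in R^3. Returns the
          square-root velocity function q = c' / sqrt(|c'|) at each sample."""
          v = np.gradient(curve, axis=0)          # finite-difference velocity
          speed = np.linalg.norm(v, axis=1)
          return v / np.sqrt(speed + eps)[:, None]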

  2. Extremal functions for singular Trudinger-Moser inequalities in the entire Euclidean space

    NASA Astrophysics Data System (ADS)

    Li, Xiaomeng; Yang, Yunyan

    2018-04-01

    In a previous work (Adimurthi and Yang, 2010 [2]), Adimurthi-Yang proved a singular Trudinger-Moser inequality in the entire Euclidean space RN (N ≥ 2). Precisely, if 0 ≤ β < 1 and 0 < γ ≤ 1 - β, then there holds for any τ > 0, …

  3. Teaching Activity-Based Taxicab Geometry

    ERIC Educational Resources Information Center

    Ada, Tuba

    2013-01-01

    This study aimed on the process of teaching taxicab geometry, a non-Euclidean geometry that is easy to understand and similar to Euclidean geometry with its axiomatic structure. In this regard, several teaching activities were designed such as measuring taxicab distance, defining a taxicab circle, finding a geometric locus in taxicab geometry, and…

  4. Project-Based Learning to Explore Taxicab Geometry

    ERIC Educational Resources Information Center

    Ada, Tuba; Kurtulus, Aytac

    2012-01-01

    In Turkey, the content of the geometry course in the Primary School Mathematics Education, which is developed by The Council of Higher Education (YOK), comprises Euclidean and non-Euclidean types of geometry. In this study, primary mathematics teacher candidates compared these two geometries by focusing on Taxicab geometry among non-Euclidean…

  5. A Latent Class Approach to Fitting the Weighted Euclidean Model, CLASCAL.

    ERIC Educational Resources Information Center

    Winsberg, Suzanne; De Soete, Geert

    1993-01-01

    A weighted Euclidean distance model is proposed that incorporates a latent class approach (CLASCAL). The contribution to the distance function between two stimuli is per dimension weighted identically by all subjects in the same latent class. A model selection strategy is proposed and illustrated. (SLD)

  6. 78 FR 16662 - Determination Under the Textile and Apparel Commercial Availability Provision of the United...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ...% polyester/4-10% spandex (includes both face and backer fabric). Overall weight: 287-351 grams per square... spandex (filament) Thread count: 49-52 picks per cm x 43-45 picks per cm Weight: 121.5-148.5 grams per... grams per square meter Width: Selvedge: 150.4-154.4 cm; Minimum cuttable: 145.3-149.3 cm Coloration...

  7. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
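
    Of the two decoders mentioned, Gallager's hard-decision bit-flipping algorithm is compact enough to sketch; H below is a generic binary parity-check matrix, not one of the paper's finite-geometry constructions:

      import numpy as np

      def bit_flip_decode(H, r, max_iter=50):
          """Gallager hard-decision bit flipping: repeatedly flip the bits
          involved in the most unsatisfied parity checks until all pass."""
          x = r.copy()
          for _ in range(max_iter):
              s = H @ x % 2                   # syndrome: unsatisfied checks
              if not s.any():
                  break
              counts = H.T @ s                # per-bit count of failed checks
              x[counts == counts.max()] ^= 1  # flip the worst offenders
          return x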

  8. Surface Design Based on Discrete Conformal Transformations

    NASA Astrophysics Data System (ADS)

    Duque, Carlos; Santangelo, Christian; Vouga, Etienne

    Conformal transformations are angle-preserving maps from one domain to another. Although angles are preserved, the lengths between arbitrary points are not generally conserved. As a consequence, a certain amount of distortion is associated with any conformal map. Such transformations find use in various fields; we have used them to program non-uniformly swellable gel sheets to buckle into prescribed three-dimensional shapes. In this work we apply circle packings as a kind of discrete conformal map in order to find conformal maps from the sphere to the plane that can be used as nearly uniform swelling patterns to program non-Euclidean sheets to buckle into spheres. We explore the possibility of tuning the area distortion to fit the experimental range of minimum and maximum swelling by modifying the boundary of the planar domain through the introduction of different cutting schemes.

  9. Lattice corrections to the quark quasidistribution at one loop

    DOE PAGES

    Carlson, Carl E.; Freid, Michael

    2017-05-12

    Here, we calculate radiative corrections to the quark quasidistribution in lattice perturbation theory at one loop to leading orders in the lattice spacing. We also consider one-loop corrections in continuum Euclidean space. We find that the infrared behavior of the corrections in Euclidean and Minkowski space is different. Furthermore, we explore features of momentum loop integrals and demonstrate why loop corrections from the lattice perturbation theory and Euclidean continuum do not correspond with their Minkowski brethren, and comment on a recent suggestion for transcending the differences in the results. Finally, we examine the role of the lattice spacing a and of the r parameter in the Wilson action in these radiative corrections.

  10. Lattice corrections to the quark quasidistribution at one loop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Carl E.; Freid, Michael

    Here, we calculate radiative corrections to the quark quasidistribution in lattice perturbation theory at one loop to leading orders in the lattice spacing. We also consider one-loop corrections in continuum Euclidean space. We find that the infrared behavior of the corrections in Euclidean and Minkowski space is different. Furthermore, we explore features of momentum loop integrals and demonstrate why loop corrections from the lattice perturbation theory and Euclidean continuum do not correspond with their Minkowski brethren, and comment on a recent suggestion for transcending the differences in the results. Finally, we examine the role of the lattice spacing a and of the r parameter in the Wilson action in these radiative corrections.

  11. Late-time structure of the Bunch-Davies FRW wavefunction

    NASA Astrophysics Data System (ADS)

    Konstantinidis, George; Mahajan, Raghu; Shaghoulian, Edgar

    2016-10-01

    In this short note we organize a perturbation theory for the Bunch-Davies wavefunction in flat, accelerating cosmologies. The calculational technique avoids the in-in formalism and instead uses an analytic continuation from Euclidean signature. We will consider both massless and conformally coupled self-interacting scalars. These calculations explicitly illustrate two facts. The first is that IR divergences get sharper as the acceleration slows. The second is that UV-divergent contact terms in the Euclidean computation can contribute to the absolute value of the wavefunction in Lorentzian signature. Here UV divergent refers to terms involving inverse powers of the radial cutoff in the Euclidean computation. In Lorentzian signature such terms encode physical time dependence of the wavefunction.

  12. Neural self-tuning adaptive control of non-minimum phase system

    NASA Technical Reports Server (NTRS)

    Ho, Long T.; Bialasiewicz, Jan T.; Ho, Hai T.

    1993-01-01

    The motivation for this research came about when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum-phase characteristics of the flexible robotic arm tip. Most existing neural network control algorithms are based on the direct method and exhibit highly sensitive, if not unstable, closed-loop behavior. Therefore, a neural self-tuning control (NSTC) algorithm is developed, applied to this problem, and shown to give promising results. Simulation results of the NSTC scheme and the conventional self-tuning (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady state response.

  13. Teaching Geometry According to Euclid.

    ERIC Educational Resources Information Center

    Hartshorne, Robin

    2000-01-01

    This essay contains some reflections and questions arising from encounters with the text of Euclid's Elements. The reflections arise out of the teaching of a course in Euclidean and non-Euclidean geometry to undergraduates. It is concluded that teachers of such courses should read Euclid and ask questions, then teach a course on Euclid and later…

  14. Peripatetic and Euclidean theories of the visual ray.

    PubMed

    Jones, A

    1994-01-01

    The visual ray of Euclid's Optica is endowed with properties that reveal the concept to be an abstraction of a specific physical account of vision. The evolution of a physical theory of vision compatible with the Euclidean model can be traced in Peripatetic writings of the late fourth and third centuries B.C.

  15. Nearest Neighbor Classification Using a Density Sensitive Distance Measurement

    DTIC Science & Technology

    2009-09-01

    Both the proposed density-sensitive distance measurement and Euclidean distance are compared on the Wisconsin Diagnostic Breast Cancer (WDBC) dataset and the MNIST dataset…

  16. The Role of Structure in Learning Non-Euclidean Geometry

    ERIC Educational Resources Information Center

    Asmuth, Jennifer A.

    2009-01-01

    How do people learn novel mathematical information that contradicts prior knowledge? The focus of this thesis is the role of structure in the acquisition of knowledge about hyperbolic geometry, a non-Euclidean geometry. In a series of three experiments, I contrast a more holistic structure--training based on closed figures--with a mathematically…

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briceno, Raul A.; Hansen, Maxwell T.; Monahan, Christopher J.

    Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Lastly, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.

  18. Fixed-topology Lorentzian triangulations: Quantum Regge Calculus in the Lorentzian domain

    NASA Astrophysics Data System (ADS)

    Tate, Kyle; Visser, Matt

    2011-11-01

    A key insight used in developing the theory of Causal Dynamical Triangulations (CDTs) is to use the causal (or light-cone) structure of Lorentzian manifolds to restrict the class of geometries appearing in the Quantum Gravity (QG) path integral. By exploiting this structure the models developed in CDTs differ from the analogous models developed in the Euclidean domain, models of (Euclidean) Dynamical Triangulations (DT), and the corresponding Lorentzian results are in many ways more "physical". In this paper we use this insight to formulate a Lorentzian signature model that is analogous to the Quantum Regge Calculus (QRC) approach to Euclidean Quantum Gravity. We exploit another crucial fact about the structure of Lorentzian manifolds, namely that certain simplices are not constrained by the triangle inequalities present in Euclidean signature. We show that this model is not related to QRC by a naive Wick rotation; this serves as another demonstration that the sum over Lorentzian geometries is not simply related to the sum over Euclidean geometries. By removing the triangle inequality constraints, there is more freedom to perform analytical calculations, and in addition numerical simulations are more computationally efficient. We first formulate the model in 1 + 1 dimensions, and derive scaling relations for the pure gravity path integral on the torus using two different measures. It appears relatively easy to generate "large" universes, both in spatial and temporal extent. In addition, loop-to-loop amplitudes are discussed, and a transfer matrix is derived. We then also discuss the model in higher dimensions.

  19. INFORMATION-THEORETIC INEQUALITIES ON UNIMODULAR LIE GROUPS

    PubMed Central

    Chirikjian, Gregory S.

    2010-01-01

    Classical inequalities used in information theory such as those of de Bruijn, Fisher, Cramér, Rao, and Kullback carry over in a natural way from Euclidean space to unimodular Lie groups. These are groups that possess an integration measure that is simultaneously invariant under left and right shifts. All commutative groups are unimodular. And even in noncommutative cases unimodular Lie groups share many of the useful features of Euclidean space. The rotation and Euclidean motion groups, which are perhaps the most relevant Lie groups to problems in geometric mechanics, are unimodular, as are the unitary groups that play important roles in quantum computing. The extension of core information theoretic inequalities defined in the setting of Euclidean space to this broad class of Lie groups is potentially relevant to a number of problems relating to information gathering in mobile robotics, satellite attitude control, tomographic image reconstruction, biomolecular structure determination, and quantum information theory. In this paper, several definitions are extended from the Euclidean setting to that of Lie groups (including entropy and the Fisher information matrix), and inequalities analogous to those in classical information theory are derived and stated in the form of fifteen small theorems. In all such inequalities, addition of random variables is replaced with the group product, and the appropriate generalization of convolution of probability densities is employed. An example from the field of robotics demonstrates how several of these results can be applied to quantify the amount of information gained by pooling different sensory inputs. PMID:21113416

  20. Interplanetary Scintillation studies with the Murchison Wide-field Array III: Comparison of source counts and densities for radio sources and their sub-arcsecond components at 162 MHz

    NASA Astrophysics Data System (ADS)

    Chhetri, R.; Ekers, R. D.; Morgan, J.; Macquart, J.-P.; Franzen, T. M. O.

    2018-06-01

    We use Murchison Widefield Array observations of interplanetary scintillation (IPS) to determine the source counts of point (<0.3 arcsecond extent) sources and of all sources with some subarcsecond structure, at 162 MHz. We have developed the methodology to derive these counts directly from the IPS observables, while taking into account changes in sensitivity across the survey area. The counts of sources with compact structure follow the behaviour of the dominant source population above ˜3 Jy, but below this they show Euclidean behaviour. We compare our counts to those predicted by simulations and find a good agreement for our counts of sources with compact structure, but significant disagreement for point source counts. Using low radio frequency SEDs from the GLEAM survey, we classify point sources as Compact Steep-Spectrum (CSS), flat spectrum, or peaked. If we consider the CSS sources to be the more evolved counterparts of the peaked sources, the two categories combined comprise approximately 80% of the point source population. We calculate densities of potential calibrators brighter than 0.4 Jy at low frequencies and find 0.2 sources per square degree for point sources, rising to 0.7 sources per square degree if sources with more complex arcsecond structure are included. We extrapolate to estimate 4.6 sources per square degree at 0.04 Jy. We find that a peaked spectrum is an excellent predictor for compactness at low frequencies, increasing the number of good calibrators by a factor of three compared to the usual flat-spectrum criterion.

  1. Random vibrations of quadratic damping systems. [optimum damping analysis for automobile suspension system

    NASA Technical Reports Server (NTRS)

    Sireteanu, T.

    1974-01-01

    An oscillating system with quadratic damping subjected to white noise excitation is replaced by a nonlinear, statistically equivalent system for which the associated Fokker-Planck equation can be exactly solved. The mean square responses are calculated and the optimum damping coefficient is determined with respect to the minimum mean square acceleration criterion. An application of these results to the optimization of automobile suspension damping is given.

  2. Measuring the Hall weighting function for square and cloverleaf geometries

    NASA Astrophysics Data System (ADS)

    Scherschligt, Julia K.; Koon, Daniel W.

    2000-02-01

    We have directly measured the Hall weighting function—the sensitivity of a four-wire Hall measurement to the position of macroscopic inhomogeneities in Hall angle—for both a square-shaped and a cloverleaf specimen. Comparison with the measured resistivity weighting function for a square geometry [D. W. Koon and W. K. Chan, Rev. Sci. Instrum. 69, 12 (1998)] proves that the two measurements sample the same specimen differently. For Hall measurements on both a square and a cloverleaf, the function is nonnegative with its maximum in the center and its minimum of zero at the edges of the square. Converting a square into a cloverleaf is shown to dramatically focus the measurement process onto a much smaller portion of the specimen. While our results agree qualitatively with theory, details are washed out, owing to the finite size of the magnetic probe used.

  3. Application of quadratic optimization to supersonic inlet control.

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Zeller, J. R.

    1972-01-01

    This paper describes the application of linear stochastic optimal control theory to the design of the control system for the air intake, the inlet, of a supersonic air-breathing propulsion system. The controls must maintain a stable inlet shock position in the presence of random airflow disturbances and prevent inlet unstart. Two different linear time invariant controllers are developed. One is designed to minimize a nonquadratic index, the expected frequency of inlet unstart, and the other is designed to minimize the mean square value of inlet shock motion. The quadratic equivalence principle is used to obtain a linear controller that minimizes the nonquadratic index. The two controllers are compared on the basis of unstart prevention, control effort requirements, and frequency response. It is concluded that while controls designed to minimize unstarts are desirable in that the index minimized is physically meaningful, computation time required is longer than for the minimum mean square shock position approach. The simpler minimum mean square shock position solution produced expected unstart frequency values which were not significantly larger than those of the nonquadratic solution.

  4. Least square neural network model of the crude oil blending process.

    PubMed

    Rubio, José de Jesús

    2016-06-01

    In this paper, the recursive least squares algorithm is designed for the big-data learning of a feedforward neural network. The proposed method, a combination of recursive least squares and a feedforward neural network, has four advantages over either algorithm alone: it requires a smaller number of regressors, it is fast, it has the learning ability, and it is more compact. Stability, convergence, boundedness of parameters, and local minimum avoidance of the proposed technique are guaranteed. The introduced strategy is applied to the modeling of the crude oil blending process. Copyright © 2016 Elsevier Ltd. All rights reserved.
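
    A hedged sketch of the recursive least-squares recursion at the heart of such a combination, written for a generic linear-in-parameters regressor; in the paper's setting the regressor phi would be the feedforward network's hidden-layer output, and the class name and default constants are assumptions:

      import numpy as np

      class RLS:
          """Recursive least squares for y ~ w . phi with forgetting factor lam."""
          def __init__(self, dim, lam=0.99, delta=100.0):
              self.w = np.zeros(dim)
              self.P = np.eye(dim) * delta   # inverse-correlation estimate
              self.lam = lam

          def update(self, phi, y):
              Pphi = self.P @ phi
              k = Pphi / (self.lam + phi @ Pphi)   # gain vector
              self.w += k * (y - self.w @ phi)     # correct by the a priori error
              self.P = (self.P - np.outer(k, Pphi)) / self.lam
              return self.w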

  5. Technical Note: Using k-means clustering to determine the number and position of isocenters in MLC-based multiple target intracranial radiosurgery.

    PubMed

    Yock, Adam D; Kim, Gwe-Ya

    2017-09-01

    To present the k-means clustering algorithm as a tool to address treatment planning considerations characteristic of stereotactic radiosurgery using a single isocenter for multiple targets. For 30 patients treated with stereotactic radiosurgery for multiple brain metastases, the geometric centroid and radius of each metastasis were determined from the treatment planning system. In-house software used these data, together with weighted and unweighted versions of the k-means clustering algorithm, to group the targets to be treated with a single isocenter and to position each isocenter. The algorithm results were evaluated using the within-cluster sum of squares as well as a minimum target coverage metric that considered the effect of target size. Both versions of the algorithm were applied to an example patient to demonstrate the prospective determination of the appropriate number and location of isocenters. Both weighted and unweighted versions of the k-means algorithm were applied successfully to determine the number and position of isocenters. Comparing the two, both the within-cluster sum of squares metric and the minimum target coverage metric resulting from the unweighted version were less than those from the weighted version. The average magnitudes of the differences were small (-0.2 cm² and 0.1% for the within-cluster sum of squares and minimum target coverage, respectively) but statistically significant (Wilcoxon signed-rank test, P < 0.01). The differences between the versions of the k-means clustering algorithm represented an advantage of the unweighted version for the within-cluster sum of squares metric, and an advantage of the weighted version for the minimum target coverage metric. While additional treatment planning considerations have a large influence on the final treatment plan quality, both versions of the k-means algorithm provide automatic, consistent, quantitative, and objective solutions to the tasks associated with SRS treatment planning using a single isocenter for multiple targets. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
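
    A minimal sketch of the weighted variant described above, assuming Lloyd-style iterations with per-target weights (for instance, weights derived from target radii); initialization, convergence checks and the coverage metric are simplified away:

      import numpy as np

      def weighted_kmeans(points, weights, k, iters=50, seed=0):
          """points: (n, 3) target centroids; weights: (n,) per-target weights.
          Returns k isocenter positions and the target-to-isocenter labels."""
          points = np.asarray(points, dtype=float)
          weights = np.asarray(weights, dtype=float)
          rng = np.random.default_rng(seed)
          centers = points[rng.choice(len(points), size=k, replace=False)]
          for _ in range(iters):
              d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
              labels = d.argmin(axis=1)
              for j in range(k):
                  members = labels == j
                  if members.any():
                      centers[j] = np.average(points[members], axis=0,
                                              weights=weights[members])
          return centers, labels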

  6. Land use mapping from CBERS-2 images with open source tools by applying different classification algorithms

    NASA Astrophysics Data System (ADS)

    Sanhouse-García, Antonio J.; Rangel-Peraza, Jesús Gabriel; Bustos-Terrones, Yaneth; García-Ferrer, Alfonso; Mesas-Carrascosa, Francisco J.

    2016-02-01

    Land cover classification is often based on characteristics that differ among classes but are highly homogeneous within each of them. This cover is obtained through field work or by means of processing satellite images. Field work involves high costs; therefore, digital image processing techniques have become an important alternative to perform this task. However, in some developing countries, and particularly in Casacoima municipality in Venezuela, there is a lack of geographic information systems due to the lack of updated information and the high cost of software licenses. This research proposes a low-cost methodology to develop thematic mapping of local land use and types of coverage in areas with scarce resources. Thematic mapping was developed from CBERS-2 images and spatial information available on the network using open source tools. Supervised classification was applied per pixel and per region using different classification algorithms, which were compared among themselves. Per-pixel classification was based on the Maxver (maximum likelihood) and Euclidean distance (minimum distance) algorithms, while per-region classification was based on the Bhattacharya algorithm. Satisfactory results were obtained from per-region classification, with an overall reliability of 83.93% and a kappa index of 0.81. The Maxver algorithm showed a reliability value of 73.36% and a kappa index of 0.69, while Euclidean distance obtained 67.17% and 0.61 for reliability and kappa index, respectively. It was demonstrated that the proposed methodology is very useful in cartographic processing and updating, which in turn serves as support for developing management plans and land management. Hence, open source tools proved to be an economically viable alternative not only for forestry organizations but for the general public, allowing them to develop projects in economically depressed and/or environmentally threatened areas.
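
    As an illustration of the minimum-distance (Euclidean) classifier compared above, a per-pixel NumPy sketch follows; the band count, class means and random data are hypothetical:

      import numpy as np

      def min_distance_classify(pixels, class_means):
          """Assign each pixel (a vector of band values) to the class whose
          training mean is nearest in Euclidean distance."""
          d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
          return d.argmin(axis=1)

      rng = np.random.default_rng(1)
      class_means = rng.random((3, 4))   # 3 land-cover classes, 4 bands
      image = rng.random((1000, 4))      # 1000 pixels, 4 spectral bands
      labels = min_distance_classify(image, class_means)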

  7. Discrimination of different sub-basins on Tajo River based on water influence factor

    NASA Astrophysics Data System (ADS)

    Bermudez, R.; Gascó, J. M.; Tarquis, A. M.; Saa-Requejo, A.

    2009-04-01

    Numerical taxonomy has been applied to classify the waters of the Tajo basin (Spain) down to the Portuguese border. A total of 52 stations, measuring 15 water variables, were used in this study. The different groups were obtained by applying the Euclidean distance among stations (distance classification) and the Euclidean distance between each station and the centroid estimated among them (centroid classification), varying the number of parameters and with or without variable typification. To compare the classifications and select the best one, a log-log relation between the number of groups created and the distances was established. It was observed that the centroid classification is more appropriate, following the natural constraints in a more logical way than the minimum distance among stations. Variable typification does not improve the classification except when the centroid method is applied. Taking into consideration the ions and their sum as variables, the classification improved. Stations are grouped based on electric conductivity (CE), total anions (TA), total cations (TC) and ion ratios (Na/Ca and Mg/Ca). For a given classification, comparing the different groups created, a certain variation in ion concentrations and ion ratios is observed; however, the variation of each ion among groups differs depending on the case. For the last group, regardless of the classification, the increase in all ions is general. Comparing the dendrograms and the groups they originated, the Tajo river basin can be subdivided into five sub-basins differentiated by the main influence on the water: 1. With a higher ombrogenic influence (rain fed). 2. With ombrogenic and pedogenic influence (rain and groundwater fed). 3. With pedogenic influence. 4. With lithogenic influence (geological bedrock). 5. With a higher ombrogenic and lithogenic influence added.

  8. Using P-Stat, BMDP and SPSS for a cross-products factor analysis.

    PubMed

    Tanner, B A; Leiman, J M

    1983-06-01

    The major disadvantage of the Q factor analysis with Euclidean distances described by Tanner and Koning [Comput. Progr. Biomed. 12 (1980) 201-202] is the considerable editing required. An alternative procedure with commercially distributed software, and with cross-products in place of Euclidean distances is described. This procedure does not require any editing.

  9. Usability Evaluation of an Augmented Reality System for Teaching Euclidean Vectors

    ERIC Educational Resources Information Center

    Martin-Gonzalez, Anabel; Chi-Poot, Angel; Uc-Cetina, Victor

    2016-01-01

    Augmented reality (AR) is one of the emerging technologies that has demonstrated to be an efficient technological tool to enhance learning techniques. In this paper, we describe the development and evaluation of an AR system for teaching Euclidean vectors in physics and mathematics. The goal of this pedagogical tool is to facilitate user's…

  10. Improved Pedagogy for Linear Differential Equations by Reconsidering How We Measure the Size of Solutions

    ERIC Educational Resources Information Center

    Tisdell, Christopher C.

    2017-01-01

    For over 50 years, the learning and teaching of "a priori" bounds on solutions to linear differential equations has involved a Euclidean approach to measuring the size of a solution. While the Euclidean approach to "a priori" bounds on solutions is somewhat manageable in the learning and teaching of the proofs involving…

  11. Fusion And Inference From Multiple And Massive Disparate Distributed Dynamic Data Sets

    DTIC Science & Technology

    2017-07-01

    …principled methodology for two-sample graph testing; designed a provably almost-surely perfect vertex clustering algorithm for block model graphs; proved… Semi-supervised clustering methodology; robust hypothesis testing… Embedding into …dimensional Euclidean space allows the full arsenal of statistical and machine learning methodology for multivariate Euclidean data to be deployed for…

  12. In a Class with Klein: Generating a Model of the Hyperbolic Plane

    ERIC Educational Resources Information Center

    Otten, Samuel; Zin, Christopher

    2012-01-01

    The emergence of non-Euclidean geometries in the 19th century rocked the foundations of mathematical knowledge and certainty. The tremors can still be felt in undergraduate mathematics today where encounters with non-Euclidean geometry are novel and often shocking to students. Because of its divergence from ordinary and comfortable notions of…

  13. Complex networks: Effect of subtle changes in nature of randomness

    NASA Astrophysics Data System (ADS)

    Goswami, Sanchari; Biswas, Soham; Sen, Parongama

    2011-03-01

    In two different classes of network models, namely, the Watts Strogatz type and the Euclidean type, subtle changes have been introduced in the randomness. In the Watts Strogatz type network, rewiring has been done in different ways and although the qualitative results remain the same, finite differences in the exponents are observed. In the Euclidean type networks, where at least one finite phase transition occurs, two models differing in a similar way have been considered. The results show a possible shift in one of the phase transition points but no change in the values of the exponents. The WS and Euclidean type models are equivalent for extreme values of the parameters; we compare their behaviour for intermediate values.

  14. Variational submanifolds of Euclidean spaces

    NASA Astrophysics Data System (ADS)

    Krupka, D.; Urban, Z.; Volná, J.

    2018-03-01

    Systems of ordinary differential equations (or dynamical forms in Lagrangian mechanics), induced by embeddings of smooth fibered manifolds over one-dimensional basis, are considered in the class of variational equations. For a given non-variational system, conditions assuring variationality (the Helmholtz conditions) of the induced system with respect to a submanifold of a Euclidean space are studied, and the problem of existence of these "variational submanifolds" is formulated in general and solved for second-order systems. The variational sequence theory on sheaves of differential forms is employed as a main tool for the analysis of local and global aspects (variationality and variational triviality). The theory is illustrated by examples of holonomic constraints (submanifolds of a configuration Euclidean space) which are variational submanifolds in geometry and mechanics.

  15. Balancing Newtonian gravity and spin to create localized structures

    NASA Astrophysics Data System (ADS)

    Bush, Michael; Lindner, John

    2015-03-01

    Using geometry and Newtonian physics, we design localized structures that do not require electromagnetic or other forces to resist implosion or explosion. In two-dimensional Euclidean space, we find an equilibrium configuration of a rotating ring of massive dust whose inward gravity is the centripetal force that spins it. We find similar solutions in three-dimensional Euclidean and hyperbolic spaces, but only in the limit of vanishing mass. Finally, in three-dimensional Euclidean space, we generalize the two-dimensional result by finding an equilibrium configuration of a spherical shell of massive dust that supports itself against gravitational collapse by spinning isoclinically in four dimensions so its three-dimensional acceleration is everywhere inward. These Newtonian "atoms" illuminate classical physics and geometry.

  16. 14 CFR 151.39 - Project eligibility.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... located; and (ii) Adequate replacement housing that is open to all persons, regardless of race, color...) Bituminous resurfacing of pavements with a minimum of 100 pounds of plant-mixed material for each square yard...

  17. 14 CFR 151.39 - Project eligibility.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... located; and (ii) Adequate replacement housing that is open to all persons, regardless of race, color...) Bituminous resurfacing of pavements with a minimum of 100 pounds of plant-mixed material for each square yard...

  18. 36 CFR 28.12 - Development standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... significant harm to the natural resources of the Seashore. (c) Minimum lot size is 4,000 square feet. A... allowable accessory structure and is calculated in measuring lot occupancy. (h) No sign may be self...

  19. 36 CFR 28.12 - Development standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... significant harm to the natural resources of the Seashore. (c) Minimum lot size is 4,000 square feet. A... allowable accessory structure and is calculated in measuring lot occupancy. (h) No sign may be self...

  20. 36 CFR 28.12 - Development standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... significant harm to the natural resources of the Seashore. (c) Minimum lot size is 4,000 square feet. A... allowable accessory structure and is calculated in measuring lot occupancy. (h) No sign may be self...

  1. Metameric MIMO-OOK transmission scheme using multiple RGB LEDs.

    PubMed

    Bui, Thai-Chien; Cusani, Roberto; Scarano, Gaetano; Biagi, Mauro

    2018-05-28

    In this work, we propose a novel visible light communication (VLC) scheme that uses spatial multiplexing over multiple red-green-blue LED triplets, each with a different emission spectrum, to mitigate the interference between colors. On-off keying modulation is considered, and its effect on light emission in terms of flickering, dimming and color rendering is discussed to demonstrate how metameric properties have been exploited. At the receiver, multiple photodiodes, each with a color filter tuned to one transmitting light-emitting diode (LED), are employed. Three detection mechanisms are then proposed: color zero forcing, minimum mean-square-error estimation and minimum mean-square-error equalization. The performance of the proposed scheme is evaluated both with computer simulations and with tests on an Arduino board implementation.
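
    The zero-forcing and MMSE detectors named above are channel-specific variants of two classical linear receivers, sketched generically below; H is an assumed complex channel matrix, and the color-filtering structure of the paper is not modeled:

      import numpy as np

      def zero_forcing(H, y):
          """Invert the channel outright; amplifies noise when H is ill-conditioned."""
          return np.linalg.pinv(H) @ y

      def mmse(H, y, sigma2):
          """Minimum mean-square-error detector: regularizes the inverse by the
          noise variance, trading residual interference against noise boost."""
          G = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(H.shape[1])) @ H.conj().T
          return G @ y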

  2. A generalised optimal linear quadratic tracker with universal applications. Part 2: discrete-time systems

    NASA Astrophysics Data System (ADS)

    Ebrahimzadeh, Faezeh; Tsai, Jason Sheng-Hong; Chung, Min-Ching; Liao, Ying Ting; Guo, Shu-Mei; Shieh, Leang-San; Wang, Li

    2017-01-01

    In contrast to Part 1, Part 2 presents a generalised optimal linear quadratic digital tracker (LQDT) with universal applications for discrete-time (DT) systems. This includes (1) a generalised optimal LQDT design for systems with pre-specified trajectories of the output and the control input, and additionally with both an input-to-output direct-feedthrough term and known/estimated system disturbances or extra input/output signals; (2) a new optimal filter-shaped proportional-plus-integral state-feedback LQDT design for non-square non-minimum-phase DT systems to achieve minimum-phase-like tracking performance; (3) a new approach for computing the control zeros of given non-square DT systems; and (4) a one-learning-epoch input-constrained iterative learning LQDT design for repetitive DT systems.

  3. Application of Twin Beams in Mach-Zehnder Interferometer

    NASA Technical Reports Server (NTRS)

    Zhang, J. X.; Xie, C. D.; Peng, K. C.

    1996-01-01

    Using the twin beams generated by a parametric amplifier to drive the two ports of a Mach-Zehnder interferometer, it is shown that, in the large-gain limit, the minimum detectable optical phase shift can be largely reduced, reaching the Heisenberg limit (1/n), which is far below the shot noise limit (1/sqrt(n)). The dependence of the minimum detectable phase shift on the parametric gain and on inefficient photodetectors is discussed.
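
    For reference, the two sensitivity bounds contrasted in the abstract can be written explicitly, with n the mean photon number:

        \[
        \Delta\phi_{\mathrm{SNL}} = \frac{1}{\sqrt{n}},
        \qquad
        \Delta\phi_{\mathrm{HL}} = \frac{1}{n} .
        \]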

  4. The remapping of space in motor learning and human-machine interfaces

    PubMed Central

    Mussa-Ivaldi, F.A.; Danziger, Z.

    2009-01-01

    Studies of motor adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. One of the most fundamental elements of our environment is space itself. This article focuses on the notion of Euclidean space as it applies to common sensory motor experiences. Starting from the assumption that we interact with the world through a system of neural signals, we observe that these signals are not inherently endowed with the metric properties of ordinary Euclidean space. The ability of the nervous system to represent these properties depends on adaptive mechanisms that reconstruct the Euclidean metric from signals that are not Euclidean. Gaining access to these mechanisms will reveal the process by which the nervous system handles novel sophisticated coordinate transformation tasks, thus highlighting possible avenues to create functional human-machine interfaces that can make that task much easier. A set of experiments is presented that demonstrate the ability of the sensory-motor system to reorganize coordination in novel geometrical environments. In these environments multiple degrees of freedom of body motions are used to control the coordinates of a point in a two-dimensional Euclidean space. We discuss how practice leads to the acquisition of the metric properties of the controlled space. Methods of machine learning based on the reduction of reaching errors are tested as a means to facilitate learning by adaptively changing the map from body motions to the controlled device. We discuss the relevance of the results to the development of adaptive human-machine interfaces and optimal control. PMID:19665553

  5. Studies in Mathematics, Volume II. Euclidean Geometry Based on Ruler and Protractor Axioms. Second Revised Edition.

    ERIC Educational Resources Information Center

    Curtis, Charles W.; And Others

    These materials were developed to help high school teachers to become familiar with the approach to tenth-grade Euclidean geometry which was adopted by the School Mathematics Study Group (SMSG). It is emphasized that the materials are unsuitable as a high school textbook. Each document contains material too difficult for most high school students.…

  6. Feature Extraction of High-Dimensional Structures for Exploratory Analytics

    DTIC Science & Technology

    2013-04-01

    Comparison of Euclidean vs. geodesic distance: LDRs use a metric based on the Euclidean distance between two points, while NLDRs are based on geodesic distance. An NLDR successfully unrolls the curved manifold, whereas an LDR fails. ... and classical metric multidimensional scaling, are a linear DR (LDR). An LDR is based on a linear combination of...

  7. Euclidean Wilson loops and minimal area surfaces in Lorentzian AdS_3

    DOE PAGES

    Irrgang, Andrew; Kruczenski, Martin

    2015-12-14

    The AdS/CFT correspondence relates Wilson loops in N=4 SYM theory to minimal area surfaces in AdS_5 × S^5 space. If the Wilson loop is Euclidean and confined to a plane (t, x), then the dual surface is Euclidean and lives in Lorentzian AdS_3 ⊂ AdS_5. In this paper we study such minimal area surfaces, generalizing previous results obtained in the Euclidean case. Since the surfaces we consider have the topology of a disk, the holonomy of the flat current vanishes, which is equivalent to the condition that a certain boundary Schrödinger equation has all its solutions anti-periodic. If the potential for that Schrödinger equation is found, then reconstructing the surface and finding the area become simpler. In particular, we write a formula for the area in terms of the Schwarzian derivative of the contour. Finally, an infinite-parameter family of analytical solutions using Riemann Theta functions is described. In this case, both the area and the shape of the surface are given analytically and used to check the previous results.

  8. Principal Curves on Riemannian Manifolds.

    PubMed

    Hauberg, Soren

    2016-09-01

    Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves of Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent space of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

  9. Gravity dual for a model of perception

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakayama, Yu, E-mail: nakayama@berkeley.edu

    2011-01-15

    One of the salient features of human perception is its invariance under dilatation in addition to the Euclidean group, but its non-invariance under special conformal transformation. We investigate a holographic approach to the information processing in image discrimination with this feature. We claim that a strongly coupled analogue of the statistical model proposed by Bialek and Zee can be holographically realized in scale invariant but non-conformal Euclidean geometries. We identify the Bayesian probability distribution of our generalized Bialek-Zee model with the GKPW partition function of the dual gravitational system. We provide a concrete example of the geometric configuration based on a vector condensation model coupled with the Euclidean Einstein-Hilbert action. From the proposed geometry, we study sample correlation functions to compute the Bayesian probability distribution.

  10. Euclidean distance and Kolmogorov-Smirnov analyses of multi-day auditory event-related potentials: a longitudinal stability study

    NASA Astrophysics Data System (ADS)

    Durato, M. V.; Albano, A. M.; Rapp, P. E.; Nawang, S. A.

    2015-06-01

    The validity of ERPs as indices of stable neurophysiological traits is partially dependent on their stability over time. Previous studies on ERP stability, however, have reported diverse stability estimates despite using the same component scoring methods. The present study explores a novel approach to investigating the longitudinal stability of average ERPs - that is, treating the ERP waveform as a time series and then applying Euclidean distance and Kolmogorov-Smirnov analyses to evaluate the similarity or dissimilarity between the ERP time series of different sessions or run pairs. Nonlinear dynamical analyses show that, in the absence of a change in medical condition, the average ERPs of healthy human adults are highly longitudinally stable, as evaluated by both the Euclidean distance and the Kolmogorov-Smirnov test.

  11. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and the specific form that gives minimum mean square error under varying assumptions about the true probability density function of the sampled data is determined. An extension to line-transect sampling is given.

  12. Effects of Selected Filmic Coding Elements of TV on the Development of the Euclidean Concepts of Horizontality and Verticality in Adolescents.

    ERIC Educational Resources Information Center

    Lynch, Beth Eloise

    This study was conducted to determine whether the filmic coding elements of split screen, slow motion, generated line cues, the zoom of a camera, and rotation could aid in the development of the Euclidean space concepts of horizontality and verticality, and to explore presence and development of spatial skills involving these two concepts in…

  13. Factorization approach to superintegrable systems: Formalism and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballesteros, Á., E-mail: angelb@ubu.es; Herranz, F. J., E-mail: fjherranz@ubu.es; Kuru, Ş., E-mail: kuru@science.ankara.edu.tr

    2017-03-15

    The factorization technique for superintegrable Hamiltonian systems is revisited and applied in order to obtain additional (higher-order) constants of the motion. In particular, the factorization approach to the classical anisotropic oscillator on the Euclidean plane is reviewed, and new classical (super) integrable anisotropic oscillators on the sphere are constructed. The Tremblay–Turbiner–Winternitz system on the Euclidean plane is also studied from this viewpoint.

  14. Speckle noise removal applied to ultrasound image of carotid artery based on total least squares model.

    PubMed

    Yang, Lei; Lu, Jun; Dai, Ming; Ren, Li-Jie; Liu, Wei-Zong; Li, Zhen-Zhou; Gong, Xue-Hao

    2016-10-06

    An ultrasonic image speckle noise removal method using a total least squares model is proposed and applied to images of cardiovascular structures such as the carotid artery. Building on the least squares principle, the minimum-squares criterion is applied to the cardiac ultrasound speckle noise removal process to establish a total least squares model; orthogonal projection is then applied to the model output, realizing denoising of the cardiac ultrasound speckle noise. Experimental results show that the improved algorithm greatly improves image resolution and meets the needs of clinical diagnosis and treatment of the cardiovascular system in the head and neck. Furthermore, the success in imaging carotid arteries has strong implications for neurological complications such as stroke.
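
    The abstract does not spell out the model, but the core computation a TLS-based denoiser rests on is the classical total-least-squares solution of an overdetermined system via the SVD. A minimal numpy sketch on synthetic data standing in for image measurements:

        import numpy as np

        def tls(A, b):
            """Total least squares solution of A x ~ b via the SVD."""
            n = A.shape[1]
            _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
            v = Vt[-1]              # right singular vector of the smallest singular value
            return -v[:n] / v[n]    # breaks down only in the degenerate case v[n] == 0

        rng = np.random.default_rng(1)
        A = rng.normal(size=(100, 3))
        x_true = np.array([1.0, -2.0, 0.5])
        b = A @ x_true + 0.01 * rng.normal(size=100)   # TLS tolerates noise in b and, implicitly, in A
        print(tls(A, b))                               # close to x_true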

  15. Elmo bumpy square plasma confinement device

    DOEpatents

    Owen, L.W.

    1985-01-01

    The invention is an Elmo bumpy type plasma confinement device having a polygonal configuration of closed magnetic field lines for improved plasma confinement. In the preferred embodiment, the device is of a square configuration, referred to as an Elmo bumpy square (EBS). The EBS is formed by four linear magnetic mirror sections, each comprising a plurality of axisymmetric assemblies connected in series and linked by 90° sections of high-magnetic-field toroidal-solenoid-type field-generating coils. These coils provide corner confinement with a minimum of radial dispersion of the confined plasma, minimizing the detrimental effects of the toroidal curvature of the magnetic field. Each corner is formed by a plurality of circular or elliptical coils aligned about the corner radius to provide maximum continuity in closing the magnetic field lines about the square configuration, confining the plasma within a vacuum vessel located within the coils that form the square confinement geometry.

  16. The geometry of expertise

    PubMed Central

    Leone, María J.; Fernandez Slezak, Diego; Cecchi, Guillermo A.; Sigman, Mariano

    2014-01-01

    Theories of expertise based on the acquisition of chunks and templates suggest a differential geometric organization of perception between experts and novices. It is implied that expert representation is less anchored by spatial (Euclidean) proximity and may instead be dictated by the intrinsic relations in the structure and grammar of the specific domain of expertise. Here we set out to examine this hypothesis. We used the domain of chess, which has been widely used as a tool to study human expertise. We reasoned that the movement of an opponent piece to a specific square constitutes an external cue, and the reaction of the player to this "perturbation" should reveal the player's internal representation of proximity. We hypothesized that novice players will tend to respond by moving a piece in closer squares than experts. Similarly, but now in terms of object representations, we hypothesized that weak players will more likely focus on a specific piece and hence produce sequences of actions repeating movements of the same piece. We capitalized on a large corpus of data obtained from internet chess servers. Results showed that, relative to experts, weaker players tend to (1) produce consecutive moves in proximal board locations, (2) move the same piece more often and (3) reduce the number of remaining pieces more rapidly, most likely to decrease cognitive load and mental effort. These three principles might reflect the effect of expertise on human actions in complex setups. PMID:24550869

  17. Personalised news filtering and recommendation system using Chi-square statistics-based K-nearest neighbour (χ2SB-KNN) model

    NASA Astrophysics Data System (ADS)

    Adeniyi, D. A.; Wei, Z.; Yang, Y.

    2017-10-01

    Recommendation problems have been extensively studied by researchers in the fields of data mining, databases and information retrieval. This study presents the design and realisation of an automated, personalised news recommendation system based on a Chi-square statistics-based K-nearest neighbour (χ2SB-KNN) model. The proposed χ2SB-KNN model has the potential to overcome computational complexity and information-overloading problems, and it reduces runtime and speeds up the execution process through the use of the critical value of the χ2 distribution. The proposed recommendation engine can alleviate scalability challenges through combined online pattern discovery and pattern matching for real-time recommendations. This work also showcases a novel feature selection method, referred to as the Data Discretisation-Based feature selection method, used for selecting the best features for the proposed χ2SB-KNN algorithm at the preprocessing stage of the classification procedures. The implementation of the proposed χ2SB-KNN model is achieved through the use of an in-house Java program on an experimental website called the OUC newsreaders' website. Finally, we compared the performance of our system with two baseline methods, namely traditional Euclidean-distance K-nearest neighbour and naive Bayesian techniques. The results show a significant improvement of our method over the baseline methods studied.

  18. Maximum correntropy square-root cubature Kalman filter with application to SINS/GPS integrated systems.

    PubMed

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng

    2018-05-31

    For a nonlinear system, the cubature Kalman filter (CKF) and its square-root version are useful methods to solve state estimation problems, and both can obtain good performance in Gaussian noises. However, their performance often degrades significantly in the face of non-Gaussian noises, particularly when the measurements are contaminated by heavy-tailed impulsive noises. By utilizing the maximum correntropy criterion (MCC) to improve robustness instead of the traditional minimum mean square error (MMSE) criterion, a new square-root nonlinear filter is proposed in this study, named the maximum correntropy square-root cubature Kalman filter (MCSCKF). The new filter not only retains the advantage of the square-root cubature Kalman filter (SCKF), but also exhibits robust performance against heavy-tailed non-Gaussian noises. A judgment condition that avoids numerical problems is also given. The results of two illustrative examples, especially the SINS/GPS integrated systems, demonstrate the desirable performance of the proposed filter. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Global minimum profile error (GMPE) - a least-squares-based approach for extracting macroscopic rate coefficients for complex gas-phase chemical reactions.

    PubMed

    Duong, Minh V; Nguyen, Hieu T; Mai, Tam V-T; Huynh, Lam K

    2018-01-03

    Master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) theory has been shown to be a powerful framework for modeling the kinetic and dynamic behaviors of a complex gas-phase chemical system on a complicated multiple-species and multiple-channel potential energy surface (PES) over a wide range of temperatures and pressures. Derived from the ME time-resolved species profiles, the macroscopic or phenomenological rate coefficients are essential for many reaction engineering applications, including those in combustion and atmospheric chemistry. Therefore, in this study, a least-squares-based approach named Global Minimum Profile Error (GMPE) was proposed and implemented in the MultiSpecies-MultiChannel (MSMC) code (Int. J. Chem. Kinet., 2015, 47, 564) to extract macroscopic rate coefficients for such a complicated system. The capability and limitations of the new approach were discussed in several well-defined test cases.

  20. Searching for minimum in dependence of squared speed-of-sound on collision energy

    DOE PAGES

    Liu, Fu -Hu; Gao, Li -Na; Lacey, Roy A.

    2016-01-01

    Experimental results for the rapidity distributions of negatively charged pions produced in proton-proton (p-p) and beryllium-beryllium (Be-Be) collisions at different beam momenta, measured by the NA61/SHINE Collaboration at the super proton synchrotron (SPS), are described by a revised (three-source) Landau hydrodynamic model. The squared speed-of-sound parameter c_s^2 is then extracted from the width of the rapidity distribution. There is a local minimum (knee point), indicating a softest point in the equation of state (EoS), at about 40A GeV/c (or 8.8 GeV) in the c_s^2 excitation function (the dependence of c_s^2 on incident beam momentum or center-of-mass energy). This knee point should be related to the search for the onset of quark deconfinement and the critical point of the quark-gluon plasma (QGP) phase transition.

  1. Four-Dimensional Golden Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fenimore, Edward E.

    2015-02-25

    The Golden search technique is a method to search a multiple-dimension space to find the minimum. It basically subdivides the possible ranges of parameters until it brackets, to within an arbitrarily small distance, the minimum. It has the advantages that (1) the function to be minimized can be non-linear, (2) it does not require derivatives of the function, (3) the convergence criterion does not depend on the magnitude of the function. Thus, if the function is a goodness-of-fit parameter such as chi-square, the convergence does not depend on the noise being correctly estimated or the function correctly following the chi-square statistic. And (4) the convergence criterion does not depend on the shape of the function. Thus, long shallow surfaces can be searched without the problem of premature convergence. As with many methods, the Golden search technique can be confused by surfaces with multiple minima.
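
    The record describes a four-dimensional implementation; the one-dimensional kernel that such a search generalizes is short enough to sketch. A minimal Python version with an illustrative quadratic objective:

        import math

        def golden_min(f, a, b, tol=1e-8):
            """Shrink the bracket [a, b] around the minimum of a unimodal f by golden sections."""
            invphi = (math.sqrt(5) - 1) / 2            # 1/phi, about 0.618
            c, d = b - invphi * (b - a), a + invphi * (b - a)
            while b - a > tol:
                if f(c) < f(d):                        # minimum lies in [a, d]
                    b, d = d, c
                    c = b - invphi * (b - a)
                else:                                  # minimum lies in [c, b]
                    a, c = c, d
                    d = a + invphi * (b - a)
            return (a + b) / 2

        print(golden_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0))   # about 2.0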

  2. Vacuum solutions of five dimensional Einstein equations generated by inverse scattering method. II. Production of the black ring solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomizawa, Shinya; Nozawa, Masato

    2006-06-15

    We study vacuum solutions of five-dimensional Einstein equations generated by the inverse scattering method. We reproduce the black ring solution which was found by Emparan and Reall by taking the Euclidean Levi-Civita metric plus one-dimensional flat space as a seed. This transformation consists of two successive processes; the first step is to perform the three-solitonic transformation of the Euclidean Levi-Civita metric with one-dimensional flat space as a seed. The resulting metric is the Euclidean C-metric with extra one-dimensional flat space. The second is to perform the two-solitonic transformation by taking it as a new seed. Our result may serve as a stepping stone to find new exact solutions in higher dimensions.

  3. Tackling higher derivative ghosts with the Euclidean path integral

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontanini, Michele; Department of Physics, Syracuse University, Syracuse, New York 13244; Trodden, Mark

    2011-05-15

    An alternative to the effective field theory approach to treat ghosts in higher derivative theories is to attempt to integrate them out via the Euclidean path integral formalism. It has been suggested that this method could provide a consistent framework within which we might tolerate the ghost degrees of freedom that plague, among other theories, the higher derivative gravity models that have been proposed to explain cosmic acceleration. We consider the extension of this idea to treating a class of terms with order six derivatives, and find that for a general term the Euclidean path integral approach works in the most trivial background, Minkowski. Moreover we see that even in de Sitter background, despite some difficulties, it is possible to define a probability distribution for tensorial perturbations of the metric.

  4. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications.

    PubMed

    Revathy, M; Saravanan, R

    2015-01-01

    Low-density parity-check (LDPC) codes have been implemented in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient low-density parity-check (LDPC) decoder architecture for low power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture, and can be incorporated between the check and variable node architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeting 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with different conventional architectures.

  5. Space-time topology and quantum gravity.

    NASA Astrophysics Data System (ADS)

    Friedman, J. L.

    Characteristic features are discussed of a theory of quantum gravity that allows space-time with a non-Euclidean topology. The review begins with a summary of the manifolds that can occur as classical vacuum space-times and as space-times with positive energy. Local structures with non-Euclidean topology - topological geons - collapse, and one may conjecture that in asymptotically flat space-times non-Euclidean topology is hidden from view. In the quantum theory, large diffeos can act nontrivially on the space of states, leading to state vectors that transform as representations of the corresponding symmetry group π0(Diff). In particular, in a quantum theory that, at energies E < E_Planck, is a theory of the metric alone, there appear to be ground states with half-integral spin and, in higher-dimensional gravity, states with the kinematical quantum numbers of fundamental fermions.

  6. Retrieving air humidity, global solar radiation, and reference evapotranspiration from daily temperatures: development and validation of new methods for Mexico. Part I: humidity

    NASA Astrophysics Data System (ADS)

    Lobit, P.; López Pérez, L.; Lhomme, J. P.; Gómez Tagle, A.

    2017-07-01

    This study evaluates the dew point method (Allen et al. 1998) to estimate atmospheric vapor pressure from minimum temperature, and proposes an improved model to estimate it from maximum and minimum temperature. Both methods were evaluated on 786 weather stations in Mexico. The dew point method induced positive bias in dry areas but also negative bias in coastal areas, and its average root mean square error for all evaluated stations was 0.38 kPa. The improved model assumed a bi-linear relation between estimated vapor pressure deficit (difference between saturated vapor pressure at minimum and average temperature) and measured vapor pressure deficit. The parameters of these relations were estimated from historical annual median values of relative humidity. This model removed bias and allowed for a root mean square error of 0.31 kPa. When no historical measurements of relative humidity were available, empirical relations were proposed to estimate it from latitude and altitude, with only a slight degradation on the model accuracy (RMSE = 0.33 kPa, bias = -0.07 kPa). The applicability of the method to other environments is discussed.

  7. Descriptive Statistics and Cluster Analysis for Extreme Rainfall in Java Island

    NASA Astrophysics Data System (ADS)

    E Komalasari, K.; Pawitan, H.; Faqih, A.

    2017-03-01

    This study aims to describe the regional pattern of extreme rainfall based on maximum daily rainfall for the period 1983 to 2012 in Java Island. Descriptive statistical analysis was performed to obtain the centralization, variation and distribution of the maximum precipitation data. The mean and median are used to measure the central tendency of the data, while the interquartile range (IQR) and standard deviation measure its variation. In addition, skewness and kurtosis are used to characterize the shape of the rainfall distribution. Cluster analysis using squared Euclidean distance and Ward's method is applied to perform regional grouping. The results show that the mean maximum daily rainfall in Java during 1983-2012 is around 80-181 mm, with medians between 75-160 mm and standard deviations between 17 and 82. The cluster analysis produces four clusters and shows that the western area of Java tends to have higher annual maxima of daily rainfall than the northern area, and more variable annual maximum values.
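
    A minimal SciPy sketch of the grouping step described above, hierarchical clustering with Ward's method, whose merge criterion is the increase in the within-cluster sum of squared Euclidean distances; the station-by-statistics matrix is a random placeholder:

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(0)
        X = rng.normal(size=(30, 4))      # placeholder: one row per station, columns = rainfall statistics

        Z = linkage(X, method='ward')     # Ward linkage minimizes the increase in within-cluster sum of squares
        labels = fcluster(Z, t=4, criterion='maxclust')   # cut the dendrogram into four clusters, as in the study
        print(labels)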

  8. Study of degenerate four-quark states with SU(2) lattice Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Green, A. M.; Lukkarinen, J.; Pennanen, P.; Michael, C.

    1996-01-01

    The energies of four-quark states are calculated for geometries in which the quarks are situated on the corners of a series of tetrahedra and also for geometries that correspond to gradually distorting these tetrahedra into a plane. The interest in tetrahedra arises because they are composed of three degenerate partitions of the four quarks into two two-quark color singlets. This is an extension of earlier work showing that geometries with two degenerate partitions (e.g., squares) experience a large binding energy. It is now found that even larger binding energies do not result, but that for the tetrahedra the ground and first excited states become degenerate in energy. The calculation is carried out using SU(2) for static quarks in the quenched approximation with β=2.4 on a 16^3 × 32 lattice. The results are analyzed using the correlation matrix between different Euclidean times and the implications of these results are discussed for a model based on two-quark potentials.

  9. Källén-Lehmann spectroscopy for (un)physical degrees of freedom

    NASA Astrophysics Data System (ADS)

    Dudal, David; Oliveira, Orlando; Silva, Paulo J.

    2014-01-01

    We consider the problem of "measuring" the Källén-Lehmann spectral density of a particle (be it elementary or bound state) propagator by means of 4D lattice data. As the latter are obtained from operations at (Euclidean momentum squared) p^2 ≥ 0, we are facing the generically ill-posed problem of converting a limited data set over the positive real axis to an integral representation, extending over the whole complex p^2 plane. We employ a linear regularization strategy, commonly known as the Tikhonov method with the Morozov discrepancy principle, with suitable adaptations to realistic data, e.g. with an unknown threshold. An important virtue over the (standard) maximum entropy method is the possibility to also probe unphysical spectral densities, for example, of a confined gluon. We apply our proposal here to "physical" mock spectral data as a litmus test and then to the lattice SU(3) Landau gauge gluon at zero temperature.
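
    A minimal numpy sketch of the regularization step, Tikhonov (ridge) inversion with Morozov's discrepancy principle, on a generic linear model rather than actual lattice data; the noise level delta is assumed known here, which in practice it is only approximately:

        import numpy as np

        def tikhonov(A, b, lam):
            """Regularized solution of A x ~ b: minimize ||A x - b||^2 + lam * ||x||^2."""
            return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

        def morozov(A, b, delta, lams):
            """Pick the largest lambda whose residual still matches the noise level delta."""
            for lam in sorted(lams, reverse=True):
                x = tikhonov(A, b, lam)
                if np.linalg.norm(A @ x - b) <= delta:
                    return lam, x
            lam = float(np.min(lams))
            return lam, tikhonov(A, b, lam)

        rng = np.random.default_rng(2)
        A = rng.normal(size=(50, 20))
        x_true = rng.normal(size=20)
        noise = 0.05 * rng.normal(size=50)
        b = A @ x_true + noise
        lam, x = morozov(A, b, delta=np.linalg.norm(noise), lams=np.logspace(-6, 2, 50))
        print(lam, np.linalg.norm(x - x_true))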

  10. A swarm-trained k-nearest prototypes adaptive classifier with automatic feature selection for interval data.

    PubMed

    Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C

    2016-08-01

    Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
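
    The paper's generalized distance is not reproduced in the abstract; the sketch below shows one common form of a weighted squared Euclidean distance between interval-valued vectors (each feature a [lower, upper] pair), with per-feature weights standing in for the ones the swarm would learn. This is an assumption for illustration, not the authors' exact definition.

        import numpy as np

        def interval_sq_dist(a, b, w):
            """Weighted squared Euclidean distance between interval vectors.

            a, b : arrays of shape (n_features, 2) holding [lower, upper] bounds.
            w    : per-feature weights.
            """
            lower = (a[:, 0] - b[:, 0]) ** 2
            upper = (a[:, 1] - b[:, 1]) ** 2
            return float(np.sum(w * (lower + upper)))

        a = np.array([[1.0, 2.0], [0.0, 0.5]])
        b = np.array([[1.5, 2.5], [0.2, 0.9]])
        print(interval_sq_dist(a, b, w=np.array([1.0, 0.5])))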

  11. CLUSTERING OF INTERICTAL SPIKES BY DYNAMIC TIME WARPING AND AFFINITY PROPAGATION

    PubMed Central

    Thomas, John; Jin, Jing; Dauwels, Justin; Cash, Sydney S.; Westover, M. Brandon

    2018-01-01

    Epilepsy is often associated with the presence of spikes in electroencephalograms (EEGs). The spike waveforms vary vastly among epilepsy patients, and also for the same patient across time. In order to develop semi-automated and automated methods for detecting spikes, it is crucial to obtain a better understanding of the various spike shapes. In this paper, we develop several approaches to extract exemplars of spikes. We generate spike exemplars by applying clustering algorithms to a database of spikes from 12 patients. As similarity measures for clustering, we consider the Euclidean distance and Dynamic Time Warping (DTW). We assess two clustering algorithms, namely, K-means clustering and affinity propagation. The clustering methods are compared based on the mean squared error, and the similarity measures are assessed based on the number of generated spike clusters. Affinity propagation with DTW is shown to be the best combination for clustering epileptic spikes, since it generates fewer spike templates and does not require to pre-specify the number of spike templates. PMID:29527130
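
    A minimal dynamic-programming sketch of the DTW measure used above, for two 1-D waveforms; the shifted sine waves are placeholders for spike templates:

        import numpy as np

        def dtw(x, y):
            """Dynamic time warping distance between two 1-D sequences."""
            n, m = len(x), len(y)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(x[i - 1] - y[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        t = np.linspace(0.0, 1.0, 50)
        print(dtw(np.sin(2 * np.pi * t), np.sin(2 * np.pi * (t - 0.1))))   # small despite the time shift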

  12. Approximation algorithm for the problem of partitioning a sequence into clusters

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Mikhailova, L. V.; Khamidullin, S. A.; Khandeev, V. I.

    2017-08-01

    We consider the problem of partitioning a finite sequence of Euclidean points into a given number of clusters (subsequences) using the criterion of the minimal sum (over all clusters) of intercluster sums of squared distances from the elements of the clusters to their centers. It is assumed that the center of one of the desired clusters is at the origin, while the center of each of the other clusters is unknown and determined as the mean value over all elements in this cluster. Additionally, the partition obeys two structural constraints on the indices of sequence elements contained in the clusters with unknown centers: (1) the concatenation of the indices of elements in these clusters is an increasing sequence, and (2) the difference between an index and the preceding one is bounded above and below by prescribed constants. It is shown that this problem is strongly NP-hard. A 2-approximation algorithm is constructed that is polynomial-time for a fixed number of clusters.
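
    In symbols, the criterion described above is

        \[
        \min \; \sum_{j=1}^{k} \sum_{i \in \mathcal{C}_j} \lVert x_i - c_j \rVert^2,
        \qquad
        c_1 = 0, \quad
        c_j = \frac{1}{|\mathcal{C}_j|} \sum_{i \in \mathcal{C}_j} x_i \;\; (j = 2, \dots, k),
        \]

    subject to the stated index constraints on the elements of the clusters with unknown centers.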

  13. Parton physics on a Euclidean lattice.

    PubMed

    Ji, Xiangdong

    2013-06-28

    I show that the parton physics related to correlations of quarks and gluons on the light cone can be studied through the matrix elements of frame-dependent, equal-time correlators in the large momentum limit. This observation allows practical calculations of parton properties on a Euclidean lattice. As an example, I demonstrate how to recover the leading-twist quark distribution by boosting an equal-time correlator to a large momentum.

  14. Investigations into Novel Multi-Band Antenna Designs

    DTIC Science & Technology

    2006-08-01

    endeavouring to modify the designs to incorporate dual polarisation, building the antennas, as well as experimental work that will use the manufactured... based on the Koch, Minkowski and Hilbert curves. The merit in this approach is that non-Euclidean designs (i.e. fractals) are compared with Euclidean... polarisation. A number of possible changes to the current design need to be explored towards achieving the above objectives. Some of the suggested...

  15. Slow diffusion by Markov random flights

    NASA Astrophysics Data System (ADS)

    Kolesnik, Alexander D.

    2018-06-01

    We present a conception of the slow diffusion processes in the Euclidean spaces Rm , m ≥ 1, based on the theory of random flights with small constant speed that are driven by a homogeneous Poisson process of small rate. The slow diffusion condition that, on long time intervals, leads to the stationary distributions, is given. The stationary distributions of slow diffusion processes in some Euclidean spaces of low dimensions, are presented.

  16. Quadratic String Method for Locating Instantons in Tunneling Splitting Calculations.

    PubMed

    Cvitaš, Marko T

    2018-03-13

    The ring-polymer instanton (RPI) method is an efficient technique for calculating approximate tunneling splittings in high-dimensional molecular systems. In the RPI method, the tunneling splitting is evaluated from the properties of the minimum action path (MAP) connecting the symmetric wells, whereby the extensive sampling of the full potential energy surface required by exact quantum-dynamics methods is avoided. Nevertheless, the search for the MAP is usually the most time-consuming step in the standard numerical procedures. Recently, nudged elastic band (NEB) and string methods, originally developed for locating minimum energy paths (MEPs), were adapted for the purpose of MAP finding with great efficiency gains [J. Chem. Theory Comput. 2016, 12, 787]. In this work, we develop a new quadratic string method for locating instantons. The Euclidean action is minimized by propagating the initial guess (a path connecting two wells) over the quadratic potential energy surface approximated by means of updated Hessians. This allows the algorithm to take many minimization steps between the potential/gradient calls, with further reductions in the computational effort, exploiting the smoothness of the potential energy surface. The approach is general, as it uses Cartesian coordinates, and widely applicable, with the computational effort of finding the instanton usually lower than that of determining the MEP. It can be combined with expensive potential energy surfaces or on-the-fly electronic-structure methods to explore a wide variety of molecular systems.

  17. New MYC IHC Classifier Integrating Quantitative Architecture Parameters to Predict MYC Gene Translocation in Diffuse Large B-Cell Lymphoma

    PubMed Central

    Dong, Wei-Feng; Canil, Sarah; Lai, Raymond; Morel, Didier; Swanson, Paul E.; Izevbaye, Iyare

    2018-01-01

    A new automated MYC IHC classifier based on bivariate logistic regression is presented. The predictor relies on image analysis developed with the open-source ImageJ platform. From a histologic section immunostained for MYC protein, 2 dimensionless quantitative variables are extracted: (a) the relative distance between nuclei positive for MYC IHC, based on the Euclidean minimum spanning tree graph, and (b) the coefficient of variation of the MYC IHC stain intensity among MYC IHC-positive nuclei. The distance between positive nuclei is suggested to correlate inversely with MYC gene rearrangement status, whereas the coefficient of variation is suggested to correlate inversely with physiological regulation of MYC protein expression. The bivariate classifier was compared with 2 other MYC IHC classifiers (based on the percentage of MYC IHC-positive nuclei), all tested on 113 lymphomas, mostly diffuse large B-cell lymphomas, with known MYC fluorescent in situ hybridization (FISH) status. The bivariate classifier strongly outperformed the "percentage of MYC IHC-positive nuclei" methods in predicting MYC+ FISH status, with 100% sensitivity (95% confidence interval, 94-100) at 80% specificity. The test is rapidly performed and might at a minimum provide primary IHC screening for MYC gene rearrangement status in diffuse large B-cell lymphomas. Furthermore, as this bivariate classifier actually predicts "permanently overexpressed MYC protein status," it might identify nontranslocation-related chromosomal anomalies missed by FISH. PMID:27093450
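
    A minimal SciPy sketch of the first variable's main ingredient: the Euclidean minimum spanning tree over MYC-positive nucleus centroids and a summary of its edge lengths. The centroid coordinates are random placeholders:

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree
        from scipy.spatial.distance import pdist, squareform

        rng = np.random.default_rng(0)
        pts = rng.uniform(0.0, 100.0, size=(40, 2))   # placeholder centroids of MYC IHC-positive nuclei

        mst = minimum_spanning_tree(squareform(pdist(pts)))   # Euclidean MST from the pairwise distances
        edges = mst.data                                      # the 39 edge lengths of the tree
        print(edges.mean())                                   # mean inter-nucleus distance along the MST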

  18. 40 CFR 65.84 - Operating requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-tight means that the pressure in a truck or railcar tank will not drop more than 750 pascals (0.11 pound per square inch) within 5 minutes after it is pressurized to a minimum of 4,500 pascals (0.65 pound...

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    This report proposes a method for resolving the kinematic redundancy of a serial link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations which relate a new vector variable quantifying the redundant degrees of freedom (DOF) to the joint velocities. The resulting augmented system yields a well specified solution for the joint velocities. Methods for selecting the redundant DOF quantifying variable and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum Euclidean norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which, according to the model, decouples the Cartesian space DOF and the redundant DOF.

  20. Improving the Cost Efficiency and Readiness of MC-130 Aircrew Training: A Case Study

    DTIC Science & Technology

    2015-01-01

    Jiang, Changbing, "A Reliable Solver of Euclidean Traveling Salesman Problems with Microsoft Excel Add-in Tools for Small-Size Systems." ... Figure 4.5: Training Resources Locations Traveling Salesperson Problem. In order to participate in training, aircrews must fly to the...

  1. Combinatorial construction of tilings by barycentric simplex orbits (D symbols) and their realizations in Euclidean and other homogeneous spaces.

    PubMed

    Molnár, Emil

    2005-11-01

    A new method, developed in previous works by the author (partly with co-authors), is presented which decides algorithmically, in principle by computer, whether a combinatorial space tiling (T, Γ) is realizable in the d-dimensional Euclidean space E^d (think of d = 2, 3, 4) or in other homogeneous spaces, e.g. in Thurston's 3-geometries: E^3, S^3, H^3, S^2 × R, H^2 × R, SL(2,R), Nil, Sol. Then our group Γ will be an isometry group of a projective metric 3-sphere PS^3(R, ⟨ , ⟩), acting discontinuously on its above tiling T. The method is illustrated by a plane example and by the well-known rhombohedron tiling (T, Γ), where Γ = R-3m is the Euclidean space group No. 166 in International Tables for Crystallography.

  2. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications

    PubMed Central

    Revathy, M.; Saravanan, R.

    2015-01-01

    Low-density parity-check (LDPC) codes have been implemented in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient low-density parity-check (LDPC) decoder architecture for low power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture, and can be incorporated between the check and variable node architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeting 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with different conventional architectures. PMID:26065017

  3. Noncommutative products of Euclidean spaces

    NASA Astrophysics Data System (ADS)

    Dubois-Violette, Michel; Landi, Giovanni

    2018-05-01

    We present natural families of coordinate algebras on noncommutative products of Euclidean spaces R^{N_1} ×_R R^{N_2}. These coordinate algebras are quadratic ones associated with an R-matrix which is involutive and satisfies the Yang-Baxter equations. As a consequence, they enjoy a list of nice properties, being regular of finite global dimension. Notably, we have eight-dimensional noncommutative Euclidean spaces R^4 ×_R R^4. Among these, particularly well-behaved ones have deformation parameter u ∈ S^2. Quotients include seven-spheres S^7_u as well as noncommutative quaternionic tori T^H_u = S^3 ×_u S^3. There is invariance for an action of SU(2) × SU(2) on the torus T^H_u, in parallel with the action of U(1) × U(1) on a 'complex' noncommutative torus T^2_θ, which allows one to construct quaternionic toric noncommutative manifolds. Additional classes of solutions are disjoint from the classical case.

  4. Fabrication process of superconducting integrated circuits with submicron Nb/AlOx/Nb junctions using electron-beam direct writing technique

    NASA Astrophysics Data System (ADS)

    Aoyagi, Masahiro; Nakagawa, Hiroshi

    1997-07-01

    For enhancing the operating speed of a superconducting integrated circuit (IC), the device size must be reduced to the submicron level. For this purpose, we have introduced the electron beam (EB) direct writing technique into the fabrication process of a Nb/AlOx/Nb Josephson IC. A two-layer (PMMA/α-M-CMS) resist method called the portable conformable mask (PCM) method was utilized to obtain a high aspect ratio. The electron cyclotron resonance (ECR) plasma etching technique was utilized. We have fabricated micron- or submicron-size Nb/AlOx/Nb Josephson junctions, where the size of the junction was varied from 2 μm to 0.5 μm at 0.1 μm intervals. These junctions were designed for evaluating the spread of the junction critical current. We achieved a minimum-to-maximum Ic spread of ±13% for 0.81-μm-square (±16% for 0.67-μm-square) 100 junctions spread over a 130-μm-square area. A size deviation of 0.05 μm was estimated from the spread values. We have successfully demonstrated a small-scale logic IC with 0.9-μm-square junctions having a 50 4JL OR-gate chain, where 4JL means four-junction logic family. The circuit was designed for measuring the gate delay. We obtained a preliminary result for the OR-gate logic delay, where the minimum delay was 8.6 ps/gate.

  5. The generalized Weierstrass system inducing surfaces of constant and nonconstant mean curvature in Euclidean three space

    NASA Astrophysics Data System (ADS)

    Bracken, Paul

    2007-05-01

    The generalized Weierstrass (GW) system is introduced and its correspondence with the associated two-dimensional nonlinear sigma model is reviewed. The method of symmetry reduction is systematically applied to derive several classes of invariant solutions for the GW system. The solutions can be used to induce constant mean curvature surfaces in Euclidean three space. Some properties of the system for the case of nonconstant mean curvature are introduced as well.

  6. One-dimensional Euclidean matching problem: exact solutions, correlation functions, and universality.

    PubMed

    Caracciolo, Sergio; Sicuro, Gabriele

    2014-10-01

    We discuss the equivalence relation between the Euclidean bipartite matching problem on the line and on the circumference and the Brownian bridge process on the same domains. The equivalence allows us to compute the correlation function and the optimal cost of the original combinatorial problem in the thermodynamic limit; moreover, we also solve the minimax problem on the line and on the circumference. The properties of the average cost and correlation functions are discussed.
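
    On the line, the optimal bipartite matching under a convex cost such as squared distance is order-preserving, so it reduces to sorting both point sets; a minimal sketch:

        import numpy as np

        rng = np.random.default_rng(0)
        red, blue = rng.uniform(size=100), rng.uniform(size=100)

        # The optimal matching pairs the i-th smallest red point with the
        # i-th smallest blue point (true for convex costs such as squared distance).
        cost = np.sum((np.sort(red) - np.sort(blue)) ** 2)
        print(cost)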

  7. Evaluation of Image Segmentation and Object Recognition Algorithms for Image Parsing

    DTIC Science & Technology

    2013-09-01

    generation of the features from the key points. OpenCV uses Euclidean distance to match the key points and has the option to use Manhattan distance... feature vector includes polarity and intensity information. The final step is matching the key points. In OpenCV, Euclidean distance or Manhattan... the code below is one way, and OpenCV offers the function radiusMatch (a pair must have a distance less than a given maximum distance). OpenCV's...

  8. Spectral asymptotics of Euclidean quantum gravity with diff-invariant boundary conditions

    NASA Astrophysics Data System (ADS)

    Esposito, Giampiero; Fucci, Guglielmo; Kamenshchik, Alexander Yu; Kirsten, Klaus

    2005-03-01

    A general method is known to exist for studying Abelian and non-Abelian gauge theories, as well as Euclidean quantum gravity, at 1-loop level on manifolds with boundary. In the latter case, boundary conditions on metric perturbations h can be chosen to be completely invariant under infinitesimal diffeomorphisms, to preserve the invariance group of the theory and BRST symmetry. In the de Donder gauge, however, the resulting boundary-value problem for the Laplace-type operator acting on h is known to be self-adjoint but not strongly elliptic. The latter is a technical condition ensuring that a unique smooth solution of the boundary-value problem exists, which implies, in turn, that the global heat-kernel asymptotics yielding 1-loop divergences and 1-loop effective action actually exists. The present paper shows that, on the Euclidean 4-ball, only the scalar part of perturbative modes for quantum gravity is affected by the lack of strong ellipticity. Further evidence for lack of strong ellipticity, from an analytic point of view, is therefore obtained. Interestingly, three sectors of the scalar-perturbation problem remain elliptic, while lack of strong ellipticity is 'confined' to the remaining fourth sector. The integral representation of the resulting ζ-function asymptotics on the Euclidean 4-ball is also obtained; this remains regular at the origin by virtue of a spectral identity here obtained for the first time.

  9. Square cants from round bolts without slabs or sawdust

    Treesearch

    Peter Koch

    1960-01-01

    For maximum efficiency a headrig for converting bark-free bolts into cants must (1) have a fast cycle time, (2) require minimum handling of bolts and refuse, and (3) convert the volume represented by slabs and kerf into a salable byproduct.

  10. Combinatorics of least-squares trees.

    PubMed

    Mihaescu, Radu; Pachter, Lior

    2008-09-09

    A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.

  11. Spectroscopic Determination of Aboveground Biomass in Grasslands Using Spectral Transformations, Support Vector Machine and Partial Least Squares Regression

    PubMed Central

    Marabel, Miguel; Alvarez-Taboada, Flor

    2013-01-01

    Aboveground biomass (AGB) is one of the strategic biophysical variables of interest in vegetation studies. The main objective of this study was to evaluate the Support Vector Machine (SVM) and Partial Least Squares Regression (PLSR) for estimating the AGB of grasslands from field spectrometer data and to find out which data pre-processing approach was the most suitable. The most accurate model to predict the total AGB involved PLSR and the Maximum Band Depth index derived from the continuum-removed reflectance in the absorption features between 916–1,120 nm and 1,079–1,297 nm (R2 = 0.939, RMSE = 7.120 g/m2). Regarding the green fraction of the AGB, the Area Over the Minimum index derived from the continuum-removed spectra provided the most accurate model overall (R2 = 0.939, RMSE = 3.172 g/m2). Identifying the appropriate absorption features proved crucial for improving the performance of PLSR in estimating the total and green aboveground biomass, by using indices derived from those spectral regions. Ordinary least squares regression could be used as a surrogate for the PLSR approach with the Area Over the Minimum index as the independent variable, although the resulting model would not be as accurate. PMID:23925082
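
    A minimal scikit-learn sketch of the PLSR step, on synthetic spectra rather than field-spectrometer data; the number of latent components is an illustrative choice that one would normally fix by cross-validation:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 200))             # placeholder spectra: 60 plots x 200 bands
        y = 2.0 * X[:, 50] + rng.normal(size=60)   # synthetic biomass driven by one absorption band

        pls = PLSRegression(n_components=5)        # latent variables compress the collinear bands
        pls.fit(X, y)
        rmse = float(np.sqrt(np.mean((pls.predict(X).ravel() - y) ** 2)))
        print(rmse)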

  12. Simultaneous determination of vitamin B12 and its derivatives using some of multivariate calibration 1 (MVC1) techniques

    NASA Astrophysics Data System (ADS)

    Samadi-Maybodi, Abdolraouf; Darzi, S. K. Hassani Nejad

    2008-10-01

    Resolution of binary mixtures of vitamin B12, methylcobalamin and B12 coenzyme with minimum sample pre-treatment and without analyte separation has been successfully achieved by partial least squares with one dependent variable (PLS1), orthogonal signal correction/partial least squares (OSC/PLS), principal component regression (PCR) and hybrid linear analysis (HLA). The analysis data were obtained from UV-vis spectra. The UV-vis spectra of vitamin B12, methylcobalamin and B12 coenzyme were recorded under the same spectral conditions. A central composite design was used in the ranges of 10-80 mg L^-1 for vitamin B12 and methylcobalamin and 20-130 mg L^-1 for B12 coenzyme. Model refinement and validation were performed by cross-validation. The minimum root mean square error of prediction (RMSEP) was 2.26 mg L^-1 for vitamin B12 with PLS1, 1.33 mg L^-1 for methylcobalamin with OSC/PLS and 3.24 mg L^-1 for B12 coenzyme with HLA. Figures of merit such as selectivity, sensitivity, analytical sensitivity and LOD were determined for the three compounds. The procedure was successfully applied to the simultaneous determination of the three compounds in synthetic mixtures and in a pharmaceutical formulation.

  13. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
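
    For the gamma prior, the conjugate update behind such estimators is standard. Assuming a shape-rate parameterization Gamma(alpha, beta) and observations x_1, ..., x_n:

        \[
        \lambda \mid x_{1:n} \sim \mathrm{Gamma}\Big(\alpha + \sum_{i=1}^{n} x_i,\; \beta + n\Big),
        \qquad
        \hat{\lambda}_{\mathrm{Bayes}} = \mathbb{E}[\lambda \mid x_{1:n}] = \frac{\alpha + \sum_{i} x_i}{\beta + n},
        \]

    the posterior mean being the Bayes estimator under squared-error loss.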

  14. Automated interpretation of 3D laserscanned point clouds for plant organ segmentation.

    PubMed

    Wahabzada, Mirwaes; Paulus, Stefan; Kersting, Kristian; Mahlein, Anne-Katrin

    2015-08-08

    Plant organ segmentation from 3D point clouds is a relevant task for plant phenotyping and plant growth observation. Automated solutions are required to increase the efficiency of recent high-throughput plant phenotyping pipelines. However, plant geometrical properties vary with time, among observation scales and different plant types. The main objective of the present research is to develop a fully automated, fast and reliable data-driven approach for plant organ segmentation. The automated segmentation of plant organs using unsupervised clustering methods is crucial in cases where the goal is to get fast insights into the data, or where labeled data are unavailable or costly to obtain. For this we propose and compare data-driven approaches that are easy to realize and make the use of standard algorithms possible. Since normalized histograms, acquired from 3D point clouds, can be seen as samples from a probability simplex, we propose to map the data from the simplex space into Euclidean space using Aitchison's log-ratio transformation, or into the positive quadrant of the unit sphere using the square root transformation. This, in turn, paves the way to a wide range of commonly used analysis techniques that are based on measuring the similarities between data points using Euclidean distance. We investigate the performance of the resulting approaches in the practical context of grouping 3D point clouds and demonstrate empirically that they lead to clustering results with high accuracy for monocotyledonous and dicotyledonous plant species with diverse shoot architecture. An automated segmentation of 3D point clouds is demonstrated in the present work. Within seconds, first insights into plant data can be derived - even from non-labelled data. This approach is applicable to different plant species with high accuracy. The analysis cascade can be implemented in future high-throughput phenotyping scenarios and will support the evaluation of the performance of different plant genotypes exposed to stress or in different environmental scenarios.
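
    A minimal sketch of the two mappings named above: the square-root transform onto the unit sphere and Aitchison's centered log-ratio into Euclidean space, after which ordinary Euclidean distances apply. The epsilon guard for empty histogram bins is an implementation assumption:

        import numpy as np

        def sqrt_map(p):
            """Map a normalized histogram onto the positive orthant of the unit sphere."""
            return np.sqrt(p)                  # since sum(p) == 1, the result has unit 2-norm

        def clr(p, eps=1e-12):
            """Aitchison's centered log-ratio: map the simplex into Euclidean space."""
            logp = np.log(p + eps)             # eps guards against empty bins
            return logp - logp.mean()

        p = np.array([0.5, 0.3, 0.2])
        q = np.array([0.4, 0.4, 0.2])
        # After either map, plain Euclidean distance is a sensible dissimilarity.
        print(np.linalg.norm(sqrt_map(p) - sqrt_map(q)), np.linalg.norm(clr(p) - clr(q)))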

  15. Roton Minimum as a Fingerprint of Magnon-Higgs Scattering in Ordered Quantum Antiferromagnets.

    PubMed

    Powalski, M; Uhrig, G S; Schmidt, K P

    2015-11-13

    A quantitative description of magnons in long-range ordered quantum antiferromagnets is presented which is consistent from low to high energies. It is illustrated for the generic S=1/2 Heisenberg model on the square lattice. The approach is based on a continuous similarity transformation in momentum space using the scaling dimension as the truncation criterion. Evidence is found for significant magnon-magnon attraction inducing a Higgs resonance. The high-energy roton minimum in the magnon dispersion appears to be induced by strong magnon-Higgs scattering.

  16. Minimum Bayes risk image correlation

    NASA Technical Reports Server (NTRS)

    Minter, T. C., Jr.

    1980-01-01

    In this paper, the problem of designing a matched filter for image correlation will be treated as a statistical pattern recognition problem. It is shown that, by minimizing a suitable criterion, a matched filter can be estimated which approximates the optimum Bayes discriminant function in a least-squares sense. It is well known that the use of the Bayes discriminant function in target classification minimizes the Bayes risk, which in turn directly minimizes the probability of a false fix. A fast Fourier implementation of the minimum Bayes risk correlation procedure is described.
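
    The fast Fourier implementation mentioned above amounts to evaluating the correlation surface in the frequency domain; a hedged Python sketch of the circular-correlation form, with illustrative arrays, follows.

      import numpy as np

      def fft_correlate(image, template):
          # Circular cross-correlation via the FFT; the argmax of the
          # correlation surface locates the best match of the filter.
          F = np.fft.fft2(image)
          H = np.fft.fft2(template, s=image.shape)
          surface = np.fft.ifft2(F * np.conj(H)).real
          return np.unravel_index(np.argmax(surface), surface.shape)

      image = np.random.rand(128, 128)
      template = image[40:56, 60:76]            # toy target patch
      print(fft_correlate(image, template))     # typically ~ (40, 60)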

  17. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean-square-error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal's statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical model. As bandwidth estimation (BE) is a key issue in KDE, we also propose a BE method based on the kernel trick. The experimental results demonstrate that the proposed scheme can achieve better rate-distortion performance and better visual rendering quality.

  18. CMEs, the Tail of the Solar Wind Magnetic Field Distribution, and 11-yr Cosmic Ray Modulation at 1 AU. Revised

    NASA Technical Reports Server (NTRS)

    Cliver, E. W.; Ling, A. G.; Richardson, I. G.

    2003-01-01

    Using a recent classification of the solar wind at 1 AU into its principal components (slow solar wind, high-speed streams, and coronal mass ejections (CMEs)) for 1972-2000, we show that the monthly-averaged galactic cosmic ray intensity is anti-correlated with the percentage of time that the Earth is embedded in CME flows. We suggest that this correlation results primarily from a CME-related change in the tail of the distribution function of hourly-averaged values of the solar wind magnetic field (B) between solar minimum and solar maximum. The number of high-B (≥10 nT) values increases by a factor of approximately 3 from minimum to maximum (from 5% of all hours to 17%), with about two-thirds of this increase due to CMEs. On an hour-to-hour basis, average changes of cosmic ray intensity at Earth become negative for solar wind magnetic field values ≥10 nT.

  19. On the measure of conformal difference between Euclidean and Lobachevsky spaces

    NASA Astrophysics Data System (ADS)

    Zorich, Vladimir A.

    2011-12-01

    Euclidean space R^n and Lobachevsky space H^n are known to be not equivalent, either conformally or quasiconformally. In this work we give exact asymptotics of the critical order of growth at infinity for the quasiconformality coefficient of a diffeomorphism f: R^n → H^n for which such a mapping f is possible. We also consider the general case of immersions f: M^n → N^n of conformally parabolic Riemannian manifolds. Bibliography: 17 titles.

  20. Euclidean scalar field theory in the bilocal approximation

    NASA Astrophysics Data System (ADS)

    Nagy, S.; Polonyi, J.; Steib, I.

    2018-04-01

    The blocking step of the renormalization group method is usually carried out by restricting it to fluctuations and to a local blocked action. In this work, the tree-level, bilocal saddle point contribution to the blocking, defined by the infinitesimal decrease of the sharp cutoff in momentum space, is followed within the three-dimensional Euclidean ϕ⁶ model. The phase structure is changed, new phases and relevant operators are found, and certain universality classes are restricted by the bilocal saddle point.

  1. Ultrametric properties of the attractor spaces for random iterated linear function systems

    NASA Astrophysics Data System (ADS)

    Buchovets, A. G.; Moskalev, P. V.

    2018-03-01

    We investigate attractors of random iterated linear function systems as independent spaces embedded in the ordinary Euclidean space. Introducing, on the set of attractor points, a metric that satisfies the strengthened triangle inequality makes this space ultrametric. The disconnectedness and hierarchical self-similarity inherent in ultrametric spaces then make it possible to define the attractor as a fractal. We note that a rigorous proof of these properties in the case of an ordinary Euclidean space is very difficult.

  2. Optimal impulsive time-fixed orbital rendezvous and interception with path constraints

    NASA Technical Reports Server (NTRS)

    Taur, D.-R.; Prussing, J. E.; Coverstone-Carroll, V.

    1990-01-01

    Minimum-fuel, impulsive, time-fixed solutions are obtained for the problem of orbital rendezvous and interception with interior path constraints. Transfers between coplanar circular orbits in an inverse-square gravitational field are considered, subject to a circular path constraint representing a minimum or maximum permissible orbital radius. Primer vector theory is extended to incorporate path constraints. The optimal number of impulses, their times and positions, and the presence of initial or final coasting arcs are determined. The existence of constraint boundary arcs and boundary points is investigated as well as the optimality of a class of singular arc solutions. To illustrate the complexities introduced by path constraints, an analysis is made of optimal rendezvous in field-free space subject to a minimum radius constraint.

  3. Gene selection for the reconstruction of stem cell differentiation trees: a linear programming approach.

    PubMed

    Ghadie, Mohamed A; Japkowicz, Nathalie; Perkins, Theodore J

    2015-08-15

    Stem cell differentiation is largely guided by master transcriptional regulators, but it also depends on the expression of other types of genes, such as cell cycle genes, signaling genes, metabolic genes, trafficking genes, etc. Traditional approaches to understanding gene expression patterns across multiple conditions, such as principal components analysis or K-means clustering, can group cell types based on gene expression, but they do so without knowledge of the differentiation hierarchy. Hierarchical clustering can organize cell types into a tree, but in general this tree is different from the differentiation hierarchy itself. Given the differentiation hierarchy and gene expression data at each node, we construct a weighted Euclidean distance metric such that the minimum spanning tree with respect to that metric is precisely the given differentiation hierarchy. We provide a set of linear constraints that are provably sufficient for the desired construction and a linear programming approach to identify sparse sets of weights, effectively identifying genes that are most relevant for discriminating different parts of the tree. We apply our method to microarray gene expression data describing 38 cell types in the hematopoiesis hierarchy, constructing a weighted Euclidean metric that uses just 175 genes. However, we find that there are many alternative sets of weights that satisfy the linear constraints. Thus, in the style of random-forest training, we also construct metrics based on random subsets of the genes and compare them to the metric of 175 genes. We then report on the selected genes and their biological functions. Our approach offers a new way to identify genes that may have important roles in stem cell differentiation.
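
    A small Python sketch of the core construction (a weighted Euclidean metric whose minimum spanning tree can then be compared with a given hierarchy); the data, weights and SciPy calls are illustrative assumptions, not the paper's linear-programming procedure.

      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree
      from scipy.spatial.distance import pdist, squareform

      X = np.random.rand(38, 175)      # cell types x selected genes (stand-in)
      w = np.random.rand(175)          # non-negative gene weights

      # Weighted Euclidean metric: d(a, b) = sqrt(sum_i w_i (a_i - b_i)^2)
      D = squareform(pdist(X, metric="euclidean", w=w))

      # MST of the complete weighted graph; the LP in the paper chooses w
      # so that this tree reproduces the known differentiation hierarchy.
      mst = minimum_spanning_tree(D)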

  4. Polyhedra and packings from hyperbolic honeycombs.

    PubMed

    Pedersen, Martin Cramer; Hyde, Stephen T

    2018-06-20

    We derive more than 80 embeddings of 2D hyperbolic honeycombs in Euclidean 3-space, forming 3-periodic infinite polyhedra with cubic symmetry. All embeddings are "minimally frustrated," formed by removing just enough isometries of the (regular, but unphysical) 2D hyperbolic honeycombs [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] to allow embeddings in Euclidean 3-space. Nearly all of these triangulated "simplicial polyhedra" have symmetrically identical vertices, and most are chiral. The most symmetric examples include 10 infinite "deltahedra," with equilateral triangular faces, 6 of which were previously unknown and some of which can be described as packings of Platonic deltahedra. We describe also related cubic crystalline packings of equal hyperbolic discs in 3-space that are frustrated analogues of optimally dense hyperbolic disc packings. The 10-coordinated packings are the least "loosened" Euclidean embeddings, although frustration swells all of the hyperbolic disc packings to give less dense arrays than the flat penny-packing, even though their unfrustrated analogues in [Formula: see text] are denser.

  5. Generalising Ward's Method for Use with Manhattan Distances.

    PubMed

    Strauss, Trudie; von Maltitz, Michael Johan

    2017-01-01

    The claim that Ward's linkage algorithm in hierarchical clustering is limited to use with Euclidean distances is investigated. In this paper, Ward's clustering algorithm is generalised for use with the l1 norm, or Manhattan, distance. We argue that the generalisation of Ward's linkage method to incorporate Manhattan distances is theoretically sound and provide an example where this method outperforms the method using Euclidean distances. As an application, we perform statistical analyses on languages using methods normally applied to biology and genetic classification. We aim to quantify differences in character traits between languages and use a statistical language signature based on relative bi-gram (sequence of two letters) frequencies to calculate a distance matrix between 32 Indo-European languages. We then use Ward's method of hierarchical clustering to classify the languages, using the Euclidean distance and the Manhattan distance. Results obtained using the different distance metrics are compared to show that the characteristic of Ward's algorithm of minimising intra-cluster variation and maximising inter-cluster variation is not violated when using the Manhattan metric.
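
    A hedged Python sketch of the comparison: SciPy's Ward update is derived for Euclidean input, so feeding it Manhattan distances, as below, follows the generalisation argued in the paper rather than SciPy's documented use; the data and cluster count are illustrative.

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from scipy.spatial.distance import pdist

      X = np.random.rand(32, 676)            # languages x bi-gram frequencies (stand-in)
      d_l1 = pdist(X, metric="cityblock")    # Manhattan (l1) distances

      # Ward linkage applied to the condensed l1 distance matrix
      Z = linkage(d_l1, method="ward")
      groups = fcluster(Z, t=5, criterion="maxclust")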

  6. Shape classification of malignant lymphomas and leukemia by morphological watersheds and ARMA modeling

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet; Song, Yinglei; Ma, Limin; Zhou, Min

    2003-05-01

    A new algorithm that can be used to automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes the morphological watershed to extract boundaries of cells from their grey-level images. It generates a sequence of Euclidean distances by selecting pixels in the clockwise direction on the boundary of the cell and calculating the Euclidean distances of the selected pixels from the centroid of the cell. A feature vector associated with each cell is then obtained by applying the auto-regressive moving-average (ARMA) model to the generated sequence of Euclidean distances. The clustering measure J3 = trace(Sw⁻¹Sm), involving the within-class (Sw) and mixed (Sm) class-scattering matrices, is computed for both cell classes to provide an insight into the extent to which different cell classes in the training data are separated. Our test results suggest that the algorithm is highly accurate for the development of an interactive, computer-assisted diagnosis (CAD) tool.

  7. Gravitational decoupling and the Picard-Lefschetz approach

    NASA Astrophysics Data System (ADS)

    Brown, Jon; Cole, Alex; Shiu, Gary; Cottrell, William

    2018-01-01

    In this work, we consider tunneling between nonmetastable states in gravitational theories. Such processes arise in various contexts, e.g., in inflationary scenarios where the inflaton potential involves multiple fields or multiple branches. They are also relevant for bubble wall nucleation in some cosmological settings. However, we show that the transition amplitudes computed using the Euclidean method generally do not approach the corresponding field theory limit as Mp→∞ . This implies that in the Euclidean framework, there is no systematic expansion in powers of GN for such processes. Such considerations also carry over directly to no-boundary scenarios involving Hawking-Turok instantons. In this note, we illustrate this failure of decoupling in the Euclidean approach with a simple model of axion monodromy and then argue that the situation can be remedied with a Lorentzian prescription such as the Picard-Lefschetz theory. As a proof of concept, we illustrate with a simple model how tunneling transition amplitudes can be calculated using the Picard-Lefschetz approach.

  8. t-topology on the n-dimensional Minkowski space

    NASA Astrophysics Data System (ADS)

    Agrawal, Gunjan; Shrivastava, Sampada

    2009-05-01

    In this paper, a topological study of the n-dimensional Minkowski space, n >1, with t-topology, denoted by Mt, has been carried out. This topology, unlike the usual Euclidean one, is more physically appealing being defined by means of the Lorentzian metric. It shares many topological properties with similar candidate topologies and it has the advantage of being first countable. Compact sets of Mt and continuous maps into Mt are studied using the notion of Zeno sequences besides characterizing those sets that have the same subspace topologies induced from the Euclidean and t-topologies on n-dimensional Minkowski space. A necessary and sufficient condition for a compact set in the Euclidean n-space to be compact in Mt is obtained, thereby proving that the n-cube, n >1, as a subspace of Mt, is not compact, while a segment on a timelike line is compact in Mt. This study leads to the nonsimply connectedness of Mt, for n =2. Further, Minkowski space with s-topology has also been dealt with.

  9. Exact and heuristic algorithms for Space Information Flow.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng

    2018-01-01

    Space Information Flow (SIF) is a promising new research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute the optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example, besides the Pentagram network, where SIF is strictly better than the Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. The Delaunay triangulation technique helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes, to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidate relay nodes and the flow rates on the connection links. The heuristic algorithm design is also based on Delaunay triangulation and linear programming techniques. The exact algorithm can achieve the optimal SIF solution with an exponential computational complexity, while the heuristic algorithm can achieve a sub-optimal SIF solution with a polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
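
    The Delaunay step can be sketched in Python as below; using triangle centroids as candidate relay nodes is an illustrative simplification of the candidate-generation idea, and the min-cost multicast LP over terminals plus candidates is not shown.

      import numpy as np
      from scipy.spatial import Delaunay

      terminals = np.random.rand(5, 2)            # N >= 3 terminal nodes in 2-D space
      tri = Delaunay(terminals)

      # One simple choice of candidate relay nodes: triangle centroids.
      candidates = terminals[tri.simplices].mean(axis=1)
      nodes = np.vstack([terminals, candidates])  # input to the min-cost multicast LP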

  10. Hydropathic self-organized criticality: a magic wand for protein physics.

    PubMed

    Phillips, J C

    2012-10-01

    Self-organized criticality (SOC) is a popular concept that has been the subject of more than 3000 articles in the last 25 years. The characteristic signature of SOC is the appearance of self-similarity (power-law scaling) in observable properties. A characteristic observable protein property that describes protein-water interactions is the water-accessible (hydropathic) interfacial area of compacted globular protein networks. Here we show that hydropathic power-law (size- or length-scale-dependent) exponents derived from SOC enable theory to connect standard Web-based (BLAST) short-range amino acid (aa) sequence similarities to long-range aa sequence hydropathic roughening form factors that hierarchically describe evolutionary trends in water - membrane protein interactions. Our method utilizes hydropathic aa exponents that define a non-Euclidean metric realistically rooted in the atomic coordinates of 5526 protein segments. These hydropathic aa exponents thereby encapsulate universal (but previously only implicit) non-Euclidean long-range differential geometrical features of the Protein Data Bank. These hydropathic aa exponents easily organize small mutated aa sequence differences between human and proximate species proteins. For rhodopsin, the most studied transmembrane signaling protein associated with night vision, analysis shows that this approach separates Euclidean short- and non-Euclidean long-range aa sequence properties, and shows that they correlate with 96% success for humans, monkeys, cats, mice and rabbits. Proper application of SOC using hydropathic aa exponents promises unprecedented simplifications of exponentially complex protein sequence-structure-function problems, both conceptual and practical.

  11. Cancelable ECG biometrics using GLRT and performance improvement using guided filter with irreversible guide signal.

    PubMed

    Kim, Hanvit; Minh Phuong Nguyen; Se Young Chun

    2017-07-01

    Biometrics such as ECG provides a convenient and powerful security tool to verify or identify an individual. However, one important drawback of biometrics is that it is irrevocable. In other words, biometrics cannot be re-used practically once it is compromised. Cancelable biometrics has been investigated to overcome this drawback. In this paper, we propose a cancelable ECG biometrics by deriving a generalized likelihood ratio test (GLRT) detector from a composite hypothesis testing in randomly projected domain. Since it is common to observe performance degradation for cancelable biometrics, we also propose a guided filtering (GF) with irreversible guide signal that is a non-invertibly transformed signal of ECG authentication template. We evaluated our proposed method using ECG-ID database with 89 subjects. Conventional Euclidean detector with original ECG template yielded 93.9% PD1 (detection probability at 1% FAR) while Euclidean detector with 10% compressed ECG (1/10 of the original data size) yielded 90.8% PD1. Our proposed GLRT detector with 10% compressed ECG yielded 91.4%, which is better than Euclidean with the same compressed ECG. GF with our proposed irreversible ECG template further improved the performance of our GLRT with 10% compressed ECG up to 94.3%, which is higher than Euclidean detector with original ECG. Lastly, we showed that our proposed cancelable ECG biometrics practically met cancelable biometrics criteria such as efficiency, re-usability, diversity and non-invertibility.

  12. Combined trellis coding with asymmetric MPSK modulation: An MSAT-X report

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    1985-01-01

    Traditionally, symmetric multiple phase-shift-keyed (MPSK) signal constellations, i.e., those with uniformly spaced signal points around the circle, have been used for both uncoded and coded systems. Although symmetric MPSK signal constellations are optimum for systems with no coding, the same is not necessarily true for coded systems. It appears that by designing the signal constellations to be asymmetric, one can, in many instances, obtain a significant performance improvement over the traditional symmetric MPSK constellations combined with trellis coding. The joint design of rate n/(n + 1) trellis codes and asymmetric 2^(n+1)-point MPSK is considered, which has unity bandwidth expansion relative to uncoded 2^n-point symmetric MPSK. The asymptotic performance gains due to coding and asymmetry are evaluated in terms of the minimum free Euclidean distance d_free of the trellis. A comparison of the maximum value of this performance measure with the minimum distance d_min of the uncoded system indicates the maximum reduction in required E_b/N_0 that can be achieved for arbitrarily small system bit-error rates. It is to be emphasized that the introduction of asymmetry into the signal set does not affect the bandwidth or power requirements of the system; hence, the above-mentioned improvements in performance come at little or no cost. Asymmetric MPSK signal sets in coded systems appear in the work of Divsalar.
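
    For concreteness, the minimum squared Euclidean distance of a unit-energy MPSK set can be computed as in the Python sketch below; the 0.3 rad offset used for the asymmetric set is an arbitrary illustration, since the design value is chosen to maximize the coded free distance, not the uncoded minimum distance.

      import numpy as np
      from itertools import combinations

      def min_sq_euclid(phases):
          # Minimum squared Euclidean distance of a unit-energy PSK set
          pts = np.exp(1j * np.asarray(phases))
          return min(abs(a - b) ** 2 for a, b in combinations(pts, 2))

      sym8 = 2 * np.pi * np.arange(8) / 8       # symmetric 8-PSK
      asym8 = sym8 + np.tile([0.0, 0.3], 4)     # alternate points rotated

      print(min_sq_euclid(sym8))    # 4 sin^2(pi/8) ~ 0.586
      print(min_sq_euclid(asym8))   # smaller uncoded d_min^2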

  13. State Estimation for Linear Systems Driven Simultaneously by Wiener and Poisson Processes.

    DTIC Science & Technology

    1978-12-01

    The state estimation problem of linear stochastic systems driven simultaneously by Wiener and Poisson processes is considered, especially the case...where the incident intensities of the Poisson processes are low and the system is observed in an additive white Gaussian noise. The minimum mean squared

  14. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…

  15. 24 CFR 3280.109 - Room requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... DEVELOPMENT MANUFACTURED HOME CONSTRUCTION AND SAFETY STANDARDS Planning Considerations § 3280.109 Room requirements. (a) Every manufactured home shall have at least one living area with not less than 150 sq. ft. of gross floor area. (b) Rooms designed for sleeping purposes shall have a minimum gross square foot floor...

  16. Effects of Linking Methods on Detection of DIF.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    1992-01-01

    Effects of the following methods for linking metrics on detection of differential item functioning (DIF) were compared: (1) test characteristic curve method (TCC); (2) weighted mean and sigma method; and (3) minimum chi-square method. With large samples, results were essentially the same. With small samples, TCC was most accurate. (SLD)

  17. Three-dimensional cell to tissue development process

    NASA Technical Reports Server (NTRS)

    Goodwin, Thomas J. (Inventor); Parker, Clayton R. (Inventor)

    2008-01-01

    An improved three-dimensional cell to tissue development process using a specific time-varying electromagnetic force (pulsed square wave) with minimum fluid shear stress, freedom for three-dimensional spatial orientation of the suspended particles, and localization of particles with differing or similar sedimentation properties in a similar spatial region.

  18. 40 CFR 63.5400 - How do I measure the quantity of leather processed?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... area of each piece of processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the...

  19. 40 CFR 63.5400 - How do I measure the quantity of leather processed?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... area of each piece of processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the...

  20. 40 CFR 63.5400 - How do I measure the quantity of leather processed?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... area of each piece of processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the...

  1. 40 CFR 63.5400 - How do I measure the quantity of leather processed?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the manufacturer's specifications. For...

  2. 40 CFR 63.5400 - How do I measure the quantity of leather processed?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the manufacturer's specifications. For...

  3. Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric

    2016-01-01

    This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.
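
    The recursive least squares identification step can be sketched as follows in Python; this is a generic RLS update under illustrative names and dimensions, not the flight-control implementation described above.

      import numpy as np

      def rls_update(theta, P, phi, y, lam=1.0):
          # One RLS step: theta = parameter estimate, P = inverse
          # correlation matrix, phi = regressor, y = new measurement,
          # lam = forgetting factor (1.0 = ordinary least squares).
          k = P @ phi / (lam + phi @ P @ phi)
          theta = theta + k * (y - phi @ theta)
          P = (P - np.outer(k, phi @ P)) / lam
          return theta, P

      theta = np.zeros(4)
      P = 1e3 * np.eye(4)
      for _ in range(100):                      # streaming data (stand-in)
          phi = np.random.randn(4)
          y = phi @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.01 * np.random.randn()
          theta, P = rls_update(theta, P, phi, y)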

  4. Effect of nonideal square-law detection on static calibration in noise-injection radiometers

    NASA Technical Reports Server (NTRS)

    Hearn, C. P.

    1984-01-01

    The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.

  5. VizieR Online Data Catalog: delta Cep VEGA/CHARA observing log (Nardetto+, 2016)

    NASA Astrophysics Data System (ADS)

    Nardetto, N.; Merand, A.; Mourard, D.; Storm, J.; Gieren, W.; Fouque, P.; Gallenne, A.; Graczyk, D.; Kervella, P.; Neilson, H.; Pietrzynski, G.; Pilecki, B.; Breitfelder, J.; Berio, P.; Challouf, M.; Clausse, J.-M.; Ligi, R.; Mathias, P.; Meilland, A.; Perraut, K.; Poretti, E.; Rainer, M.; Spang, A.; Stee, P.; Tallon-Bosc, I.; Ten Brummelaar, T.

    2016-07-01

    The columns give, respectively, the date, the RJD, the hour angle (HA), the minimum and maximum wavelengths over which the squared visibility is calculated, the projected baseline length Bp and its orientation PA, the signal-to-noise ratio on the fringe peak; the last column provides the calibrated squared visibility V2 together with the statistic error on V2, and the systematic error on V2 (see text for details). The data are available on the Jean-Marie Mariotti Center OiDB service (Available at http://oidb.jmmc.fr). (1 data file).

  6. Automatic voice recognition using traditional and artificial neural network approaches

    NASA Technical Reports Server (NTRS)

    Botros, Nazeih M.

    1989-01-01

    The main objective of this research is to develop an algorithm for isolated-word recognition. This research is focused on digital signal analysis rather than linguistic analysis of speech. Feature extraction is carried out by applying a Linear Predictive Coding (LPC) algorithm of order 10. Continuous-word and speaker-independent recognition will be considered in a future study after this isolated-word research is accomplished. To examine the similarity between the reference and the training sets, two approaches are explored. The first implements traditional pattern recognition techniques, in which a dynamic time warping algorithm is applied to align the two sets and the probability of matching is calculated by measuring the Euclidean distance between them. The second implements a backpropagation artificial neural net model with three layers as the pattern classifier; the adaptation rule implemented in this network is the generalized least mean square (LMS) rule. The first approach has been accomplished. A vocabulary of 50 words was selected and tested. The accuracy of the algorithm was found to be around 85 percent. The second approach is in progress at the present time.
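
    A hedged Python sketch of dynamic time warping with a squared-Euclidean local cost, as used for template alignment above; the feature sequences are stand-ins for LPC feature vectors.

      import numpy as np

      def dtw_distance(a, b):
          # DTW alignment cost between two feature sequences of possibly
          # different lengths, with squared Euclidean local cost.
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = np.sum((a[i - 1] - b[j - 1]) ** 2)
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      ref = np.random.randn(40, 10)      # reference template (40 frames x 10 coeffs)
      test = np.random.randn(55, 10)     # utterance to match
      print(dtw_distance(ref, test))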

  7. Voters' Fickleness:. a Mathematical Model

    NASA Astrophysics Data System (ADS)

    Boccara, Nino

    This paper presents a spatial agent-based model in order to study the evolution of voters' choice during the campaign of a two-candidate election. Each agent, represented by a point inside a two-dimensional square, is under the influence of its neighboring agents, located at a Euclidean distance less than or equal to d, and under the equal influence of both candidates seeking to win its support. Moreover, each agent located at time t at a given point moves at the next timestep to a randomly selected neighboring location distributed normally around its position at time t. Besides their location in space, agents are characterized by their level of awareness, a real a ∈ [0, 1], and their opinion ω ∈ {-1, 0, +1}, where -1 and +1 represent the respective intentions to cast a ballot in favor of one of the two candidates while 0 indicates either disinterest or refusal to vote. The essential purpose of the paper is qualitative; its aim is to show that voters' fickleness is strongly correlated to the level of voters' awareness and the efficiency of candidates' propaganda.

  8. Polar decomposition for attitude determination from vector observations

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.

    1993-01-01

    This work treats the problem of weighted least squares fitting of a 3D Euclidean-coordinate transformation matrix to a set of unit vectors measured in the reference and transformed coordinates. A closed-form analytic solution to the problem is re-derived. The fact that the solution is the closest orthogonal matrix to some matrix defined on the measured vectors and their weights is clearly demonstrated. Several known algorithms for computing the analytic closed-form solution are considered. An algorithm is discussed which is based on the polar decomposition of matrices into the closest unitary matrix to the decomposed matrix and a Hermitian matrix. A somewhat longer improved algorithm is suggested too. A comparison of several algorithms is carried out using simulated data as well as real data from the Upper Atmosphere Research Satellite. The comparison is based on accuracy and time consumption. It is concluded that the algorithms based on polar decomposition yield a simple although somewhat less accurate solution. The precision of the latter algorithms increases with the number of measured vectors and with the accuracy of their measurement.
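
    The closest-orthogonal-matrix step can be sketched via the SVD in Python; the profile matrix B below is a stand-in built from weighted unit-vector pairs (all names illustrative), and the determinant correction needed for a proper rotation is noted in a comment.

      import numpy as np

      def closest_orthogonal(B):
          # Orthogonal factor of the polar decomposition B = U H; U is the
          # closest orthogonal matrix to B in the Frobenius norm.
          W, _, Vt = np.linalg.svd(B)
          return W @ Vt

      # Stand-in profile matrix: B = sum_i w_i r_i s_i^T from weighted
      # unit-vector pairs measured in the two coordinate frames.
      r = np.random.randn(5, 3); r /= np.linalg.norm(r, axis=1, keepdims=True)
      s = np.random.randn(5, 3); s /= np.linalg.norm(s, axis=1, keepdims=True)
      w = np.random.rand(5)
      B = sum(wi * np.outer(ri, si) for wi, ri, si in zip(w, r, s))

      U = closest_orthogonal(B)   # for a proper rotation (det = +1), flip the
                                  # sign of W's last column when det(U) < 0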

  9. A spatial model for a stream networks of Citarik River with the environmental variables: potential of hydrogen (PH) and temperature

    NASA Astrophysics Data System (ADS)

    Bachrudin, A.; Mohamed, N. B.; Supian, S.; Sukono; Hidayat, Y.

    2018-03-01

    Application of existing geostatistical theory to stream networks provides a number of interesting and challenging problems. Most statistical tools in traditional geostatistics, such as autocovariance functions, are based on a Euclidean distance, but this is not permissible for stream data, which involve a stream distance. To overcome this, an autocovariance model based on stream distance was developed using a convolution kernel (moving average construction) approach. Spatial models for stream networks are widely used to monitor environmental variables on river networks. In a case study of a river in the province of West Java, the objective of this paper is to analyze the predictive capability of ordinary kriging for two environmental variables, potential of hydrogen (pH) and temperature. Several empirical results show: (1) the best-fitting autocovariance function for both temperature and potential of hydrogen (pH) of the Citarik River is linear, which also yields the smallest root mean squared prediction error (RMSPE); (2) the spatial correlation values between locations on the upstream and downstream of the Citarik River exhibit a decreasing trend.

  10. Eigenvalues of the Wentzell-Laplace operator and of the fourth order Steklov problems

    NASA Astrophysics Data System (ADS)

    Xia, Changyu; Wang, Qiaoling

    2018-05-01

    We prove a sharp upper bound and a lower bound for the first nonzero eigenvalue of the Wentzell-Laplace operator on compact manifolds with boundary, and an isoperimetric inequality for the same eigenvalue in the case where the manifold is a bounded domain in a Euclidean space. We study some fourth-order Steklov problems and obtain an isoperimetric upper bound for their first eigenvalue. We also find all the eigenvalues and eigenfunctions for two kinds of fourth-order Steklov problems on a Euclidean ball.

  11. Bayesian extraction of the parton distribution amplitude from the Bethe-Salpeter wave function

    NASA Astrophysics Data System (ADS)

    Gao, Fei; Chang, Lei; Liu, Yu-xin

    2017-07-01

    We propose a new numerical method to compute the parton distribution amplitude (PDA) from the Euclidean Bethe-Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe-Salpeter wave function in Euclidean space, which is an ill-posed inversion problem, via the maximum entropy method (MEM). The Nakanishi weight function as well as the corresponding light-front parton distribution amplitude (PDA) can be well determined. We confirm prior work on PDA computations, which was based on different methods.

  12. Loop-quantum-gravity vertex amplitude.

    PubMed

    Engle, Jonathan; Pereira, Roberto; Rovelli, Carlo

    2007-10-19

    Spin foam models are hoped to provide the dynamics of loop-quantum gravity. However, the most popular of these, the Barrett-Crane model, does not have a good boundary state space, and there are indications that it fails to yield good low-energy n-point functions. We present an alternative dynamics that can be derived as a quantization of a Regge discretization of Euclidean general relativity, where second class constraints are imposed weakly. Its state space matches the SO(3) loop gravity one and it yields an SO(4)-covariant vertex amplitude for Euclidean loop gravity.

  13. Mass-Related Dynamical Barriers in Triatomic Reactions

    NASA Astrophysics Data System (ADS)

    Yanao, T.; Koon, W. S.; Marsden, J. E.

    2006-06-01

    A methodology is given to determine the effect of different mass distributions for triatomic reactions using the geometry of shape space. Atomic masses are incorporated into the non-Euclidean shape space metric after the separation of rotations. Using the equations of motion in this non-Euclidean shape space, an averaged field of velocity-dependent fictitious forces is determined. This force field, as opposed to the force arising from the potential, dominates branching ratios of isomerization dynamics of a triatomic molecule. This methodology may be useful for qualitative prediction of branching ratios in general triatomic reactions.

  14. Trading spaces: building three-dimensional nets from two-dimensional tilings

    PubMed Central

    Castle, Toen; Evans, Myfanwy E.; Hyde, Stephen T.; Ramsden, Stuart; Robins, Vanessa

    2012-01-01

    We construct some examples of finite and infinite crystalline three-dimensional nets derived from symmetric reticulations of homogeneous two-dimensional spaces: elliptic (S2), Euclidean (E2) and hyperbolic (H2) space. Those reticulations are edges and vertices of simple spherical, planar and hyperbolic tilings. We show that various projections of the simplest symmetric tilings of those spaces into three-dimensional Euclidean space lead to topologically and geometrically complex patterns, including multiple interwoven nets and tangled nets that are otherwise difficult to generate ab initio in three dimensions. PMID:24098839

  15. Linear regression based on Minimum Covariance Determinant (MCD) and TELBS methods on the productivity of phytoplankton

    NASA Astrophysics Data System (ADS)

    Gusriani, N.; Firdaniza

    2018-03-01

    The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be unfulfilled. If the least squares method is nevertheless applied to such data, it produces a model that cannot represent most of the data. Hence, a regression method robust against outliers is needed. This paper compares the Minimum Covariance Determinant (MCD) method and the TELBS method on secondary data on the productivity of phytoplankton, which contain outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method.

  16. Kinematics and subpopulations' structure definition of blue fox (Alopex lagopus) sperm motility using the ISAS® V1 CASA system.

    PubMed

    Soler, C; García, A; Contell, J; Segervall, J; Sancho, M

    2014-08-01

    Over recent years, technological advances have brought innovation in assisted reproduction to agriculture. Fox species are of great economic interest in some countries, but their semen characteristics have not been studied sufficiently. To advance knowledge of the function of fox spermatozoa, five samples were obtained by masturbation in the breeding season. Kinetic analysis was performed using the ISAS® v1 system. The usual kinematic parameters (VCL, VSL, VAP, LIN, STR, WOB, ALH and BCF) were considered. To standardize the analysis of samples, the minimum number of cells to analyse and the minimum number of fields to capture were defined. In a second step, the presence of subpopulations in blue fox semen was analysed. The minimum number of cells to test was 30, because the kinematic parameters remained constant across the analysis groups. The effectiveness of the ISAS® D4C20 counting chamber was also studied, showing that the first five squares presented equivalent results, while in squares six and seven all the kinematic parameters showed a reduction, but the concentration and motility percentage did not. The kinematic variables were grouped into two principal components (PC): a linear movement characterized PC1, while PC2 reflected an oscillatory movement. Three subpopulations were found, varying in structure among the different animals.

  17. Input relegation control for gross motion of a kinematically redundant manipulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    1992-10-01

    This report proposes a method for resolving the kinematic redundancy of a serial link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations which relate a new vector variable quantifying the redundant degrees of freedom (DOF) to the joint velocities. The resulting augmented system yields a well-specified solution for the joint velocities. Methods for selecting the redundant-DOF quantifying variable and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum Euclidean norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which, according to the model, decouples the Cartesian space DOF and the redundant DOF.
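
    In the unconstrained case, the minimum Euclidean norm joint-velocity solution coincides with the Moore-Penrose pseudoinverse solution, sketched below in Python; the report's augmented-system formulation is not reproduced here, and the dimensions are illustrative.

      import numpy as np

      def min_norm_joint_rates(J, xdot):
          # Minimum Euclidean-norm qdot satisfying J @ qdot = xdot for a
          # redundant arm (J has more columns than rows, full row rank).
          return np.linalg.pinv(J) @ xdot

      J = np.random.rand(6, 8)        # 6 task-space DOF, 8 joints (stand-in)
      xdot = np.random.rand(6)
      qdot = min_norm_joint_rates(J, xdot)
      print(np.allclose(J @ qdot, xdot))   # True (up to round-off)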

  18. The applicability of ordinary least squares to consistently short distances between taxa in phylogenetic tree construction and the normal distribution test consequences.

    PubMed

    Roux, C Z

    2009-05-01

    Short phylogenetic distances between taxa occur, for example, in studies on ribosomal RNA-genes with slow substitution rates. For consistently short distances, it is proved that in the completely singular limit of the covariance matrix ordinary least squares (OLS) estimates are minimum variance or best linear unbiased (BLU) estimates of phylogenetic tree branch lengths. Although OLS estimates are in this situation equal to generalized least squares (GLS) estimates, the GLS chi-square likelihood ratio test will be inapplicable as it is associated with zero degrees of freedom. Consequently, an OLS normal distribution test or an analogous bootstrap approach will provide optimal branch length tests of significance for consistently short phylogenetic distances. As the asymptotic covariances between branch lengths will be equal to zero, it follows that the product rule can be used in tree evaluation to calculate an approximate simultaneous confidence probability that all interior branches are positive.

  19. Interspecific utilisation of wax in comb building by honeybees

    NASA Astrophysics Data System (ADS)

    Hepburn, H. Randall; Radloff, Sarah E.; Duangphakdee, Orawan; Phaincharoen, Mananya

    2009-06-01

    Beeswaxes of honeybee species share some homologous neutral lipids; but species-specific differences remain. We analysed behavioural variation for wax choice in honeybees, calculated the Euclidean distances for different beeswaxes and assessed the relationship of Euclidean distances to wax choice. We tested the beeswaxes of Apis mellifera capensis, Apis florea, Apis cerana and Apis dorsata and the plant and mineral waxes Japan, candelilla, bayberry and ozokerite as sheets placed in colonies of A. m. capensis, A. florea and A. cerana. A. m. capensis accepted the four beeswaxes but removed Japan and bayberry wax and ignored candelilla and ozokerite. A. cerana colonies accepted the wax of A. cerana, A. florea and A. dorsata but rejected or ignored that of A. m. capensis, the plant and mineral waxes. A. florea colonies accepted A. cerana, A. dorsata and A. florea wax but rejected that of A. m. capensis. The Euclidean distances for the beeswaxes are consistent with currently prevailing phylogenies for Apis. Despite post-speciation chemical differences in the beeswaxes, they remain largely acceptable interspecifically while the plant and mineral waxes are not chemically close enough to beeswax for their acceptance.

  20. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.

    PubMed

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L2-metric. However, in many applications, features and data often originated from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold that is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.

  1. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning

    PubMed Central

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L2-metric. However, in many applications, features and data often originated from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold that is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis. PMID:24129583

  2. Texture classification using non-Euclidean Minkowski dilation

    NASA Astrophysics Data System (ADS)

    Florindo, Joao B.; Bruno, Odemir M.

    2018-03-01

    This study presents a new method to extract meaningful descriptors of gray-scale texture images using Minkowski morphological dilation based on the Lp metric. The proposed approach is motivated by the success previously achieved by Bouligand-Minkowski fractal descriptors on texture classification. In essence, such descriptors are directly derived from the morphological dilation of a three-dimensional representation of the gray-level pixels using the classical Euclidean metric. In this way, we generalize the dilation for different values of p in the Lp metric (Euclidean is a particular case when p = 2) and obtain the descriptors from the cumulated distribution of the distance transform computed over the texture image. The proposed method is compared to other state-of-the-art approaches (such as local binary patterns and textons for example) in the classification of two benchmark data sets (UIUC and Outex). The proposed descriptors outperformed all the other approaches in terms of rate of images correctly classified. The interesting results suggest the potential of these descriptors in this type of task, with a wide range of possible applications to real-world problems.

  3. New descriptor for skeletons of planar shapes: the calypter

    NASA Astrophysics Data System (ADS)

    Pirard, Eric; Nivart, Jean-Francois

    1994-05-01

    The mathematical definition of the skeleton as the locus of centers of maximal inscribed discs is a nondigitizable one. The idea presented in this paper is to incorporate the skeleton information and the chain-code of the contour into a single descriptor by associating to each point of a contour the center and radius of the maximal inscribed disc tangent at that point. This new descriptor is called the calypter. The encoding of a calypter is a three-stage algorithm: (1) chain coding of the contour; (2) Euclidean distance transformation; (3) climbing on the distance relief from each point of the contour towards the corresponding maximal inscribed disc center. Here we introduce an integer Euclidean distance transform called the holodisc distance transform. The major interest of the holodisc transform is to confer 8-connexity on the isolevels of the generated distance relief, thereby allowing a climbing algorithm to proceed step by step towards the centers of the maximal inscribed discs. The calypter has a cyclic structure delivering high-speed access to the skeleton data. Its potential uses are in high-speed Euclidean mathematical morphology, shape processing, and analysis.
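
    The distance-relief stage can be approximated with a standard exact Euclidean distance transform, as in the Python sketch below; SciPy's transform stands in for the paper's integer holodisc transform, and the shape is a toy example.

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      shape = np.zeros((64, 64), dtype=bool)
      shape[16:48, 16:48] = True          # toy planar shape: a filled square

      # Euclidean distance to the background at every interior pixel;
      # local maxima of this relief sit at maximal-inscribed-disc centers.
      relief = distance_transform_edt(shape)
      print(relief.max())                 # radius of the largest inscribed disc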

  4. Translational Symmetry-Breaking for Spiral Waves

    NASA Astrophysics Data System (ADS)

    LeBlanc, V. G.; Wulff, C.

    2000-10-01

    Spiral waves are observed in numerous physical situations, ranging from Belousov-Zhabotinsky (BZ) chemical reactions, to cardiac tissue, to slime-mold aggregates. Mathematical models with Euclidean symmetry have recently been developed to describe the dynamic behavior (for example, meandering) of spiral waves in excitable media. However, no physical experiment is ever infinite in spatial extent, so the Euclidean symmetry is only approximate. Experiments on spiral waves show that inhomogeneities can anchor spirals and that boundary effects (for example, boundary drifting) become very important when the size of the spiral core is comparable to the size of the reacting medium. Spiral anchoring and boundary drifting cannot be explained by the Euclidean model alone. In this paper, we investigate the effects on spiral wave dynamics of breaking the translation symmetry while keeping the rotation symmetry. This is accomplished by introducing a small perturbation in the five-dimensional center bundle equations (describing Hopf bifurcation from one-armed spiral waves) which is SO(2)-equivariant but not equivariant under translations. We then study the effects of this perturbation on rigid spiral rotation, on quasi-periodic meandering and on drifting.

  5. 36 CFR 28.12 - Development standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... significant harm to the natural resources of the Seashore. (c) Minimum lot size is 4,000 square feet. A.../FEMA shown on Flood Insurance Rate Maps for Fire Island communities. (g) A swimming pool is an... hazards and/or detract from the natural or cultural scene. (j) A zoning authority shall have in place...

  6. 40 CFR 91.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...

  7. 40 CFR 91.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...

  8. 40 CFR 91.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...

  9. 40 CFR 91.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...

  10. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    PubMed

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to justify statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.

  11. Digital model of the Arikaree Aquifer near Wheatland, southeastern Wyoming

    USGS Publications Warehouse

    Hoxie, Dwight T.

    1977-01-01

    A digital model that mathematically simulates the flow of ground water, approximating the flow system as two-dimensional, has been applied to predict the long-term effects of irrigation and proposed industrial pumping from the unconfined Arikaree aquifer in a 400 square-mile area in southeastern Wyoming. Three cases that represent projected maximum, mean, and minimum combined irrigation and industrial ground-water withdrawals at annual rates of 16,176, 11,168, and 6,749 acre-feet, respectively, were considered. Water-level declines of more than 5 feet over areas of 124, 120, and 98 square miles and depletions in streamflow of 14.4, 8.9, and 7.2 cfs from the Laramie and North Laramie Rivers were predicted to occur at the end of a 40-year simulation period for these maximum, mean, and minimum withdrawal rates, respectively. A tenfold increase in the vertical hydraulic conductivity that was assumed for the streambeds results in smaller predicted drawdowns near the Laramie and North Laramie Rivers and a 36 percent increase in the predicted depletion in streamflow for the North Laramie River. (Woodard-USGS)

  12. ERTS evaluation for land use inventory

    NASA Technical Reports Server (NTRS)

    Hardy, E. E. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. The feasibility of accomplishing a general inventory of any given region based on spectral categories from satellite data has been demonstrated in a pilot study for an area of 6300 square kilometers in central New York State. This was accomplished by developing special processing techniques to improve and balance contrast and density for each spectral band of an image scene to compare with a standard range of density and contrast found to be acceptable for interpretation of the scene. Diazo film transparencies were made from enlarged black and white transparencies of each spectral band. Color composites were constructed from these diazo films in combinations of hue and spectral bands to enhance different spectral features in the scene. Interpretation and data takeoff were accomplished manually by translating interpreted areas onto an overlay to construct a spectral map. The minimum area interpreted was 25 hectares. The minimum area geographically referenced was one square kilometer. The interpretation and referencing of data from ERTS-1 was found to be about 88% accurate for eight primary spectral categories.

  13. Minimizing the area required for time constants in integrated circuits

    NASA Technical Reports Server (NTRS)

    Lyons, J. C.

    1972-01-01

    When a medium- or large-scale integrated circuit is designed, efforts are usually made to avoid the use of resistor-capacitor time constant generators. The capacitor needed for this circuit usually takes up more surface area on the chip than several resistors and transistors. When the use of this network is unavoidable, the designer usually makes an effort to see that the choice of resistor and capacitor combinations is such that a minimum amount of surface area is consumed. The optimum ratio of resistance to capacitance that will result in this minimum area is equal to the ratio of resistance to capacitance which may be obtained from a unit of surface area for the particular process being used. The minimum area required is a function of the square root of the reciprocal of the product of the resistance and capacitance per unit area. This minimum occurs when the area required by the resistor is equal to the area required by the capacitor.
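
    The optimization sketched above amounts to minimizing A_R + A_C subject to (R_u·A_R)(C_u·A_C) = τ, where R_u and C_u denote the resistance and capacitance obtainable per unit area; by the AM-GM inequality the minimum lies at A_R = A_C, giving A_min = 2·sqrt(τ/(R_u·C_u)). A numerical check with hypothetical process values:

```python
import numpy as np

R_u = 50.0    # ohms of resistance obtainable per unit area (assumed)
C_u = 2e-10   # farads of capacitance obtainable per unit area (assumed)
tau = 1e-6    # target time constant R*C in seconds (assumed)

# Closed form: A_R = A_C at the optimum; A_min scales as sqrt(1/(R_u*C_u)).
A_each = np.sqrt(tau / (R_u * C_u))
print(f"A_R = A_C = {A_each:.3e}, minimum total area = {2 * A_each:.3e}")
print(f"optimum R/C ratio = R_u/C_u = {R_u / C_u:.3e}")

# Brute-force check: sweep how tau is split between resistor and capacitor.
A_R = np.logspace(-1, 3, 2000)
total = A_R + tau / (R_u * C_u * A_R)
print(f"numeric minimum = {total.min():.3e} at A_R = {A_R[total.argmin()]:.3e}")
```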

  14. Gravitational instantons, self-duality, and geometric flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bourliot, F.; Estes, J.; Petropoulos, P. M.

    2010-05-15

    We discuss four-dimensional 'spatially homogeneous' gravitational instantons. These are self-dual solutions of Euclidean vacuum Einstein equations. They are endowed with a product structure R×M3 leading to a foliation into three-dimensional subspaces evolving in Euclidean time. For a large class of homogeneous subspaces, the dynamics coincides with a geometric flow on the three-dimensional slice, driven by the Ricci tensor plus an so(3) gauge connection. The flowing metric is related to the vielbein of the subspace, while the gauge field is inherited from the anti-self-dual component of the four-dimensional Levi-Civita connection.

  15. The Formalism of Quantum Mechanics Specified by Covariance Properties

    NASA Astrophysics Data System (ADS)

    Nisticò, G.

    2009-03-01

    The known methods, due for instance to G.W. Mackey and T.F. Jordan, which exploit the transformation properties with respect to the Euclidean and Galileian group to determine the formalism of the Quantum Theory of a localizable particle, fail in the case that the considered transformations are not symmetries of the physical system. In the present work we show that the formalism of standard Quantum Mechanics for a particle without spin can be completely recovered by exploiting the covariance properties with respect to the group of Euclidean transformations, without requiring that these transformations are symmetries of the physical system.

  16. Constant curvature black holes in Einstein AdS gravity: Euclidean action and thermodynamics

    NASA Astrophysics Data System (ADS)

    Guilleminot, Pablo; Olea, Rodrigo; Petrov, Alexander N.

    2018-03-01

    We compute the Euclidean action for constant curvature black holes (CCBHs), as an attempt to associate thermodynamic quantities to these solutions of Einstein anti-de Sitter (AdS) gravity. CCBHs are gravitational configurations obtained by identifications along isometries of a D-dimensional globally AdS space, such that the Riemann tensor remains constant. Here, these solutions are interpreted as extended objects, which contain a (D−2)-dimensional de Sitter brane as a subspace. Nevertheless, the computation of the free energy for these solutions shows that they do not obey standard thermodynamic relations.

  17. Combating speckle in SAR images - Vector filtering and sequential classification based on a multiplicative noise model

    NASA Technical Reports Server (NTRS)

    Lin, Qian; Allebach, Jan P.

    1990-01-01

    An adaptive vector linear minimum mean-squared error (LMMSE) filter for multichannel images with multiplicative noise is presented. It is shown theoretically that the mean-squared error in the filter output is reduced by making use of the correlation between image bands. The vector and conventional scalar LMMSE filters are applied to a three-band SIR-B SAR image, and their performance is compared. Based on a multiplicative noise model, the per-pel maximum likelihood classifier was derived. The authors extend this to the design of sequential and robust classifiers. These classifiers are also applied to the three-band SIR-B SAR image.

  18. Optimal focal-plane restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1989-01-01

    Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.

  19. Channel estimation based on quantized MMP for FDD massive MIMO downlink

    NASA Astrophysics Data System (ADS)

    Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie

    2016-10-01

    In this paper, we consider channel estimation for Massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the Massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods, including the LS, LMMSE, CoSaMP and conventional MMP estimators.

  20. An empirical Bayes approach for the Poisson life distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1973-01-01

    A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
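
    The paper's smooth empirical Bayes estimator is not reproduced in this record, but the Monte Carlo comparison it describes can be illustrated with a standard parametric empirical Bayes sketch: a gamma prior on the Poisson intensity, with hyperparameters estimated by the method of moments.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, s_true = 3.0, 2.0   # gamma prior: shape a, scale s (assumed values)
n, reps = 50, 2000

mse_mle = mse_eb = 0.0
for _ in range(reps):
    lam = rng.gamma(a_true, s_true, size=n)  # true intensities (hazard rates)
    x = rng.poisson(lam)                     # observed counts; MLE of lam is x
    m, v = x.mean(), x.var()
    s_hat = max(v - m, 1e-9) / m             # from Var(x) = m + (a*s)*s = m + m*s
    # Gamma-Poisson posterior mean: E[lam|x] = (x + a)*s/(1+s) = (x*s + m)/(1+s)
    eb = (x * s_hat + m) / (1.0 + s_hat)
    mse_mle += np.mean((x - lam) ** 2)
    mse_eb += np.mean((eb - lam) ** 2)

print(f"mean-squared error, MLE: {mse_mle / reps:.3f}")
print(f"mean-squared error,  EB: {mse_eb / reps:.3f}")
```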

  1. A robotic reflective Schmidt telescope for Dome C

    NASA Astrophysics Data System (ADS)

    Strassmeier, K. G.; Andersen, M. I.; Steinbach, M.

    2004-10-01

    This paper lays out a wide-field robotic Schmidt telescope (RST) for the Antarctic site Dome C. The telescope is based on 80/120cm reflective Schmidt optics, built originally for a space project, and a mosaic of four 7.5k×7.5k 8-μm thinned CCDs from the PEPSI/LBT wafer run. The telescope's total field of view (FOV) would be 5° circular (minimum 3°×3° square) with a plate scale of 0.7 arcsec per pixel. Limiting magnitude is expected to be V=21.5 mag in 60 sec for a field of 9 square degrees.

  2. Effect of conventional and square stores on the longitudinal aerodynamic characteristics of a fighter aircraft model at supersonic speeds. [in the langley unitary plan wind tunnel

    NASA Technical Reports Server (NTRS)

    Monta, W. J.

    1980-01-01

    The effects of conventional and square stores on the longitudinal aerodynamic characteristics of a fighter aircraft configuration at Mach numbers of 1.6, 1.8, and 2.0 were investigated. Five conventional store configurations and six arrangements of a square store configuration were studied. All configurations of the stores produced small, positive increments in the pitching moment throughout the angle-of-attack range, but the configuration with area ruled wing tanks also had a slight decrease in stability at the higher angles of attack. There were some small changes in lift coefficient because of the addition of the stores, causing the drag increment to vary with the lift coefficient. As a result, there were corresponding changes in the increments of the maximum lift drag ratios. The store drag coefficient based on the cross sectional area of the stores ranged from a maximum of 1.1 for the configuration with three Maverick missiles to a minimum of about 0.040 for the two MK-84 bombs and the arrangements with four square stores touching or two square stores in tandem. Square stores located side by side yielded about 0.50 in the aft position compared to 0.74 in the forward position.

  3. Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty

    PubMed Central

    Lu, Yang; Loizou, Philipos C.

    2011-01-01

    Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it was binary and assumed the value of 1 if the local SNR exceeded 0 dB, and the value of 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods incorporating SNR uncertainty were derived. In particular, the soft masking method that weighted the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than the conventional MMSE spectral power estimators, in terms of yielding lower residual noise and lower speech distortion. PMID:21886543

  4. Thermodynamic stability of nanosized multicomponent bubbles/droplets: the square gradient theory and the capillary approach.

    PubMed

    Wilhelmsen, Øivind; Bedeaux, Dick; Kjelstrup, Signe; Reguera, David

    2014-01-14

    Formation of nanosized droplets/bubbles from a metastable bulk phase is connected to many unresolved scientific questions. We analyze the properties and stability of multicomponent droplets and bubbles in the canonical ensemble, and compare with single-component systems. The bubbles/droplets are described on the mesoscopic level by square gradient theory. Furthermore, we compare the results to a capillary model which gives a macroscopic description. Remarkably, the solutions of the square gradient model, representing bubbles and droplets, are accurately reproduced by the capillary model except in the vicinity of the spinodals. The solutions of the square gradient model form closed loops, which shows the inherent symmetry and connected nature of bubbles and droplets. A thermodynamic stability analysis is carried out, where the second variation of the square gradient description is compared to the eigenvalues of the Hessian matrix in the capillary description. The analysis shows that it is impossible to stabilize arbitrarily small bubbles or droplets in closed systems and gives insight into metastable regions close to the minimum bubble/droplet radii. Despite the large difference in complexity, the square gradient and the capillary model predict the same finite threshold sizes and very similar stability limits for bubbles and droplets, both for single-component and two-component systems.

  5. Thermodynamic stability of nanosized multicomponent bubbles/droplets: The square gradient theory and the capillary approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilhelmsen, Øivind, E-mail: oivind.wilhelmsen@ntnu.no; Bedeaux, Dick; Kjelstrup, Signe

    Formation of nanosized droplets/bubbles from a metastable bulk phase is connected to many unresolved scientific questions. We analyze the properties and stability of multicomponent droplets and bubbles in the canonical ensemble, and compare with single-component systems. The bubbles/droplets are described on the mesoscopic level by square gradient theory. Furthermore, we compare the results to a capillary model which gives a macroscopic description. Remarkably, the solutions of the square gradient model, representing bubbles and droplets, are accurately reproduced by the capillary model except in the vicinity of the spinodals. The solutions of the square gradient model form closed loops, which shows the inherent symmetry and connected nature of bubbles and droplets. A thermodynamic stability analysis is carried out, where the second variation of the square gradient description is compared to the eigenvalues of the Hessian matrix in the capillary description. The analysis shows that it is impossible to stabilize arbitrarily small bubbles or droplets in closed systems and gives insight into metastable regions close to the minimum bubble/droplet radii. Despite the large difference in complexity, the square gradient and the capillary model predict the same finite threshold sizes and very similar stability limits for bubbles and droplets, both for single-component and two-component systems.

  6. An Adaptive Pheromone Updation of the Ant-System using LMS Technique

    NASA Astrophysics Data System (ADS)

    Paul, Abhishek; Mukhopadhyay, Sumitra

    2010-10-01

    We propose a modified model of pheromone updating for the Ant System, entitled the Adaptive Ant System (AAS), using the properties of basic adaptive filters. Here, we exploit the properties of the Least Mean Square (LMS) algorithm for the pheromone update in order to find the minimum tour for the Travelling Salesman Problem (TSP). The TSP library was used for the selection of benchmark problems, and the proposed AAS determines the minimum tour length for problems containing a large number of cities. Our algorithm shows effective results and gives the least tour length in most cases compared to other existing approaches.

  7. Statistical summaries of water-quality data for two coal areas of Jackson County, Colorado

    USGS Publications Warehouse

    Kuhn, Gerhard

    1982-01-01

    Statistical summaries of water-quality data are compiled for eight streams in two separate coal areas of Jackson County, Colo. The quality-of-water data were collected from October 1976 to September 1980. For inorganic constituents, the maximum, minimum, and mean concentrations, as well as other statistics are presented; for minor elements, only the maximum, minimum, and mean values are included. Least-squares equations (regressions) are also given relating specific conductance of the streams to the concentration of the major ions. The observed range of specific conductance was 85 to 1,150 micromhos per centimeter for the eight sites. (USGS)

  8. Space-Time Joint Interference Cancellation Using Fuzzy-Inference-Based Adaptive Filtering Techniques in Frequency-Selective Multipath Channels

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng

    2006-12-01

    An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and the squared-error variation into a forgetting factor. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size. This receiver is capable of providing both fast convergence/tracking capability and small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB in bit-error rate (BER) for multipath fading channels.
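
    The fuzzy inference rules themselves are not given in this record. As a rough structural stand-in, the sketch below uses an error-driven variable step size in the spirit of Kwong and Johnston's VSS-LMS, where the step size grows with the squared error and shrinks as the filter converges; all parameter values are hypothetical.

```python
import numpy as np

def vss_lms(x, d, n_taps=8, mu_min=1e-4, mu_max=0.05, alpha=0.97, gamma=0.01):
    """LMS with a variable step size driven by the squared error."""
    w = np.zeros(n_taps)
    mu = mu_max
    y, e = np.zeros(len(d)), np.zeros(len(d))
    for n in range(n_taps, len(d)):
        u = x[n - n_taps:n][::-1]                 # tap-input vector
        y[n] = w @ u
        e[n] = d[n] - y[n]
        mu = float(np.clip(alpha * mu + gamma * e[n] ** 2, mu_min, mu_max))
        w += mu * e[n] * u                        # standard LMS weight update
    return w, e

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
d = np.convolve(x, [0.6, -0.3, 0.1])[:len(x)]     # unknown channel to identify
w, e = vss_lms(x, d)
print(f"final mean-squared error: {np.mean(e[-500:] ** 2):.2e}")
```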

  9. 46 CFR 64.63 - Minimum emergency venting capacity.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... tank may have a reduction if— (1) It is shown to the Coast Guard that the insulation reduces the heat... in square feet. L=Latent heat of the product being vaporized at relieving conditions in Btu per pound... based on relation of specific heats, in accordance with appendix J of division 1 of section VIII of the...

  10. 46 CFR 64.63 - Minimum emergency venting capacity.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... tank may have a reduction if— (1) It is shown to the Coast Guard that the insulation reduces the heat... in square feet. L=Latent heat of the product being vaporized at relieving conditions in Btu per pound... based on relation of specific heats, in accordance with Appendix J of Division 1 of Section VIII of the...

  11. 46 CFR 64.63 - Minimum emergency venting capacity.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... tank may have a reduction if— (1) It is shown to the Coast Guard that the insulation reduces the heat... in square feet. L=Latent heat of the product being vaporized at relieving conditions in Btu per pound... based on relation of specific heats, in accordance with Appendix J of Division 1 of Section VIII of the...

  12. 46 CFR 64.63 - Minimum emergency venting capacity.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... tank may have a reduction if— (1) It is shown to the Coast Guard that the insulation reduces the heat... in square feet. L=Latent heat of the product being vaporized at relieving conditions in Btu per pound... based on relation of specific heats, in accordance with Appendix J of Division 1 of Section VIII of the...

  13. 46 CFR 64.63 - Minimum emergency venting capacity.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... tank may have a reduction if— (1) It is shown to the Coast Guard that the insulation reduces the heat... in square feet. L=Latent heat of the product being vaporized at relieving conditions in Btu per pound... based on relation of specific heats, in accordance with appendix J of division 1 of section VIII of the...

  14. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Louis A; Mason, John J.

    We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution, and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) and proposed approaches is given in tables.

  15. Investigation of adaptive filtering and MDL mitigation based on space-time block-coding for spatial division multiplexed coherent receivers

    NASA Astrophysics Data System (ADS)

    Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi

    2017-07-01

    In this paper, we explored the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, where the weight matrices of frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm can achieve a 43.6% enhancement in convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely a 16.2% increase in hardware complexity. The overall optical signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM), and 64-QAM at their respective bit-error rates (BER) with minimum-mean-square-error (MMSE) equalization.

  16. RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) scheme based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least squares (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
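
    A one-tap RLS recursion of the kind named above is standard. The sketch below estimates a scalar channel gain with a fixed forgetting factor lam; the paper's contribution, adapting lam with an LMS recursion, is omitted here, and the delta and lam values are hypothetical.

```python
import numpy as np

def one_tap_rls(pilot, received, lam=0.95, delta=1e-2):
    """One-tap RLS estimate of h for the scalar model r = h*p + noise."""
    h = 0.0 + 0.0j
    P = 1.0 / delta
    for p, r in zip(pilot, received):
        e = r - h * p                                  # a priori error
        k = P * np.conj(p) / (lam + P * abs(p) ** 2)   # RLS gain
        h = h + k * e
        P = (P - k * p * P) / lam
    return h

rng = np.random.default_rng(2)
pilot = np.exp(1j * np.pi / 2 * rng.integers(0, 4, 200))  # QPSK-like chips
h_true = 0.8 - 0.3j
received = h_true * pilot + 0.05 * rng.standard_normal(200)
print(f"estimated channel: {one_tap_rls(pilot, received):.3f}")
```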

  17. From EUCLID to Ptolemy in English Crop Circles

    NASA Astrophysics Data System (ADS)

    Hawkins, G. S.

    1997-12-01

    The late Lord Soli Zuckerman, science advisor to several British governments, encouraged the author, an astronomer, to test the theory that all crop circles were made by hoaxers. Within the hundreds of formations in Southern England he saw a thread of surprising historical content at the intellectual level of College Dons. One diagram in celestial mechanics involved triple conjunctions of Mercury, Venus and Mars every 67 2/3 years. Ptolemy's fourth musical scale, tense diatonic, occurred in the circles during the period 1978-88. Starting on E, Ptolemaic ratios make our perfect diatonic scale of white notes on the keyboard of the piano or church organ. For separated circles the ratio was given by diameters, and for concentric circles it was diameters squared. A series of rotationally symmetric figures began in 1988 which combined Ptolemy's ratios with Euclid's theorems. In his last plane theorem, Euclid (Elements 13,12) proved that the square on the side of an equilateral triangle is 3 times the square on the circum-circle radius -- diatonic note G(2). From the 1988 figure one can prove the square on the side is 16/3 times the square on the semi-altitude, giving note F(3). Later rotational figures over the next 5 years led to diatonic ratios for the hexagon, square and triangle. They gave with the exactness of Euclidean theorems the notes F, C(2) and E(2), and they are the only regular polygons to do so. Although these 4 crop theorems derive from Euclid, they were previously unknown as a set in the literature, nor had the Ptolemaic connection been published. Professional magazines asked the readers to provide a fifth theorem that would generate the above 4 theorems, but none was forthcoming. Ultimately the circle makers showed knowledge of this generating theorem using a 200-ft design at Litchfield, Hampshire. After 1993, rotationally symmetric geometries continued to appear, but with much more complicated patterns. One design showed 6 crescent moons in a hexagon with cusps set on 2 concentric circles defining the note A(2). Here the mathematical level required application of Ptolemy's famous theorem of chords to confirm the A(2) ratio of exactly 10/3. The chords were the side of a hexagon joined to the side of a pentagon. We confirm Zuckerman's suggestion that there is a strong thread of expertise in the phenomenon worthy of scientific interest, and it spans a 20-year period. He asks: Why do they use a wheat field, and "how do they maintain their hidden identities?" Their type of knowledge rests in the past, and is not frequently found in the contemporary educational system.

  18. Exact Boson-Fermion Duality on a 3D Euclidean Lattice

    DOE PAGES

    Chen, Jing-Yuan; Son, Jun Ho; Wang, Chao; ...

    2018-01-05

    The idea of statistical transmutation plays a crucial role in descriptions of the fractional quantum Hall effect. However, a recently conjectured duality between a critical boson and a massless two-component Dirac fermion extends this notion to gapless systems. This duality sheds light on highly nontrivial problems such as the half-filled Landau level, the superconductor-insulator transition, and surface states of strongly coupled topological insulators. Although this boson-fermion duality has undergone many consistency checks, it has remained unproven. Here, we describe the duality in a nonperturbative fashion using an exact UV mapping of partition functions on a 3D Euclidean lattice.

  19. Exact Boson-Fermion Duality on a 3D Euclidean Lattice.

    PubMed

    Chen, Jing-Yuan; Son, Jun Ho; Wang, Chao; Raghu, S

    2018-01-05

    The idea of statistical transmutation plays a crucial role in descriptions of the fractional quantum Hall effect. However, a recently conjectured duality between a critical boson and a massless two-component Dirac fermion extends this notion to gapless systems. This duality sheds light on highly nontrivial problems such as the half-filled Landau level, the superconductor-insulator transition, and surface states of strongly coupled topological insulators. Although this boson-fermion duality has undergone many consistency checks, it has remained unproven. We describe the duality in a nonperturbative fashion using an exact UV mapping of partition functions on a 3D Euclidean lattice.

  20. Supersymmetry and the rotation group

    NASA Astrophysics Data System (ADS)

    McKeon, D. G. C.

    2018-04-01

    A model invariant under a supersymmetric extension of the rotation group O(3) is mapped, using a stereographic projection, from the spherical surface S2 to two-dimensional Euclidean space. The resulting model is not translation invariant. This has the consequence that fields that are supersymmetric partners no longer have a degenerate mass. This degeneracy is restored once the radius of S2 goes to infinity, and the resulting supersymmetry transformation for the fields is now mass dependent. An analogous model on the surface S4 is introduced and its projection onto four-dimensional Euclidean space is examined. This model in turn suggests a supersymmetric model on (3 + 1)-dimensional Minkowski space.

  1. Multi-stability in folded shells: non-Euclidean origami

    NASA Astrophysics Data System (ADS)

    Evans, Arthur

    2015-03-01

    Both natural and man-made structures benefit from having multiple mechanically stable states, from the quick snapping motion of hummingbird beaks to micro-textured surfaces with tunable roughness. Rather than discuss special fabrication techniques for creating bi-stability through material anisotropy, in this talk I will present several examples of how folding a structure can modify the energy landscape and thus lead to multiple stable states. Using ideas from origami and differential geometry, I will discuss how deforming a non-Euclidean surface can be done either continuously or discontinuously, and explore the effects that global constraints have on the ultimate stability of the surface.

  2. Exact Boson-Fermion Duality on a 3D Euclidean Lattice

    NASA Astrophysics Data System (ADS)

    Chen, Jing-Yuan; Son, Jun Ho; Wang, Chao; Raghu, S.

    2018-01-01

    The idea of statistical transmutation plays a crucial role in descriptions of the fractional quantum Hall effect. However, a recently conjectured duality between a critical boson and a massless two-component Dirac fermion extends this notion to gapless systems. This duality sheds light on highly nontrivial problems such as the half-filled Landau level, the superconductor-insulator transition, and surface states of strongly coupled topological insulators. Although this boson-fermion duality has undergone many consistency checks, it has remained unproven. We describe the duality in a nonperturbative fashion using an exact UV mapping of partition functions on a 3D Euclidean lattice.

  3. Lagrangian Form of the Self-Dual Equations for SU(N) Gauge Fields on Four-Dimensional Euclidean Space

    NASA Astrophysics Data System (ADS)

    Hou, Boyu; Song, Xingchang

    1998-04-01

    By compactifying four-dimensional Euclidean space into the S2 × S2 manifold and introducing two topologically relevant Wess-Zumino terms to the Hn ≡ SL(n,C)/SU(n) nonlinear sigma model, we construct a Lagrangian form for the SU(n) self-dual Yang-Mills field, from which the self-dual equations follow as the Euler-Lagrange equations.

  4. Constructing financial network based on PMFG and threshold method

    NASA Astrophysics Data System (ADS)

    Nie, Chun-Xiao; Song, Fu-Tie

    2018-04-01

    Based on the planar maximally filtered graph (PMFG) and the threshold method, we introduced a correlation-based network named the PMFG-based threshold network (PTN). We studied the community structure of the PTN and applied the ISOMAP algorithm to represent the PTN in low-dimensional Euclidean space. The results show that the communities correspond well to the clusters in the Euclidean space. Further, we studied the dynamics of the community structure and constructed the normalized mutual information (NMI) matrix. Based on real market data, we found that the volatility of the market can lead to dramatic changes in the community structure, and that the structure is more stable during the financial crisis.

  5. Absence of even-integer ζ-function values in Euclidean physical quantities in QCD

    NASA Astrophysics Data System (ADS)

    Jamin, Matthias; Miravitllas, Ramon

    2018-04-01

    At order αs^4 in perturbative quantum chromodynamics, even-integer ζ-function values are present in Euclidean physical correlation functions like the scalar quark correlation function or the scalar gluonium correlator. We demonstrate that these contributions cancel when the perturbative expansion is expressed in terms of the so-called C-scheme coupling α̂s which has recently been introduced in Ref. [1]. It is furthermore conjectured that a ζ4 term should arise in the Adler function at order αs^5 in the MS-bar scheme, and that this term is expected to disappear in the C-scheme as well.

  6. A family of heavenly metrics

    NASA Astrophysics Data System (ADS)

    Nutku, Y.; Sheftel, M. B.

    2014-02-01

    This is a corrected and essentially extended version of the unpublished manuscript by Y Nutku and M Sheftel which contains new results. It is proposed to be published in honour of Y Nutku’s memory. All corrections and new results in sections 1, 2 and 4 are due to M Sheftel. We present new anti-self-dual exact solutions of the Einstein field equations with Euclidean and neutral (ultra-hyperbolic) signatures that admit only one rotational Killing vector. Such solutions of the Einstein field equations are determined by non-invariant solutions of the Boyer-Finley (BF) equation. For the case of Euclidean signature such a solution of the BF equation was first constructed by Calderbank and Tod. Two years later, Martina, Sheftel and Winternitz applied the method of group foliation to the BF equation and reproduced the Calderbank-Tod solution together with new solutions for the neutral signature. In the case of Euclidean signature we obtain new metrics which are asymptotically locally flat and have a non-removable singular point at the origin. In the case of ultra-hyperbolic signature there exist three inequivalent forms of metric. Only one of these can be obtained by analytic continuation from the Calderbank-Tod solution whereas the other two are new.

  7. Symmetric log-domain diffeomorphic Registration: a demons-based approach.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2008-01-01

    Modern morphometric studies use non-linear image registration to compare anatomies and perform group analysis. Recently, log-Euclidean approaches have contributed to promote the use of such computational anatomy tools by permitting simple computations of statistics on a rather large class of invertible spatial transformations. In this work, we propose a non-linear registration algorithm perfectly fit for log-Euclidean statistics on diffeomorphisms. Our algorithm works completely in the log-domain, i.e. it uses a stationary velocity field. This implies that we guarantee the invertibility of the deformation and have access to the true inverse transformation. This also means that our output can be directly used for log-Euclidean statistics without relying on the heavy computation of the log of the spatial transformation. As it is often desirable, our algorithm is symmetric with respect to the order of the input images. Furthermore, we use an alternate optimization approach related to Thirion's demons algorithm to provide a fast non-linear registration algorithm. First results show that our algorithm outperforms both the demons algorithm and the recently proposed diffeomorphic demons algorithm in terms of accuracy of the transformation while remaining computationally efficient.

  8. Silent initial conditions for cosmological perturbations with a change of spacetime signature

    NASA Astrophysics Data System (ADS)

    Mielczarek, Jakub; Linsefors, Linda; Barrau, Aurelien

    Recent calculations in loop quantum cosmology suggest that a transition from a Lorentzian to a Euclidean spacetime might take place in the very early universe. The transition point leads to a state of silence, characterized by a vanishing speed of light. This behavior can be interpreted as a decoupling of different space points, similar to the one characterizing the BKL phase. In this study, we address the issue of imposing initial conditions for the cosmological perturbations at the transition point between the Lorentzian and Euclidean phases. Motivated by the decoupling of space points, initial conditions characterized by a lack of correlations are investigated. We show that the “white noise” gains some support from analysis of the vacuum state in the deep Euclidean regime. Furthermore, the possibility of imposing the silent initial conditions at the trans-Planckian surface, characterized by a vanishing speed for the propagation of modes with wavelengths of the order of the Planck length, is studied. Such initial conditions might result from the loop deformations of the Poincaré algebra. The conversion of the silent initial power spectrum to a scale-invariant one is also examined.

  9. Study on the Rationality and Validity of Probit Models of Domino Effect to Chemical Process Equipment caused by Overpressure

    NASA Astrophysics Data System (ADS)

    Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong

    2013-04-01

    Overpressure is one important cause of the domino effect in accidents involving chemical process equipment. Models for the propagation probability and threshold values of the domino effect caused by overpressure have been proposed in previous studies. In order to test the rationality and validity of the models reported in the literature, the two boundary values separating the three reported damage degrees were treated as random variables in the interval [0, 100%]. Based on the overpressure data for damage to the equipment, the recorded damage states, and the calculation method reported in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values. This yielded a relationship between the mean square error and the two boundary values, from which the minimum mean square error was obtained; compared with the result of the present work, the mean square error decreases by about 3%. The error is therefore within the acceptable range for engineering applications, and the reported models can be considered reasonable and valid.

  10. The Lanchester square-law model extended to a (2,2) conflict

    NASA Astrophysics Data System (ADS)

    Colegrave, R. K.; Hyde, J. M.

    1993-01-01

    A natural extension of the Lanchester (1,1) square-law model is the (M,N) linear model in which M forces oppose N forces with constant attrition rates. The (2,2) model is treated from both direct and inverse viewpoints. The inverse problem means that the model is to be fitted to a minimum number of observed force levels, i.e. the attrition rates are to be found from the initial force levels together with the levels observed at two subsequent times. An approach based on Hamiltonian dynamics has enabled the authors to derive a procedure for solving the inverse problem, which is readily computerized. Conflicts in which participants unexpectedly rally or weaken must be excluded.

  11. Modeling Multiplicative Error Variance: An Example Predicting Tree Diameter from Stump Dimensions in Baldcypress

    Treesearch

    Bernard R. Parresol

    1993-01-01

    In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...

  12. Tabulated dose uniformity ratio and minimum dose data: rectangular 60Co source plaques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galanter, L.

    1971-01-01

    The data tabulated herein extend to rectangular cobalt-60 plaques the information presented for square plaques in BNL 50145 (Revised). The user is referred to BNL 50145 (Revised) and to the other reports listed for a complete discussion of the parameters involved in data generation and for instructions on the use of these data in gamma irradiator design.

  13. Non-isolated Resolving Sets of certain Graphs Cartesian Product with a Path

    NASA Astrophysics Data System (ADS)

    Hasibuan, I. M.; Salman, A. N. M.; Saputro, S. W.

    2018-04-01

    Let G be a connected, simple, and finite graph. For an ordered subset W = {w1, w2, ..., wk} of vertices in a graph G and a vertex v of G, the metric representation of v with respect to W is the k-vector r(v|W) = (d(v,w1), d(v,w2), ..., d(v,wk)). The set W is called a resolving set for G if every vertex of G has a distinct representation. The minimum cardinality of W is called the metric dimension of G, denoted by dim(G). If the induced subgraph ⟨W⟩ has no isolated vertices, then W is called a non-isolated resolving set. The minimum cardinality of a non-isolated resolving set of G is called the non-isolated resolving number of G, denoted by nr(G). In this paper, we consider H□Pn, the graph obtained from the Cartesian product of a connected graph H and a path Pn. We determine nr(H□Pn) for some classes of H, including cycles, complete graphs, complete bipartite graphs, and friendship graphs.
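
    The definitions above translate directly into a checker: compute r(v|W) from breadth-first-search distances and test distinctness, plus the non-isolation condition on the induced subgraph ⟨W⟩. A small sketch with a hypothetical example graph:

```python
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_resolving(adj, W):
    """True if every vertex of the graph has a distinct r(v|W)."""
    tables = [bfs_dist(adj, w) for w in W]
    reps = {tuple(t[v] for t in tables) for v in adj}
    return len(reps) == len(adj)

def is_non_isolated(adj, W):
    """True if the subgraph induced by W has no isolated vertices."""
    Wset = set(W)
    return all(any(v in Wset for v in adj[w]) for w in W)

# Example: the 4-cycle C4; {0, 1} is a non-isolated resolving set.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(is_resolving(C4, [0, 1]), is_non_isolated(C4, [0, 1]))  # True True
```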

  14. Nurse staffing levels and Medicaid reimbursement rates in nursing facilities.

    PubMed

    Harrington, Charlene; Swan, James H; Carrillo, Helen

    2007-06-01

    To examine the relationship between nurse staffing levels in U.S. nursing homes and state Medicaid reimbursement rates. Facility staffing, characteristics, and case-mix data were from the federal On-Line Survey Certification and Reporting (OSCAR) system and other data were from public sources. Ordinary least squares and two-stage least squares regression analyses were used to separately examine the relationship between registered nurse (RN) and total nursing hours in all U.S. nursing homes in 2002, with two endogenous variables: Medicaid reimbursement rates and resident case mix. RN hours and total nursing hours were endogenous with Medicaid reimbursement rates and resident case mix. As expected, Medicaid nursing home reimbursement rates were positively related to both RN and total nursing hours. Resident case mix was a positive predictor of RN hours and a negative predictor of total nursing hours. Higher state minimum RN staffing standards were a positive predictor of RN and total nursing hours, while for-profit facilities and the percent of Medicaid residents were negative predictors. To increase staffing levels, average Medicaid reimbursement rates would need to be increased substantially, while higher state minimum RN staffing standards remain a stronger positive predictor of RN and total nursing hours.

  15. Doppler Feature Based Classification of Wind Profiler Data

    NASA Astrophysics Data System (ADS)

    Sinha, Swati; Chandrasekhar Sarma, T. V.; Lourde. R, Mary

    2017-01-01

    Wind Profilers (WP) are coherent pulsed Doppler radars in the UHF and VHF bands. They are used for vertical profiling of wind velocity and direction. This information is very useful for weather modeling, the study of climatic patterns, and weather prediction. Observations at different heights and different wind velocities are possible by changing the operating parameters of the WP. A set of Doppler power spectra is the standard form of WP data. Wind velocity, direction and wind velocity turbulence at different heights can be derived from it. Modern wind profilers operate for long durations and generate approximately 4 megabytes of data per hour. The radar data stream contains Doppler power spectra from different radar configurations with echoes from different atmospheric targets. In order to facilitate systematic study, these data need to be segregated according to the type of target. A reliable automated target classification technique is required to do this job. Classical techniques of radar target identification use pattern matching and minimization of mean squared error, Euclidean distance, etc. These techniques are not effective for the classification of WP echoes, as these targets do not have well-defined signatures in the Doppler power spectra. This paper presents an effective target classification technique based on range-Doppler features.
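
    As a concrete instance of the classical techniques mentioned above (pattern matching under a Euclidean-distance criterion), a nearest-centroid classifier over Doppler power spectra might look like the sketch below; it is not the feature-based method this paper proposes.

```python
import numpy as np

def fit_centroids(spectra, labels):
    """Mean Doppler power spectrum for each target class."""
    return {c: np.mean([s for s, l in zip(spectra, labels) if l == c], axis=0)
            for c in sorted(set(labels))}

def classify(spectrum, centroids):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(spectrum - centroids[c]))
```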

  16. Determination of Spatially Resolved Tablet Density and Hardness Using Near-Infrared Chemical Imaging (NIR-CI).

    PubMed

    Talwar, Sameer; Roopwani, Rahul; Anderson, Carl A; Buckner, Ira S; Drennen, James K

    2017-08-01

    Near-infrared chemical imaging (NIR-CI) combines spectroscopy with digital imaging, enabling spatially resolved analysis and characterization of pharmaceutical samples. Hardness and relative density are critical quality attributes (CQA) that affect tablet performance. Intra-sample density or hardness variability can reveal deficiencies in formulation design or the tableting process. This study was designed to develop NIR-CI methods to predict spatially resolved tablet density and hardness. The method was implemented using a two-step procedure. First, NIR-CI was used to develop a relative density/solid fraction (SF) prediction method for pure microcrystalline cellulose (MCC) compacts only. A partial least squares (PLS) model for predicting SF was generated by regressing the spectra of certain representative pixels selected from each image against the compact SF. Pixel selection was accomplished with a threshold based on the Euclidean distance from the median tablet spectrum. Second, micro-indentation was performed on the calibration compacts to obtain hardness values. A univariate model was developed by relating the empirical hardness values to the NIR-CI predicted SF at the micro-indented pixel locations: this model generated spatially resolved hardness predictions for the entire tablet surface.
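
    A minimal sketch of the pixel-selection step described above: rank pixel spectra by Euclidean distance from the median tablet spectrum and keep the nearest ones. The cube layout and the keep_fraction parameter are assumptions, not the study's calibrated threshold.

```python
import numpy as np

def select_pixels(cube, keep_fraction=0.5):
    """Select representative pixel spectra from an NIR-CI image cube.

    cube has shape (rows, cols, wavelengths); pixels are ranked by
    Euclidean distance from the median spectrum and the nearest
    fraction is kept.
    """
    spectra = cube.reshape(-1, cube.shape[-1])
    dist = np.linalg.norm(spectra - np.median(spectra, axis=0), axis=1)
    return spectra[dist <= np.quantile(dist, keep_fraction)]
```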

  17. Learning Human Actions by Combining Global Dynamics and Local Appearance.

    PubMed

    Luo, Guan; Yang, Shuang; Tian, Guodong; Yuan, Chunfeng; Hu, Weiming; Maybank, Stephen J

    2014-12-01

    In this paper, we address the problem of human action recognition through combining global temporal dynamics and local visual spatio-temporal appearance features. For this purpose, in the global temporal dimension, we propose to model the motion dynamics with robust linear dynamical systems (LDSs) and use the model parameters as motion descriptors. Since LDSs live in a non-Euclidean space and the descriptors are in non-vector form, we propose a shift-invariant subspace-angles-based distance to measure the similarity between LDSs. In the local visual dimension, we construct curved spatio-temporal cuboids along the trajectories of densely sampled feature points and describe them using histograms of oriented gradients (HOG). The distance between motion sequences is computed with the Chi-Squared histogram distance in the bag-of-words framework. Finally, we perform classification using the maximum margin distance learning method by combining the global dynamic distances and the local visual distances. We evaluate our approach for action recognition on five short-clip data sets, namely Weizmann, KTH, UCF sports, Hollywood2 and UCF50, as well as three long continuous data sets, namely VIRAT, ADL and CRIM13. We show competitive results as compared with current state-of-the-art methods.

  18. Measurement of the PPN parameter γ by testing the geometry of near-Earth space

    NASA Astrophysics Data System (ADS)

    Luo, Jie; Tian, Yuan; Wang, Dian-Hong; Qin, Cheng-Gang; Shao, Cheng-Gang

    2016-06-01

    The Beyond Einstein Advanced Coherent Optical Network (BEACON) mission was designed to achieve an accuracy of 10^-9 in measuring the Eddington parameter γ, which is perhaps the most fundamental Parameterized Post-Newtonian parameter. However, this ideal accuracy was only estimated as a ratio of the measurement accuracy of the inter-spacecraft distances to the magnitude of the departure from Euclidean geometry. Based on the BEACON concept, we construct a measurement model to estimate the parameter γ with the least squares method. Influences of the measurement noise and the out-of-plane error on the estimation accuracy are evaluated based on the white noise model. Though the BEACON mission does not require expensive drag-free systems and avoids physical dynamical models of spacecraft, the relatively low accuracy of the initial inter-spacecraft distances poses a great challenge, reducing the estimation accuracy by about two orders of magnitude. Thus the noise requirements may need to be more stringent in the design in order to achieve the target accuracy, as demonstrated in this work. Accordingly, we give limits on the power spectral density of both noise sources for the accuracy of 10^-9.

  19. Multivariate approaches for stability control of the olive oil reference materials for sensory analysis - part I: framework and fundamentals.

    PubMed

    Valverde-Som, Lucia; Ruiz-Samblás, Cristina; Rodríguez-García, Francisco P; Cuadros-Rodríguez, Luis

    2018-02-09

    Virgin olive oil is the only food product for which sensory analysis is regulated in order to classify it into different quality categories. To harmonize the results of the sensorial method, the use of standards or reference materials is crucial. The stability of sensory reference materials is required to enable their suitable control, aiming to confirm that their specific target values are maintained on an ongoing basis. Currently, such stability is monitored by means of sensory analysis, and the sensory panels are in the paradoxical situation of controlling the standards that are devoted to controlling the panels. In the present study, several approaches based on similarity analysis are exploited. For each approach, the specific methodology to build a proper multivariate control chart to monitor the stability of the sensory properties is explained and discussed. The normalized Euclidean and Mahalanobis distances, the so-called nearness and hardiness indices respectively, have been defined as new similarity indices scaled to range from 0 to 1. Also, the squared mean from Hotelling's T²-statistic and the Q²-statistic has been proposed as another similarity index. © 2018 Society of Chemical Industry.

  20. A Simple Algorithm for the Metric Traveling Salesman Problem

    NASA Technical Reports Server (NTRS)

    Grimm, M. J.

    1984-01-01

    An algorithm was designed for a wire-list net-sort problem; a branch-and-bound algorithm for the metric traveling salesman problem is presented for this purpose. The algorithm is a best-bound-first recursive descent where the bound is based on the triangle inequality. The bounded subsets are defined by the relative order of the first K of the N cities (i.e., a K-city subtour). When K equals N, the bound is the length of the tour. The algorithm is implemented as a one-page subroutine written in the C programming language for the VAX 11/750. Average execution times for randomly selected planar points using the Euclidean metric are 0.01, 0.05, 0.42, and 3.13 seconds for ten, fifteen, twenty, and twenty-five cities, respectively. Maximum execution times for a hundred cases are less than eleven times the averages. The speed of the algorithm is due to an initial ordering algorithm that is an N-squared operation. The algorithm also solves the related problem where the tour does not return to the starting city and the starting and/or ending cities may be specified. It is possible to extend the algorithm to solve a nonsymmetric problem satisfying the triangle inequality.
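
    The bounding rule is easy to state in code: by the triangle inequality, the closed subtour through the first K cities is a lower bound on any full tour preserving their relative order. The sketch below is a depth-first rendering of that rule (the original is best-bound-first with an N-squared initial ordering pass), using hypothetical sample points.

```python
import math

def metric_tsp(pts):
    """Branch and bound for the metric TSP using the subtour bound."""
    n = len(pts)
    d = [[math.dist(p, q) for q in pts] for p in pts]

    def closed_len(order):
        return sum(d[order[i]][order[(i + 1) % len(order)]]
                   for i in range(len(order)))

    best = [math.inf, None]

    def branch(order, remaining):
        if closed_len(order) >= best[0]:   # bound: prune this relative order
            return
        if not remaining:
            best[:] = [closed_len(order), list(order)]
            return
        for c in sorted(remaining):
            order.append(c)
            branch(order, remaining - {c})
            order.pop()

    branch([0], frozenset(range(1, n)))
    return best

length, tour = metric_tsp([(0, 0), (3, 0), (3, 4), (0, 4), (1, 2)])
print(f"optimal tour {tour}, length {length:.3f}")
```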

  1. Accuracy of different sensors for the estimation of pollutant concentrations (total suspended solids, total and dissolved chemical oxygen demand) in wastewater and stormwater.

    PubMed

    Lepot, Mathieu; Aubin, Jean-Baptiste; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Many field investigations have used continuous sensors (turbidimeters and/or ultraviolet (UV)-visible spectrophotometers) to estimate pollutant concentrations in sewer systems with a short time step. Few, if any, publications compare the performance of various sensors for the same set of samples. Different surrogate sensors (turbidity sensors, a UV-visible spectrophotometer, a pH meter, a conductivity meter and a microwave sensor) were tested to link concentrations of total suspended solids (TSS) and total and dissolved chemical oxygen demand (COD) to the sensors' outputs. In the combined sewer at the inlet of a wastewater treatment plant, 94 samples were collected during dry weather, 44 samples were collected during wet weather, and 165 samples were collected under both dry and wet weather conditions. From these samples, triplicate standard laboratory analyses were performed and the corresponding sensor outputs were recorded. Two outlier detection methods were developed, based respectively on the Mahalanobis and Euclidean distances. Several hundred regression models were tested, and the best ones (according to the root mean square error criterion) are presented in order of decreasing performance. No sensor appears to be the best one for all three investigated pollutants.
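
    Of the two outlier-detection methods mentioned, the Mahalanobis variant is the more standard. One common convention (not necessarily the authors' exact cutoff) flags samples whose squared Mahalanobis distance from the sample mean exceeds a chi-square quantile:

```python
import numpy as np
from scipy import stats

def mahalanobis_outliers(X, alpha=0.025):
    """Return a boolean mask of outlying rows of the (n, p) data matrix X."""
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared distances
    return d2 > stats.chi2.ppf(1 - alpha, df=X.shape[1])
```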

  2. A Transmission Electron Microscope Study of Experimentally Shocked Pregraphitic Carbon

    NASA Technical Reports Server (NTRS)

    Rietmeijer, Frans J. M.

    1995-01-01

    A transmission electron microscope study of experimental shock metamorphism in natural pre-graphitic carbon simulates the response of the most common natural carbons to increased shock pressure. The d-spacings of this carbon are insensitive to the shock pressure and have no apparent diagnostic value, but progressive comminution occurs in response to increased shock pressure up to 59.6 GPa. The function P = 869.1 × (size_min)^(-0.83) describes the relationship between the minimum root-mean-square subgrain size (nm) and shock pressure (GPa). While a subgrain texture of natural pregraphitic carbons carries little information when pre-shock textures are unknown, this texture may go unnoticed as a shock metamorphic feature.
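
    The fitted relation is directly computable; a small helper (function names hypothetical) evaluates P = 869.1 × (size_min)^(-0.83) and its inverse:

```python
def shock_pressure_gpa(min_rms_subgrain_nm):
    """Shock pressure (GPa) from minimum RMS subgrain size (nm)."""
    return 869.1 * min_rms_subgrain_nm ** -0.83

def min_subgrain_nm(pressure_gpa):
    """Invert the fit to estimate subgrain size from shock pressure."""
    return (pressure_gpa / 869.1) ** (-1.0 / 0.83)

print(f"{shock_pressure_gpa(25.0):.1f} GPa")  # ~60 GPa for a 25 nm subgrain
```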

  3. An entropy method for induced drag minimization

    NASA Technical Reports Server (NTRS)

    Greene, George C.

    1989-01-01

    A fundamentally new approach to the aircraft minimum induced drag problem is presented. The method, a 'viscous lifting line', is based on the minimum entropy production principle and does not require the planar wake assumption. An approximate, closed form solution is obtained for several wing configurations including a comparison of wing extension, winglets, and in-plane wing sweep, with and without a constraint on wing-root bending moment. Like the classical lifting-line theory, this theory predicts that induced drag is proportional to the square of the lift coefficient and inversely proportional to the wing aspect ratio. Unlike the classical theory, it predicts that induced drag is Reynolds number dependent and that the optimum spanwise circulation distribution is non-elliptic.
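
    For reference, the classical lifting-line prediction echoed here, CD_i = CL²/(π·e·AR), can be evaluated directly; the Reynolds-number dependence and non-elliptic optimum of the entropy-based theory are not captured by this formula.

```python
import math

def induced_drag_coefficient(CL, aspect_ratio, e=1.0):
    """Classical lifting-line induced drag, CD_i = CL**2 / (pi * e * AR).

    e is the span-efficiency factor (e = 1 for the elliptic optimum).
    """
    return CL ** 2 / (math.pi * e * aspect_ratio)

print(f"CD_i = {induced_drag_coefficient(0.5, 8.0):.5f}")
```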

  4. Nucleon form factors in dispersively improved chiral effective field theory: Scalar form factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alarcon Soriano, Jose Manuel; Weiss, Christian

    We propose a method for calculating the nucleon form factors (FFs) of $G$-parity-even operators by combining Chiral Effective Field Theory ($\chi$EFT) and dispersion analysis. The FFs are expressed as dispersive integrals over the two-pion cut at $t > 4 M_\pi^2$. The spectral functions are obtained from the elastic unitarity condition and expressed as products of the complex $\pi\pi \rightarrow N\bar N$ partial-wave amplitudes and the timelike pion FF. $\chi$EFT is used to calculate the ratio of the partial-wave amplitudes and the pion FF, which is real and free of $\pi\pi$ rescattering in the $t$-channel ($N/D$ method). The rescattering effects are then incorporated by multiplying with the squared modulus of the empirical pion FF. The procedure results in a marked improvement compared to conventional $\chi$EFT calculations of the spectral functions. We apply the method to the nucleon scalar FF and compute the scalar spectral function, the scalar radius, the $t$-dependent FF, and the Cheng-Dashen discrepancy. Higher-order chiral corrections are estimated through the $\pi N$ low-energy constants. Results are in excellent agreement with dispersion-theoretical calculations. We elaborate several other interesting aspects of our method. The results show proper scaling behavior in the large-$N_c$ limit of QCD because the $\chi$EFT includes $N$ and $\Delta$ intermediate states. The squared modulus of the timelike pion FF required by our method can be extracted from Lattice QCD calculations of vacuum correlation functions of the operator at large Euclidean distances. Our method can be applied to the nucleon FFs of other operators of interest, such as the isovector-vector current, the energy-momentum tensor, and twist-2 QCD operators (moments of generalized parton distributions).

  5. Nucleon form factors in dispersively improved chiral effective field theory: Scalar form factor

    DOE PAGES

    Alarcon Soriano, Jose Manuel; Weiss, Christian

    2017-11-20

    We propose a method for calculating the nucleon form factors (FFs) of $G$-parity-even operators by combining Chiral Effective Field Theory ($\chi$EFT) and dispersion analysis. The FFs are expressed as dispersive integrals over the two-pion cut at $t > 4 M_\pi^2$. The spectral functions are obtained from the elastic unitarity condition and expressed as products of the complex $\pi\pi \rightarrow N\bar N$ partial-wave amplitudes and the timelike pion FF. $\chi$EFT is used to calculate the ratio of the partial-wave amplitudes and the pion FF, which is real and free of $\pi\pi$ rescattering in the $t$-channel ($N/D$ method). The rescattering effects are then incorporated by multiplying with the squared modulus of the empirical pion FF. The procedure results in a marked improvement compared to conventional $\chi$EFT calculations of the spectral functions. We apply the method to the nucleon scalar FF and compute the scalar spectral function, the scalar radius, the $t$-dependent FF, and the Cheng-Dashen discrepancy. Higher-order chiral corrections are estimated through the $\pi N$ low-energy constants. Results are in excellent agreement with dispersion-theoretical calculations. We elaborate several other interesting aspects of our method. The results show proper scaling behavior in the large-$N_c$ limit of QCD because the $\chi$EFT includes $N$ and $\Delta$ intermediate states. The squared modulus of the timelike pion FF required by our method can be extracted from Lattice QCD calculations of vacuum correlation functions of the operator at large Euclidean distances. Our method can be applied to the nucleon FFs of other operators of interest, such as the isovector-vector current, the energy-momentum tensor, and twist-2 QCD operators (moments of generalized parton distributions).

  6. Optimal steering for kinematic vehicles with applications to spatially distributed agents

    NASA Astrophysics Data System (ADS)

    Brown, Scott; Praeger, Cheryl E.; Giudici, Michael

    While there is no universal method to address control problems involving networks of autonomous vehicles, there exist a few promising schemes that apply to different specific classes of problems, which have attracted the attention of many researchers from different fields. In particular, one way to extend techniques that address problems involving a single autonomous vehicle to those involving teams of autonomous vehicles is to use the concept of the Voronoi diagram. The Voronoi diagram provides a spatial partition of the environment in which the team of vehicles operates, where each element of this partition is associated with a unique vehicle from the team. The partition induces a graph abstraction of the operating space that is in one-to-one correspondence with the network abstraction of the team of autonomous vehicles, a fact that can provide both conceptual and analytical advantages during mission planning and execution. In this dissertation, we propose the use of a new class of Voronoi-like partitioning schemes with respect to state-dependent proximity (pseudo-)metrics rather than the Euclidean distance or other generalized distance functions, which are typically used in the literature. An important nuance here is that, in contrast to the Euclidean distance, state-dependent metrics can succinctly capture system-theoretic features of each vehicle from the team (e.g., vehicle kinematics), as well as the environment-vehicle interactions, which are induced, for example, by local winds/currents. We subsequently illustrate how the proposed concept of state-dependent Voronoi-like partition can induce local control schemes for problems involving networks of spatially distributed autonomous vehicles by examining a sequential pursuit problem of a maneuvering target by a group of pursuers distributed in the plane. The construction of generalized Voronoi diagrams with respect to state-dependent metrics poses some significant challenges. First, the generalized distance metric may be a function of the direction of motion of the vehicle (an anisotropic pseudo-distance function) and/or may not be expressible in closed form. Second, such problems fall under the general class of partitioning problems for which the vehicles' dynamics must be taken into account. The topology of the vehicle's configuration space may be non-Euclidean; for example, it may be a manifold embedded in a Euclidean space. In other words, these problems may not be reducible to generalized Voronoi diagram problems for which efficient construction schemes, analytical and/or computational, exist in the literature. This research effort pursues three main objectives. First, we present the complete solution of different steering problems involving a single vehicle in the presence of motion constraints imposed by the maneuverability envelope of the vehicle and/or the presence of a drift field induced by winds/currents in its vicinity. The analysis of each steering problem involving a single vehicle provides us with a state-dependent generalized metric, such as the minimum time-to-go/come. We subsequently use these state-dependent generalized distance functions as the proximity metrics in the formulation of generalized Voronoi-like partitioning problems. The characterization of the solutions of these state-dependent Voronoi-like partitioning problems using either analytical or computational techniques constitutes the second main objective of this dissertation.
The third objective of this research effort is to illustrate the use of the proposed concept of state-dependent Voronoi-like partition as a means for passing from control techniques that apply to problems involving a single vehicle to problems involving networks of spatially distributed autonomous vehicles. To this aim, we formulate the problem of sequential/relay pursuit of a maneuvering target by a group of spatially distributed pursuers and subsequently propose a distributed group pursuit strategy that directly derives from the solution of a state-dependent Voronoi-like partitioning problem. (Abstract shortened by UMI.)

  7. Integrated optimization of unmanned aerial vehicle task allocation and path planning under steady wind.

    PubMed

    Luo, He; Liang, Zhengzheng; Zhu, Moning; Hu, Xiaoxuan; Wang, Guoqiang

    2018-01-01

    Wind has a significant effect on the control of fixed-wing unmanned aerial vehicles (UAVs), resulting in changes in their ground speed and direction, which has an important influence on the results of integrated optimization of UAV task allocation and path planning. The objective of this integrated optimization problem changes from minimizing flight distance to minimizing flight time. In this study, the Euclidean distance between any two targets is expanded to the Dubins path length, considering the minimum turning radius of fixed-wing UAVs. From the vector relationship between wind speed, UAV airspeed, and UAV ground speed, a method is proposed to calculate the flight time of a UAV between targets. On this basis, a variable-speed Dubins path vehicle routing problem (VS-DP-VRP) model is established with the purpose of minimizing the time required for UAVs to visit all the targets and return to the starting point. A genetic algorithm with purpose-built crossover and mutation operators is used to solve the model; the results show that it provides an effective UAV task allocation and path planning solution under steady wind.
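
    The speed-vector relation described above can be sketched as follows (illustrative Python, not the authors' code): the UAV crabs into the wind so that airspeed plus wind lies along the required track, which fixes the ground speed and hence the flight time of a path segment:

        import numpy as np

        def ground_speed(track_dir, wind, airspeed):
            # Wind component along the track plus the along-track airspeed that
            # remains after the crab angle cancels the crosswind.
            t = np.asarray(track_dir, float)
            t /= np.linalg.norm(t)
            w_along = np.dot(wind, t)
            w_cross = np.asarray(wind, float) - w_along * t
            cross2 = np.dot(w_cross, w_cross)
            if airspeed ** 2 <= cross2:
                raise ValueError("wind too strong to hold this track")
            return w_along + np.sqrt(airspeed ** 2 - cross2)

        def flight_time(p, q, wind, airspeed):
            p, q = np.asarray(p, float), np.asarray(q, float)
            return np.linalg.norm(q - p) / ground_speed(q - p, wind, airspeed)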

  8. Integrated optimization of unmanned aerial vehicle task allocation and path planning under steady wind

    PubMed Central

    Liang, Zhengzheng; Zhu, Moning; Hu, Xiaoxuan; Wang, Guoqiang

    2018-01-01

    Wind has a significant effect on the control of fixed-wing unmanned aerial vehicles (UAVs), resulting in changes in their ground speed and direction, which has an important influence on the results of integrated optimization of UAV task allocation and path planning. The objective of this integrated optimization problem changes from minimizing flight distance to minimizing flight time. In this study, the Euclidean distance between any two targets is expanded to the Dubins path length, considering the minimum turning radius of fixed-wing UAVs. From the vector relationship between wind speed, UAV airspeed, and UAV ground speed, a method is proposed to calculate the flight time of a UAV between targets. On this basis, a variable-speed Dubins path vehicle routing problem (VS-DP-VRP) model is established with the purpose of minimizing the time required for UAVs to visit all the targets and return to the starting point. A genetic algorithm with purpose-built crossover and mutation operators is used to solve the model; the results show that it provides an effective UAV task allocation and path planning solution under steady wind. PMID:29561888

  9. Some constructions of biharmonic maps and Chen’s conjecture on biharmonic hypersurfaces

    NASA Astrophysics Data System (ADS)

    Ou, Ye-Lin

    2012-04-01

    We give several construction methods and use them to produce many examples of proper biharmonic maps, including biharmonic tori of any dimension in Euclidean spheres (Theorem 2.2, Corollaries 2.3, 2.4 and 2.6) and biharmonic maps between spheres (Theorem 2.9) and into spheres (Theorem 2.10) via orthogonal multiplications and eigenmaps. We also study biharmonic graphs of maps, derive the equation for a function whose graph is a biharmonic hypersurface in a Euclidean space, and give an equivalent formulation of Chen's conjecture on biharmonic hypersurfaces by using the biharmonic graph equation (Theorem 4.1), which paves the way for the analytic study of the conjecture.

  10. Modified fuzzy c-means applied to a Bragg grating-based spectral imager for material clustering

    NASA Astrophysics Data System (ADS)

    Rodríguez, Aida; Nieves, Juan Luis; Valero, Eva; Garrote, Estíbaliz; Hernández-Andrés, Javier; Romero, Javier

    2012-01-01

    We have modified the fuzzy c-means algorithm for an application related to the segmentation of hyperspectral images. The classical fuzzy c-means algorithm uses the Euclidean distance to compute sample membership in each cluster. We have introduced a different distance metric, the Spectral Similarity Value (SSV), in order to have a more suitable similarity measure for reflectance information. The SSV metric considers both magnitude difference (through the Euclidean distance) and spectral shape (through the Pearson correlation). Experiments confirmed that the introduction of this metric improves the quality of hyperspectral image segmentation, creating spectrally denser clusters and increasing the number of correctly classified pixels.
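
    A sketch of an SSV-style metric (the combination below, SSV = sqrt(d_e^2 + (1 - r^2)^2) with a length-normalised Euclidean term, is one common formulation in the spectral-matching literature; the authors' exact normalisation and weighting may differ):

        import numpy as np

        def ssv(x, y):
            x, y = np.asarray(x, float), np.asarray(y, float)
            de = np.linalg.norm(x - y) / np.sqrt(x.size)  # magnitude difference
            r = np.corrcoef(x, y)[0, 1]                   # spectral shape agreement
            return np.sqrt(de ** 2 + (1.0 - r ** 2) ** 2)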

  11. Spontaneous PT-Symmetry Breaking for Systems of Noncommutative Euclidean Lie Algebraic Type

    NASA Astrophysics Data System (ADS)

    Dey, Sanjib; Fring, Andreas; Mathanaranjan, Thilagarajah

    2015-11-01

    We propose a noncommutative version of the Euclidean Lie algebra E2. Several types of non-Hermitian Hamiltonian systems expressed in terms of generic combinations of the generators of this algebra are investigated. Using the breakdown of the explicitly constructed Dyson maps as a criterion, we identify the domains in the parameter space in which the Hamiltonians have real energy spectra and determine the exceptional points signifying the crossover into the different types of spontaneously broken PT-symmetric regions with pairs of complex conjugate eigenvalues. We find exceptional points which remain invariant under the deformation, as well as exceptional points becoming dependent on the deformation parameter of the algebra.

  12. Hadronic vacuum polarization in QCD and its evaluation in Euclidean spacetime

    NASA Astrophysics Data System (ADS)

    de Rafael, Eduardo

    2017-07-01

    We discuss a new technique to evaluate integrals of QCD Green's functions in the Euclidean based on their Mellin-Barnes representation. We present as a first application the evaluation of the lowest order hadronic vacuum polarization (HVP) contribution to the anomalous magnetic moment of the muon, $\tfrac{1}{2}(g_\mu - 2)_{\rm HVP} \equiv a_\mu^{\rm HVP}$. It is shown that with a precise determination of the slope and curvature of the HVP function at the origin from lattice QCD (LQCD), one can already obtain a result for $a_\mu^{\rm HVP}$ which may serve as a test of the determinations based on experimental measurements of the e⁺e⁻ annihilation cross section into hadrons.

  13. Querying databases of trajectories of differential equations: Data structures for trajectories

    NASA Technical Reports Server (NTRS)

    Grossman, Robert

    1989-01-01

    One approach to qualitative reasoning about dynamical systems is to extract qualitative information by searching or making queries on databases containing very large numbers of trajectories. The efficiency of such queries depends crucially upon finding an appropriate data structure for trajectories of dynamical systems. Suppose that a large number of parameterized trajectories γ of a dynamical system evolving in R^N are stored in a database. Let η ⊂ R^N denote a parameterized path in Euclidean space, and let ‖·‖ denote a norm on the space of paths. A data structure is defined to represent trajectories of dynamical systems, and an algorithm is sketched which answers queries.
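
    As a toy version of such a query, trajectories can be stored as uniformly sampled arrays and compared under a discretised path norm (the paper's actual data structure is more refined; the names here are illustrative):

        import numpy as np

        def path_norm(gamma, eta):
            # gamma, eta: (T, N) arrays sampled at the same parameter values;
            # discrete L2 norm of their pointwise difference in R^N.
            return np.sqrt(np.mean(np.sum((gamma - eta) ** 2, axis=1)))

        def nearest_trajectory(database, eta):
            # Index and distance of the stored trajectory closest to eta.
            dists = [path_norm(g, eta) for g in database]
            i = int(np.argmin(dists))
            return i, dists[i]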

  14. Numerical analysis of interface debonding detection in bonded repair with Rayleigh waves

    NASA Astrophysics Data System (ADS)

    Xu, Ying; Li, BingCheng; Lu, Miaomiao

    2017-01-01

    This paper studied how to use the variation of the dispersion curves of Rayleigh wave group velocity to detect interfacial debonding damage between an FRP plate and a steel beam. Since an FRP-strengthened steel beam is a two-layer medium, Rayleigh wave velocity dispersion occurs. Interfacial debonding damage of the FRP-strengthened steel beam has an obvious effect on the Rayleigh wave velocity dispersion curve. The paper first puts forward the average Euclidean distance and the angle separation degree to describe the relationship between different dispersion curves. Numerical results indicate that there is an approximately linear mapping between the average Euclidean distance of the dispersion curves and the length of the interfacial debonding damage.

  15. New wideband radar target classification method based on neural learning and modified Euclidean metric

    NASA Astrophysics Data System (ADS)

    Jiang, Yicheng; Cheng, Ping; Ou, Yangkui

    2001-09-01

    A new method for target classification with high-range-resolution radar is proposed. It uses neural learning to obtain invariant subclass features of training range profiles. A modified Euclidean metric based on the Box-Cox transformation technique is investigated for improving nearest-neighbor target classification. Classification experiments using real radar data from three different aircraft demonstrate that the classification error can be reduced by 8% if the method proposed in this paper is chosen instead of the conventional method. The results show that by choosing an optimized metric, it is indeed possible to reduce the classification error without increasing the number of samples.

  16. Superintegrable three-body systems on the line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chanu, Claudia; Degiovanni, Luca; Rastelli, Giovanni

    2008-11-15

    We consider classical three-body interactions on a Euclidean line depending on the reciprocal distance of the particles and admitting four functionally independent quadratic in the momentum first integrals. These systems are multiseparable, superintegrable, and equivalent (up to rescalings) to a one-particle system in the three-dimensional Euclidean space. Common features of the dynamics are discussed. We show how to determine quantum symmetry operators associated with the first integrals considered here but do not analyze the corresponding quantum dynamics. The conformal multiseparability is discussed and examples of conformal first integrals are given. The systems considered here in generality include the Calogero, Wolfes, and other three-body interactions widely studied in mathematical physics.

  17. Global invariants of paths and curves for the group of all linear similarities in the two-dimensional Euclidean space

    NASA Astrophysics Data System (ADS)

    Khadjiev, Djavvat; Ören, İdris; Pekşen, Ömer

    Let E2 be the 2-dimensional Euclidean space, LSim(2) be the group of all linear similarities of E2 and LSim+(2) be the group of all orientation-preserving linear similarities of E2. The present paper is devoted to solutions of problems of global G-equivalence of paths and curves in E2 for the groups G = LSim(2),LSim+(2). Complete systems of global G-invariants of a path and a curve in E2 are obtained. Existence and uniqueness theorems are given. Evident forms of a path and a curve with the given global invariants are obtained.

  18. Comparison of five cluster validity indices performance in brain [18F]FET-PET image segmentation using k-means.

    PubMed

    Abualhaj, Bedor; Weng, Guoyang; Ong, Melissa; Attarwala, Ali Asgar; Molina, Flavia; Büsing, Karen; Glatting, Gerhard

    2017-01-01

    Dynamic [18F]fluoro-ethyl-L-tyrosine positron emission tomography ([18F]FET-PET) is used to identify tumor lesions for radiotherapy treatment planning, to differentiate glioma recurrence from radiation necrosis, and to classify glioma grading. To segment different regions in the brain, k-means cluster analysis can be used. The main disadvantage of k-means is that the number of clusters must be pre-defined. In this study, we therefore compared different cluster validity indices for automated and reproducible determination of the optimal number of clusters based on the dynamic PET data. The k-means algorithm was applied to dynamic [18F]FET-PET images of 8 patients. The Akaike information criterion (AIC), WB, I, modified Dunn's and Silhouette indices were compared on their ability to determine the optimal number of clusters, based on requirements for an adequate cluster validity index. To check the reproducibility of k-means, the coefficients of variation (CVs) of the objective function values (OFVs; the sum of squared Euclidean distances within each cluster) were calculated using 100 random centroid initialization replications (RCI₁₀₀) for 2 to 50 clusters. k-means was performed independently on three neighboring slices containing tumor for each patient to investigate the stability of the optimal number of clusters among them. To check that the validity indices are independent of the number of voxels, cluster analysis was applied after duplication of a slice selected from each patient. CVs of the index values were calculated at the optimal number of clusters using RCI₁₀₀ to investigate the reproducibility of the validity indices. To check whether the indices have a single extremum, visual inspection was performed on the replication with minimum OFV from RCI₁₀₀. The maximum CV of the OFVs over all patients was 2.7 × 10⁻². The optimal number of clusters given by the modified Dunn's and Silhouette indices was 2 or 3, leading to a very poor segmentation. The WB and I indices suggested in median 5 [range 4-6] and 4 [range 3-6] clusters, respectively. For the WB, I, modified Dunn's and Silhouette validity indices, the suggested optimal number of clusters was not affected by the number of voxels. The maximum coefficients of variation of the WB, I, modified Dunn's, and Silhouette validity indices were 3 × 10⁻², 1, 2 × 10⁻¹, and 3 × 10⁻³, respectively. The WB-index showed a single global minimum, whereas the other indices also showed local extrema. Of the investigated cluster validity indices, the WB-index is best suited for automated determination of the optimal number of clusters for [18F]FET-PET brain images for the investigated image reconstruction algorithm and the used scanner: it yields meaningful results allowing better differentiation of tissues with a higher number of clusters, it is simple and reproducible, and it has a unique global minimum. © 2016 American Association of Physicists in Medicine.
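
    A compact sketch of the WB-index selection rule favoured above, assuming the published form WB(k) = k * SSW/SSB (within- over between-cluster sums of squared Euclidean distances), minimised over k; the paper's PET preprocessing is not reproduced:

        import numpy as np
        from sklearn.cluster import KMeans

        def wb_index(X, labels, centers):
            grand = X.mean(axis=0)
            ssw = ((X - centers[labels]) ** 2).sum()      # within-cluster scatter
            ssb = sum((labels == j).sum() * np.sum((c - grand) ** 2)
                      for j, c in enumerate(centers))     # between-cluster scatter
            return len(centers) * ssw / ssb

        def best_k(X, k_range=range(2, 11), n_init=100):
            # n_init random centroid initializations per k, echoing the RCI idea
            scores = {}
            for k in k_range:
                km = KMeans(n_clusters=k, n_init=n_init).fit(X)
                scores[k] = wb_index(X, km.labels_, km.cluster_centers_)
            return min(scores, key=scores.get), scores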

  19. Statistical procedures for determination and verification of minimum reporting levels for drinking water methods.

    PubMed

    Winslow, Stephen D; Pepich, Barry V; Martin, John J; Hallberg, George R; Munch, David J; Frebis, Christopher P; Hedrick, Elizabeth J; Krop, Richard A

    2006-01-01

    The United States Environmental Protection Agency's Office of Ground Water and Drinking Water has developed a single-laboratory quantitation procedure: the lowest concentration minimum reporting level (LCMRL). The LCMRL is the lowest true concentration for which future recovery is predicted to fall, with high confidence (99%), between 50% and 150%. The procedure takes into account precision and accuracy. Multiple concentration replicates are processed through the entire analytical method and the data are plotted as measured sample concentration (y-axis) versus true concentration (x-axis). If the data support an assumption of constant variance over the concentration range, an ordinary least-squares regression line is drawn; otherwise, a variance-weighted least-squares regression is used. Prediction interval lines of 99% confidence are drawn about the regression. At the points where the prediction interval lines intersect with data quality objective lines of 50% and 150% recovery, lines are dropped to the x-axis. The higher of the two values is the LCMRL. The LCMRL procedure is flexible because the data quality objectives (50-150%) and the prediction interval confidence (99%) can be varied to suit program needs. The LCMRL determination is performed during method development only. A simpler procedure for verification of data quality objectives at a given minimum reporting level (MRL) is also presented. The verification procedure requires a single set of seven samples taken through the entire method procedure. If the calculated prediction interval is contained within data quality recovery limits (50-150%), the laboratory performance at the MRL is verified.
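
    The prediction-interval step can be sketched as follows, assuming ordinary least squares and constant variance (the procedure switches to variance-weighted regression otherwise); the LCMRL is then read off as the larger true concentration at which the 99% band crosses the 50% or 150% recovery line:

        import numpy as np
        from scipy import stats

        def prediction_band(x, y, x_new, conf=0.99):
            # 99% prediction interval for a new measurement at x_new, from the
            # OLS fit of measured (y) on true (x) concentration.
            n = len(x)
            b1, b0 = np.polyfit(x, y, 1)
            resid = y - (b0 + b1 * x)
            s = np.sqrt(resid @ resid / (n - 2))
            t = stats.t.ppf(1 - (1 - conf) / 2, n - 2)
            se = s * np.sqrt(1 + 1 / n +
                             (x_new - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum())
            yhat = b0 + b1 * x_new
            return yhat - t * se, yhat + t * se

    Scanning x_new over the spiked concentrations and testing lower >= 0.5 * x_new and upper <= 1.5 * x_new reproduces the recovery-wedge test described above.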

  20. Video-based face recognition via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied recently, while video-based face recognition still remains a challenging task because of the low quality and large intra-class variation of video-captured face images. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; and 2) Video-to-Still (V2S) face recognition, the converse of the S2V scenario. A novel method is proposed in this paper to transfer still and video face images to a Euclidean space by a carefully designed convolutional neural network; Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images grouped as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed due to the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.

  1. Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas

    NASA Astrophysics Data System (ADS)

    Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.

    2017-12-01

    Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of the SECoS can be built based on the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using the normalized Hamming distance, is also calculated using the normalized Manhattan distance and the normalized Euclidean distance, in order to compare the smallest error value and the best learning rate obtained. The accuracy of the measurements produced by the three distance formulas is assessed using the mean absolute percentage error. In the training phase, with parameters such as the sensitivity threshold, error threshold, first learning rate, and second learning rate, it was found that the normalized Euclidean distance is more accurate than both the normalized Hamming distance and the normalized Manhattan distance. In the case of beta-fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error is obtained with the normalized Manhattan distance, compared to the normalized Euclidean distance and the normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
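
    The three activation distances compared above, plus the error measure, can be sketched as follows (the normalisations shown are one plausible convention; the paper's exact formulas may differ):

        import numpy as np

        def norm_hamming(x, w):
            # Fraction of coordinates that differ.
            return np.mean(np.asarray(x) != np.asarray(w))

        def norm_manhattan(x, w):
            # Mean absolute difference; assumes inputs scaled to [0, 1].
            return np.mean(np.abs(np.asarray(x, float) - np.asarray(w, float)))

        def norm_euclidean(x, w):
            x, w = np.asarray(x, float), np.asarray(w, float)
            return np.linalg.norm(x - w) / np.sqrt(x.size)

        def mape(actual, predicted):
            # Mean absolute percentage error, used above to compare accuracy.
            a, p = np.asarray(actual, float), np.asarray(predicted, float)
            return 100.0 * np.mean(np.abs((a - p) / a))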

  2. A Simple but Powerful Heuristic Method for Accelerating k-Means Clustering of Large-Scale Data in Life Science.

    PubMed

    Ichikawa, Kazuki; Morishita, Shinichi

    2014-01-01

    K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
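
    The equivalence noted above is easy to verify numerically: once each vector is standardised (mean 0, variance 1), the squared Euclidean distance is a monotone function of the Pearson correlation, ||z(x) - z(y)||^2 = 2 d (1 - r) for vectors of dimension d, so both distances rank candidate centroids identically:

        import numpy as np

        rng = np.random.default_rng(0)
        x, y = rng.normal(size=100), rng.normal(size=100)
        z = lambda v: (v - v.mean()) / v.std()        # standardise a vector
        lhs = np.sum((z(x) - z(y)) ** 2)
        r = np.corrcoef(x, y)[0, 1]
        print(np.isclose(lhs, 2 * x.size * (1 - r)))  # True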

  3. Improved pedagogy for linear differential equations by reconsidering how we measure the size of solutions

    NASA Astrophysics Data System (ADS)

    Tisdell, Christopher C.

    2017-11-01

    For over 50 years, the learning and teaching of a priori bounds on solutions to linear differential equations has involved a Euclidean approach to measuring the size of a solution. While the Euclidean approach to a priori bounds on solutions is somewhat manageable in the learning and teaching of the proofs involving second-order linear problems with constant coefficients, we believe it is not pedagogically optimal. Moreover, the Euclidean method becomes pedagogically unwieldy in the proofs involving higher-order cases. The purpose of this work is to propose a simpler pedagogical approach to establishing a priori bounds on solutions by considering a different way of measuring the size of a solution to linear problems, which we refer to as the Uber size. The Uber form enables a simplification of the pedagogy from the literature, and the ideas are accessible to learners who have an understanding of the Fundamental Theorem of Calculus and the exponential function, both usually seen in a first course in calculus. We believe that this work will be of mathematical and pedagogical interest to those who are learning and teaching in the area of differential equations or in any of the numerous disciplines where linear differential equations are used.

  4. Bayesian Maximum Entropy space/time estimation of surface water chloride in Maryland using river distances.

    PubMed

    Jat, Prahlad; Serre, Marc L

    2016-12-01

    Widespread chloride contamination of surface water is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles. Copyright © 2016. Published by Elsevier Ltd.

  5. Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.

    PubMed

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-05-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on a Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly against the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and the graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
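
    The underlying geometry can be sketched briefly: covariance descriptors live on the manifold of symmetric positive-definite matrices, and the Log-Euclidean distance is the Frobenius distance between matrix logarithms, from which a Gaussian kernel follows (only the base metric is shown, not the paper's two-stage sparse coding; sigma is an illustrative parameter):

        import numpy as np
        from scipy.linalg import logm

        def log_euclidean_dist(A, B):
            # A, B: symmetric positive-definite covariance descriptors
            return np.linalg.norm(logm(A) - logm(B), 'fro')

        def log_euclidean_kernel(A, B, sigma=1.0):
            # Gaussian kernel induced by the Log-Euclidean distance
            d = log_euclidean_dist(A, B)
            return np.exp(-d ** 2 / (2 * sigma ** 2))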

  6. Multi-resolutional shape features via non-Euclidean wavelets: Applications to statistical analysis of cortical thickness

    PubMed Central

    Kim, Won Hwa; Singh, Vikas; Chung, Moo K.; Hinrichs, Chris; Pachauri, Deepti; Okonkwo, Ozioma C.; Johnson, Sterling C.

    2014-01-01

    Statistical analysis on arbitrary surface meshes such as the cortical surface is an important approach to understanding brain diseases such as Alzheimer’s disease (AD). Surface analysis may be able to identify specific cortical patterns that relate to certain disease characteristics or exhibit differences between groups. Our goal in this paper is to make group analysis of signals on surfaces more sensitive. To do this, we derive multi-scale shape descriptors that characterize the signal around each mesh vertex, i.e., its local context, at varying levels of resolution. In order to define such a shape descriptor, we make use of recent results from harmonic analysis that extend traditional continuous wavelet theory from the Euclidean to a non-Euclidean setting (i.e., a graph, mesh or network). Using this descriptor, we conduct experiments on two different datasets, the Alzheimer’s Disease NeuroImaging Initiative (ADNI) data and images acquired at the Wisconsin Alzheimer’s Disease Research Center (W-ADRC), focusing on individuals labeled as having Alzheimer’s disease (AD), mild cognitive impairment (MCI) and healthy controls. In particular, we contrast traditional univariate methods with our multi-resolution approach, which shows increased sensitivity and improved statistical power to detect group-level effects. We also provide an open source implementation. PMID:24614060

  7. Economies of scale and trends in the size of southern forest industries

    Treesearch

    James E. Granskog

    1978-01-01

    In each of the major southern forest industries, the trend has been toward achieving economies of scale, that is, to build larger production units to reduce unit costs. Current minimum efficient plant size estimated by survivor analysis is 1,000 tons per day capacity for sulfate pulping, 100 million square feet (3/8-inch basis) annual capacity for softwood plywood,...

  8. Methods of Constructing a Blended Performance Function Suitable for Formation Flight

    NASA Technical Reports Server (NTRS)

    Ryan, Jack

    2017-01-01

    Two methods for constructing performance functions for formation flight for drag reduction, suitable for use with an extremum-seeking control system, are presented. The first method approximates an a priori measured or estimated drag-reduction performance function by combining real-time measurements of readily available parameters. The parameters are combined with weightings determined from a least-squares optimization to form a blended performance function.
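
    The blending step admits a one-line sketch: given a recorded performance map y and a matrix A whose columns are the candidate real-time signals sampled at the same flight conditions, least squares gives the weights (the names are illustrative; this is not the report's implementation):

        import numpy as np

        def blend_weights(A, y):
            # Solve min_w ||A w - y||_2; A: samples x signals, y: measured
            # drag-reduction performance at the same samples.
            w, *_ = np.linalg.lstsq(A, y, rcond=None)
            return w

        # The online performance estimate is then A_live @ w for current signals.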

  9. Analytical Design of Evolvable Software for High-Assurance Computing

    DTIC Science & Technology

    2001-02-14

    Mathematical expression for the Total Sum of Squares which measures the variability that results when all values are treated as a combined sample coming from...primarily interested in background on software design and high-assurance computing, research in software architecture generation or evaluation...respectively. Those readers solely interested in the validation of a software design approach should at the minimum read Chapter 6 followed by Chapter

  10. 76 FR 71959 - KC Hydro LLC of New Hampshire; Notice of Preliminary Permit Application Accepted for Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-21

    ... shaft Kaplan turbine-generating unit with a total installed capacity of 0.84 MW; (5) a new 135-foot-long... outlet works; (4) a minimum flow turbine generator and a new 2,000-square-foot powerhouse containing one or two submersible or tubular-type turbine generators with a total installed capacity of 0.36 MW; (5...

  11. a Perspective on the Magic Square and the "special Unitary" Realization of Real Simple Lie Algebras

    NASA Astrophysics Data System (ADS)

    Santander, Mariano

    2013-07-01

    This paper contains the last part of the minicourse "Spaces: A Perspective View" delivered at the IFWGP2012. The series of three lectures was intended to bring the listeners from the more naive and elementary idea of space as "our physical Space" (which after all was the dominant one up to the 1820s) through the generalization of the idea of space which took place in the last third of the 19th century. That generalization was a consequence, first, of the discovery and acceptance of non-Euclidean geometry and, second, of the views afforded by the works of Riemann and Klein, continued since then by many others, most notably Lie and Cartan. Here we deal with the part of the minicourse which centers on the classification questions associated with the simple real Lie groups. We review the original introduction of the Magic Square "à la Freudenthal", putting the emphasis on the role played in this construction by the four normed division algebras ℝ, ℂ, ℍ, 𝕆. We then explore the possibility of understanding some simple real Lie algebras as "special unitary" over some algebras 𝕂 or tensor products 𝕂1 ⊗ 𝕂2, and we argue that the proper setting for this construction is not confined to the normed division algebras, but allows the split versions ℂ‧, ℍ‧, 𝕆‧ of the complex numbers, quaternions and octonions as well. In this way we get a "Grand Magic Square", and we fill in all the details required to cover all real forms of simple real Lie algebras within this scheme. The paper ends with the complete list of all realizations of simple real Lie algebras as "special unitary" (or only unitary when n = 2) over some tensor product of two *-algebras 𝕂1, 𝕂2, which in all cases are obtained from ℝ, ℂ, ℂ‧, ℍ, ℍ‧, 𝕆, 𝕆‧ as sets, endowing them with a *-conjugation which usually, but not always, is the natural complex, quaternionic or octonionic conjugation.

  12. Illumination-redistribution lenses for non-circular spots

    NASA Astrophysics Data System (ADS)

    Parkyn, William A.; Pelka, David G.

    2005-08-01

    The design of illumination lenses is far easier under the regime of the small-source approximation, whereby central rays are taken as representative of the entire source. This implies that the lens is much larger than the source's active emitter, and its entire interior surface is nowhere close to the source. Also, a given source luminance requires a minimum lens area to achieve the candlepower necessary for target illumination. We introduce two-surface aspheric lenses for specific illumination tasks involving ceiling-mounted downlights, lenses that achieve uniform illuminance at the output aperture as well as at the target. This means that squared-off lenses will produce square spots. In particular, a semicircular lens and a vertical mirror will produce a semicircular spot suitable for gambling tables.

  13. Adaptive control strategies for flexible robotic arm

    NASA Technical Reports Server (NTRS)

    Bialasiewicz, Jan T.

    1993-01-01

    The motivation for this research came about when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum-phase characteristics of the flexible robotic arm tip. Most existing neural network control algorithms are based on the direct method and exhibit very high sensitivity, if not unstable closed-loop behavior. Therefore, a neural self-tuning control (NSTC) algorithm was developed, applied to this problem, and showed promising results. Simulation results of the NSTC scheme and the conventional self-tuning regulator (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady-state response.

  14. Estimation of the simple correlation coefficient.

    PubMed

    Shieh, Gwowen

    2010-11-01

    This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of the simple correlation coefficient. Although Pearson's r is biased, except in limited situations, and a minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in their practical applications because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating the recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues of estimating the squared simple correlation coefficient are also considered.
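
    The trade-off discussed above can be explored directly; the simulation below compares the plain sample r with the approximate Olkin-Pratt correction r(1 + (1 - r^2)/(2(n - 3))), one standard nearly unbiased formula (the sample size and rho are illustrative):

        import numpy as np

        def olkin_pratt(r, n):
            # Approximate Olkin-Pratt nearly unbiased estimator of rho.
            return r * (1 + (1 - r ** 2) / (2 * (n - 3)))

        rng = np.random.default_rng(1)
        rho, n, reps = 0.5, 20, 20000
        cov = [[1, rho], [rho, 1]]
        est_r, est_op = [], []
        for _ in range(reps):
            s = rng.multivariate_normal([0, 0], cov, size=n)
            r = np.corrcoef(s[:, 0], s[:, 1])[0, 1]
            est_r.append(r)
            est_op.append(olkin_pratt(r, n))
        mse = lambda e: np.mean((np.asarray(e) - rho) ** 2)
        print(mse(est_r), mse(est_op))   # compare mean squared errors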

  15. Precise identification of Dirac-like point through a finite photonic crystal square matrix

    PubMed Central

    Dong, Guoyan; Zhou, Ji; Yang, Xiulun; Meng, Xiangfeng

    2016-01-01

    The phenomena of the minimum transmittance spectrum or the maximum reflection spectrum located around the Dirac frequency have been observed to demonstrate the 1/L scaling law near the Dirac-like point through the finite ribbon structure. However, so far there is no effective way to identify the Dirac-like point accurately. In this work we provide an effective measurement method to identify the Dirac-like point accurately through a finite photonic crystal square matrix. Based on the Dirac-like dispersion achieved by the accidental degeneracy at the centre of the Brillouin zone of dielectric photonic crystal, both the simulated and experimental results demonstrate that the transmittance spectra through a finite photonic crystal square matrix not only provide the clear evidence for the existence of Dirac-like point but also can be used to identify the precise location of Dirac-like point by the characteristics of sharp cusps embedded in the extremum spectra surrounding the conical singularity. PMID:27857145

  16. Development of a non-destructive method for determining protein nitrogen in a yellow fever vaccine by near infrared spectroscopy and multivariate calibration.

    PubMed

    Dabkiewicz, Vanessa Emídio; de Mello Pereira Abrantes, Shirley; Cassella, Ricardo Jorgensen

    2018-08-05

    Near infrared spectroscopy (NIR) with diffuse reflectance, associated with multivariate calibration, has as its main advantage the replacement of the physical separation of interferents by the mathematical separation of their signals, performed rapidly and with no need for reagent consumption, chemical waste production or sample manipulation. Seeking to optimize quality control analyses, this spectroscopic analytical method was shown to be a viable alternative to the classical Kjeldahl method for the determination of protein nitrogen in yellow fever vaccine. The most suitable multivariate calibration was achieved by the partial least squares (PLS) method with multiplicative signal correction (MSC) treatment and mean centering (MC) of the data, using a minimum number of latent variables (LV) equal to 1, with the lowest value of the square root of the mean squared prediction error (0.00330) associated with the highest percentage (91%) of samples. Accuracy ranged from 95 to 105% recovery in the 4000-5184 cm⁻¹ region. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. The Wall Interference of a Wind Tunnel of Elliptic Cross Section

    NASA Technical Reports Server (NTRS)

    Tani, Itiro; Sanuki, Matao

    1944-01-01

    The wall interference is obtained for a wind tunnel of elliptic section for the two cases of closed and open working sections. The approximate and exact methods used gave results in practically good agreement. Corresponding to the result given by Glauert for the case of the closed rectangular section, the interference is found to be a minimum for a ratio of minor to major axis of 1:√6. This, however, is true only for the case where the span of the airfoil is small in comparison with the width of the tunnel. For a longer airfoil the favorable ellipse is flatter. In the case of the open working section the circular shape gives the minimum interference.

  18. Performance of transonic fan stage with weight flow per unit annulus area of 178 kilograms per second per square meter (36.5 (lb/sec)/(sq ft))

    NASA Technical Reports Server (NTRS)

    Moore, R. D.; Urasek, D. C.; Kovich, G.

    1973-01-01

    The overall and blade-element performances are presented over the stable flow operating range from 50 to 100 percent of design speed. Stage peak efficiency of 0.834 was obtained at a weight flow of 26.4 kg/sec (58.3 lb/sec) and a pressure ratio of 1.581. The stall margin for the stage was 7.5 percent based on weight flow and pressure ratio at stall and peak efficiency conditions. The rotor minimum losses were approximately equal to design except in the blade vibration damper region. Stator minimum losses were less than design except in the tip and damper regions.

  19. Determining Metacarpophalangeal Flexion Angle Tolerance for Reliable Volumetric Joint Space Measurements by High-resolution Peripheral Quantitative Computed Tomography.

    PubMed

    Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl

    2016-10-01

    The position dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP joints were imaged at 7 flexion angles between 0 and 30 degrees. The variability in reproducibility for the mean, minimum, and maximum joint space widths and the volume measurements was calculated for increasing degrees of flexion. Root mean square coefficient of variation values were < 5% under 20 degrees of flexion for the mean, maximum, and volumetric joint spaces. Values for the minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.

  20. Gravitational instantons from minimal surfaces

    NASA Astrophysics Data System (ADS)

    Aliev, A. N.; Hortaçsu, M.; Kalayci, J.; Nutku, Y.

    1999-02-01

    Physical properties of gravitational instantons which are derivable from minimal surfaces in three-dimensional Euclidean space are examined using the Newman-Penrose formalism for Euclidean signature. The gravitational instanton that corresponds to the helicoid minimal surface is investigated in detail. This is a metric of Bianchi type VII₀, or E(2), which admits a hidden symmetry due to the existence of a quadratic Killing tensor. It leads to a complete separation of variables in the Hamilton-Jacobi equation for geodesics, as well as in Laplace's equation for a massless scalar field. The scalar Green function can be obtained in closed form, which enables us to calculate the vacuum fluctuations of a massless scalar field in the background of this instanton.

  1. Twistor Geometry of Null Foliations in Complex Euclidean Space

    NASA Astrophysics Data System (ADS)

    Taghavi-Chabert, Arman

    2017-01-01

    We give a detailed account of the geometric correspondence between a smooth complex projective quadric hypersurface Q^n of dimension n ≥ 3, and its twistor space PT, defined to be the space of all linear subspaces of maximal dimension of Q^n. Viewing complex Euclidean space CE^n as a dense open subset of Q^n, we show how local foliations tangent to certain integrable holomorphic totally null distributions of maximal rank on CE^n can be constructed in terms of complex submanifolds of PT. The construction is illustrated by means of two examples, one involving conformal Killing spinors, the other, conformal Killing-Yano 2-forms. We focus on the odd-dimensional case, and we treat the even-dimensional case only tangentially for comparison.

  2. Canonical Drude Weight for Non-integrable Quantum Spin Chains

    NASA Astrophysics Data System (ADS)

    Mastropietro, Vieri; Porta, Marcello

    2018-03-01

    The Drude weight is a central quantity for the transport properties of quantum spin chains. The canonical definition of Drude weight is directly related to Kubo formula of conductivity. However, the difficulty in the evaluation of such expression has led to several alternative formulations, accessible to different methods. In particular, the Euclidean, or imaginary-time, Drude weight can be studied via rigorous renormalization group. As a result, in the past years several universality results have been proven for such quantity at zero temperature; remarkably, the proofs work for both integrable and non-integrable quantum spin chains. Here we establish the equivalence of Euclidean and canonical Drude weights at zero temperature. Our proof is based on rigorous renormalization group methods, Ward identities, and complex analytic ideas.

  3. Duality of caustics in Minkowski billiards

    NASA Astrophysics Data System (ADS)

    Artstein-Avidan, S.; Florentin, D. I.; Ostrover, Y.; Rosen, D.

    2018-04-01

    In this paper we study convex caustics in Minkowski billiards. We show that for the Euclidean billiard dynamics in a planar smooth, centrally symmetric, strictly convex body K, for every convex caustic which K possesses, the ‘dual’ billiard dynamics in which the table is the Euclidean unit ball and the geometry that governs the motion is induced by the body K, possesses a dual convex caustic. Such a pair of caustics are dual in a strong sense, and in particular they have the same perimeter, Lazutkin parameter (both measured with respect to the corresponding geometries), and rotation number. We show moreover that for general Minkowski billiards this phenomenon fails, and one can construct a smooth caustic in a Minkowski billiard table which possesses no dual convex caustic.

  4. Action with Acceleration II: Euclidean Hamiltonian and Jordan Blocks

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.

    2013-10-01

    The Euclidean action with acceleration has been analyzed in Ref. 1, and referred to henceforth as Paper I, for its Hamiltonian and path integral. In this paper, the state space of the Hamiltonian is analyzed for the case when it is pseudo-Hermitian (equivalent to a Hermitian Hamiltonian), as well as the case when it is inequivalent. The propagator is computed using both creation and destruction operators as well as the path integral. A state space calculation of the propagator shows the crucial role played by the dual state vectors that yields a result impossible to obtain from a Hermitian Hamiltonian. When it is not pseudo-Hermitian, the Hamiltonian is shown to be a direct sum of Jordan blocks.

  5. On the stabilizability of multivariable systems by minimum order compensation

    NASA Technical Reports Server (NTRS)

    Byrnes, C. I.; Anderson, B. D. O.

    1983-01-01

    In this paper, a derivation is provided of the necessary condition, mp ≥ n, for stabilizability by constant gain feedback of the generic degree-n, p × m system. This follows from another of the main results, which asserts that generic stabilizability is equivalent to generic solvability of a deadbeat control problem, provided mp ≤ n. Taken together, these conclusions make it possible to make some sharp statements concerning minimum order stabilization. The techniques are primarily drawn from decision algebra and classical algebraic geometry and have additional consequences for problems of stabilizability and pole-assignability. Among these are the decidability (by a Sturm test) of the equivalence of generic pole-assignability and generic stabilizability, the semi-algebraic nature of the minimum order, q, of a stabilizing compensator, and the nonexistence of formulae involving rational operations and extraction of square roots for pole-assigning gains when they exist, answering in the negative a question raised by Anderson, Bose, and Jury (1975).

  6. A Preliminary Appraisal of the Needs for and Means of Obtaining the Necessary College Facilities at a Minimal Cost to the Taxpayer.

    ERIC Educational Resources Information Center

    Bortolazzo, Julio L.

    San Joaquin Delta College (California), planning on an enrollment increase of more than 10% annually, has estimated its minimum facility needs for an enrollment of approximately 7500 students by 1972. The gross cost per square foot is expected to be $25.00 for general construction and $38.50 for special construction. For an estimated total of…

  7. Recent changes in the size of southern forest enterprises: A survivor analysis

    Treesearch

    James E. Granskog

    1989-01-01

    Over the decade from 1976 to 1986, the trend among southern enterprises that process softwood timber has been to build larger operations to reduce unit costs. Minimum efficient plant size, as determined by survivor analysis, has increased from 1,000 to 1,500 tons per day for pulpmills, from 100 to 250 million square feet per year for softwood plywood plants, and from 20 to 50...

  8. 46 CFR 116.415 - Fire control boundaries.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 2 (0.5 pounds per square foot) must be minimum A-0 Class construction. 2 Toilet space boundaries may... various spaces must meet the requirements of Table 116.415(b). Table 116.415 (b)—Bulkheads Spaces (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) (13) Control Space (1) B-0 A-0 A-0 A-0 A-15 A-60 A-60 A-0...

  9. 46 CFR 116.415 - Fire control boundaries.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 2 (0.5 pounds per square foot) must be minimum A-0 Class construction. 2 Toilet space boundaries may... various spaces must meet the requirements of Table 116.415(b). Table 116.415 (b)—Bulkheads Spaces (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) (13) Control Space (1) B-0 A-0 A-0 A-0 A-15 A-60 A-60 A-0...

  10. 46 CFR 116.415 - Fire control boundaries.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 2 (0.5 pounds per square foot) must be minimum A-0 Class construction. 2 Toilet space boundaries may... various spaces must meet the requirements of Table 116.415(b). Table 116.415 (b)—Bulkheads Spaces (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) (13) Control Space (1) B-0 A-0 A-0 A-0 A-15 A-60 A-60 A-0...

  11. Measuring Microaggression and Organizational Climate Factors in Military Units

    DTIC Science & Technology

    2011-04-01

    i.e., items) to accurately assess what we intend for them to measure. To assess construct and convergent validity, the author assessed the statistical ...sample indicated both convergent and construct validity of the microaggression scale. Table 5 presents these statistics . Measuring Microaggressions...models. As shown in Table 7, the measurement models had acceptable fit indices. That is, the Chi-square statistics were at their minimum; although the

  12. Prediction of pKa Values for Neutral and Basic Drugs based on Hybrid Artificial Intelligence Methods.

    PubMed

    Li, Mengshan; Zhang, Huaijing; Chen, Bingsheng; Wu, Yan; Guan, Lixin

    2018-03-05

    The pKa value of drugs is an important parameter in drug design and pharmacology. In this paper, an improved particle swarm optimization (PSO) algorithm was proposed based on population entropy diversity. In the improved algorithm, when the population entropy was higher than the set maximum threshold, the convergence strategy was adopted; when the population entropy was lower than the set minimum threshold, the divergence strategy was adopted; when the population entropy was between the maximum and minimum thresholds, the self-adaptive adjustment strategy was maintained. The improved PSO algorithm was applied in the training of a radial basis function artificial neural network (RBF ANN) model and the selection of molecular descriptors. A quantitative structure-activity relationship model based on an RBF ANN trained by the improved PSO algorithm was proposed to predict the pKa values of 74 neutral and basic drugs and then validated on another database containing 20 molecules. The validation results showed that the model had good prediction performance. The absolute average relative error, root mean square error, and squared correlation coefficient were 0.3105, 0.0411, and 0.9685, respectively. The model can be used as a reference for exploring other quantitative structure-activity relationships.
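
    The entropy-gated switching logic can be sketched in a few lines. The entropy proxy and the thresholds H_MIN and H_MAX below are illustrative assumptions; the abstract does not give the paper's exact definitions.

    ```python
    import numpy as np

    H_MIN, H_MAX = 0.5, 0.9   # hypothetical entropy thresholds (paper's values not given)

    def population_entropy(positions, bins=10):
        """Normalized Shannon entropy of the swarm's position histogram,
        one possible proxy for population diversity."""
        hist, _ = np.histogramdd(positions, bins=bins)
        p = hist.ravel() / hist.sum()
        p = p[p > 0]
        return -(p * np.log(p)).sum() / np.log(bins ** positions.shape[1])

    def choose_strategy(positions):
        """Entropy-gated switching among the three PSO update strategies."""
        h = population_entropy(positions)
        if h > H_MAX:
            return "converge"       # diversity too high: pull particles together
        if h < H_MIN:
            return "diverge"        # diversity too low: scatter particles
        return "self-adaptive"      # otherwise keep the adaptive adjustment
    ```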

  13. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    NASA Astrophysics Data System (ADS)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
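
    A rough illustration of a replicator network, an autoassociative multilayer perceptron whose narrow middle hidden layer would carry the natural coordinates, can be built with scikit-learn by training the network to reproduce its input. The layer sizes and the synthetic 2-D manifold below are arbitrary assumptions, not the paper's configuration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Synthetic source: 5-D vectors lying near a 2-D manifold (a curved sheet).
    u, v = rng.uniform(-1, 1, (2, 2000))
    X = np.column_stack([u, v, u * v, np.sin(u), np.cos(v)])

    # Replicator network: three hidden layers, narrow middle (bottleneck) layer.
    net = MLPRegressor(hidden_layer_sizes=(16, 2, 16), activation="tanh",
                       max_iter=5000, random_state=0)
    net.fit(X, X)   # autoassociation: the target equals the input

    mse = np.mean((net.predict(X) - X) ** 2)
    print(f"mean squared reconstruction error: {mse:.4f}")
    ```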

  14. Minima of the Action Integral of the Newtonian 4-Body Problem with Equal Masses in R^3: 'Hip-Hop' Orbits

    NASA Astrophysics Data System (ADS)

    Chenciner, Alain; Venturelli, Andrea

    2000-09-01

    We consider the problem of 4 bodies of equal masses in R^3 for the Newtonian r^{-1} potential. We address the question of the absolute minima of the action integral among (anti)symmetric loops of class H^1 whose period is fixed. It is the simplest case for which the results of [4] (corrected in [5]) do not apply: the minima cannot be the relative equilibria whose configuration is an absolute minimum of the potential among the configurations having a given moment of inertia with respect to their center of mass. This is because the regular tetrahedron cannot have a relative equilibrium motion in R^3 (see [2]). We show that the absolute minima of the action are not homographic motions. We also show that if we force the configuration to admit a certain type of symmetry of order 4, the absolute minimum is a collisionless orbit whose configuration 'hesitates' between the central configuration of the square and that of the tetrahedron. We call these orbits 'hip-hop'. A similar result holds in the case of a symmetry of order 3, where the central configuration of the equilateral triangle with a body at the center of mass replaces the square.

  15. Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera

    NASA Astrophysics Data System (ADS)

    Rahman, Samiur; Ullah, Sana; Ullah, Sehat

    2018-01-01

    Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images. In an indoor environment, all unique floor types are considered and a single image is stored for each unique floor type. These floor images are considered reference images. The algorithm acquires an input image frame, then selects a region of interest and scans it for obstacles using the pre-stored floor images. The algorithm compares the present frame with the next frame and computes the mean square error of the two frames. If the mean square error is less than a threshold value α, there is no obstacle in the next frame. If the mean square error is greater than α, there are two possibilities: either there is an obstacle or the floor type has changed. To check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor types. If the minimum of these mean square errors is less than the threshold value α, the floor has changed; otherwise there exists an obstacle. The proposed algorithm works in real time and 96% accuracy has been achieved.
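
    The decision rule is compact enough to sketch directly. The snippet below assumes grayscale frames as equally sized NumPy arrays and a hypothetical threshold ALPHA; region-of-interest selection and the real-time plumbing are omitted.

    ```python
    import numpy as np

    ALPHA = 100.0   # hypothetical MSE threshold; the paper's value is not given

    def mse(a, b):
        """Mean square error between two equally sized grayscale frames."""
        return np.mean((a.astype(float) - b.astype(float)) ** 2)

    def classify(frame, next_frame, floor_refs):
        """Decision rule from the abstract: no obstacle, floor change, or obstacle."""
        if mse(frame, next_frame) < ALPHA:
            return "no obstacle"
        # Large change: either an obstacle appeared or the floor type changed.
        if min(mse(next_frame, ref) for ref in floor_refs) < ALPHA:
            return "floor changed"
        return "obstacle"
    ```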

  16. Joint Entropy for Space and Spatial Frequency Domains Estimated from Psychometric Functions of Achromatic Discrimination

    PubMed Central

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviations of these Gaussian functions. Then, joint entropy was obtained as the square root of the product of the space extent entropy and the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies, since there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised. PMID:24466158

  17. A three-dimensional gravity inversion applied to São Miguel Island (Azores)

    NASA Astrophysics Data System (ADS)

    Camacho, A. G.; Montesinos, F. G.; Vieira, R.

    1997-04-01

    Gravimetric studies are becoming more and more widely acknowledged as a useful tool for studying and modeling the distributions of subsurface masses that are associated with volcanic activity. In this paper, new gravimetric data for the volcanic island of São Miguel (Azores) were analyzed and interpreted by a stabilized linear inversion methodology. An inversion model of higher resolution was calculated for the Caldera of Furnas, which has a larger density of data. In order to filter out the noncorrelatable anomalies, least squares prediction was used, resulting in a correlated gravimetric signal model with an accuracy of the order of 0.9 mGal. The gravimetric inversion technique is based on the adjustment of a three-dimensional (3-D) model of cubes of unknown density that represents the island's subsurface. The problem of non-uniqueness is solved by minimization with appropriate covariance matrices of the data (resulting from the least squares prediction) and of the unknowns. We also propose a criterion for choosing a balance between the data fit (which in this case corresponds to residues with an rms of the order of 0.6 mGal) and the smoothness of the solution. The global model of the island includes a low-density zone in a WNW-ESE direction at a depth of the order of 20 km, associated with the Terceira rift spreading center. The minima located at a depth of 4 km may be associated with shallow magmatic chambers beneath the main volcanoes of the island. The main high-density area is related to the Nordeste basaltic shield. With regard to the Caldera of Furnas, in addition to the minimum that can be associated with a magmatic chamber, there are other shallow minima that correspond to eruptive processes.

  18. Joint entropy for space and spatial frequency domains estimated from psychometric functions of achromatic discrimination.

    PubMed

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviations of these Gaussian functions. Then, joint entropy was obtained as the square root of the product of the space extent entropy and the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies, since there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised.

  19. Geographic and topographic determinants of local FMD transmission applied to the 2001 UK FMD epidemic.

    PubMed

    Bessell, Paul R; Shaw, Darren J; Savill, Nicholas J; Woolhouse, Mark E J

    2008-10-03

    Models of Foot and Mouth Disease (FMD) transmission have assumed a homogeneous landscape across which Euclidean distance is a suitable measure of the spatial dependency of transmission. This paper investigated features of the landscape and their impact on transmission during the period of predominantly local spread which followed the implementation of the national movement ban during the 2001 UK FMD epidemic. In this study 113 farms diagnosed with FMD which had a known source of infection within 3 km (cases) were matched to 188 control farms which were either uninfected or infected at a later timepoint. Cases were matched to controls by Euclidean distance to the source of infection and farm size. Intervening geographical features and connectivity between the source of infection and case and controls were compared. Road distance between holdings, access to holdings, presence of forest, elevation change between holdings and the presence of intervening roads had no impact on the risk of local FMD transmission (p > 0.2). However the presence of linear features in the form of rivers and railways acted as barriers to FMD transmission (odds ratio = 0.507, 95% CI = 0.297-0.887, p = 0.018). This paper demonstrated that although FMD spread can generally be modelled using Euclidean distance and numbers of animals on susceptible holdings, the presence of rivers and railways has an additional protective effect reducing the probability of transmission between holdings.

  20. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion

    PubMed Central

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-01-01

    The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, nowadays the accuracy of WiFi indoor positioning cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: the offline acquisition and the online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition, and it forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints to shorten the positioning time. After that, it uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and further calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation, and the improved joint probability introduces a logarithmic calculation to reduce the difference between probability values. Comparing the proposed algorithm with the Euclidean distance-based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy. PMID:26334278
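
    A schematic of the fusion stage follows. The standard-deviation smoothing and the fusion weight are written as simple placeholders, since the paper's exact formulas are not reproduced in the abstract.

    ```python
    import numpy as np

    def improved_distance(rss_online, rss_fp, sigma_fp):
        """Euclidean distance damped by the per-AP signal standard deviation
        (a stand-in for the paper's 'improved Euclidean distance')."""
        return np.sqrt(np.sum(((rss_online - rss_fp) / (1.0 + sigma_fp)) ** 2))

    def fuse(pos_dist, pos_prob, w=0.5):
        """Weighted fusion of the two intermediate position estimates.
        w is a hypothetical weight; the paper derives its own weighting."""
        return w * np.asarray(pos_dist) + (1.0 - w) * np.asarray(pos_prob)
    ```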

  1. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion.

    PubMed

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-08-31

    The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, nowadays the accuracy of WiFi indoor positioning cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: the offline acquisition and the online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition, and it forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints to shorten the positioning time. After that, it uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and further calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation, and the improved joint probability introduces a logarithmic calculation to reduce the difference between probability values. Comparing the proposed algorithm with the Euclidean distance-based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy.

  2. Reprint of "Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency".

    PubMed

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-08-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on the Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating the reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and the graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Puzzles, Pastimes, Problems.

    ERIC Educational Resources Information Center

    Eperson, D. B.

    1985-01-01

    Presents six mathematical problems (with answers) which focus on: (1) chess moves; (2) patterned numbers; (3) quadratics with rational roots; (4) number puzzles; (5) Euclidean geometry; and (6) Carrollian word puzzles. (JN)

  4. Systems identification using a modified Newton-Raphson method: A FORTRAN program

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Iliff, K. W.

    1972-01-01

    A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant-coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which simultaneously minimizes the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization, which typically requires several iterations. A starting technique is used which ensures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.
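
    The minimization is of Gauss-Newton/quasilinearization type. A generic unweighted sketch in Python (the report's FORTRAN implementation additionally carries response weights and an a priori parameter penalty) is:

    ```python
    import numpy as np

    def gauss_newton(f, jac, y, p0, iters=20):
        """Minimize ||y - f(p)||^2 by repeated linearization of f about p.
        f(p) returns the model response; jac(p) its Jacobian w.r.t. p."""
        p = np.asarray(p0, dtype=float)
        for _ in range(iters):
            r = y - f(p)                                   # current residuals
            J = jac(p)                                     # sensitivity matrix
            step, *_ = np.linalg.lstsq(J, r, rcond=None)   # linearized update
            p = p + step
        return p

    # Example: fit y = a * exp(-b * t) to noisy data.
    t = np.linspace(0, 4, 50)
    y = 2.0 * np.exp(-1.3 * t) + 0.01 * np.random.default_rng(1).standard_normal(50)
    f = lambda p: p[0] * np.exp(-p[1] * t)
    jac = lambda p: np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])
    print(gauss_newton(f, jac, y, p0=[1.0, 1.0]))
    ```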

  5. A channel estimation scheme for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen

    2017-08-01

    In view of the contradiction between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is obtained. This approach transforms the MIMO-OFDM channel estimation problem into a simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problem, and therefore there is no need for a large matrix pseudo-inverse, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and is nearly optimal.
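
    For context, the two estimators being compared take the standard textbook forms below for a pilot model y = X h + n. This is a generic sketch, not the authors' scheme, and it assumes the channel covariance R_h and noise variance sigma2 are known.

    ```python
    import numpy as np

    def ls_estimate(X, y):
        """Least-squares channel estimate: h = (X^H X)^{-1} X^H y."""
        return np.linalg.lstsq(X, y, rcond=None)[0]

    def lmmse_estimate(X, y, R_h, sigma2):
        """Linear MMSE estimate: h = R_h X^H (X R_h X^H + sigma2 I)^{-1} y."""
        C = X @ R_h @ X.conj().T + sigma2 * np.eye(X.shape[0])
        return R_h @ X.conj().T @ np.linalg.solve(C, y)
    ```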

  6. On the robustness of a Bayes estimate. [in reliability theory

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    This paper examines the robustness of a Bayes estimator with respect to the assigned prior distribution. A Bayesian analysis for a stochastic scale parameter of a Weibull failure model is summarized in which the natural conjugate is assigned as the prior distribution of the random parameter. The sensitivity analysis is carried out by the Monte Carlo method in which, although an inverted gamma is the assigned prior, realizations are generated using distribution functions of varying shape. For several distributional forms and even for some fixed values of the parameter, simulated mean squared errors of Bayes and minimum variance unbiased estimators are determined and compared. Results indicate that the Bayes estimator remains squared-error superior and appears to be largely robust to the form of the assigned prior distribution.

  7. Unstable spiral waves and local Euclidean symmetry in a model of cardiac tissue.

    PubMed

    Marcotte, Christopher D; Grigoriev, Roman O

    2015-06-01

    This paper investigates the properties of unstable single-spiral wave solutions arising in the Karma model of two-dimensional cardiac tissue. In particular, we discuss how such solutions can be computed numerically on domains of arbitrary shape and study how their stability, rotational frequency, and spatial drift depend on the size of the domain as well as the position of the spiral core with respect to the boundaries. We also discuss how the breaking of local Euclidean symmetry due to finite size effects as well as the spatial discretization of the model is reflected in the structure and dynamics of spiral waves. This analysis allows identification of a self-sustaining process responsible for maintaining the state of spiral chaos featuring multiple interacting spirals.

  8. Discrimination of malignant lymphomas and leukemia using Radon transform based-higher order spectra

    NASA Astrophysics Data System (ADS)

    Luo, Yi; Celenk, Mehmet; Bejai, Prashanth

    2006-03-01

    A new algorithm that can be used to automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes the morphological watersheds to obtain boundaries of cells from cell images and isolate them from the surrounding background. The areas of cells are extracted from cell images after background subtraction. The Radon transform and higher-order spectra (HOS) analysis are utilized as image processing tools to generate class feature vectors of the different cell types and to extract the feature vectors of the cells under test. The test feature vectors are then compared with the known class feature vectors for a possible match by computing Euclidean distances. The cell in question is classified as belonging to one of the existing cell classes in the least-Euclidean-distance sense.
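
    The final matching stage is a minimum Euclidean distance classifier, which can be stated in a few lines. Feature extraction via the Radon transform and HOS is assumed already done; the names below are illustrative.

    ```python
    import numpy as np

    def classify_cell(test_vec, class_vecs):
        """Assign the test feature vector to the class whose stored feature
        vector is nearest in the Euclidean sense. class_vecs maps class
        name -> feature vector (all vectors of equal length)."""
        return min(class_vecs,
                   key=lambda c: np.linalg.norm(test_vec - class_vecs[c]))
    ```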

  9. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means, both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
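
    As a purely illustrative companion, the sketch below runs the degree-limited polynomial Euclidean iteration (Sugiyama's formulation) over a small prime field GF(p). Practical Reed-Solomon decoders work over GF(2^m) and would seed the iteration with the Forney syndromes and the erasure locator as the paper describes; the field size and initialization here are toy assumptions.

    ```python
    # Polynomials are coefficient lists over GF(P), lowest degree first.
    P = 929  # illustrative prime modulus

    def deg(a):
        """Degree of a coefficient list (-1 for the zero polynomial)."""
        d = len(a) - 1
        while d >= 0 and a[d] % P == 0:
            d -= 1
        return d

    def sub_scaled(a, b, c, shift):
        """Return a - c * x^shift * b over GF(P)."""
        out = list(a) + [0] * max(0, len(b) + shift - len(a))
        for i, bi in enumerate(b):
            out[i + shift] = (out[i + shift] - c * bi) % P
        return out

    def divmod_poly(a, b):
        """Polynomial quotient and remainder of a / b over GF(P)."""
        q, r = [0] * (max(deg(a) - deg(b), 0) + 1), list(a)
        inv_lead = pow(b[deg(b)], P - 2, P)          # Fermat inverse of lead coeff
        while deg(r) >= deg(b):
            shift = deg(r) - deg(b)
            c = (r[deg(r)] * inv_lead) % P
            q[shift] = c
            r = sub_scaled(r, b, c, shift)
        return q, r

    def euclid_decode_step(syndrome, two_t):
        """Run Euclid on x^(2t) and S(x) until deg(remainder) < t; the last
        auxiliary polynomial is the errata locator and the remainder the
        errata evaluator (both up to a scalar factor)."""
        r_prev, r_cur = [0] * two_t + [1], list(syndrome)
        t_prev, t_cur = [0], [1]
        while deg(r_cur) >= two_t // 2:
            q, r_next = divmod_poly(r_prev, r_cur)
            t_next = list(t_prev)
            for i, qi in enumerate(q):               # t_next = t_prev - q * t_cur
                t_next = sub_scaled(t_next, t_cur, qi, i)
            r_prev, r_cur, t_prev, t_cur = r_cur, r_next, t_cur, t_next
        return t_cur, r_cur
    ```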

  10. Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs

    NASA Astrophysics Data System (ADS)

    Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur

    2018-03-01

    A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in the d-dimensional Euclidean space with d ≥ 2. Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.
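
    The process is straightforward to simulate, which makes the difficulty of solving it analytically all the more striking. A minimal Monte Carlo sketch for disks (d = 2) in a unit square, ignoring boundary effects:

    ```python
    import numpy as np

    def rsa_fraction(radius=0.05, attempts=10000, seed=0):
        """Random sequential adsorption of congruent disks in the unit square:
        each arriving disk is accepted only if it overlaps no accepted disk.
        Returns the fraction of attempts that were accepted."""
        rng = np.random.default_rng(seed)
        centers = []
        accepted = 0
        for _ in range(attempts):
            x = rng.uniform(0, 1, 2)
            if all(np.sum((x - c) ** 2) >= (2 * radius) ** 2 for c in centers):
                centers.append(x)
                accepted += 1
        return accepted / attempts

    print(rsa_fraction())
    ```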

  11. Protein–DNA Interactions: The Story so Far and a New Method for Prediction

    DOE PAGES

    Jones, Susan; Thornton, Janet M.

    2003-01-01

    This review describes methods for the prediction of DNA binding function, and specifically summarizes a new method using 3D structural templates. The new method features the HTH motif that is found in approximately one-third of DNA-binding protein families. A library of 3D structural templates of HTH motifs was derived from proteins in the PDB. Templates were scanned against complete protein structures and the optimal superposition of a template on a structure calculated. Significance thresholds in terms of a minimum root mean squared deviation (rmsd) of an optimal superposition, and a minimum motif accessible surface area (ASA), have been calculated. In this way, it is possible to scan the template library against proteins of unknown function to make predictions about DNA-binding functionality.

  12. The Mass of Graviton and Its Relation to the Number of Information according to the Holographic Principle

    PubMed Central

    Gkigkitzis, Ioannis

    2014-01-01

    We investigate the relation of the mass of the graviton to the number of information N in a flat universe. As a result we find that the mass of the graviton scales as m_gr ∝ 1/N. Furthermore, we find that the number of gravitons contained inside the observable horizon is directly proportional to the number of information N; that is, N_gr ∝ N. Similarly, the total mass of gravitons that exist in the universe is proportional to the number of information N; that is, M_gr ∝ N. In an effort to establish a relation between the graviton mass and the basic parameters of the universe, we find that the mass of the graviton is simply twice the Hubble mass m_H, as it is defined by Gerstein et al. (2003), times the square root of the quantity q − 1/2, where q is the deceleration parameter of the universe. In relation to the geometry of the universe, we find that the mass of the graviton varies according to the relation m_gr ∝ R_sc, and therefore m_gr obviously controls the geometry of the spacetime through a deviation of the geodesic spheres from the spheres of the Euclidean metric. PMID:27433513

  13. Predictions of the quantum landscape multiverse

    NASA Astrophysics Data System (ADS)

    Mersini-Houghton, Laura

    2017-02-01

    The 2015 Planck data release has placed tight constraints on the class of inflationary models allowed. The current best fit region favors concave downward inflationary potentials, since they produce a suppressed tensor-to-scalar ratio r. Concave downward potentials have a negative curvature V'' and therefore a tachyonic mass squared that drives fluctuations. Furthermore, their use can become problematic if the field rolls in a part of the potential away from the extrema, since the semiclassical approximation of quantum cosmology, used for deriving the most probable wavefunction of the universe from the landscape and for addressing the quantum to classical transition, breaks down away from the steepest descent region. We here propose a way of dealing with such potentials by inverting the metric signature and solving for the wavefunction of the universe in the Euclidean sector. This method allows us to extend our theory of the origin of the universe from a quantum multiverse to a more general class of concave inflationary potentials where a straightforward application of the semiclassical approximation fails. The work here completes the derivation of modifications to the Newtonian potential and to the inflationary potential, which originate from the quantum entanglement of our universe with all others in the quantum landscape multiverse, leading to predictions of observational signatures for both types of inflationary models, concave and convex potentials.

  14. Mapping growing stock volume and forest live biomass: a case study of the Polissya region of Ukraine

    NASA Astrophysics Data System (ADS)

    Bilous, Andrii; Myroniuk, Viktor; Holiaka, Dmytrii; Bilous, Svitlana; See, Linda; Schepaschenko, Dmitry

    2017-10-01

    Forest inventory and biomass mapping are important tasks that require inputs from multiple data sources. In this paper we implement two methods for the Ukrainian region of Polissya: random forest (RF) for tree species prediction and k-nearest neighbors (k-NN) for growing stock volume and biomass mapping. We examined the suitability of the five-band RapidEye satellite image to predict the distribution of six tree species. The accuracy of RF is quite high: ~99% for the forest/non-forest mask and 89% for tree species prediction. Our results demonstrate that the inclusion of elevation as a predictor variable in the RF model improved the performance of tree species classification. We evaluated different distance metrics for the k-NN method, including Euclidean and Mahalanobis distance, most similar neighbor (MSN), gradient nearest neighbor, and independent component analysis. The MSN with the four nearest neighbors (k = 4) is the most precise (according to the root-mean-square deviation) for predicting forest attributes across the study area. The k-NN method allowed us to estimate growing stock volume with an accuracy of 3 m³ ha⁻¹ and live biomass with an accuracy of about 2 t ha⁻¹ over the study area.
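
    For readers unfamiliar with k-NN imputation, the prediction step reduces to the sketch below: a plain Euclidean-distance k-NN average with k = 4, a generic stand-in for (not a reproduction of) the MSN variant the authors found most precise.

    ```python
    import numpy as np

    def knn_predict(x, ref_features, ref_values, k=4):
        """Predict a forest attribute (e.g., growing stock volume) for feature
        vector x as the mean of the values of its k nearest reference plots
        under the Euclidean metric."""
        d = np.linalg.norm(ref_features - x, axis=1)
        nearest = np.argsort(d)[:k]
        return ref_values[nearest].mean()
    ```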

  15. Discriminating model for diagnosis of basal cell carcinoma and melanoma in vitro based on the Raman spectra of selected biochemicals

    NASA Astrophysics Data System (ADS)

    Silveira, Landulfo; Silveira, Fabrício Luiz; Bodanese, Benito; Zângaro, Renato Amaro; Pacheco, Marcos Tadeu T.

    2012-07-01

    Raman spectroscopy has been employed to identify differences in the biochemical constitution of malignant [basal cell carcinoma (BCC) and melanoma (MEL)] cells compared to normal skin tissues, with the goal of skin cancer diagnosis. We collected Raman spectra from compounds such as proteins, lipids, and nucleic acids, which are expected to be represented in human skin spectra, and developed a linear least-squares fitting model to estimate the contributions of these compounds to the tissue spectra. We used a set of 145 spectra from biopsy fragments of normal (30 spectra), BCC (96 spectra), and MEL (19 spectra) skin tissues, collected using a near-infrared Raman spectrometer (830 nm, 50 to 200 mW, and 20 s exposure time) coupled to a Raman probe. We applied the best-fitting model to the spectra of biochemicals and tissues, hypothesizing that the relative spectral contribution of each compound to the tissue Raman spectrum changes according to the disease. We verified that actin, collagen, elastin, and triolein were the most important biochemicals representing the spectral features of skin tissues. A classification model applied to the relative contribution of collagen III, elastin, and melanin using Euclidean distance as a discriminator could differentiate normal from BCC and MEL.
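
    The fitting step described here is ordinary linear least squares over a library of pure-compound spectra. A minimal sketch of that idea, assuming all spectra are sampled on a common wavenumber grid (array names are illustrative):

    ```python
    import numpy as np

    def fit_contributions(tissue_spectrum, component_spectra):
        """Least-squares fit of a tissue Raman spectrum as a linear combination
        of pure biochemical spectra (columns of component_spectra). Returns
        the fitted coefficients and the residual spectrum."""
        coeffs, *_ = np.linalg.lstsq(component_spectra, tissue_spectrum, rcond=None)
        residual = tissue_spectrum - component_spectra @ coeffs
        return coeffs, residual
    ```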

  16. Statistical validation of brain tumor shape approximation via spherical harmonics for image-guided neurosurgery.

    PubMed

    Goldberg-Zimring, Daniel; Talos, Ion-Florin; Bhagwat, Jui G; Haker, Steven J; Black, Peter M; Zou, Kelly H

    2005-04-01

    Surgical planning now routinely uses both two-dimensional (2D) and three-dimensional (3D) models that integrate data from multiple imaging modalities, each highlighting one or more aspects of morphology or function. We performed a preliminary evaluation of the use of spherical harmonics (SH) in approximating the 3D shape and estimating the volume of brain tumors of varying characteristics. Magnetic resonance (MR) images from five patients with brain tumors were selected randomly from our MR-guided neurosurgical practice. Standardized mean square reconstruction errors (SMSRE) by tumor volume were measured. Validation metrics for comparing performances of the SH method against segmented contours (SC) were the Dice similarity coefficient (DSC) and the standardized Euclidean distance (SED) measure. Tumor volume range was 22,413-85,189 mm³, and the range of the number of vertices in the triangulated models was 3674-6544. At SH approximations with degree of at least 30, SMSRE were within 1.66 × 10⁻⁵ mm⁻¹. Summary measures yielded a DSC range of 0.89-0.99 (pooled median, 0.97 and significantly >0.7; P < .001) and an SED range of 0.0002-0.0028 (pooled median, 0.0005). 3D shapes of tumors may be approximated by using SH for neurosurgical applications.

  17. Development of an automatic cow body condition scoring using body shape signature and Fourier descriptors.

    PubMed

    Bercovich, A; Edan, Y; Alchanatis, V; Moallem, U; Parmet, Y; Honig, H; Maltz, E; Antler, A; Halachmi, I

    2013-01-01

    Body condition evaluation is a common tool to assess energy reserves of dairy cows and to estimate their fatness or thinness. This study presents a computer-vision tool that automatically estimates a cow's body condition score. Top-view images of 151 cows were collected on an Israeli research dairy farm using a digital still camera located at the entrance to the milking parlor. The cow's tailhead area and its contour were segmented and extracted automatically. Two types of features of the tailhead contour were extracted: (1) the angles and distances between 5 anatomical points; and (2) the cow signature, which is a 1-dimensional vector of the Euclidean distances from each point in the normalized tailhead contour to the shape center. Two methods were applied to describe the cow's signature and to reduce its dimension: (1) partial least squares regression, and (2) Fourier descriptors of the cow signature. Three prediction models were compared with the manual scores of an expert. Results indicate that (1) it is possible to automatically extract and predict body condition from color images without any manual interference; and (2) Fourier descriptors of the cow's signature result in improved performance (R² = 0.77). Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
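
    The signature construction can be sketched generically: reduce the contour to centroid distances, then keep a few Fourier magnitudes as a compact descriptor. This illustrates the idea rather than the authors' pipeline; the resampling scheme and normalization are arbitrary choices.

    ```python
    import numpy as np

    def shape_signature(contour, n_points=128):
        """Centroid-distance signature of a closed contour given as an
        (N, 2) array of (x, y) points, resampled to n_points samples."""
        idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
        pts = contour[idx]
        center = pts.mean(axis=0)
        return np.linalg.norm(pts - center, axis=1)

    def fourier_descriptors(signature, n_coeffs=10):
        """First n_coeffs Fourier magnitudes of the signature, normalized by
        the DC term so the descriptor is scale-invariant."""
        spec = np.abs(np.fft.rfft(signature))
        return spec[1:n_coeffs + 1] / spec[0]
    ```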

  18. Euclidean chemical spaces from molecular fingerprints: Hamming distance and Hempel's ravens.

    PubMed

    Martin, Eric; Cao, Eddie

    2015-05-01

    Molecules are often characterized by sparse binary fingerprints, where 1s represent the presence of substructures and 0s represent their absence. Fingerprints are especially useful for similarity calculations, such as database searching or clustering, generally measuring similarity as the Tanimoto coefficient. In other cases, such as visualization, design of experiments, or latent variable regression, a low-dimensional Euclidean "chemical space" is more useful, where proximity between points reflects chemical similarity. A temptation is to apply principal components analysis (PCA) directly to these fingerprints to obtain a low-dimensional continuous chemical space. However, Gower has shown that distances from PCA on bit vectors are proportional to the square root of Hamming distance. Unlike Tanimoto similarity, Hamming similarity (HS) gives equal weight to shared 0s as to shared 1s; that is, HS gives as much weight to substructures that neither molecule contains as to substructures that both molecules contain. Illustrative examples show that proximity in the corresponding chemical space reflects mainly similar size and complexity rather than shared chemical substructures. These spaces are ill-suited for visualizing and optimizing coverage of chemical space, or as latent variables for regression. A more suitable alternative is shown to be multidimensional scaling on the Tanimoto distance matrix, which produces a space where proximity does reflect structural similarity.
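
    The contrast between the two similarity measures is easy to demonstrate. In the sketch below, the two fingerprints share one 1-bit and five 0-bits: Tanimoto similarity is 1/3, while Hamming similarity is inflated to 0.75 by the shared absences.

    ```python
    import numpy as np

    def tanimoto(a, b):
        """Shared substructures over the union of substructures (1-bits only)."""
        both = np.sum(a & b)
        return both / (np.sum(a) + np.sum(b) - both)

    def hamming_similarity(a, b):
        """Fraction of positions that agree; shared 0s count as much as 1s."""
        return np.mean(a == b)

    a = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=int)
    b = np.array([1, 0, 1, 0, 0, 0, 0, 0], dtype=int)
    print(tanimoto(a, b), hamming_similarity(a, b))  # 0.333... vs 0.75
    ```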

  19. Inhomogeneous field theory inside the arctic circle

    NASA Astrophysics Data System (ADS)

    Allegra, Nicolas; Dubail, Jérôme; Stéphan, Jean-Marie; Viti, Jacopo

    2016-05-01

    Motivated by quantum quenches in spin chains, a one-dimensional toy model of fermionic particles evolving in imaginary time from a domain-wall initial state is solved. The main interest of this toy model is that it exhibits the arctic circle phenomenon, namely a spatial phase separation between a critically fluctuating region and a frozen region. Large-scale correlations inside the critical region are expressed in terms of correlators in a (Euclidean) two-dimensional massless Dirac field theory. It is observed that this theory is inhomogeneous: the metric is position-dependent, so it is in fact a Dirac theory in curved space. The technique used to solve the toy model is then extended to deal with the transfer matrices of other models: dimers on the honeycomb and square lattice, and the six-vertex model at the free fermion point (Δ = 0). In all cases, explicit expressions are given for the long-range correlations in the critical region, as well as for the underlying Dirac action. Although the setup developed here is heavily based on fermionic observables, the results can be translated into the language of height configurations and of the Gaussian free field, via bosonization. Correlations close to the phase boundary and the generic appearance of Airy processes in all these models are also briefly revisited in the appendix.

  20. Quantum statistical relation for black holes in nonlinear electrodynamics coupled to Einstein-Gauss-Bonnet AdS gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miskovic, Olivera; Olea, Rodrigo

    2011-03-15

    We consider curvature-squared corrections to the Einstein-Hilbert gravity action in the form of a Gauss-Bonnet term in D>4 dimensions. In this theory, we study the thermodynamics of charged static black holes with anti-de Sitter (AdS) asymptotics, and whose electric field is described by nonlinear electrodynamics. These objects have received considerable attention in recent literature on gravity/gauge dualities. It is well-known that, within the framework of anti-de Sitter/conformal field theory (AdS/CFT) correspondence, there exists a nonvanishing Casimir contribution to the internal energy of the system, manifested as the vacuum energy for global AdS spacetime in odd dimensions. Because of this reason, we derive a quantum statistical relation directly from the Euclidean action and not from the integration of the first law of thermodynamics. To this end, we employ a background-independent regularization scheme which consists, in addition to the bulk action, of counterterms that depend on both extrinsic and intrinsic curvatures of the boundary (Kounterterm series). This procedure results in a consistent inclusion of the vacuum energy and chemical potential in the thermodynamic description for Einstein-Gauss-Bonnet AdS gravity, regardless of the explicit form of the nonlinear electrodynamics Lagrangian.

  1. Period changes of two contact binaries: DF Hya and WZ And

    NASA Astrophysics Data System (ADS)

    Bulut, A.; Bulut, I.

    2018-02-01

    Orbital period variations of two contact binaries, DF Hya and WZ And, are analyzed with the least-squares method by using all available minima times. It is shown that the period variations of these systems are due mainly to the Light-Time Effect (LITE), which originates from the gravitational influence of a third body. New LITE elements, such as the orbital periods and minimum masses of the possible third bodies, are given.

  2. Point processes in arbitrary dimension from fermionic gases, random matrix theory, and number theory

    NASA Astrophysics Data System (ADS)

    Torquato, Salvatore; Scardicchio, A.; Zachary, Chase E.

    2008-11-01

    It is well known that one can map certain properties of random matrices, fermionic gases, and zeros of the Riemann zeta function to a unique point process on the real line R. Here we analytically provide exact generalizations of such a point process in d-dimensional Euclidean space R^d for any d, which are special cases of determinantal processes. In particular, we obtain the n-particle correlation functions for any n, which completely specify the point processes in R^d. We also demonstrate that spin-polarized fermionic systems in R^d have these same n-particle correlation functions in each dimension. The point processes for any d are shown to be hyperuniform, i.e., infinite wavelength density fluctuations vanish, and the structure factor (or power spectrum) S(k) has a non-analytic behavior at the origin given by S(k) ~ |k| as k → 0. The latter result implies that the pair correlation function g2(r) tends to unity for large pair distances with a decay rate that is controlled by the power law 1/r^(d+1), which is a well-known property of bosonic ground states and more recently has been shown to characterize maximally random jammed sphere packings. We graphically display one- and two-dimensional realizations of the point processes in order to vividly reveal their 'repulsive' nature. Indeed, we show that the point processes can be characterized by an effective 'hard core' diameter that grows like the square root of d. The nearest-neighbor distribution functions for these point processes are also evaluated and rigorously bounded. Among other results, this analysis reveals that the probability of finding a large spherical cavity of radius r in dimension d behaves like a Poisson point process but in dimension d+1, i.e., this probability is given by exp[-κ(d) r^(d+1)] for large r and finite d, where κ(d) is a positive d-dependent constant. We also show that as d increases, the point process behaves effectively like a sphere packing with a coverage fraction of space that is no denser than 1/2^d. This coverage fraction has a special significance in the study of sphere packings in high-dimensional Euclidean spaces.

  3. A weighted least squares estimation of the polynomial regression model on paddy production in the area of Kedah and Perlis

    NASA Astrophysics Data System (ADS)

    Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd

    2017-08-01

    The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation. In other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its property of minimum variance in the class of linear and unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and would give highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and the amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
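
    As background, the weighted fitting criterion reduces to ordinary least squares after each observation is scaled by the square root of its weight, as the sketch below illustrates (a generic device, not the study's paddy model).

    ```python
    import numpy as np

    def weighted_least_squares(X, y, w):
        """Minimize sum_i w_i * (y_i - x_i @ beta)^2 by scaling the rows of X
        and y with sqrt(w_i) and solving the resulting ordinary LS problem."""
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        return beta
    ```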

  4. Quality assessment of gasoline using comprehensive two-dimensional gas chromatography combined with unfolded partial least squares: A reliable approach for the detection of gasoline adulteration.

    PubMed

    Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan

    2016-01-01

    Comprehensive two-dimensional gas chromatography and flame ionization detection combined with unfolded partial least squares is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components to build the model is determined using the minimum value of the root-mean-square error of leave-one-out cross validation, which was 4. In this regard, blends of gasoline with kerosene, white spirit and paint thinner as frequently used adulterants are used to make calibration samples. Appropriate statistical parameters (regression coefficient of 0.996-0.998, root-mean-square error of prediction of 0.005-0.010 and relative error of prediction of 1.54-3.82% for the calibration set) show the reliability of the developed method. In addition, the developed method is externally validated with three samples in the validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations are used for this purpose; the gasoline proportions were in the range of 70-85%. Also, the relative standard deviations were below 8.5% for different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Newton's Experimentum Crucis Reconsidered

    ERIC Educational Resources Information Center

    Holtsmark, Torger

    1970-01-01

    Certain terminological inconsistencies in the teaching of optical theory at the elementary level are traced back to Newton who derived them from Euclidean geometrical optics. Discusses this terminological ambiguity which influenced later textbooks. (LS)

  6. Ab initio nanostructure determination

    NASA Astrophysics Data System (ADS)

    Gujarathi, Saurabh

    Reconstruction of complex structures is an inverse problem arising in virtually all areas of science and technology, from protein structure determination to bulk heterostructure solar cells and the structure of nanoparticles. This problem is cast as a complex network problem where the edges in a network have weights equal to the Euclidean distance between their endpoints. A method, called Tribond, for the reconstruction of the locations of the nodes of the network given only the edge weights of the Euclidean network is presented. The timing results indicate that the algorithm is a low-order polynomial in the number of nodes in the network in two dimensions. With this implementation, Euclidean networks of about one thousand nodes can be reconstructed in two dimensions in approximately twenty-four hours on a desktop computer. In three dimensions, the computational cost of the reconstruction is a higher-order polynomial in the number of nodes, and reconstruction of small Euclidean networks in three dimensions is shown. If a starting network of size five is assumed to be given, then for a network of size 100 the remaining reconstruction can be done in about two hours on a desktop computer. In situations where we have less precise data, modifications of the method may be necessary; these are discussed. A related problem in one dimension, known as the optimal Golomb ruler (OGR), is also studied. A statistical physics Hamiltonian to describe the OGR problem is introduced, and the first-order phase transition from a symmetric low-constraint phase to a complex symmetry-broken phase at high constraint is studied. Despite the fact that the Hamiltonian is not disordered, the asymmetric phase is highly irregular, with geometric frustration. The phase diagram is obtained, and it is seen that even at a very low temperature T there is a phase transition at a finite, non-zero value of the constraint parameter γ/μ. Analytic calculations for the scaling of the density and free energy of the ruler are done and compared with those from the mean-field approach. A scaling law is also derived for the length of the OGR, which is consistent with Erdős's conjecture and with numerical results.

  7. Microwave-photonics direction finding system for interception of low probability of intercept radio frequency signals

    NASA Astrophysics Data System (ADS)

    Pace, Phillip Eric; Tan, Chew Kung; Ong, Chee K.

    2018-02-01

    Direction finding (DF) systems are fundamental electronic support measures for electronic warfare. A number of DF techniques have been developed over the years; however, these systems are limited in bandwidth and resolution and suffer from a complex design for frequency downconversion. The design of a photonic DF technique for the detection and direction finding of low probability of intercept (LPI) signals is investigated. Key advantages of this design include a small baseline, wide bandwidth, high resolution, and minimal space, weight, and power requirements. A robust postprocessing algorithm that utilizes the minimum Euclidean distance detector provides consistent and accurate estimation of the angle of arrival (AoA) for a wide range of LPI waveforms. Experimental tests using frequency modulation continuous wave (FMCW) and P4 modulation signals were conducted in an anechoic chamber to verify the system design. Test results showed that the photonic DF system is capable of measuring the AoA of the LPI signals with 1-deg resolution over a 180-deg field of view. For an FMCW signal, the AoA was determined with an RMS error of 0.29 deg at 1-deg resolution. For a P4 coded signal, the RMS error in estimating the AoA was 0.32 deg at 1-deg resolution.
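
    Conceptually, the minimum Euclidean distance detector is a nearest-neighbor lookup against a table of calibrated responses. A generic sketch follows; the calibration table and response format are assumptions, not taken from the paper.

    ```python
    import numpy as np

    def estimate_aoa(measured, calib_angles, calib_responses):
        """Return the angle (deg) whose calibrated response vector is closest,
        in the Euclidean sense, to the measured response vector."""
        d = np.linalg.norm(calib_responses - measured, axis=1)
        return calib_angles[np.argmin(d)]
    ```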

  8. COSMIC monthly progress report

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Activities of the Computer Software Management and Information Center (COSMIC) are summarized for the month of August, 1993. Tables showing the current inventory of programs available from COSMIC are presented and program processing and evaluation activities are discussed. Ten articles were prepared for publication in the NASA Tech Brief Journal. These articles (included in this report) describe the following software items: (1) MOM3D - A Method of Moments Code for Electromagnetic Scattering (UNIX Version); (2) EM-Animate - Computer Program for Displaying and Animating the Steady-State Time-Harmonic Electromagnetic Near Field and Surface-Current Solutions; (3) MOM3D - A Method of Moments Code for Electromagnetic Scattering (IBM PC Version); (4) M414 - MIL-STD-414 Variable Sampling Procedures Computer Program; (5) MEDOF - Minimum Euclidean Distance Optimal Filter; (6) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (Macintosh Version); (7) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (IBM PC Version); (8) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (UNIX Version); (9) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (DEC VAX VMS Version); and (10) TFSSRA - Thick Frequency Selective Surface with Rectangular Apertures. Activities in the areas of marketing, customer service, benefits identification, maintenance and support, and dissemination are also described along with a budget summary.

  9. Scalar mixing in LES/PDF of a high-Ka premixed turbulent jet flame

    NASA Astrophysics Data System (ADS)

    You, Jiaping; Yang, Yue

    2016-11-01

    We report a large-eddy simulation (LES)/probability density function (PDF) study of a high-Ka premixed turbulent flame in the Lund University Piloted Jet (LUPJ) flame series, which has been investigated using direct numerical simulation (DNS) and experiments. The target flame, featuring broadened preheat and reaction zones, is categorized into the broken reaction zone regime. In the present study, three widely used mixing models, namely the Interaction by Exchange with the Mean (IEM), Modified Curl (MC), and Euclidean Minimum Spanning Tree (EMST) models, are applied to assess their performance through detailed a posteriori comparisons with DNS. A dynamic model for the time scale of scalar mixing is formulated to describe the turbulent mixing of scalars at small scales. Better quantitative agreement for the mean temperature and the mean mass fractions of major and minor species is obtained with the MC and EMST models than with the IEM model. The multi-scalar mixing in composition space with the three models is analyzed to assess the modeling of the conditional molecular diffusion term. In addition, we demonstrate that the product of OH and CH2O concentrations can be a good surrogate for the local heat release rate in this flame. This work is supported by the National Natural Science Foundation of China (Grant Nos. 11521091 and 91541204).

  10. Rapid determination of crocins in saffron by near-infrared spectroscopy combined with chemometric techniques

    NASA Astrophysics Data System (ADS)

    Li, Shuailing; Shao, Qingsong; Lu, Zhonghua; Duan, Chengli; Yi, Haojun; Su, Liyang

    2018-02-01

    Saffron is an expensive spice. Its primary effective constituents are crocin I and II, and the contents of these compounds directly affect the quality and commercial value of saffron. In this study, near-infrared spectroscopy was combined with chemometric techniques for the determination of crocin I and II in saffron. Partial least squares regression models were built for the quantification of crocin I and II. By comparing different spectral ranges and spectral pretreatment methods (no pretreatment, vector normalization, subtract a straight line, multiplicative scatter correction, minimum-maximum normalization, eliminate the constant offset, first derivative, and second derivative), optimum models were developed. The root mean square error of cross-validation values of the best partial least squares models for crocin I and II were 1.40 and 0.30, respectively. The coefficients of determination for crocin I and II were 93.40 and 96.30, respectively. These results show that near-infrared spectroscopy can be combined with chemometric techniques to determine the contents of crocin I and II in saffron quickly and efficiently.

  11. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures with sharp discontinuities are better preserved than with a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to classical approaches.
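
    The core of the approach above is that the nonlinear TV penalty is handled by iteratively reweighted least squares: each pass replaces the penalty by a weighted quadratic term, so every step is an ordinary regularized solve. The sketch below shows that IRLS loop on a small 1-D denoising problem with dense linear algebra; the randomized GSVD and alternating-direction refinements of the paper are omitted, and the operator, regularization parameter, and smoothing constant are illustrative.

        import numpy as np

        def tv_irls(A, b, D, alpha, iters=30, eps=1e-6):
            # IRLS for a TV-type penalty: sum|Dx| is approximated by
            # ||W D x||^2 with weights rebuilt from the current iterate,
            # so each pass is a regularized least-squares solve.
            x = np.linalg.lstsq(A, b, rcond=None)[0]      # unregularized start
            for _ in range(iters):
                w = (np.abs(D @ x) ** 2 + eps) ** -0.25   # sqrt of IRLS weights
                WD = w[:, None] * D
                x = np.linalg.solve(A.T @ A + alpha**2 * (WD.T @ WD), A.T @ b)
            return x

        # Toy 1-D problem: recover a blocky profile from noisy observations.
        n = 80
        x_true = np.zeros(n); x_true[20:50] = 1.0
        b = x_true + 0.1 * np.random.default_rng(2).normal(size=n)
        D = np.diff(np.eye(n), axis=0)                    # first differences
        x = tv_irls(np.eye(n), b, D, alpha=0.5)
        print(np.round(x[15:25], 2))                      # sharp step near index 20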

  12. Acidity measurement of iron ore powders using laser-induced breakdown spectroscopy with partial least squares regression.

    PubMed

    Hao, Z Q; Li, C M; Shen, M; Yang, X Y; Li, K H; Guo, L B; Li, X Y; Lu, Y F; Zeng, X Y

    2015-03-23

    Laser-induced breakdown spectroscopy (LIBS) with partial least squares regression (PLSR) has been applied to measuring the acidity of iron ore, which can be defined by the concentrations of the oxides CaO, MgO, Al₂O₃, and SiO₂. With conventional internal standard calibration, it is difficult to establish calibration curves for CaO, MgO, Al₂O₃, and SiO₂ in iron ore due to serious matrix effects. PLSR effectively addresses this problem owing to its excellent performance in compensating for matrix effects. In this work, fifty samples were used to construct the PLSR calibration models for the above-mentioned oxides. These calibration models were validated by 10-fold cross-validation using the minimum root-mean-square error (RMSE) as the criterion. Another ten samples were used as a test set. The acidities were calculated from the concentrations of CaO, MgO, Al₂O₃, and SiO₂ estimated by the PLSR models. The average relative error (ARE) and RMSE of the acidity reached 3.65% and 0.0048, respectively, for the test samples.
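
    As a small worked complement, the snippet below computes the two reported error measures (ARE and RMSE) and an acidity value from predicted oxide concentrations. The acidity formula used here, the ratio of acidic oxides (SiO₂ + Al₂O₃) to basic oxides (CaO + MgO), is a common convention assumed for illustration; the record defines acidity from these four oxides but does not reproduce the exact expression.

        import numpy as np

        def acidity(cao, mgo, al2o3, sio2):
            # Assumed convention: ratio of acidic to basic oxide contents.
            return (sio2 + al2o3) / (cao + mgo)

        def are_and_rmse(y_true, y_pred):
            # Average relative error (percent) and root-mean-square error.
            y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
            are = 100.0 * np.mean(np.abs(y_pred - y_true) / y_true)
            rmse = float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
            return are, rmse

        true_acidity = np.array([0.52, 0.61, 0.48])     # invented reference values
        pred_acidity = acidity(cao=np.array([2.1, 1.8, 2.4]),
                               mgo=np.array([0.9, 1.1, 0.8]),
                               al2o3=np.array([0.6, 0.7, 0.5]),
                               sio2=np.array([1.0, 1.1, 1.0]))
        print(are_and_rmse(true_acidity, pred_acidity))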

  13. Area distortion under certain classes of quasiconformal mappings.

    PubMed

    Hernández-Montes, Alfonso; Reséndis O, Lino F

    2017-01-01

    In this paper we study the hyperbolic and Euclidean area distortion of measurable sets under some classes of K-quasiconformal mappings from the upper half-plane and the unit disk onto themselves, respectively.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baleanu, Dumitru; Institute of Space Sciences, P.O. Box MG-6, Magurele-Bucharest

    The geodesic motion of pseudo-classical spinning particles in extended Euclidean Taub-NUT space was analyzed, and the non-generic symmetries of Taub-NUT space were investigated. We found new non-generic symmetries in the presence of an electromagnetic field, such as a monopole field.

  15. Optimum flight paths of turbojet aircraft

    NASA Technical Reports Server (NTRS)

    Miele, Angelo

    1955-01-01

    The climb of turbojet aircraft is analyzed and discussed, including the accelerations. Three particular flight performances are examined: minimum time of climb, climb with minimum fuel consumption, and steepest climb. The theoretical results obtained in a previous study are put in a form suitable for application under the following simplifying assumptions: the Mach number is used as the independent variable instead of the velocity; the variation of the airplane mass due to fuel consumption is disregarded; the airplane polar is assumed to be parabolic; the path curvatures and the squares of the path angles are disregarded in the projection of the equation of motion on the normal to the path; and, lastly, an ideal turbojet with performance independent of the velocity is assumed. The optimum Mach number for each flight condition is obtained from the solution of a sixth-order equation whose coefficients are functions of two fundamental parameters: the ratio of minimum drag in level flight to the thrust, and the Mach number representing flight at constant altitude and maximum lift-drag ratio.

  16. When are surface plasmon polaritons excited in the Kretschmann-Raether configuration?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, IV, Jonathan J.; Harutyunyan, Hayk; Rosenmann, Daniel

    It is widely believed that the reflection minimum in a Kretschmann-Raether experiment results from direct coupling into surface plasmon polariton modes. Our experimental results reveal a surprising discrepancy between the leakage radiation patterns of surface plasmon polaritons (SPPs) launched on a layered gold/germanium film and the K-R minimum, clearly challenging this belief. We provide definitive evidence that the reflectance dip in K-R experiments does not correlate with excitation of an SPP mode, but rather corresponds to a particular type of perfectly absorbing (PA) mode. Results from rigorous electrodynamics simulations show that the PA mode can only exist under external driving, whereas the SPP can exist in regions free from direct interaction with the driving field. These simulations show that it is possible to indirectly excite propagating SPPs guided by the reflectance minimum in a K-R experiment, but demonstrate that the efficiency can be lower by more than a factor of 3. We find that optimal coupling into the SPP can be guided by the square magnitude of the Fresnel transmission amplitude.

  18. When are surface plasmon polaritons excited in the Kretschmann-Raether configuration?

    DOE PAGES

    Foley, IV, Jonathan J.; Harutyunyan, Hayk; Rosenmann, Daniel; ...

    2015-04-23

    It is widely believed that the reflection minimum in a Kretschmann-Raether experiment results from direct coupling into surface plasmon polariton modes. Our experimental results reveal a surprising discrepancy between the leakage radiation patterns of surface plasmon polaritons (SPPs) launched on a layered gold/germanium film and the K-R minimum, clearly challenging this belief. We provide definitive evidence that the reflectance dip in K-R experiments does not correlate with excitation of an SPP mode, but rather corresponds to a particular type of perfectly absorbing (PA) mode. Results from rigorous electrodynamics simulations show that the PA mode can only exist under external driving, whereas the SPP can exist in regions free from direct interaction with the driving field. These simulations show that it is possible to indirectly excite propagating SPPs guided by the reflectance minimum in a K-R experiment, but demonstrate that the efficiency can be lower by more than a factor of 3. We find that optimal coupling into the SPP can be guided by the square magnitude of the Fresnel transmission amplitude.

  19. Why conventional detection methods fail in identifying the existence of contamination events.

    PubMed

    Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han

    2016-04-15

    Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed with a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance (MED) and linear prediction filter (LPF) methods are more suitable for detecting sudden spike-like variations. This analysis reveals why the conventional MED and LPF methods fail to identify the existence of contamination events, and also shows that the widely used contamination data construction approach is misleading.
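
    The PE metric named above combines correlation and distance information between a current observation window and an established baseline. The toy score below is one plausible reading of such a combination, not the paper's exact formulation; the sensor names and values are invented.

        import numpy as np

        def pe_score(window, baseline):
            # Toy event score: Euclidean distance scaled by correlation
            # loss between current readings and the historical baseline.
            d = np.linalg.norm(window - baseline)
            r = np.corrcoef(window, baseline)[0, 1]
            return d * (1.0 - r)

        baseline = np.array([7.2, 0.5, 250.0, 1.1])  # pH, Cl, conductivity, turbidity
        normal   = np.array([7.1, 0.5, 252.0, 1.2])
        event    = np.array([6.4, 0.9, 310.0, 3.0])
        print(pe_score(normal, baseline), pe_score(event, baseline))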

  20. Linear time relational prototype based learning.

    PubMed

    Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara

    2012-10-01

    Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike their Euclidean counterparts, these techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix, and are therefore already infeasible for medium-sized data sets. The contribution of this article is twofold: on the one hand, we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ); on the other hand, we transfer a linear-time approximation technique, the Nyström approximation, to this algorithm and to an unsupervised counterpart, the relational generative topographic mapping (GTM). In this way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
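
    The Nyström approximation mentioned above replaces the full n x n (dis)similarity matrix with a low-rank reconstruction built from m landmark columns, reducing the quadratic cost to O(nm). A minimal sketch with a Gaussian kernel on invented data; the landmark count and kernel width are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.normal(size=(500, 5))
        landmarks = X[:50]                               # m = 50 landmark points

        def gaussian_kernel(A, B, gamma=0.5):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
            return np.exp(-gamma * d2)

        # Nystroem: K ~ K_nm K_mm^+ K_nm^T from the n x m block alone.
        K_nm = gaussian_kernel(X, landmarks)             # all we ever compute
        K_mm = K_nm[:50]                                 # m x m landmark block
        K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T  # in practice keep factors
        print(K_approx.shape)                            # (500, 500)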

  1. Thermal dynamics on the lattice with exponentially improved accuracy

    NASA Astrophysics Data System (ADS)

    Pawlowski, Jan M.; Rothkopf, Alexander

    2018-03-01

    We present a novel simulation prescription for thermal quantum fields on a lattice that operates directly in imaginary frequency space. By distinguishing initial conditions from quantum dynamics it provides access to correlation functions also outside of the conventional Matsubara frequencies ω_n = 2πnT. In particular it resolves their frequency dependence between ω = 0 and ω_1 = 2πT, where the thermal physics ω ∼ T of, e.g., transport phenomena is dominantly encoded. Real-time spectral functions are related to these correlators via an integral transform with rational kernel, so that their unfolding from the novel simulation data is exponentially improved compared to standard Euclidean simulations. We demonstrate this improvement within a non-trivial (0+1)-dimensional quantum mechanical toy model and show that spectral features inaccessible in standard Euclidean simulations are quantitatively captured.

  2. Experimental Non-Violation of the Bell Inequality

    NASA Astrophysics Data System (ADS)

    Palmer, Tim

    2018-05-01

    A finite non-classical framework for physical theory is described which challenges the conclusion that the Bell Inequality has been shown to have been violated experimentally, even approximately. This framework postulates the universe as a deterministic locally causal system evolving on a measure-zero fractal-like geometry $I_U$ in cosmological state space. Consistent with the assumed primacy of $I_U$, and $p$-adic number theory, a non-Euclidean (and hence non-classical) metric $g_p$ is defined on cosmological state space, where $p$ is a large but finite Pythagorean prime. Using number-theoretic properties of spherical triangles, the inequalities violated experimentally are shown to be $g_p$-distant from the CHSH inequality, whose violation would rule out local realism. This result fails in the singular limit $p=\\infty$, at which $g_p$ is Euclidean. Broader implications are discussed.

  3. Quantum Theory of Wormholes

    NASA Astrophysics Data System (ADS)

    González-Díaz, Pedro F.

    We re-explore the effects of multiply-connected wormholes on ordinary matter at low energies. We find that the path integral describing these effects is given in terms of a Planckian probability distribution for the Coleman α-parameters, rather than a classical Gaussian distribution law. This implies that the path integral over all low-energy fields with the wormhole effective interactions can no longer vary continuously, and that the quantities α² are interpretable as the momenta of a quantum field. Using the new result that, rather than being given in terms of the Coleman-Hawking probability, the Euclidean action must equal negative entropy, the model predicts a very small but still nonzero cosmological constant and quite reasonable values for the pion and neutrino masses. The divergence problems of Euclidean quantum gravity are also discussed in the light of the above results.

  4. From Glass Formation to Icosahedral Ordering by Curving Three-Dimensional Space.

    PubMed

    Turci, Francesco; Tarjus, Gilles; Royall, C Patrick

    2017-05-26

    Geometric frustration describes the inability of a local molecular arrangement, such as icosahedra found in metallic glasses and in model atomic glass formers, to tile space. Local icosahedral order, however, is strongly frustrated in Euclidean space, which obscures any causal relationship with the observed dynamical slowdown. Here we relieve frustration in a model glass-forming liquid by curving three-dimensional space onto the surface of a 4-dimensional hypersphere. For sufficient curvature, frustration vanishes and the liquid "freezes" in a fully icosahedral structure via a sharp "transition." Frustration increases upon reducing the curvature, and the transition to the icosahedral state smoothens while glassy dynamics emerge. Decreasing the curvature leads to decoupling between dynamical and structural length scales and the decrease of kinetic fragility. This sheds light on the observed glass-forming behavior in Euclidean space.

  5. Emotion-independent face recognition

    NASA Astrophysics Data System (ADS)

    De Silva, Liyanage C.; Esther, Kho G. P.

    2000-12-01

    Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system is developed to recognize faces of known individuals despite variations in facial expression due to different emotions. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, a back-propagation neural network and a generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image of the peak expression for each emotion of each person in addition to the neutral expression. The feature vectors used for comparison in the Euclidean distance method, and for training the neural networks, must comprise all the feature vectors of the training set. These results are obtained for a face database consisting of only four persons.
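
    A minimal sketch of the pipeline described above, eigenface (PCA) feature extraction followed by Euclidean nearest-neighbour classification, is given below with random arrays standing in for the face database; the image size, component count, and number of people are illustrative, not those of the paper.

        import numpy as np

        def eigenfaces(train, n_components):
            # PCA 'eigenfaces': top right-singular vectors of the
            # mean-centred image matrix form the projection basis.
            mean = train.mean(axis=0)
            _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
            return mean, Vt[:n_components]

        def classify(probe, mean, basis, train_feats, labels):
            # Nearest neighbour in eigenface space (Euclidean distance).
            f = basis @ (probe - mean)
            return labels[np.argmin(np.linalg.norm(train_feats - f, axis=1))]

        rng = np.random.default_rng(4)
        train = rng.normal(size=(20, 64 * 64))       # 20 flattened 'face images'
        labels = np.repeat(np.arange(4), 5)          # 4 people, 5 expressions each
        mean, basis = eigenfaces(train, n_components=10)
        feats = (train - mean) @ basis.T
        print(classify(train[7], mean, basis, feats, labels))   # -> 1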

  6. Renormalized vacuum polarization of rotating black holes

    NASA Astrophysics Data System (ADS)

    Ferreira, Hugo R. C.

    2015-04-01

    Quantum field theory on rotating black hole spacetimes is plagued with technical difficulties. Here, we describe a general method to renormalize and compute the vacuum polarization of a quantum field in the Hartle-Hawking state on rotating black holes. We exemplify the technique with a massive scalar field on the warped AdS3 black hole solution to topologically massive gravity, a deformation of (2 + 1)-dimensional Einstein gravity. We use a "quasi-Euclidean" technique, which generalizes the Euclidean techniques used for static spacetimes, and we subtract the divergences by matching to a sum over mode solutions on Minkowski spacetime. This allows us, for the first time, to have a general method to compute the renormalized vacuum polarization, for a given quantum state, on a rotating black hole, such as the physically relevant case of the Kerr black hole in four dimensions.

  7. Random topologies and the emergence of cooperation: the role of short-cuts

    NASA Astrophysics Data System (ADS)

    Vilone, D.; Sánchez, A.; Gómez-Gardeñes, J.

    2011-04-01

    We study in detail the role of short-cuts in promoting the emergence of cooperation in a network of agents playing the Prisoner's Dilemma game (PDG). We introduce a model whose topology interpolates between the one-dimensional Euclidean lattice (a ring) and the complete graph by changing the value of one parameter (the probability p of adding a link between two nodes not already connected in the Euclidean configuration). We show that there is a region of values of p in which cooperation is greatly enhanced, whilst for smaller values of p only a few cooperators are present in the final state, and for p → 1⁻ cooperation is totally suppressed. We present analytical arguments that provide a very plausible interpretation of the simulation results, thus unveiling the mechanism by which short-cuts contribute to promoting (or suppressing) cooperation.
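
    The interpolating topology is easy to generate: start from a ring and add each absent link independently with probability p, so that p = 0 recovers the one-dimensional lattice and p = 1 the complete graph. A minimal sketch (node count and p are illustrative):

        import numpy as np

        def ring_with_shortcuts(n, p, seed=0):
            # Adjacency matrix interpolating between a ring (p = 0)
            # and the complete graph (p = 1).
            rng = np.random.default_rng(seed)
            A = np.zeros((n, n), dtype=int)
            for i in range(n):
                A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1   # ring edges
            for i in range(n):
                for j in range(i + 2, n):                   # non-ring pairs
                    if A[i, j] == 0 and rng.random() < p:
                        A[i, j] = A[j, i] = 1               # short-cut
            return A

        A = ring_with_shortcuts(100, p=0.05)
        print(A.sum() // 2, "edges")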

  8. Estimating Characteristics of a Maneuvering Reentry Vehicle Observed by Multiple Sensors

    DTIC Science & Technology

    2010-03-01

    instead of as one large data set. This method allowed the filter to respond to changing dynamics. Jackson and Farbman's approach could be of...portion of the entire acceleration was due to drag. Lee and Liu adopted a more hybrid approach, combining least squares and Kalman filters [9...grows again as the window approaches the end of the available data. Three values for minimum window size, window size, and maximum window size are

  9. Breakup of Solid Ice Covers Due to Rapid Water Level Variations,

    DTIC Science & Technology

    1982-02-01

    Larsen, and Dr. Devinder S. Sodhi for their valuable comments and reviews of the report. He also thanks Dr. Ashton and Guenther E. Frankenstein for the...for wave periods larger than about 10 seconds. What are the minimum wavelengths that might be generated by discharge variations at a hydroelectric ...Canadian Electrical Association, Research and Development, Suite 580, One Westmount Square, Montreal, Canada. 2. Ashton, G.D. (1974a) Entrainment of ice

  10. Modified multidimensional scaling approach to analyze financial markets.

    PubMed

    Yin, Yi; Shang, Pengjian

    2014-06-01

    Detrended cross-correlation coefficient (σDCCA) and dynamic time warping (DTW) are introduced as dissimilarity measures, while multidimensional scaling (MDS) is employed to map the dissimilarities between the daily price returns of 24 stock markets. We first propose MDS based on σDCCA dissimilarity and MDS based on DTW dissimilarity, while MDS based on Euclidean dissimilarity is also employed to provide a reference for comparison. We apply these methods in order to further visualize the clustering between stock markets. Moreover, we confront MDS with an alternative visualization method, the "Unweighted Average" clustering method, applied to the same dissimilarity. Through the results, we find that MDS gives a more intuitive mapping for observing stable or emerging clusters of stock markets with similar behavior, and that the MDS analysis based on σDCCA dissimilarity provides clearer, more detailed, and more accurate information on the classification of the stock markets than the MDS analysis based on Euclidean dissimilarity. The MDS analysis based on DTW dissimilarity is particularly informative about the correlations between stock markets; it yields richer results on the clustering of stock markets than the MDS analysis based on Euclidean dissimilarity. In addition, the graphs obtained by applying MDS based on σDCCA and DTW dissimilarities may also guide the construction of multivariate econometric models.
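
    A minimal sketch of the MDS step is given below, using a correlation-based dissimilarity between invented return series as a simple stand-in for the σDCCA and DTW dissimilarities of the paper; the scikit-learn MDS solver is run on the precomputed dissimilarity matrix.

        import numpy as np
        from sklearn.manifold import MDS

        # Toy stand-in: pairwise dissimilarities between 6 return series.
        rng = np.random.default_rng(5)
        R = rng.normal(size=(6, 250))                 # daily returns
        C = np.corrcoef(R)
        D = np.sqrt(2.0 * (1.0 - C))                  # correlation distance

        mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
        coords = mds.fit_transform(D)
        print(np.round(coords, 2))   # 2-D map: nearby points = similar markets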

  11. A Riemannian framework for orientation distribution function computing.

    PubMed

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2009-01-01

    Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Diffusion Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in Information Geometry, and it has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on Information Geometry and a sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map and geodesic have closed forms, and the weighted Fréchet mean exists uniquely on the manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Rényi entropy H_{1/2} of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on an ODF field is proposed based on the weighted Fréchet mean. We validate our methods in experiments on synthetic and real data. Compared with existing Riemannian frameworks for ODFs, our framework is model-free, and the estimation of the parameters, i.e. the Riemannian coordinates, is robust and linear. Moreover, our theoretical results hold for any probability density function (PDF) under an orthonormal basis representation.
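
    Under the square-root representation underlying this kind of framework, square roots of discretised ODFs lie on a unit sphere, so the geodesic distance reduces to the spherical angle between them (conventions may add a constant factor); computed against the isotropic ODF, this mirrors the GA measure. A minimal sketch with invented ODF samples:

        import numpy as np

        def odf_geodesic_distance(p, q):
            # Spherical (Fisher-Rao-type) distance between two discrete
            # ODFs via their square roots, which are unit vectors.
            cos = np.clip(np.sum(np.sqrt(p) * np.sqrt(q)), -1.0, 1.0)
            return float(np.arccos(cos))

        p = np.array([0.4, 0.3, 0.2, 0.1])    # ODF on 4 orientations (sums to 1)
        q = np.full(4, 0.25)                  # isotropic ODF on the same grid
        print(odf_geodesic_distance(p, q))    # a GA-like anisotropy value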

  12. Constraint algebra in Smolin's G →0 limit of 4D Euclidean gravity

    NASA Astrophysics Data System (ADS)

    Varadarajan, Madhavan

    2018-05-01

    Smolin's generally covariant G_Newton → 0 limit of 4d Euclidean gravity is a useful toy model for the study of the constraint algebra in loop quantum gravity (LQG). In particular, the commutator between its Hamiltonian constraints has a metric-dependent structure function. While a prior LQG-like construction of nontrivial anomaly-free constraint commutators for the model exists, that work suffers from two defects. First, Smolin's remarks on the inability of the quantum dynamics to generate propagation effects apply. Second, the construction only yields the action of a single Hamiltonian constraint, together with the action of its commutator, through a continuum limit of corresponding discrete approximants; the continuum limit of a product of two or more constraints does not exist. Here, we incorporate changes in the quantum dynamics through structural modifications in the choice of discrete approximants to the quantum Hamiltonian constraint. The new structure is motivated by that responsible for propagation in an LQG-like quantization of parametrized field theory and significantly alters the space of physical states. We study the off-shell constraint algebra of the model in the context of these structural changes and show that the continuum limit action of multiple products of Hamiltonian constraints is (a) supported on an appropriate domain of states, (b) yields anomaly-free commutators between pairs of Hamiltonian constraints, and (c) is diffeomorphism covariant. Many of our considerations seem robust enough to be applied to the setting of 4d Euclidean gravity.

  13. Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.

    PubMed

    Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko

    2017-12-01

    Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the kth principal component in Euclidean space: the locus of the weighted Fréchet mean of k+1 vertex trees when the weights vary over the k-simplex. We establish some basic properties of these objects, in particular showing that they have dimension k, and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.
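
    In a Euclidean space the weighted Fréchet mean, the minimiser of the weighted sum of squared distances, reduces to the weighted average, which makes the locus construction easy to visualise; in tree space a dedicated iterative algorithm is required instead. A Euclidean-only sketch, with two invented 'vertex trees' as points in the plane:

        import numpy as np

        def weighted_frechet_mean(points, weights):
            # Minimiser of sum_i w_i d(x, t_i)^2; in Euclidean space
            # this is simply the weighted average of the points.
            w = np.asarray(weights, dtype=float)
            w /= w.sum()
            return (w[:, None] * np.asarray(points)).sum(axis=0)

        # Locus analogue: sweep weights over the 1-simplex (k = 1).
        t0, t1 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
        locus = [weighted_frechet_mean([t0, t1], [1 - s, s])
                 for s in np.linspace(0, 1, 5)]
        print(np.round(locus, 2))   # a 1-dimensional locus, as expected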

  14. Model of Four-Dimensional Sub-Proton Euclidean Space with Real Time for Valence Quarks. Lagrangian Mechanics

    NASA Astrophysics Data System (ADS)

    Kreymer, E. L.

    2018-06-01

    The model of Euclidean space with imaginary time used in sub-hadron physics exploits only the part of that space which is isomorphic to Minkowski space and has the velocity limit 0 ≤ ||v_Ei|| ≤ 1. Here the model of four-dimensional Euclidean space with real time (E space), in which 0 ≤ ||v_E|| ≤ ∞, is investigated. The vectors of this space have E-invariants, equal or analogous to the invariants of Minkowski space. All relations between physical quantities in E space, after they are mapped into Minkowski space, satisfy the principles of SRT and are Lorentz-invariant, and the velocity of light corresponds to infinite velocity. Results obtained in the model differ from the physical laws in Minkowski space. Thus, from the model of the Lagrangian mechanics of quarks in a centrally symmetric attractive potential it follows that the energy-mass of a quark decreases with increasing velocity and is equal to zero for v = ∞. This made it possible to establish the conditions of emission and absorption of gluons by quarks; the emission of gluons by high-energy quarks had been discovered experimentally much earlier. The model describes for the first time the dynamic coupling of the masses of constituent and current quarks and reveals new possibilities in the study of intrahadron space. The classical trajectory of the oscillation of quarks in protons is also described.

  15. DNA methylation intratumor heterogeneity in localized lung adenocarcinomas.

    PubMed

    Quek, Kelly; Li, Jun; Estecio, Marcos; Zhang, Jiexin; Fujimoto, Junya; Roarty, Emily; Little, Latasha; Chow, Chi-Wan; Song, Xingzhi; Behrens, Carmen; Chen, Taiping; William, William N; Swisher, Stephen; Heymach, John; Wistuba, Ignacio; Zhang, Jianhua; Futreal, Andrew; Zhang, Jianjun

    2017-03-28

    Cancers are composed of cells with distinct molecular and phenotypic features within a given tumor, a phenomenon termed intratumor heterogeneity (ITH). Previously, we have demonstrated genomic ITH in localized lung adenocarcinomas; however, the nature of methylation ITH in lung cancers has not been well investigated. In this study, we generated methylation profiles of 48 spatially separated tumor regions from 11 localized lung adenocarcinomas and their matched normal lung tissues using the Illumina Infinium HumanMethylation450 BeadChip array. We observed methylation ITH within the same tumors, but to a much lesser extent than inter-individual heterogeneity. On average, 25% of all differentially methylated probes (relative to matched normal lung tissues) were shared by all regions from the same tumors. This is in contrast to somatic mutations, of which approximately 77% were shared amongst all regions of individual tumors, suggesting that while the majority of somatic mutations were early clonal events, tumor-specific DNA methylation might be associated with later branched evolution of these 11 tumors. Furthermore, our data showed that a higher extent of DNA methylation ITH was associated with larger tumor size (average Euclidean distance of 35.64 for tumors >3 cm, the median size, versus 27.24 for tumors ≤3 cm; p = 0.014), advanced age (average Euclidean distance of 34.95 above age 65 versus 28.06 below; p = 0.046) and increased risk of postsurgical recurrence (average Euclidean distance of 35.65 for relapsed patients versus 29.03 for patients without relapse; p = 0.039).
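
    The ITH summary compared between groups above is an average pairwise Euclidean distance between the methylation profiles of regions from one tumor. A minimal sketch, with uniform random beta values standing in for the 450K array data:

        import numpy as np
        from itertools import combinations

        def average_pairwise_distance(beta):
            # Mean Euclidean distance over all region pairs of one tumor.
            d = [np.linalg.norm(beta[i] - beta[j])
                 for i, j in combinations(range(len(beta)), 2)]
            return float(np.mean(d))

        rng = np.random.default_rng(6)
        regions = rng.uniform(0, 1, size=(4, 1000))   # 4 regions x 1000 probes
        print(average_pairwise_distance(regions))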

  16. Optimal Alignment of Structures for Finite and Periodic Systems.

    PubMed

    Griffiths, Matthew; Niblett, Samuel P; Wales, David J

    2017-10-10

    Finding the optimal alignment between two structures is important for identifying the minimum root-mean-square distance (RMSD) between them and as a starting point for calculating pathways. Most current algorithms for aligning structures are stochastic, scale exponentially with the size of the structure, and can perform unreliably. We present two complementary methods for aligning structures corresponding to isolated clusters of atoms and to condensed matter described by a periodic cubic supercell. The first method (Go-PERMDIST), a branch-and-bound algorithm, locates the global minimum RMSD deterministically in polynomial time; the run time increases for larger RMSDs. The second method (FASTOVERLAP) is a heuristic algorithm that aligns structures by finding the global maximum kernel correlation between them using fast Fourier transforms (FFTs) and fast SO(3) transforms (SOFTs). For periodic systems, FASTOVERLAP scales with the square of the number of identical atoms in the system, reliably finds the best alignment between structures that are not too distant, and shows significantly better performance than existing algorithms; the expected run time for Go-PERMDIST is longer than for FASTOVERLAP in this setting. For finite clusters, FASTOVERLAP is competitive with existing algorithms, while the expected run time for Go-PERMDIST to find the global RMSD deterministically is generally longer than for existing stochastic algorithms. However, with an earlier exit condition, Go-PERMDIST exhibits similar or better performance.
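
    Stripped of the permutational and periodic machinery that Go-PERMDIST and FASTOVERLAP address, the core step of RMSD alignment, the optimal rotation between two centred point sets, has a classical closed-form solution via the Kabsch algorithm, sketched below on invented coordinates. This is background, not the paper's algorithm.

        import numpy as np

        def kabsch_rmsd(P, Q):
            # Minimum RMSD after optimal rigid rotation (Kabsch).
            P = P - P.mean(axis=0)
            Q = Q - Q.mean(axis=0)
            U, _, Vt = np.linalg.svd(P.T @ Q)
            d = np.sign(np.linalg.det(U @ Vt))     # exclude improper rotations
            R = U @ np.diag([1.0, 1.0, d]) @ Vt
            return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

        rng = np.random.default_rng(7)
        P = rng.normal(size=(10, 3))
        theta = 0.7
        Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
        print(kabsch_rmsd(P, P @ Rz.T))   # ~0: a pure rotation is aligned away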

  17. The Unified Levelling Network of Sarawak and its Adjustment

    NASA Astrophysics Data System (ADS)

    Som, Z. A. M.; Yazid, A. M.; Ming, T. K.; Yazid, N. M.

    2016-09-01

    The height reference network of Sarawak has seen major improvement over the past two decades. The most significant improvement was the establishment of an extended precise levelling network which is now able to connect all three major datum points (Pulau Lakei, Original Miri and Bintulu) by following the major accessible routes across Sarawak. The levelling network in Sarawak has thus been inter-connected and unified, which makes a common single least squares adjustment possible for the first time. The least squares adjustment of this unified levelling network was carried out in order to compute the heights of all bench marks established in the entire network. The adjustment was done using the MoreFix levelling adjustment package developed at FGHT UTM. The computational procedure adopted is a linear parametric adjustment with minimum constraint. Since Sarawak has three separate datums, three separate adjustments were implemented, utilizing the Pulau Lakei, Original Miri and Bintulu datums respectively. Results of the MoreFix unified adjustment agreed very well with the adjustment repeated using Starnet. Furthermore, the results were compared with the solution given by Jupem and are in good agreement as well: the height differences analysed were within 10 mm for the case of minimum constraint at the Pulau Lakei datum, with much better agreement in the case of the Original Miri datum.
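
    A parametric least-squares levelling adjustment with a minimum constraint can be sketched compactly: one bench mark is held fixed as the datum and the remaining heights are estimated from observed height differences. The network, observations, and datum below are invented, and the sketch omits observation weighting.

        import numpy as np

        def adjust_levelling(n_points, obs, datum, datum_height=0.0):
            # obs: list of (i, j, dh) meaning H_j - H_i observed as dh.
            # Minimum constraint: bench mark 'datum' is held fixed.
            free = [k for k in range(n_points) if k != datum]
            col = {k: c for c, k in enumerate(free)}
            A = np.zeros((len(obs), len(free)))
            L = np.zeros(len(obs))
            for r, (i, j, dh) in enumerate(obs):
                L[r] = dh
                if i == datum: L[r] += datum_height
                else: A[r, col[i]] = -1.0
                if j == datum: L[r] -= datum_height
                else: A[r, col[j]] = 1.0
            x, *_ = np.linalg.lstsq(A, L, rcond=None)   # distributes misclosure
            H = np.full(n_points, datum_height)
            for k, c in col.items():
                H[k] = x[c]
            return H

        # Small closed loop of 4 bench marks; BM0 is the datum.
        obs = [(0, 1, 1.203), (1, 2, -0.457), (2, 3, 0.781), (3, 0, -1.529)]
        print(np.round(adjust_levelling(4, obs, datum=0), 4))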

  18. Kinetic analysis of hyperpolarized data with minimum a priori knowledge: Hybrid maximum entropy and nonlinear least squares method (MEM/NLS).

    PubMed

    Mariotti, Erika; Veronese, Mattia; Dunn, Joel T; Southworth, Richard; Eykyn, Thomas R

    2015-06-01

    To assess the feasibility of using a hybrid Maximum-Entropy/Nonlinear Least Squares (MEM/NLS) method for analyzing the kinetics of hyperpolarized dynamic data with minimum a priori knowledge, a continuous distribution of rates obtained through Laplace inversion of the data is used as a constraint on the NLS fitting to derive a discrete spectrum of rates. Performance of the MEM/NLS algorithm was assessed through Monte Carlo simulations and validated by fitting the longitudinal relaxation curves of hyperpolarized [1-13C]pyruvate acquired at 9.4 Tesla and at three different flip angles. The method was further used to assess the kinetics of hyperpolarized pyruvate-lactate exchange acquired in vitro in whole blood and to re-analyze the previously published in vitro reaction of hyperpolarized 15N-choline with choline kinase. The MEM/NLS method was found to be adequate for the kinetic characterization of hyperpolarized in vitro time series. Additional insights were obtained from the experimental data in blood as well as from the previously published 15N-choline data. The proposed method informs on the compartmental model that best approximates the biological system observed using hyperpolarized 13C MR, especially when the metabolic pathway assessed is complex or a new hyperpolarized probe is used.
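
    The NLS half of the method fits a discrete spectrum of rates to the time series once the MEM step has suggested how many rates to keep. The sketch below shows only that final fitting stage for the simplest case, a single rate, using scipy's curve_fit on synthetic data; the model, constants, and noise level are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def decay(t, a, r):
            # Mono-exponential decay: the simplest single-rate model the
            # MEM step could return for a hyperpolarized time series.
            return a * np.exp(-r * t)

        t = np.linspace(0, 60, 40)                    # seconds
        rng = np.random.default_rng(8)
        y = decay(t, 100.0, 1 / 20.0) + rng.normal(scale=1.0, size=t.size)

        popt, pcov = curve_fit(decay, t, y, p0=(90.0, 0.1))
        print("amplitude %.1f, rate %.3f 1/s" % tuple(popt))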

  19. Phase-unwrapping algorithm by a rounding-least-squares approach

    NASA Astrophysics Data System (ADS)

    Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin

    2014-02-01

    A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates on the gradient of the phase jumps via a robust and noniterative scheme, which reduces the residue-spreading and over-smoothing effects. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy than the MCNF method at a low computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and free of user-set parameters, it could be used in metrological interferometric and fringe-projection automatic real-time applications.
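
    The rounding idea at the heart of the algorithm is easiest to see in one dimension: the integer fringe-jump count at each sample is estimated by rounding the wrapped gradient and then integrated. The paper's contribution extends this to 2-D with a global least-squares solve over the jump gradients; the 1-D sketch below, with an invented test phase, shows the rounding step only.

        import numpy as np

        def unwrap_1d_rounding(phase):
            # Estimate integer fringe jumps from the wrapped gradient,
            # then subtract the accumulated 2*pi multiples.
            d = np.diff(phase)
            jumps = np.round(d / (2 * np.pi))
            return phase - 2 * np.pi * np.concatenate([[0], np.cumsum(jumps)])

        x = np.linspace(0, 4 * np.pi, 200)
        true_phase = 0.5 * x ** 1.3
        wrapped = np.angle(np.exp(1j * true_phase))   # wrap into (-pi, pi]
        print(np.allclose(unwrap_1d_rounding(wrapped), true_phase))   # True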

  20. Angle-resolved spin wave band diagrams of square antidot lattices studied by Brillouin light scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gubbiotti, G.; Tacchi, S.; Montoncello, F.

    2015-06-29

    The Brillouin light scattering technique has been exploited to study the angle-resolved spin wave band diagrams of a square Permalloy antidot lattice. The frequency dispersion of spin waves has been measured for a set of fixed wave vector magnitudes, while varying the in-plane orientation of the wave vector with respect to the applied magnetic field. The magnonic band gap between the two most dispersive modes exhibits a minimum value at an angular position which depends exclusively on the product of the selected wave vector magnitude and the lattice constant of the array. The experimental data are in very good agreement with predictions obtained by dynamical matrix method calculations. The presented results are relevant for magnonic devices in which the antidot lattice, acting as a diffraction grating, is exploited to achieve multidirectional spin wave emission.
