Shi, Xiaoping; Wu, Yuehua; Rao, Calyampudi Radhakrishna
2018-06-05
Change-point detection has been carried out using the Euclidean minimum spanning tree (MST) and the shortest Hamiltonian path (SHP), with successful applications in determining the authorship of a classic novel, detecting change in a network over time, detecting cell divisions, etc. However, these Euclidean graph-based tests may fail if a dataset contains random interferences. To solve this problem, we present a powerful non-Euclidean SHP-based test, which is consistent and distribution-free. Simulations show that the test is more powerful than both the Euclidean MST- and SHP-based tests and the non-Euclidean MST-based test. Its applicability in detecting both landing and departure times in video data of bees' flower visits is illustrated.
Robustness of mission plans for unmanned aircraft
NASA Astrophysics Data System (ADS)
Niendorf, Moritz
This thesis studies the robustness of optimal mission plans for unmanned aircraft. Mission planning typically involves tactical planning and path planning. Tactical planning refers to task scheduling and, in multi-aircraft scenarios, also includes establishing a communication topology. Path planning refers to computing a feasible and collision-free trajectory. For a prototypical mission planning problem, the traveling salesman problem on a weighted graph, the robustness of an optimal tour is analyzed with respect to changes to the edge costs. Specifically, the stability region of an optimal tour is obtained, i.e., the set of all edge cost perturbations for which that tour is optimal. The exact stability region of solutions to variants of the traveling salesman problem is obtained from a linear programming relaxation of an auxiliary problem. Edge cost tolerances and edge criticalities are derived from the stability region. For Euclidean traveling salesman problems, robustness with respect to perturbations to vertex locations is considered and safe radii and vertex criticalities are introduced. For weighted-sum multi-objective problems, stability regions with respect to changes in the objectives, weights, and simultaneous changes are given. Most critical weight perturbations are derived. Computing exact stability regions is intractable for large instances; therefore, tractable approximations are desirable. The stability regions of solutions to relaxations of the traveling salesman problem give under-approximations, and sets of tours give over-approximations. The application of these results to the two-neighborhood and the minimum 1-tree relaxation is discussed. Bounds on edge cost tolerances and approximate criticalities can likewise be obtained. A minimum spanning tree is an optimal communication topology for minimizing the cumulative transmission power in multi-aircraft missions. The stability region of a minimum spanning tree is given, and tolerances, stability balls, and criticalities are derived. This analysis is extended to Euclidean minimum spanning trees. This thesis aims at enabling increased mission performance by providing means of assessing the robustness and optimality of a mission and methods for identifying critical elements. Examples of the application to mission planning in contested environments, cargo aircraft mission planning, multi-objective mission planning, and planning optimal communication topologies for teams of unmanned aircraft are given.
NASA Astrophysics Data System (ADS)
Sneath, P. H. A.
A BASIC program is presented for significance tests to determine whether a dendrogram is derived from clustering of points that belong to a single multivariate normal distribution. The significance tests are based on statistics of the Kolmogorov–Smirnov type, obtained by comparing the observed cumulative graph of branch levels with a graph for the hypothesis of multivariate normality. The program also permits testing whether the dendrogram could be from a cluster of lower dimensionality due to character correlations. The program makes provision for three similarity coefficients, (1) Euclidean distances, (2) squared Euclidean distances, and (3) Simple Matching Coefficients, and for five cluster methods, (1) WPGMA, (2) UPGMA, (3) Single Linkage (or Minimum Spanning Trees), (4) Complete Linkage, and (5) Ward's Increase in Sums of Squares. The program is entitled DENBRAN.
Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing
2017-04-20
The problem of finding the number and optimal positions of relay nodes for restoring the network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard), and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial-time heuristic algorithm, namely Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on the Minimum Spanning Tree (MST), the Euclidean Steiner Minimal Tree (ESMT), or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes; linear programming is then applied for choosing the optimal relay nodes and computing their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals. The performance and complexity of RPSNC are analyzed, and its performance is validated through simulation experiments.
Exact and Approximate Stability of Solutions to Traveling Salesman Problems.
Niendorf, Moritz; Girard, Anouck R
2018-02-01
This paper presents the stability analysis of an optimal tour for the symmetric traveling salesman problem (TSP) by obtaining stability regions. The stability region of an optimal tour is the set of all cost changes for which that solution remains optimal and can be understood as the margin of optimality for a solution with respect to perturbations in the problem data. It is known that it is not possible to test in polynomial time whether an optimal tour remains optimal after the cost of an arbitrary set of edges changes. Therefore, this paper develops tractable methods to obtain under- and over-approximations of stability regions based on neighborhoods and relaxations. The application of the results to the two-neighborhood and the minimum 1-tree (M1T) relaxation is discussed in detail. For Euclidean TSPs, stability regions with respect to vertex location perturbations and the notions of safe radii and location criticalities are introduced. Benefits of this paper include insight into robustness properties of tours, minimum spanning trees, M1Ts, and fast methods to evaluate optimality after perturbations occur. Numerical examples are given to demonstrate the methods and achievable approximation quality.
Zourmand, Alireza; Ting, Hua-Nong; Mirhassani, Seyed Mostafa
2013-03-01
Speech is one of the prevalent communication mediums for humans. Identifying the gender of a child speaker based on his/her speech is crucial in telecommunication and speech therapy. This article investigates the use of fundamental and formant frequencies from sustained vowel phonation to distinguish the gender of Malay children aged between 7 and 12 years. The Euclidean minimum distance and multilayer perceptron were used to classify the gender of 360 Malay children based on different combinations of fundamental and formant frequencies (F0, F1, F2, and F3). The Euclidean minimum distance with normalized frequency data achieved a classification accuracy of 79.44%, which was higher than that of the nonnormalized frequency data. Age-dependent modeling was used to improve the accuracy of gender classification, with which the Euclidean minimum distance method achieved an optimal classification accuracy of 84.17% across all age groups. The accuracy was further increased to 99.81% using a multilayer perceptron based on mel-frequency cepstral coefficients. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Lazy orbits: An optimization problem on the sphere
NASA Astrophysics Data System (ADS)
Vincze, Csaba
2018-01-01
Non-transitive subgroups of the orthogonal group play an important role in non-Euclidean geometry. If G is a closed subgroup of the orthogonal group such that the orbit of a single Euclidean unit vector does not cover the (Euclidean) unit sphere centered at the origin, then there always exists a non-Euclidean Minkowski functional such that the elements of G preserve the Minkowskian length of vectors. In other words, Minkowski geometry is an alternative to Euclidean geometry for the subgroup G. It is rich in isometries if G is "close enough" to the orthogonal group or at least to one of its transitive subgroups. The measure of non-transitivity is related to the Hausdorff distances of the orbits under the elements of G to the Euclidean sphere. Its maximum/minimum belongs to the so-called lazy/busy orbits, i.e., the solutions of an optimization problem on the Euclidean sphere. The extremal distances allow us to characterize the reducible/irreducible subgroups. We also formulate an upper and a lower bound for the ratio of the extremal distances. As another application of the analytic tools, we introduce the rank of a closed non-transitive group G. We shall see that if G is of maximal rank then it is finite or reducible. Since the reducible and the finite subgroups form two natural prototypes of non-transitive subgroups, the rank seems to be a fundamental notion in their characterization. Closed, non-transitive groups of rank n - 1 are also characterized. Using the general results, we classify all their possible types in the lower-dimensional cases n = 2, 3, and 4. Finally, we present some applications of the results to the holonomy group of a metric linear connection on a connected Riemannian manifold.
Fuzzy α-minimum spanning tree problem: definition and solutions
NASA Astrophysics Data System (ADS)
Zhou, Jian; Chen, Lu; Wang, Ke; Yang, Fan
2016-04-01
In this paper, the minimum spanning tree problem is investigated on a graph with fuzzy edge weights. The notion of a fuzzy α-minimum spanning tree is presented based on the credibility measure, and the solutions of the fuzzy α-minimum spanning tree problem are then discussed under different assumptions. First, we assume that all the edge weights are, respectively, triangular fuzzy numbers or trapezoidal fuzzy numbers, and prove that in these two cases the fuzzy α-minimum spanning tree problem can be transformed into a classical problem on a crisp graph, which can be solved in polynomial time by classical algorithms such as the Kruskal algorithm and the Prim algorithm. Subsequently, for the case where the edge weights are general fuzzy numbers, a fuzzy simulation-based genetic algorithm using Prüfer number representation is designed for solving the fuzzy α-minimum spanning tree problem. Some numerical examples are also provided to illustrate the effectiveness of the proposed solutions.
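The triangular case described above can be sketched concretely: under the credibility measure, each triangular fuzzy weight is replaced by a crisp value at confidence level α, and an ordinary MST algorithm is run on the resulting crisp graph. The sketch below is illustrative rather than the authors' code, and the α-pessimistic value formula for a triangular fuzzy number (a, b, c) is an assumption of this sketch.

```python
# Hedged sketch (not the authors' code): replace each triangular fuzzy edge
# weight by its assumed credibility-based alpha-pessimistic value, then run a
# plain Kruskal MST on the crisp graph.
def alpha_pessimistic(a, b, c, alpha):
    """Assumed alpha-pessimistic value of the triangular fuzzy number (a, b, c)."""
    if alpha <= 0.5:
        return (1 - 2 * alpha) * a + 2 * alpha * b
    return (2 - 2 * alpha) * b + (2 * alpha - 1) * c

def kruskal(n, edges):
    """edges: list of (weight, u, v); returns the MST as a list of (u, v) edges."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

def fuzzy_alpha_mst(n, fuzzy_edges, alpha):
    """fuzzy_edges: list of (u, v, (a, b, c)) with triangular fuzzy weights."""
    crisp = [(alpha_pessimistic(*w, alpha), u, v) for u, v, w in fuzzy_edges]
    return kruskal(n, crisp)

# Example: a 4-node graph with triangular fuzzy weights, confidence level 0.9.
edges = [(0, 1, (1, 2, 3)), (1, 2, (2, 3, 5)), (0, 2, (1, 4, 6)),
         (2, 3, (1, 1, 2)), (1, 3, (4, 5, 6))]
print(fuzzy_alpha_mst(4, edges, 0.9))
```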
ERIC Educational Resources Information Center
Brusco, Michael J.
2007-01-01
The study of human performance on discrete optimization problems has a considerable history that spans various disciplines. The two most widely studied problems are the Euclidean traveling salesperson problem and the quadratic assignment problem. The purpose of this paper is to outline a program of study for the measurement of human performance on…
NASA Astrophysics Data System (ADS)
Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen
2012-03-01
In order to minimize the average end-to-end delay for data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is described as a minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS have been carried out under different types of traffic sources.
Dong, Wei-Feng; Canil, Sarah; Lai, Raymond; Morel, Didier; Swanson, Paul E.; Izevbaye, Iyare
2018-01-01
A new automated MYC IHC classifier based on bivariate logistic regression is presented. The predictor relies on image analysis developed with the open-source ImageJ platform. From a histologic section immunostained for MYC protein, 2 dimensionless quantitative variables are extracted: (a) the relative distance between nuclei positive for MYC IHC, based on a Euclidean minimum spanning tree graph, and (b) the coefficient of variation of the MYC IHC stain intensity among MYC IHC-positive nuclei. The distance between positive nuclei is suggested to inversely correlate with MYC gene rearrangement status, whereas the coefficient of variation is suggested to inversely correlate with physiological regulation of MYC protein expression. The bivariate classifier was compared with 2 other MYC IHC classifiers (based on the percentage of MYC IHC-positive nuclei), all tested on 113 lymphomas including mostly diffuse large B-cell lymphomas with known MYC fluorescent in situ hybridization (FISH) status. The bivariate classifier strongly outperformed the “percentage of MYC IHC-positive nuclei” methods in predicting MYC+ FISH status, with 100% sensitivity (95% confidence interval, 94-100) associated with 80% specificity. The test is rapidly performed and might at a minimum provide primary IHC screening for MYC gene rearrangement status in diffuse large B-cell lymphomas. Furthermore, as this bivariate classifier actually predicts “permanent overexpressed MYC protein status,” it might identify nontranslocation-related chromosomal anomalies missed by FISH. PMID:27093450
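A minimal sketch of the two image-derived features described above, assuming nucleus centroids and stain intensities have already been extracted (for example, with ImageJ); the feature definitions and the use of scikit-learn's logistic regression are illustrative assumptions, not the published pipeline.

```python
# Hedged sketch: (a) mean edge length of the Euclidean MST over MYC-IHC-positive
# nucleus centroids and (b) coefficient of variation of their stain intensities,
# fed to a bivariate logistic regression. Variable names are illustrative.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.linear_model import LogisticRegression

def mst_mean_edge(points):
    """Mean edge length of the Euclidean MST over nucleus centroids."""
    dense = squareform(pdist(points))          # pairwise Euclidean distances
    tree = minimum_spanning_tree(dense)        # sparse matrix of MST edges
    return tree.data.mean()

def bivariate_features(points, intensities):
    a = mst_mean_edge(points)                            # distance feature
    b = np.std(intensities) / np.mean(intensities)       # coefficient of variation
    return np.array([a, b])

# Toy training example with random "cases"; y = 1 stands in for MYC FISH-positive.
rng = np.random.default_rng(0)
X = np.array([bivariate_features(rng.uniform(0, 100, (50, 2)),
                                 rng.uniform(50, 200, 50)) for _ in range(20)])
y = rng.integers(0, 2, 20)
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:3]))
```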
Towards a PTAS for the generalized TSP in grid clusters
NASA Astrophysics Data System (ADS)
Khachay, Michael; Neznakhina, Katherine
2016-10-01
The Generalized Traveling Salesman Problem (GTSP) is a combinatorial optimization problem that asks for a minimum-cost cycle visiting exactly one point (city) from each cluster. We consider a geometric case of this problem, where n nodes are given inside an integer grid (in the Euclidean plane) whose cells are unit squares. Clusters are induced by the cells `populated' by nodes of the given instance. Even in this special setting, the GTSP remains intractable, since it encloses the classic Euclidean TSP on the plane. Recently, it was shown that the problem admits a (1.5+8√2+ɛ)-approximation algorithm whose complexity bound depends polynomially on n and k, where k is the number of clusters. In this paper, we propose two approximation algorithms for the Euclidean GTSP on grid clusters. For any fixed k, both algorithms are PTAS. The time complexity of the first one remains polynomial for k = O(log n), while the second one is a PTAS when k = n - O(log n).
Euclidean bridge to the relativistic constituent quark model
NASA Astrophysics Data System (ADS)
Hobbs, T. J.; Alberg, Mary; Miller, Gerald A.
2017-03-01
Background: Knowledge of nucleon structure is today ever more of a precision science, with heightened theoretical and experimental activity expected in coming years. At the same time, a persistent gap lingers between theoretical approaches grounded in Euclidean methods (e.g., lattice QCD, Dyson-Schwinger equations [DSEs]) as opposed to traditional Minkowski field theories (such as light-front constituent quark models). Purpose: Seeking to bridge these complementary world views, we explore the potential of a Euclidean constituent quark model (ECQM). This formalism enables us to study the gluonic dressing of the quark-level axial-vector vertex, which we undertake as a test of the framework. Method: To access its indispensable elements with a minimum of inessential detail, we develop our ECQM using the simplified quark + scalar diquark picture of the nucleon. We construct a hyperspherical formalism involving polynomial expansions of diquark propagators to marry our ECQM with the results of Bethe-Salpeter equation (BSE) analyses, and constrain model parameters by fitting electromagnetic form factor data. Results: From this formalism, we define and compute a new quantity—the Euclidean density function (EDF)—an object that characterizes the nucleon's various charge distributions as functions of the quark's Euclidean momentum. Applying this technology and incorporating information from BSE analyses, we find the quenched dressing effect on the proton's axial-singlet charge to be small in magnitude and consistent with zero, while use of recent determinations of unquenched BSEs results in a large suppression. Conclusions: The quark + scalar diquark ECQM is a step toward a realistic quark model in Euclidean space, and needs additional refinements. The substantial effect we obtain for the impact on the axial-singlet charge of the unquenched dressed vertex compared to the quenched demands further investigation.
Multi-level bandwidth efficient block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1989-01-01
The multilevel technique is investigated for combining block coding and modulation. There are four parts. In the first part, a formulation is presented for signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution, and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C' which has the same rate as C, a minimum squared Euclidean distance not less than that of C, a trellis diagram with the same number of states as that of C, and a smaller number of nearest-neighbor codewords than C. In the last part, the error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.
Human Performance on Hard Non-Euclidean Graph Problems: Vertex Cover
ERIC Educational Resources Information Center
Carruthers, Sarah; Masson, Michael E. J.; Stege, Ulrike
2012-01-01
Recent studies on a computationally hard visual optimization problem, the Traveling Salesperson Problem (TSP), indicate that humans are capable of finding close to optimal solutions in near-linear time. The current study is a preliminary step in investigating human performance on another hard problem, the Minimum Vertex Cover Problem, in which…
SLE as a Mating of Trees in Euclidean Geometry
NASA Astrophysics Data System (ADS)
Holden, Nina; Sun, Xin
2018-05-01
The mating of trees approach to Schramm-Loewner evolution (SLE) in the random geometry of Liouville quantum gravity (LQG) has been recently developed by Duplantier et al. (Liouville quantum gravity as a mating of trees, 2014. arXiv:1409.7055). In this paper we consider the mating of trees approach to SLE in Euclidean geometry. Let η be a whole-plane space-filling SLE with parameter κ > 4, parameterized by Lebesgue measure. The main observable in the mating of trees approach is the contour function, a two-dimensional continuous process describing the evolution of the Minkowski content of the left and right frontier of η. We prove regularity properties of the contour function and show that (as in the LQG case) it encodes all the information about the curve η. We also prove that the uniform spanning tree on Z^2 converges to SLE_8 in the natural topology associated with the mating of trees approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giampaolo, Salvatore M.; CNR-INFM Coherentia, Naples; CNISM Unita di Salerno and INFN Sezione di Napoli, Gruppo collegato di Salerno, Baronissi
2007-10-15
We investigate the geometric characterization of pure state bipartite entanglement of (2×D)- and (3×D)-dimensional composite quantum systems. To this aim, we analyze the relationship between states and their images under the action of particular classes of local unitary operations. We find that invariance of states under the action of single-qubit and single-qutrit transformations is a necessary and sufficient condition for separability. We demonstrate that in the (2×D)-dimensional case the von Neumann entropy of entanglement is a monotonic function of the minimum squared Euclidean distance between states and their images over the set of single-qubit unitary transformations. Moreover, both in the (2×D)- and in the (3×D)-dimensional cases the minimum squared Euclidean distance exactly coincides with the linear entropy [and thus as well with the tangle measure of entanglement in the (2×D)-dimensional case]. These results provide a geometric characterization of entanglement measures originally established in informational frameworks. Consequences and applications of the formalism to quantum critical phenomena in spin systems are discussed.
Ghadie, Mohamed A; Japkowicz, Nathalie; Perkins, Theodore J
2015-08-15
Stem cell differentiation is largely guided by master transcriptional regulators, but it also depends on the expression of other types of genes, such as cell cycle genes, signaling genes, metabolic genes, trafficking genes, etc. Traditional approaches to understanding gene expression patterns across multiple conditions, such as principal components analysis or K-means clustering, can group cell types based on gene expression, but they do so without knowledge of the differentiation hierarchy. Hierarchical clustering can organize cell types into a tree, but in general this tree is different from the differentiation hierarchy itself. Given the differentiation hierarchy and gene expression data at each node, we construct a weighted Euclidean distance metric such that the minimum spanning tree with respect to that metric is precisely the given differentiation hierarchy. We provide a set of linear constraints that are provably sufficient for the desired construction and a linear programming approach to identify sparse sets of weights, effectively identifying genes that are most relevant for discriminating different parts of the tree. We apply our method to microarray gene expression data describing 38 cell types in the hematopoiesis hierarchy, constructing a weighted Euclidean metric that uses just 175 genes. However, we find that there are many alternative sets of weights that satisfy the linear constraints. Thus, in the style of random-forest training, we also construct metrics based on random subsets of the genes and compare them to the metric of 175 genes. We then report on the selected genes and their biological functions. Our approach offers a new way to identify genes that may have important roles in stem cell differentiation. tperkins@ohri.ca Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
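A minimal sketch of one way to realize the construction described above: the standard MST cycle condition gives one linear inequality per (non-tree pair, tree edge on its path) combination, each linear in the gene weights, and minimizing the sum of weights acts as a simple sparsity surrogate. The margin, the L1 objective, and the toy data are assumptions of this sketch, not the authors' exact formulation.

```python
# Hedged sketch: find non-negative gene weights w so that the MST under
# d_w(i, j) = sum_g w_g (x_ig - x_jg)^2 is a given hierarchy, via linear
# programming over the MST cycle conditions.
import itertools
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def fit_metric_weights(expr, tree_edges, margin=1e-3):
    """expr: (n_nodes, n_genes) expression matrix; tree_edges: edges of the
    given hierarchy over node indices. Returns a sparse weight vector w."""
    n, g = expr.shape
    T = nx.Graph(tree_edges)
    sq = lambda i, j: (expr[i] - expr[j]) ** 2      # per-gene squared differences
    tree_set = {frozenset(e) for e in tree_edges}
    A, b = [], []
    for i, j in itertools.combinations(range(n), 2):
        if frozenset((i, j)) in tree_set:
            continue
        path = nx.shortest_path(T, i, j)
        for u, v in zip(path, path[1:]):
            # cycle condition: d_w(u, v) <= d_w(i, j) - margin for tree edge (u, v)
            A.append(sq(u, v) - sq(i, j))
            b.append(-margin)
    res = linprog(c=np.ones(g), A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, None)] * g, method="highs")
    return res.x

# Toy example: 4 "cell types", 3 genes, hierarchy 0-1, 1-2, 1-3.
expr = np.array([[0.0, 0.0, 5.0], [1.0, 0.1, 4.0],
                 [2.0, 3.0, 0.0], [2.5, -2.0, 1.0]])
w = fit_metric_weights(expr, [(0, 1), (1, 2), (1, 3)])
print(np.round(w, 3))
```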
Guo, Hao; Liu, Lei; Chen, Junjie; Xu, Yong; Jie, Xiang
2017-01-01
Functional magnetic resonance imaging (fMRI) is one of the most useful methods to generate functional connectivity networks of the brain. However, conventional network generation methods ignore dynamic changes of functional connectivity between brain regions. Previous studies proposed constructing high-order functional connectivity networks that consider the time-varying characteristics of functional connectivity, and a clustering method was performed to decrease computational cost. However, random selection of the initial clustering centers and the number of clusters negatively affected classification accuracy, and the network lost neurological interpretability. Here we propose a novel method that introduces the minimum spanning tree method to high-order functional connectivity networks. As an unbiased method, the minimum spanning tree simplifies high-order network structure while preserving its core framework. The dynamic characteristics of time series are not lost with this approach, and the neurological interpretation of the network is guaranteed. Simultaneously, we propose a multi-parameter optimization framework that involves extracting discriminative features from the minimum spanning tree high-order functional connectivity networks. Compared with the conventional methods, our resting-state fMRI classification method based on minimum spanning tree high-order functional connectivity networks greatly improved the diagnostic accuracy for Alzheimer's disease. PMID:29249926
Complex networks in the Euclidean space of communicability distances
NASA Astrophysics Data System (ADS)
Estrada, Ernesto
2012-06-01
We study the properties of complex networks embedded in a Euclidean space of communicability distances. The communicability distance between two nodes is defined as the difference between the weighted sum of walks self-returning to the nodes and the weighted sum of walks going from one node to the other. We give some indications that the communicability distance identifies the least crowded routes in networks where simultaneous submission of packages is taking place. We define an index Q based on communicability and shortest path distances, which allows reinterpreting the “small-world” phenomenon as the region of minimum Q in the Watts-Strogatz model. It also allows the classification and analysis of networks with different efficiencies of spatial use. Consequently, the communicability distance displays unique features for the analysis of complex networks in different scenarios.
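A minimal sketch of the communicability distance described above, assuming the standard definition ξ_pq² = G_pp + G_qq − 2G_pq with communicability matrix G = exp(A); variable names are illustrative.

```python
# Hedged sketch: communicability distance matrix of a graph from its adjacency
# matrix, xi[p, q] = sqrt(G_pp + G_qq - 2 G_pq) with G = expm(A).
import numpy as np
from scipy.linalg import expm

def communicability_distance(adj):
    """adj: (n, n) adjacency matrix. Returns the matrix of distances xi."""
    G = expm(adj)                            # weighted sums of walks
    d = np.diag(G)
    xi2 = d[:, None] + d[None, :] - 2 * G    # self-returning minus crossing walks
    return np.sqrt(np.maximum(xi2, 0.0))     # clip tiny negatives from round-off

# Example: path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(np.round(communicability_distance(A), 3))
```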
Traveling salesman problem, conformal invariance, and dense polymers.
Jacobsen, J L; Read, N; Saleur, H
2004-07-16
We propose that the statistics of the optimal tour in the planar random Euclidean traveling salesman problem is conformally invariant on large scales. This is exhibited in the power-law behavior of the probabilities for the tour to zigzag repeatedly between two regions, and in subleading corrections to the length of the tour. The universality class should be the same as for dense polymers and minimal spanning trees. The conjectures for the length of the tour on a cylinder are tested numerically.
Scalar mixing in LES/PDF of a high-Ka premixed turbulent jet flame
NASA Astrophysics Data System (ADS)
You, Jiaping; Yang, Yue
2016-11-01
We report a large-eddy simulation (LES)/probability density function (PDF) study of a high-Ka premixed turbulent flame in the Lund University Piloted Jet (LUPJ) flame series, which has been investigated using direct numerical simulation (DNS) and experiments. The target flame, featuring broadened preheat and reaction zones, is categorized into the broken reaction zone regime. In the present study, three widely used mixing models, namely the Interaction by Exchange with the Mean (IEM), Modified Curl (MC), and Euclidean Minimum Spanning Tree (EMST) models, are applied to assess their performance through detailed a posteriori comparisons with DNS. A dynamic model for the time scale of scalar mixing is formulated to describe the turbulent mixing of scalars at small scales. Better quantitative agreement for the mean temperature and the mean mass fractions of major and minor species is obtained with the MC and EMST models than with the IEM model. The multi-scalar mixing in composition space with the three models is analyzed to assess the modeling of the conditional molecular diffusion term. In addition, we demonstrate that the product of OH and CH2O concentrations can be a good surrogate of the local heat release rate in this flame. This work is supported by the National Natural Science Foundation of China (Grant Nos. 11521091 and 91541204).
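For reference, the simplest of the three mixing models compared above, IEM, relaxes each particle's composition toward the local mean at a rate set by the mixing frequency; the sketch below integrates that relaxation exactly over one step. The model constant C_φ = 2 and the toy particle ensemble are assumptions of this sketch, not the settings of the study.

```python
# Hedged sketch of the IEM (interaction-by-exchange-with-the-mean) mixing model:
# dphi/dt = -0.5 * C_phi * omega * (phi - <phi>), integrated exactly over dt.
import numpy as np

def iem_mix(phi, omega, dt, c_phi=2.0):
    """One IEM mixing step for all particles in a cell.
    phi: (n_particles, n_scalars) compositions; omega: mixing frequency [1/s]."""
    mean = phi.mean(axis=0)
    decay = np.exp(-0.5 * c_phi * omega * dt)
    return mean + (phi - mean) * decay

# Example: 1000 notional particles, two scalars, relaxing toward the cell mean.
rng = np.random.default_rng(1)
phi = rng.normal(size=(1000, 2))
for _ in range(10):
    phi = iem_mix(phi, omega=50.0, dt=1e-3)
print(phi.std(axis=0))   # scalar fluctuations decay while the mean is conserved
```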
Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm
NASA Astrophysics Data System (ADS)
Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.
2014-11-01
A minimum spanning tree (MST) of a connected, undirected, and weighted network is a tree of that network consisting of all its nodes such that the sum of the weights of its edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the most commonly known rudimentary algorithm, Prim's algorithm, to construct the minimum spanning tree of a connected, undirected, and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently, and effectively. The selection of an appropriate algorithm is essential; otherwise it is very hard to obtain an optimal result. In the case of a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper is based on solving the minimum spanning tree (MST) problem of a road network by finding its minimum span, considering all the important network junction points. GIS technology is commonly used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems, location-allocation problems, etc. Therefore, in this study we have developed a customized GIS tool using a Python script in ArcGIS software for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. It has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and allows access to information that is varied and adapted to the needs of the users. This GIS tool for MST can be applied to a nationwide plan called Prime Minister Gram Sadak Yojana in India to provide optimal all-weather road connectivity to unconnected villages (points). This tool is also useful for constructing highways or railways spanning several cities optimally or connecting all cities with minimum total road length.
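A minimal sketch of Prim's algorithm operating directly on a weight (adjacency) matrix, in the spirit of the tool described above; the ArcGIS integration is not reproduced, and the example road distances are illustrative.

```python
# Hedged sketch: Prim's algorithm on a symmetric weight matrix (np.inf = no edge).
import numpy as np

def prim_mst(W):
    """Returns the MST of the weight matrix W as a list of (u, v, weight) edges."""
    n = len(W)
    in_tree = [False] * n
    best = np.full(n, np.inf)        # cheapest known connection into the tree
    parent = [-1] * n
    best[0] = 0.0                    # start growing the tree from node 0
    edges = []
    for _ in range(n):
        u = int(np.argmin(np.where(in_tree, np.inf, best)))
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u, W[parent[u], u]))
        for v in range(n):
            if not in_tree[v] and W[u, v] < best[v]:
                best[v], parent[v] = W[u, v], u
    return edges

# Example: 5 junctions with road distances (km); inf means no direct road.
inf = np.inf
W = np.array([[inf, 2.0, inf, 6.0, inf],
              [2.0, inf, 3.0, 8.0, 5.0],
              [inf, 3.0, inf, inf, 7.0],
              [6.0, 8.0, inf, inf, 9.0],
              [inf, 5.0, 7.0, 9.0, inf]])
print(prim_mst(W))
```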
Multivariate Spectral Analysis to Extract Materials from Multispectral Data
1993-09-01
Euclidean minimum distance and conventional Bayesian classifier suggest some fundamental instabilities. Two candidate sources are (1) inadequate... [The remainder of the extracted abstract, including a classification (confusion-matrix) table of land-cover classes such as concrete and water, is not recoverable.]
Transported PDF Modeling of Nonpremixed Turbulent CO/H-2/N-2 Jet Flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, xinyu; Haworth, D. C.; Huckaby, E. David
2012-01-01
Turbulent CO/H2/N2 (“syngas”) flames are simulated using a transported composition probability density function (PDF) method. A consistent hybrid Lagrangian particle/Eulerian mesh algorithm is used to solve the modeled PDF transport equation. The model includes standard k–ε turbulence, gradient transport for scalars, and Euclidean minimum spanning tree (EMST) mixing. Sensitivities of model results to variations in the turbulence model, the treatment of radiation heat transfer, the choice of chemical mechanism, and the PDF mixing model are explored. A baseline model reproduces the measured mean and rms temperature, major species, and minor species profiles reasonably well, and captures the scaling that is observed in the experiments. Both our results and the literature suggest that further improvements can be realized with adjustments in the turbulence model, the radiation heat transfer model, and the chemical mechanism. Although radiation effects are relatively small in these flames, consideration of radiation is important for accurate NO prediction. Chemical mechanisms that have been developed specifically for fuels with high concentrations of CO and H2 perform better than a methane mechanism that was not designed for this purpose. It is important to account explicitly for turbulence–chemistry interactions, although the details of the mixing model do not make a large difference in the results, within reasonable limits.
Steiner trees and spanning trees in six-pin soap films
NASA Astrophysics Data System (ADS)
Dutta, Prasun; Khastgir, S. Pratik; Roy, Anushree
2010-02-01
The problem of finding minimum (local as well as absolute) path lengths joining given points (or terminals) on a plane is known as the Steiner problem. The Steiner problem arises in finding the minimum total road length joining several towns and cities. We study the Steiner tree problem using six-pin soap films. Experimentally, we observe spanning trees as well as Steiner trees partly by varying the pin diameter. We propose a possibly exact expression for the length of a spanning tree or a Steiner tree, which fails mysteriously in certain cases.
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1991-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L1 or L∞ norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, in which the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
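A minimal sketch of the linear-programming reduction mentioned above for the L∞ norm, with both convex polyhedra given by their vertex sets; the variable layout and the solver choice (scipy's linprog) are assumptions of this sketch, not the paper's implementation.

```python
# Hedged sketch: minimum L-infinity distance between two convex polyhedra
# conv(VA) and conv(VB), posed as a linear program over convex weights.
import numpy as np
from scipy.optimize import linprog

def linf_distance(VA, VB):
    """VA: (nA, dim), VB: (nB, dim) vertex arrays. Returns (distance, p, q)."""
    nA, dim = VA.shape
    nB = VB.shape[0]
    nvar = nA + nB + 1                       # variables: [lambda, mu, t]
    c = np.zeros(nvar)
    c[-1] = 1.0                              # minimize t = ||p - q||_inf
    A_ub, b_ub = [], []
    for d in range(dim):                     # enforce |p_d - q_d| <= t
        A_ub.append(np.concatenate([VA[:, d], -VB[:, d], [-1.0]]))
        A_ub.append(np.concatenate([-VA[:, d], VB[:, d], [-1.0]]))
        b_ub += [0.0, 0.0]
    A_eq = np.zeros((2, nvar))
    A_eq[0, :nA] = 1.0                       # convex weights of p sum to 1
    A_eq[1, nA:nA + nB] = 1.0                # convex weights of q sum to 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq,
                  b_eq=[1.0, 1.0], bounds=[(0, None)] * nvar, method="highs")
    lam, mu = res.x[:nA], res.x[nA:nA + nB]
    return res.fun, lam @ VA, mu @ VB

# Example: two unit squares, the second shifted by (3, 1); the distance is 2.
VA = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
VB = VA + np.array([3.0, 1.0])
print(linf_distance(VA, VB))
```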
On the complexity and approximability of some Euclidean optimal summing problems
NASA Astrophysics Data System (ADS)
Eremeev, A. V.; Kel'manov, A. V.; Pyatkin, A. V.
2016-10-01
The complexity status of several well-known discrete optimization problems with the direction of optimization switching from maximum to minimum is analyzed. The task is to find a subset of a finite set of Euclidean points (vectors). In these problems, the objective functions depend either only on the norm of the sum of the elements from the subset or on this norm and the cardinality of the subset. It is proved that, if the dimension of the space is a part of the input, then all these problems are strongly NP-hard. Additionally, it is shown that, if the space dimension is fixed, then all the problems are NP-hard even for dimension 2 (on a plane) and there are no approximation algorithms with a guaranteed accuracy bound for them unless P = NP. It is shown that, if the coordinates of the input points are integer, then all the problems can be solved in pseudopolynomial time in the case of a fixed space dimension.
NASA Astrophysics Data System (ADS)
Li, H. W.; Pan, Z. Y.; Ren, Y. B.; Wang, J.; Gan, Y. L.; Zheng, Z. Z.; Wang, W.
2018-03-01
Based on the radial operating characteristics of distribution systems, this paper proposes a new method for optimal capacitor switching based on the minimum spanning tree method. First, taking minimal active power loss as the objective function and ignoring the capacity constraints of the capacitors and the source, the Prim algorithm (a minimum spanning tree algorithm) is used to obtain the power supply ranges of the capacitors and the source. Then, with the capacity constraints of the capacitors considered, the capacitors are ranked by breadth-first search. In order of ranking from high to low, the compensation capacity of each capacitor is calculated based on its power supply range. Finally, the IEEE 69-bus system is adopted to test the accuracy and practicality of the proposed algorithm.
The Development of Euclidean and Non-Euclidean Cosmologies
ERIC Educational Resources Information Center
Norman, P. D.
1975-01-01
Discusses early Euclidean cosmologies, inadequacies in classical Euclidean cosmology, and the development of non-Euclidean cosmologies. Explains the present state of the theory of cosmology including the work of Dirac, Sandage, and Gott. (CP)
Hyperspectral feature mapping classification based on mathematical morphology
NASA Astrophysics Data System (ADS)
Liu, Chang; Li, Junwei; Wang, Guangping; Wu, Jingli
2016-03-01
This paper proposes a hyperspectral feature mapping classification algorithm based on mathematical morphology. Without prior information such as a spectral library, spectral and spatial information can be used to realize hyperspectral feature mapping classification. Mathematical morphological erosion and dilation operations are performed to extract endmembers. The spectral feature mapping algorithm is then used to carry out hyperspectral image classification. A hyperspectral image collected by AVIRIS is used to evaluate the proposed algorithm. The proposed algorithm is compared with the minimum Euclidean distance mapping algorithm, the minimum Mahalanobis distance mapping algorithm, the SAM algorithm, and the binary encoding mapping algorithm. The experimental results show that the proposed algorithm performs better than the other algorithms under the same conditions and has higher classification accuracy.
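A minimal sketch of the minimum Euclidean distance mapping baseline used in the comparison above: each pixel spectrum is assigned to the nearest endmember in Euclidean distance. Endmember extraction by morphological operations is not reproduced; array shapes and names are illustrative.

```python
# Hedged sketch: nearest-endmember (minimum Euclidean distance) classification
# of a hyperspectral cube.
import numpy as np

def min_euclidean_distance_map(cube, endmembers):
    """cube: (rows, cols, bands) image; endmembers: (k, bands) spectra.
    Returns an integer class map assigning each pixel to its nearest endmember."""
    pixels = cube.reshape(-1, cube.shape[-1])                  # (n_pixels, bands)
    d2 = ((pixels[:, None, :] - endmembers[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(cube.shape[:2])

# Toy example: 4x4 image, 5 bands, 3 random endmembers plus small noise.
rng = np.random.default_rng(2)
endmembers = rng.uniform(0, 1, (3, 5))
cube = endmembers[rng.integers(0, 3, (4, 4))] + rng.normal(0, 0.02, (4, 4, 5))
print(min_euclidean_distance_map(cube, endmembers))
```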
C-semiring Frameworks for Minimum Spanning Tree Problems
NASA Astrophysics Data System (ADS)
Bistarelli, Stefano; Santini, Francesco
In this paper we define general algebraic frameworks for the Minimum Spanning Tree problem based on the structure of c-semirings. We propose general algorithms that can compute such trees by following different cost criteria, which must all be specific instantiations of c-semirings. Our algorithms are extensions of well-known procedures, such as Prim's or Kruskal's, and show the expressivity of these algebraic structures. They can also deal with partially ordered costs on the edges.
NASA Technical Reports Server (NTRS)
Jones, R. T.
1950-01-01
The problem of the minimum induced drag of wings having a given lift and a given span is extended to include cases in which the bending moment to be supported by the wing is also given. The theory is limited to lifting surfaces traveling at subsonic speeds. It is found that the required shape of the downwash distribution can be obtained in an elementary way which is applicable to a variety of such problems. Expressions for the minimum drag and the corresponding spanwise load distributions are also given for the case in which the lift and the bending moment about the wing root are fixed while the span is allowed to vary. The results show a 15-percent reduction of the induced drag with a 15-percent increase in span as compared with results for an elliptically loaded wing having the same total lift and bending moment.
Minimum Covers of Fixed Cardinality in Weighted Graphs.
ERIC Educational Resources Information Center
White, Lee J.
Reported is the result of research on combinatorial and algorithmic techniques for information processing. A method is discussed for obtaining minimum covers of specified cardinality from a given weighted graph. By the indicated method, it is shown that the family of minimum covers of varying cardinality is related to the minimum spanning tree of…
Prediction of acoustic feature parameters using myoelectric signals.
Lee, Ki-Seung
2010-07-01
It is well-known that a clear relationship exists between human voices and myoelectric signals (MESs) from the area of the speaker's mouth. In this study, we utilized this information to implement a speech synthesis scheme in which MES alone was used to predict the parameters characterizing the vocal-tract transfer function of specific speech signals. Several feature parameters derived from MES were investigated to find the optimal feature for maximization of the mutual information between the acoustic and the MES features. After the optimal feature was determined, an estimation rule for the acoustic parameters was proposed, based on a minimum mean square error (MMSE) criterion. In a preliminary study, 60 isolated words were used for both objective and subjective evaluations. The results showed that the average Euclidean distance between the original and predicted acoustic parameters was reduced by about 30% compared with the average Euclidean distance of the original parameters. The intelligibility of the synthesized speech signals using the predicted features was also evaluated. A word-level identification ratio of 65.5% and a syllable-level identification ratio of 73% were obtained through a listening test.
Intermediate Templates Guided Groupwise Registration of Diffusion Tensor Images
Jia, Hongjun; Yap, Pew-Thian; Wu, Guorong; Wang, Qian; Shen, Dinggang
2010-01-01
Registration of a population of diffusion tensor images (DTIs) is one of the key steps in medical image analysis, and it plays an important role in the statistical analysis of white matter related neurological diseases. However, pairwise registration with respect to a pre-selected template may not give precise results if the selected template deviates significantly from the distribution of images. To cater for more accurate and consistent registration, a novel framework is proposed for groupwise registration with the guidance from one or more intermediate templates determined from the population of images. Specifically, we first use a Euclidean distance, defined as a combinative measure based on the FA map and ADC map, for gauging the similarity of each pair of DTIs. A fully connected graph is then built with each node denoting an image and each edge denoting the distance between a pair of images. The root template image is determined automatically as the image with the overall shortest path length to all other images on the minimum spanning tree (MST) of the graph. Finally, a sequence of registration steps is applied to progressively warping each image towards the root template image with the help of intermediate templates distributed along its path to the root node on the MST. Extensive experimental results using diffusion tensor images of real subjects indicate that registration accuracy and fiber tract alignment are significantly improved, compared with the direct registration from each image to the root template image. PMID:20851197
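A minimal sketch of the template-selection step described above, assuming the pairwise image distances (for example, combining FA and ADC maps) have already been collected into a matrix D; the root is the image with the smallest total tree-path length, and the MST path to the root lists each image's intermediate templates.

```python
# Hedged sketch: root-template selection and intermediate-template paths on the
# minimum spanning tree of a pairwise image-distance matrix.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

def root_and_paths(D):
    """D: (n, n) symmetric matrix of pairwise image distances.
    Returns (root index, tree-path distances on the MST, predecessor matrix)."""
    mst = minimum_spanning_tree(D)             # sparse MST of the complete graph
    dist, pred = shortest_path(mst, directed=False, return_predecessors=True)
    root = int(dist.sum(axis=1).argmin())      # smallest total tree-path length
    return root, dist, pred

def path_to_root(pred, root, i):
    """Intermediate templates from image i to the root along the MST."""
    path = [i]
    while path[-1] != root:
        path.append(int(pred[root, path[-1]]))
    return path

# Toy example with 5 "images" described by 4 summary features each.
rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 1.0, (5, 4))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
root, dist, pred = root_and_paths(D)
print(root, path_to_root(pred, root, (root + 1) % 5))
```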
A proof of the theorem regarding the distribution of lift over the span for minimum induced drag
NASA Technical Reports Server (NTRS)
Durand, W F
1931-01-01
The proof of the theorem that the elliptical distribution of lift over the span is that which will give rise to the minimum induced drag has been given in a variety of ways, generally speaking too difficult to be readily followed by the graduate of the average good technical school of the present day. In the form of proof this report makes an effort to bring the matter more readily within the grasp of this class of readers.
An algorithm for calculating minimum Euclidean distance between two geographic features
NASA Astrophysics Data System (ADS)
Peuquet, Donna J.
1992-09-01
An efficient algorithm is presented for determining the shortest Euclidean distance between two features of arbitrary shape that are represented in quadtree form. These features may be disjoint point sets, lines, or polygons. It is assumed that the features do not overlap. Features may also be intertwined, and polygons may be complex (i.e., have holes). Utilizing the spatial divide-and-conquer approach inherent in the quadtree data model, the basic rationale is to quickly narrow in on the portions of each feature that lie on a facing edge relative to the other feature, and to minimize the number of point-to-point Euclidean distance calculations that must be performed. Besides offering an efficient, grid-based alternative solution, another unique and useful aspect of the current algorithm is that it can be used for rapidly calculating distance approximations at coarser levels of resolution. The overall process can be viewed as a top-down parallel search. Using one list of leafcode addresses for each of the two features as input, the algorithm is implemented by successively dividing these lists into four sublists for each descendant quadrant. The algorithm consists of two primary phases. The first determines facing adjacent quadrant pairs where part or all of the two features are separated between the two quadrants, respectively. The second phase then determines the closest pixel-level subquadrant pairs within each facing quadrant pair at the lowest level. The key element of the second phase is a quick-estimate distance heuristic for further elimination of locations that are not as near as neighboring locations.
Limits to Open Class Performance?
NASA Technical Reports Server (NTRS)
Bowers, Albion H.
2008-01-01
This presentation discusses open (unlimited) class aircraft performance limitations and design solutions. Limitations in this class of aircraft include slow climbing flight, which requires low wing loading; high cruise speed, which requires high wing loading; gains in induced or viscous drag alone, which result in only half the gain overall; and other structural problems (yaw inertia and spins, flutter, and static loads integrity). Design solutions include introducing minimum induced drag for a given span (elliptical span load or winglets) and introducing minimum induced drag for a bell-shaped span load. It is concluded that open class performance limits (under current rules and technologies) are very close to absolute limits, though some gains remain to be made from unexplored areas and new technologies.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1990-01-01
An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Giampaolo, Salvatore M.; Illuminati, Fabrizio
2007-10-01
We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed by adapting to continuous variables a formalism based on single-subsystem unitary transformations that was recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1×M bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.
Riemannian geometric approach to human arm dynamics, movement optimization, and invariance
NASA Astrophysics Data System (ADS)
Biess, Armin; Flash, Tamar; Liebermann, Dario G.
2011-03-01
We present a generally covariant formulation of human arm dynamics and optimization principles in Riemannian configuration space. We extend the one-parameter family of mean-squared-derivative (MSD) cost functionals from Euclidean to Riemannian space, and we show that they are mathematically identical to the corresponding dynamic costs when formulated in a Riemannian space equipped with the kinetic energy metric. In particular, we derive the equivalence of the minimum-jerk and minimum-torque change models in this metric space. Solutions of the one-parameter family of MSD variational problems in Riemannian space are given by (reparametrized) geodesic paths, which correspond to movements with least muscular effort. Finally, movement invariants are derived from symmetries of the Riemannian manifold. We argue that the geometrical structure imposed on the arm’s configuration space may provide insights into the emerging properties of the movements generated by the motor system.
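As a concrete instance of the MSD family discussed above, the Euclidean minimum-jerk point-to-point trajectory with zero boundary velocity and acceleration follows the classic fifth-order polynomial; the sketch below covers that special case only, not the Riemannian (kinetic-energy metric) generalization developed in the paper.

```python
# Hedged sketch: Euclidean minimum-jerk trajectory between two points,
# x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5) with s = t / T.
import numpy as np

def minimum_jerk(x0, xf, T, t):
    """Minimum-jerk trajectory with zero boundary velocity and acceleration.
    x0, xf: endpoint arrays; T: movement duration; t: array of sample times."""
    s = np.asarray(t) / T
    shape = 10 * s**3 - 15 * s**4 + 6 * s**5
    return x0 + np.outer(shape, xf - x0)

# Example: planar reach from (0, 0) to (0.3, 0.1) m in 0.8 s, sampled 5 times.
t = np.linspace(0.0, 0.8, 5)
print(minimum_jerk(np.array([0.0, 0.0]), np.array([0.3, 0.1]), 0.8, t))
```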
Enjoyment of Euclidean Planar Triangles
ERIC Educational Resources Information Center
Srinivasan, V. K.
2013-01-01
This article adopts the following classification for a Euclidean planar [triangle]ABC, based purely on its angles. A Euclidean planar triangle is said to be acute angled if all three angles of the Euclidean planar [triangle]ABC are acute angles. It is said to be right angled at a specific vertex, say B, if the angle ∠ABC is a right angle…
Gifted Mathematicians Constructing Their Own Geometries--Changes in Knowledge and Attitude.
ERIC Educational Resources Information Center
Shillor, Irith
1997-01-01
Using Taxi-Cab Geometry (a non-Euclidean geometry program) as the starting point, 14 mathematically gifted British secondary students (ages 12-14) were asked to consider the differences between Euclidean and Non-Euclidean geometries, then to construct their own geometry and to consider the non-Euclidean elements within it. The positive effects of…
Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei
2013-10-01
The minimum spanning tree (MST) problem is to find a minimum-weight set of edges connecting all the vertices of a given undirected graph. It is a vitally important problem in graph theory and applied mathematics, with numerous real-life applications. Moreover, in previous studies DNA molecular operations were usually used to solve NP-complete head-to-tail path search problems, and rarely for problems whose solutions are multi-lateral paths, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps, and obtain the solutions of the MST problem within a proper length range and O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Walwyn, Amy L.; Navarro, Daniel J.
2010-01-01
An experiment is reported comparing human performance on two kinds of visually presented traveling salesperson problems (TSPs), those reliant on Euclidean geometry and those reliant on city block geometry. Across multiple array sizes, human performance was near-optimal in both geometries, but was slightly better in the Euclidean format. Even so,…
Jothi, R; Mohanty, Sraban Kumar; Ojha, Aparajita
2016-04-01
Gene expression data clustering is an important biological process in DNA microarray analysis. Although there have been many clustering algorithms for gene expression analysis, finding a suitable and effective clustering algorithm is always a challenging problem due to the heterogeneous nature of gene profiles. Minimum Spanning Tree (MST) based clustering algorithms have been successfully employed to detect clusters of varying shapes and sizes. This paper proposes a novel clustering algorithm using eigenanalysis on a Minimum Spanning Tree based neighborhood graph (E-MST). As the MST of a set of points reflects the similarity of the points with their neighborhood, the proposed algorithm employs a similarity graph obtained from k′ rounds of MST (the k′-MST neighborhood graph). By studying the spectral properties of the similarity matrix obtained from the k′-MST graph, the proposed algorithm achieves improved clustering results. We demonstrate the efficacy of the proposed algorithm on 12 gene expression datasets. Experimental results show that the proposed algorithm performs better than standard clustering algorithms. Copyright © 2016 Elsevier Ltd. All rights reserved.
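A minimal sketch of the k′-MST neighborhood graph and a spectral step on it: each round computes an MST, keeps its edges, and removes them before the next round; the Gaussian similarity kernel and the use of scikit-learn's spectral clustering are assumptions of this sketch, not the authors' exact procedure.

```python
# Hedged sketch: union of k' successive MSTs as a sparse neighborhood graph,
# followed by spectral clustering on an assumed Gaussian similarity kernel.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.cluster import SpectralClustering

def kmst_graph(X, k_rounds=3):
    """Union of k_rounds successive MSTs over the points in X (n, d);
    each round removes the previously selected edges before recomputing."""
    D = squareform(pdist(X))
    work = D.copy()
    adj = np.zeros_like(D)
    for _ in range(k_rounds):
        T = minimum_spanning_tree(work).toarray()
        mask = (T > 0) | (T.T > 0)
        adj[mask] = D[mask]              # keep original distances on kept edges
        work[mask] = 0.0                 # zero weight = "no edge" next round
    return adj

def emst_clusters(X, n_clusters, k_rounds=3):
    adj = kmst_graph(X, k_rounds)
    sim = np.where(adj > 0, np.exp(-adj ** 2), 0.0)   # assumed Gaussian kernel
    np.fill_diagonal(sim, 1.0)
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            random_state=0)
    return sc.fit_predict(sim)

# Toy example: two well-separated groups of "gene profiles".
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (20, 5)), rng.normal(3, 0.3, (20, 5))])
print(emst_clusters(X, n_clusters=2))
```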
Dynamic hyperbolic geometry: building intuition and understanding mediated by a Euclidean model
NASA Astrophysics Data System (ADS)
Moreno-Armella, Luis; Brady, Corey; Elizondo-Ramirez, Rubén
2018-05-01
This paper explores a deep transformation in mathematical epistemology and its consequences for teaching and learning. With the advent of non-Euclidean geometries, direct, iconic correspondences between physical space and the deductive structures of mathematical inquiry were broken. For non-Euclidean ideas even to become thinkable the mathematical community needed to accumulate over twenty centuries of reflection and effort: a precious instance of distributed intelligence at the cultural level. In geometry education after this crisis, relations between intuitions and geometrical reasoning must be established philosophically, rather than taken for granted. One approach seeks intuitive supports only for Euclidean explorations, viewing non-Euclidean inquiry as fundamentally non-intuitive in nature. We argue for moving beyond such an impoverished approach, using dynamic geometry environments to develop new intuitions even in the extremely challenging setting of hyperbolic geometry. Our efforts reverse the typical direction, using formal structures as a source for a new family of intuitions that emerge from exploring a digital model of hyperbolic geometry. This digital model is elaborated within a Euclidean dynamic geometry environment, enabling a conceptual dance that re-configures Euclidean knowledge as a support for building intuitions in hyperbolic space-intuitions based not directly on physical experience but on analogies extending Euclidean concepts.
On the Minimum Induced Drag of Wings
NASA Technical Reports Server (NTRS)
Bowers, Albion H.
2010-01-01
Of all the types of drag, induced drag is associated with the creation and generation of lift over wings. Induced drag is directly driven by the span load at which the aircraft is flying. The tools we use to calculate and predict induced drag were created by Ludwig Prandtl in 1903. Within a decade after Prandtl created a tool for calculating induced drag, Prandtl and his students had optimized the problem to solve the minimum induced drag for a wing of a given span, a solution formalized and written about in 1920. This solution is quoted in textbooks extensively today. Prandtl did not stop with this first solution, and came to a dramatically different solution in 1932. Subsequent development of this 1932 solution solves several aeronautics design difficulties simultaneously, including maximum performance, minimum structure, minimum drag loss due to control input, and a solution to adverse yaw without a vertical tail. This presentation lists that solution by Prandtl, and the refinements by Horten, Jones, Kline, Viswanathan, and Whitcomb.
On the Minimum Induced Drag of Wings -or- Thinking Outside the Box
NASA Technical Reports Server (NTRS)
Bowers, Albion H.
2011-01-01
Of all the types of drag, induced drag is associated with the creation and generation of lift over wings. Induced drag is directly driven by the span load at which the aircraft is flying. The tools we use to calculate and predict induced drag were created by Ludwig Prandtl in 1903. Within a decade after Prandtl created a tool for calculating induced drag, Prandtl and his students had optimized the problem to solve the minimum induced drag for a wing of a given span, a solution formalized and written about in 1920. This solution is quoted extensively in textbooks today. Prandtl did not stop with this first solution, and came to a dramatically different solution in 1932. Subsequent development of this 1932 solution solves several aeronautics design difficulties simultaneously, including maximum performance, minimum structure, minimum drag loss due to control input, and a solution to adverse yaw without a vertical tail. This presentation lists that solution by Prandtl, and the refinements by Horten, Jones, Kline, Viswanathan, and Whitcomb.
On the Minimum Induced Drag of Wings
NASA Technical Reports Server (NTRS)
Bowers, Albion H.
2011-01-01
Of all the types of drag, induced drag is associated with the creation and generation of lift over wings. Induced drag is directly driven by the span load at which the aircraft is flying. The tools we use to calculate and predict induced drag were created by Ludwig Prandtl in 1903. Within a decade after Prandtl created a tool for calculating induced drag, Prandtl and his students had optimized the problem to solve the minimum induced drag for a wing of a given span, a solution formalized and written about in 1920. This solution is quoted extensively in textbooks today. Prandtl did not stop with this first solution, and came to a dramatically different solution in 1932. Subsequent development of this 1932 solution solves several aeronautics design difficulties simultaneously, including maximum performance, minimum structure, minimum drag loss due to control input, and a solution to adverse yaw without a vertical tail. This presentation lists that solution by Prandtl, and the refinements by Horten, Jones, Kline, Viswanathan, and Whitcomb.
A minimum spanning forest based classification method for dedicated breast CT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei, E-mail: bfei@emory.edu
Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model is used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors' classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging.
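The DICE overlap ratio used in the evaluation is straightforward to compute. A minimal sketch for two binary masks follows; the function name and the label constants in the usage comment are illustrative assumptions.

```python
import numpy as np

def dice(a, b):
    """DICE overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# e.g. dice(automatic_map == FAT_LABEL, manual_map == FAT_LABEL) for the fat class
```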
Euclidean sections of protein conformation space and their implications in dimensionality reduction
Duan, Mojie; Li, Minghai; Han, Li; Huo, Shuanghong
2014-01-01
Dimensionality reduction is widely used in searching for the intrinsic reaction coordinates of protein conformational changes. We find that dimensionality-reduction methods using the pairwise root-mean-square deviation as the local distance metric face a challenge. We use Isomap as an example to illustrate the problem. We believe that there is an implied assumption for the dimensionality-reduction approaches that aim to preserve the geometric relations between the objects: both the original space and the reduced space have the same kind of geometry, such as Euclidean geometry vs. Euclidean geometry or spherical geometry vs. spherical geometry. When the protein free energy landscape is mapped onto a 2D plane or 3D space, the reduced space is Euclidean, thus the original space should also be Euclidean. For a protein with N atoms, its conformation space is a subset of the 3N-dimensional Euclidean space R3N. We formally define the protein conformation space as the quotient space of R3N by the equivalence relation of rigid motions. Whether the quotient space is Euclidean or not depends on how it is parameterized. When the pairwise root-mean-square deviation is employed as the local distance metric, implicit representations are used for the protein conformation space, leading to no direct correspondence to a Euclidean set. We have demonstrated that an explicit Euclidean-based representation of protein conformation space and the local distance metric associated with it improve the quality of dimensionality reduction in the tetra-peptide and β-hairpin systems. PMID:24913095
Panel flutter optimization by gradient projection
NASA Technical Reports Server (NTRS)
Pierson, B. L.
1975-01-01
A gradient projection optimal control algorithm incorporating conjugate gradient directions of search is described and applied to several minimum weight panel design problems subject to a flutter speed constraint. New numerical solutions are obtained for both simply-supported and clamped homogeneous panels of infinite span for various levels of inplane loading and minimum thickness. The minimum thickness inequality constraint is enforced by a simple transformation of variables.
2016-03-02
We look to develop a structure for the tiling of frequency spaces in both Euclidean and non-Euclidean domains. In particular, we establish Nyquist tiles and sampling groups in Euclidean geometry, and discuss the extension of these concepts to hyperbolic and spherical geometry.
Minimum depth of soil cover above long-span soil-steel railway bridges
NASA Astrophysics Data System (ADS)
Esmaeili, Morteza; Zakeri, Jabbar Ali; Abdulrazagh, Parisa Haji
2013-12-01
Recently, soil-steel bridges have become more commonly used as railway-highway crossings because of their economical advantages and short construction period compared with traditional bridges. The formulas currently provided by existing codes for determining the minimum depth of cover are typically based on vehicle loads and non-stiffened panels and take into consideration the geometrical shape of the metal structure to avoid the failure of soil cover above a soil-steel bridge. The effects of spans larger than 8 m, or of more stiffened panels under railway loads that maintain a safe railway track, have not been accounted for in the minimum cover formulas and are the subject of this paper. For this study, two-dimensional finite element (FE) analyses of four low-profile arches and four box culverts with spans larger than 8 m were performed to develop new patterns for the minimum depth of soil cover by considering the serviceability criterion of the railway track. Using the least-squares method, new formulas were then developed for low-profile arches and box culverts and were compared with Canadian Highway Bridge Design Code formulas. Finally, a series of three-dimensional (3D) FE analyses were carried out to control the out-of-plane buckling in the steel plates due to the 3D pattern of train loads. The results show that the out-of-plane bending does not control the buckling behavior of the steel plates, so the proposed equations for minimum depth of cover can be appropriately used for practical purposes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... is any day the unit combusts any municipal or institutional solid waste. (d) If you do not obtain the..., calibration checks, or zero and span checks keep you from collecting the minimum amount of data. ...
Comovements in government bond markets: A minimum spanning tree analysis
NASA Astrophysics Data System (ADS)
Gilmore, Claire G.; Lucey, Brian M.; Boscia, Marian W.
2010-11-01
The concept of a minimum spanning tree (MST) is used to study patterns of comovements for a set of twenty government bond market indices for developed North American, European, and Asian countries. We show how the MST and its related hierarchical tree evolve over time and describe the dynamic development of market linkages. Over the sample period, 1993-2008, linkages between markets have decreased somewhat. However, a subset of European Union (EU) bond markets does show increasing levels of comovements. The evolution of distinct groups within the Eurozone is also examined. The implications of our findings for portfolio diversification benefits are outlined.
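A hedged sketch of the correlation-to-MST pipeline such studies typically follow; the Mantegna distance d_ij = sqrt(2(1 - rho_ij)) and the "normalized tree length" taken as the mean MST edge distance are common conventions assumed here, not necessarily the authors' exact choices.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def correlation_mst(returns):
    """returns: array of shape (T, N), one column per market index."""
    rho = np.corrcoef(returns, rowvar=False)                   # N x N correlation matrix
    d = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))         # Mantegna correlation distance
    np.fill_diagonal(d, 0.0)
    mst = minimum_spanning_tree(d).toarray()
    edges = [(i, j, mst[i, j])
             for i in range(d.shape[0]) for j in range(d.shape[0]) if mst[i, j] > 0]
    tree_length = float(np.mean([w for _, _, w in edges]))     # mean MST edge distance
    return edges, tree_length
```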
40 CFR 60.13 - Monitoring requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... operators of a CEMS installed in accordance with the provisions of this part, must check the zero (or low...) calibration drifts at least once each operating day in accordance with a written procedure. The zero and span must, at a minimum, be adjusted whenever either the 24-hour zero drift or the 24-hour span drift...
40 CFR 60.13 - Monitoring requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operators of a CEMS installed in accordance with the provisions of this part, must check the zero (or low...) calibration drifts at least once daily in accordance with a written procedure. The zero and span must, as a minimum, be adjusted whenever either the 24-hour zero drift or the 24-hour span drift exceeds two times...
40 CFR 60.13 - Monitoring requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... operators of a CEMS installed in accordance with the provisions of this part, must check the zero (or low...) calibration drifts at least once daily in accordance with a written procedure. The zero and span must, as a minimum, be adjusted whenever either the 24-hour zero drift or the 24-hour span drift exceeds two times...
40 CFR 60.13 - Monitoring requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... operators of a CEMS installed in accordance with the provisions of this part, must check the zero (or low...) calibration drifts at least once daily in accordance with a written procedure. The zero and span must, as a minimum, be adjusted whenever either the 24-hour zero drift or the 24-hour span drift exceeds two times...
40 CFR 60.13 - Monitoring requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operators of a CEMS installed in accordance with the provisions of this part, must check the zero (or low...) calibration drifts at least once daily in accordance with a written procedure. The zero and span must, as a minimum, be adjusted whenever either the 24-hour zero drift or the 24-hour span drift exceeds two times...
NASA Astrophysics Data System (ADS)
Kobylkin, Konstantin
2016-10-01
Computational complexity and approximability are studied for the problem of intersecting a set of straight line segments with a smallest-cardinality set of disks of fixed radii r > 0, where the set of segments forms a straight-line embedding of a possibly non-planar geometric graph. This problem arises in physical network security analysis for telecommunication, wireless and road networks represented by specific geometric graphs defined by Euclidean distances between their vertices (proximity graphs). It can be formulated in the form of the known Hitting Set problem over a set of Euclidean r-neighbourhoods of segments. Although of interest, the computational complexity and approximability of Hitting Set over such structured sets of geometric objects have not received much attention in the literature. Strong NP-hardness of the problem is reported over special classes of proximity graphs, namely Delaunay triangulations, some of their connected subgraphs, half-θ6 graphs and non-planar unit disk graphs, and APX-hardness is given for non-planar geometric graphs at different scales of r with respect to the longest graph edge length. A simple constant-factor approximation algorithm is presented for the case where r is at the same scale as the longest edge length.
NASA Astrophysics Data System (ADS)
Biess, Armin
2013-01-01
The study of the kinematic and dynamic features of human arm movements provides insights into the computational strategies underlying human motor control. In this paper a differential geometric approach to movement control is taken by endowing arm configuration space with different non-Euclidean metric structures to study the predictions of the generalized minimum-jerk (MJ) model in the resulting Riemannian manifold for different types of human arm movements. For each metric space the solution of the generalized MJ model is given by reparametrized geodesic paths. This geodesic model is applied to a variety of motor tasks ranging from three-dimensional unconstrained movements of a four degree of freedom arm between pointlike targets to constrained movements where the hand location is confined to a surface (e.g., a sphere) or a curve (e.g., an ellipse). For the latter, speed-curvature relations are derived depending on the boundary conditions imposed (periodic or nonperiodic), and compatibility with the empirical one-third power law is shown. Based on these theoretical studies and recent experimental findings, I argue that geodesics may be an emergent property of the motor system and that the sensorimotor system may shape arm configuration space by learning metric structures through sensorimotor feedback.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adesso, Gerardo; CNR-INFM Coherentia, Naples; CNISM, Unita di Salerno, Salerno
2007-10-15
We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed adapting to continuous variables a formalism based on single subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1xM bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.
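A hedged restatement of the quantity just described, in assumed notation (a worked formula, not a quotation from the paper): the entanglement of the pure Gaussian state is measured by the minimum squared Euclidean (Hilbert-space) distance to its orbit under local symplectic (unitary) operations acting on the single mode.

```latex
% Assumed notation: |psi> is the pure Gaussian state, U_loc ranges over the
% admissible local symplectic (unitary) operations on the single mode.
E\bigl(\lvert\psi\rangle\bigr) \;=\;
  \min_{U_{\mathrm{loc}}}\;
  \bigl\lVert\, \lvert\psi\rangle - U_{\mathrm{loc}}\lvert\psi\rangle \,\bigr\rVert^{2}.
```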
NASA Astrophysics Data System (ADS)
Zhu, Yi-Jun; Liang, Wang-Feng; Wang, Chao; Wang, Wen-Ya
2017-01-01
In this paper, space-collaborative constellations (SCCs) for indoor multiple-input multiple-output (MIMO) visible light communication (VLC) systems are considered. Compared with traditional VLC MIMO techniques, such as repetition coding (RC), spatial modulation (SM) and spatial multiplexing (SMP), SCC achieves the minimum average optical power for a fixed minimum Euclidean distance. We have presented a unified SCC structure for 2×2 MIMO VLC systems and extended it to larger MIMO VLC systems with more transceivers. Specifically for 2×2 MIMO VLC, a fast decoding algorithm is developed with decoding complexity almost linear in terms of the square root of the cardinality of SCC, and the expressions of symbol error rate of SCC are presented. In addition, bit mappings similar to Gray mapping are proposed for SCC. Computer simulations are performed to verify the fast decoding algorithm and the performance of SCC, and the results demonstrate that the performance of SCC is better than those of RC, SM and SMP for indoor channels in general.
Trellis coding with multidimensional QAM signal sets
NASA Technical Reports Server (NTRS)
Pietrobon, Steven S.; Costello, Daniel J.
1993-01-01
Trellis coding using multidimensional QAM signal sets is investigated. Finite-size 2D signal sets are presented that have minimum average energy, are 90-deg rotationally symmetric, and have from 16 to 1024 points. The best trellis codes using the finite 16-QAM signal set with two, four, six, and eight dimensions are found by computer search (the multidimensional signal set is constructed from the 2D signal set). The best moderate complexity trellis codes for infinite lattices with two, four, six, and eight dimensions are also found. The minimum free squared Euclidean distance and number of nearest neighbors for these codes were used as the selection criteria. Many of the multidimensional codes are fully rotationally invariant and give asymptotic coding gains up to 6.0 dB. From the infinite lattice codes, the best codes for transmitting J, J + 1/4, J + 1/3, J + 1/2, J + 2/3, and J + 3/4 bit/sym (J an integer) are presented.
Dynamic Hyperbolic Geometry: Building Intuition and Understanding Mediated by a Euclidean Model
ERIC Educational Resources Information Center
Moreno-Armella, Luis; Brady, Corey; Elizondo-Ramirez, Rubén
2018-01-01
This paper explores a deep transformation in mathematical epistemology and its consequences for teaching and learning. With the advent of non-Euclidean geometries, direct, iconic correspondences between physical space and the deductive structures of mathematical inquiry were broken. For non-Euclidean ideas even to become "thinkable" the…
Can A "Hyperspace" Really Exist?
NASA Technical Reports Server (NTRS)
Zampino, Edward J.
1999-01-01
The idea of "hyperspace" is suggested as a possible approach to faster-than-light (FTL) motion. A brief summary of a 1986 study on the Euclidean representation of space-time by the author is presented. Some new calculations on the relativistic momentum and energy of a free particle in Euclidean "hyperspace" are now added and discussed. The superimposed Energy-Momentum curves for subluminal particles, tachyons, and particles in Euclidean "hyperspace" are presented. It is shown that in Euclidean "hyperspace", instead of a relativistic time dilation there is a time "compression" effect. Some fundamental questions are presented.
Spacetime and Euclidean geometry
NASA Astrophysics Data System (ADS)
Brill, Dieter; Jacobson, Ted
2006-04-01
Using only the principle of relativity and Euclidean geometry we show in this pedagogical article that the square of proper time or length in a two-dimensional spacetime diagram is proportional to the Euclidean area of the corresponding causal domain. We use this relation to derive the Minkowski line element by two geometric proofs of the spacetime Pythagoras theorem.
Students Discovering Spherical Geometry Using Dynamic Geometry Software
ERIC Educational Resources Information Center
Guven, Bulent; Karatas, Ilhan
2009-01-01
Dynamic geometry software (DGS) such as Cabri and Geometers' Sketchpad has been regularly used worldwide for teaching and learning Euclidean geometry for a long time. The DGS with its inductive nature allows students to learn Euclidean geometry via explorations. However, with respect to non-Euclidean geometries, do we need to introduce them to…
A Case Example of Insect Gymnastics: How Is Non-Euclidean Geometry Learned?
ERIC Educational Resources Information Center
Junius, Premalatha
2008-01-01
The focus of the article is on the complex cognitive process involved in learning the concept of "straightness" in Non-Euclidean geometry. Learning new material is viewed through a conflict resolution framework, as a student questions familiar assumptions understood in Euclidean geometry. A case study reveals how mathematization of the straight…
Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei
2012-12-01
Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-Euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-Euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-Euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-Euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-Euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
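The core log-Euclidean operations this line of work relies on are simple to state: SPD covariance descriptors are mapped by the matrix logarithm into a flat vector space, where ordinary Euclidean distances and means apply. A minimal numpy sketch, with assumed function names, follows.

```python
import numpy as np

def spd_log(M):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(M)                 # SPD: real, positive eigenvalues
    return (V * np.log(w)) @ V.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance: Frobenius norm of the difference of matrix logs."""
    return np.linalg.norm(spd_log(A) - spd_log(B), ord="fro")

def log_euclidean_mean(mats):
    """Log-Euclidean mean: exponential of the arithmetic mean of the logs."""
    return spd_exp(np.mean([spd_log(M) for M in mats], axis=0))
```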
Aras, N; Altinel, I K; Oommen, J
2003-01-01
In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic that is based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean traveling salesman problem and the Euclidean Hamiltonian path problem. Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems, and then patching these solutions together. No such all-neural solution has ever been reported.
Squared Euclidean distance: a statistical test to evaluate plant community change
Raymond D. Ratliff; Sylvia R. Mori
1993-01-01
The concepts and a procedure for evaluating plant community change using the squared Euclidean distance (SED) resemblance function are described. Analyses are based on the concept that Euclidean distances constitute a sample from a population of distances between sampling units (SUs) for a specific number of times and SUs. With different times, the distances will be...
Gómez, Daviel; Hernández, L Ázaro; Yabor, Lourdes; Beemster, Gerrit T S; Tebbe, Christoph C; Papenbrock, Jutta; Lorenzo, José Carlos
2018-03-15
Plant scientists usually record several indicators in their abiotic factor experiments. The common statistical management involves univariate analyses. Such analyses generally create a split picture of the effects of experimental treatments since each indicator is addressed independently. The Euclidean distance combined with the information of the control treatment could have potential as an integrating indicator. The Euclidean distance has demonstrated its usefulness in many scientific fields but, as far as we know, it has not yet been employed for plant experimental analyses. To exemplify the use of the Euclidean distance in this field, we performed an experiment focused on the effects of mannitol on sugarcane micropropagation in temporary immersion bioreactors. Five mannitol concentrations were compared: 0, 50, 100, 150 and 200 mM. As dependent variables we recorded shoot multiplication rate, fresh weight, and levels of aldehydes, chlorophylls, carotenoids and phenolics. The statistical protocol which we then carried out integrated all dependent variables to easily identify the mannitol concentration that produced the most remarkable integral effect. Results provided by the Euclidean distance demonstrate a gradually increasing distance from the control in function of increasing mannitol concentrations. 200 mM mannitol caused the most significant alteration of sugarcane biochemistry and physiology under the experimental conditions described here. This treatment showed the longest statistically significant Euclidean distance to the control treatment (2.38). In contrast, 50 and 100 mM mannitol showed the lowest Euclidean distances (0.61 and 0.84, respectively) and thus poor integrated effects of mannitol. The analysis shown here indicates that the use of the Euclidean distance can contribute to establishing a more integrated evaluation of the contrasting mannitol treatments.
M-AMST: an automatic 3D neuron tracing method based on mean shift and adapted minimum spanning tree.
Wan, Zhijiang; He, Yishan; Hao, Ming; Yang, Jian; Zhong, Ning
2017-03-29
Understanding the working mechanism of the brain is one of the grandest challenges for modern science. Toward this end, the BigNeuron project was launched to gather a worldwide community to establish a big data resource and a set of state-of-the-art single neuron reconstruction algorithms. Many groups contributed their own algorithms to the project, including our mean shift and minimum spanning tree (M-MST) method. Although M-MST is intuitive and easy to implement, the MST considers only the spatial information of a single neuron and ignores the shape information, which might lead to less precise connections between some neuron segments. In this paper, we propose an improved algorithm, namely M-AMST, in which a rotating sphere model based on coordinate transformation is used to improve the weight calculation method of M-MST. Two experiments are designed to illustrate the effect of the adapted minimum spanning tree algorithm and the adaptability of M-AMST in reconstructing a variety of neuron image datasets, respectively. In experiment 1, taking the reconstruction of APP2 as reference, we produce the four difference scores (entire structure average (ESA), different structure average (DSA), percentage of different structure (PDS) and max distance of neurons' nodes (MDNN)) by comparing the neuron reconstruction of APP2 with those of the other 5 competing algorithms. The results show that M-AMST gets lower difference scores than M-MST in ESA, PDS and MDNN. Meanwhile, M-AMST is better than N-MST in ESA and MDNN. This indicates that utilizing the adapted minimum spanning tree algorithm, which takes the shape information of the neuron into account, can achieve better neuron reconstructions. In experiment 2, 7 neuron image datasets are reconstructed and the four difference scores are calculated by comparing the gold standard reconstruction with the reconstructions produced by 6 competing algorithms. Comparing the four difference scores of M-AMST and the other 5 algorithms, we can conclude that M-AMST achieves the best difference score in 3 datasets and the second-best difference score in the other 2 datasets. We develop a pathway extraction method using a rotating sphere model based on coordinate transformation to improve the weight calculation approach in MST. The experimental results show that M-AMST, by utilizing the adapted minimum spanning tree algorithm which takes the shape information of the neuron into account, can achieve better neuron reconstructions. Moreover, M-AMST is able to produce good neuron reconstructions in a variety of image datasets.
The Common Evolution of Geometry and Architecture from a Geodetic Point of View
NASA Astrophysics Data System (ADS)
Bellone, T.; Fiermonte, F.; Mussio, L.
2017-05-01
Throughout history the link between geometry and architecture has been strong and while architects have used mathematics to construct their buildings, geometry has always been the essential tool allowing them to choose spatial shapes which are aesthetically appropriate. Sometimes it is geometry which drives architectural choices, but at other times it is architectural innovation which facilitates the emergence of new ideas in geometry. Among the best known types of geometry (Euclidean, projective, analytical, topological, descriptive, fractal, …), those most frequently employed in architectural design are: - Euclidean Geometry - Projective Geometry - The non-Euclidean geometries. Entire architectural periods are linked to specific types of geometry. Euclidean geometry, for example, was the basis for architectural styles from Antiquity through to the Romanesque period. Perspective and projective geometry, for their part, were important from the Gothic period through the Renaissance and into the Baroque and Neo-classical eras, while non-Euclidean geometries characterize modern architecture.
40 CFR 60.2945 - Is there a minimum amount of operating parameter monitoring data I must obtain?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Standards of Performance for Other Solid Waste Incineration Units for Which Construction is Commenced After... activities (including, as applicable, calibration checks and required zero and span adjustments of the... municipal or institutional solid waste. (c) If you do not obtain the minimum data required in paragraphs (a...
40 CFR 60.2945 - Is there a minimum amount of operating parameter monitoring data I must obtain?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Standards of Performance for Other Solid Waste Incineration Units for Which Construction is Commenced After... activities (including, as applicable, calibration checks and required zero and span adjustments of the... municipal or institutional solid waste. (c) If you do not obtain the minimum data required in paragraphs (a...
Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios
2017-02-01
Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.
Sovereign debt crisis in the European Union: A minimum spanning tree approach
NASA Astrophysics Data System (ADS)
Dias, João
2012-03-01
In the wake of the financial crisis, sovereign debt crisis has emerged and is severely affecting some countries in the European Union, threatening the viability of the euro and even the EU itself. This paper applies recent developments in econophysics, in particular the minimum spanning tree approach and the associate hierarchical tree, to analyze the asynchronization between the four most affected countries and other resilient countries in the euro area. For this purpose, daily government bond yield rates are used, covering the period from April 2007 to October 2010, thus including yield rates before, during and after the financial crises. The results show an increasing separation of the two groups of euro countries with the deepening of the government bond crisis.
NASA Astrophysics Data System (ADS)
Dong, Keqiang; Zhang, Hong; Gao, You
2017-01-01
Identifying the mutual interactions in an aero-engine gas path system is a crucial problem that facilitates the understanding of emerging structures in complex systems. By applying the multiscale multifractal detrended cross-correlation analysis method to the aero-engine gas path system, the cross-correlation characteristics between gas path system parameters are established. Further, we apply the multiscale multifractal detrended cross-correlation distance matrix and minimum spanning tree to investigate the mutual interactions of gas path variables. The results indicate that the low-spool rotor speed (N1) and engine pressure ratio (EPR) are the main gas path parameters. The application of the proposed method contributes to promoting our understanding of the internal mechanisms and structures of aero-engine dynamics.
Orientation estimation of anatomical structures in medical images for object recognition
NASA Astrophysics Data System (ADS)
Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian
2011-03-01
Recognition of anatomical structures is an important step in model based medical image segmentation. It provides pose estimation of objects and information about "where" roughly the objects are in the image, distinguishing them from other object-like entities. In [1], we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation parameters can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of the anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that the mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than other Euclidean and non-Euclidean metrics.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
..., contact Boeing Commercial Airplanes, Attention: Data & Services Management, P.O. Box 3707, MC 2H-65... contain a provision excluding inspections of areas that are covered by repairs that span a minimum of... a repair even if it does not span a potential scribe by 3 or more fastener rows and there is no...
Dynamics of investor spanning trees around dot-com bubble.
Ranganathan, Sindhuja; Kivelä, Mikko; Kanniainen, Juho
2018-01-01
We identify temporal investor networks for Nokia stock by constructing networks from correlations between investor-specific net-volumes and analyze changes in the networks around dot-com bubble. The analysis is conducted separately for households, financial, and non-financial institutions. Our results indicate that spanning tree measures for households reflected the boom and crisis: the maximum spanning tree measures had a clear upward tendency in the bull markets when the bubble was building up, and, even more importantly, the minimum spanning tree measures pre-reacted the burst of the bubble. At the same time, we find less clear reactions in the minimal and maximal spanning trees of non-financial and financial institutions around the bubble, which suggests that household investors can have a greater herding tendency around bubbles.
Dynamics of investor spanning trees around dot-com bubble
Kivelä, Mikko; Kanniainen, Juho
2018-01-01
We identify temporal investor networks for Nokia stock by constructing networks from correlations between investor-specific net-volumes and analyze changes in the networks around dot-com bubble. The analysis is conducted separately for households, financial, and non-financial institutions. Our results indicate that spanning tree measures for households reflected the boom and crisis: the maximum spanning tree measures had a clear upward tendency in the bull markets when the bubble was building up, and, even more importantly, the minimum spanning tree measures pre-reacted the burst of the bubble. At the same time, we find less clear reactions in the minimal and maximal spanning trees of non-financial and financial institutions around the bubble, which suggests that household investors can have a greater herding tendency around bubbles. PMID:29897973
Urban noise and the cultural evolution of bird songs.
Luther, David; Baptista, Luis
2010-02-07
In urban environments, anthropogenic noise can interfere with animal communication. Here we study the influence of urban noise on the cultural evolution of bird songs. We studied three adjacent dialects of white-crowned sparrow songs over a 30-year time span. Urban noise, which is louder at low frequencies, increased during our study period and therefore should have created a selection pressure for songs with higher frequencies. We found that the minimum frequency of songs increased both within and between dialects during the 30-year time span. For example, the dialect with the highest minimum frequency is in the process of replacing another dialect that has lower frequency songs. Songs with the highest minimum frequency were favoured in this environment and should have the most effective transmission properties. We suggest that one mechanism that influences how dialects, and cultural traits in general, are selected and transmitted from one generation to the next is the dialect's ability to be effectively communicated in the local environment.
ERIC Educational Resources Information Center
Hossain, Md. Mokter
2012-01-01
This mixed methods study examined preservice secondary mathematics teachers' perceptions of a blogging activity used as a supportive teaching-learning tool in a college Euclidean Geometry course. The effect of a 12-week blogging activity that was a standard component of a college Euclidean Geometry course offered for preservice secondary…
Code of Federal Regulations, 2014 CFR
2014-07-01
... and Compliance Times for Other Solid Waste Incineration Units That Commenced Construction On or Before.... An operating day is any day the unit combusts any municipal or institutional solid waste. (d) If you... malfunction or when repairs, calibration checks, or zero and span checks keep you from collecting the minimum...
Code of Federal Regulations, 2012 CFR
2012-07-01
... and Compliance Times for Other Solid Waste Incineration Units That Commenced Construction On or Before.... An operating day is any day the unit combusts any municipal or institutional solid waste. (d) If you... malfunction or when repairs, calibration checks, or zero and span checks keep you from collecting the minimum...
Code of Federal Regulations, 2013 CFR
2013-07-01
... and Compliance Times for Other Solid Waste Incineration Units That Commenced Construction On or Before.... An operating day is any day the unit combusts any municipal or institutional solid waste. (d) If you... malfunction or when repairs, calibration checks, or zero and span checks keep you from collecting the minimum...
Estimating gene function with least squares nonnegative matrix factorization.
Wang, Guoli; Ochs, Michael F
2007-01-01
Nonnegative matrix factorization is a machine learning algorithm that has extracted information from data in a number of fields, including imaging and spectral analysis, text mining, and microarray data analysis. One limitation with the method for linking genes through microarray data in order to estimate gene function is the high variance observed in transcription levels between different genes. Least squares nonnegative matrix factorization uses estimates of the uncertainties on the mRNA levels for each gene in each condition, to guide the algorithm to a local minimum in normalized chi2, rather than a Euclidean distance or divergence between the reconstructed data and the data itself. Herein, application of this method to microarray data is demonstrated in order to predict gene function.
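The objective being minimised, in assumed notation (X is the expression matrix, sigma_ij the per-entry uncertainty, and W, H the nonnegative factors), is a normalized chi-squared rather than a Euclidean distance or divergence:

```latex
% Assumed notation for the least-squares NMF objective described above.
\chi^{2}(W,H) \;=\; \sum_{i,j}
  \frac{\bigl(X_{ij} - (WH)_{ij}\bigr)^{2}}{\sigma_{ij}^{2}},
\qquad W_{ia}\ge 0,\; H_{aj}\ge 0 .
```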
Multi-resolution analysis for ear recognition using wavelet features
NASA Astrophysics Data System (ADS)
Shoaib, M.; Basit, A.; Faye, I.
2016-11-01
Security is very important, and in order to avoid any physical contact, identification of humans while they are moving is necessary. Ear biometrics is one of the methods by which a person can be identified using surveillance cameras. Various techniques have been proposed to improve ear-based recognition systems. In this work, a feature extraction method for human ear recognition based on wavelet transforms is proposed. The proposed features are the approximation coefficients and specific details of level two after applying various types of wavelet transforms. Different wavelet transforms are applied to find the most suitable wavelet. The minimum Euclidean distance is used as the matching criterion. Results achieved by the proposed method are promising, and the method can be used in a real-time ear recognition system.
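A minimal sketch of this style of matching, assuming PyWavelets, a 'db2' mother wavelet, equally sized grayscale ear images, and the level-2 approximation plus level-2 detail coefficients as the feature vector; the wavelet choice and the function names are assumptions, not the paper's exact configuration.

```python
import numpy as np
import pywt

def wavelet_features(img, wavelet="db2"):
    """Level-2 wavelet decomposition; concatenate approximation and level-2 detail coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=2)
    cA2 = coeffs[0]                       # level-2 approximation
    cH2, cV2, cD2 = coeffs[1]             # level-2 details
    return np.concatenate([c.ravel() for c in (cA2, cH2, cV2, cD2)])

def identify(probe, gallery):
    """gallery: dict mapping subject id -> enrolled ear image of the same size as probe."""
    f = wavelet_features(probe)
    return min(gallery, key=lambda s: np.linalg.norm(f - wavelet_features(gallery[s])))
```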
Code Samples Used for Complexity and Control
NASA Astrophysics Data System (ADS)
Ivancevic, Vladimir G.; Reid, Darryn J.
2015-11-01
The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents
Low Density Parity Check Codes: Bandwidth Efficient Channel Coding
NASA Technical Reports Server (NTRS)
Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu
2003-01-01
Low Density Parity Check (LDPC) Codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures which allow for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure. This results in power and size benefits. These codes also have a large minimum distance, as much as d_min = 65, giving them powerful error-correcting capabilities and error floors at very low bit error rates. This paper will present the development of the LDPC flight encoder and decoder, its applications, and its status.
ERIC Educational Resources Information Center
Vaughan, Herbert E.; Szabo, Steven
This is the teacher's edition of a text for the second year of a two-year high school geometry course. The course bases plane and solid geometry and trigonometry on the fact that the translations of a Euclidean space constitute a vector space which has an inner product. Congruence is a geometric topic reserved for Volume 2. Volume 2 opens with an…
Anomalously soft non-Euclidean spring
NASA Astrophysics Data System (ADS)
Levin, Ido; Sharon, Eran
In this work we study the mechanical properties of a frustrated elastic ribbon spring - the non-Euclidean minimal spring. This spring belongs to the family of non-Euclidean plates: it has no spontaneous curvature, but its lateral intrinsic geometry is described by a non-Euclidean reference metric. The reference metric of the minimal spring is hyperbolic, and can be embedded as a minimal surface. We argue that the existence of a continuous set of such isometric minimal surfaces with different extensions leads to a complete degeneracy of the bulk elastic energy of the minimal spring under elongation. This degeneracy is removed only by boundary layer effects. As a result, the mechanical properties of the minimal spring are unusual: the spring is ultra-soft with rigidity that depends on the thickness, t, as t^{7/2}.
Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.
Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli
2016-05-01
Epileptic seizure detection plays an important role in the diagnosis of epilepsy and reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of the symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed by training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstructed residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows its notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously which is more speedy and efficient than traditional seizure detection methods.
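The log-Euclidean Gaussian kernel referred to above can be written compactly; in assumed notation, for SPD covariance descriptors P and Q of two EEG epochs and a bandwidth sigma > 0:

```latex
% Log-Euclidean Gaussian kernel on SPD matrices (notation assumed);
% log denotes the matrix logarithm and the norm is the Frobenius norm.
k(P,Q) \;=\; \exp\!\Bigl(-\tfrac{1}{2\sigma^{2}}
  \bigl\lVert \log P - \log Q \bigr\rVert_{F}^{2}\Bigr).
```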
ERIC Educational Resources Information Center
Bilardello, Nicholas; Valdes, Linda
1998-01-01
Introduces a method for constructing phylogenies using molecular traits and elementary graph theory. Discusses analyzing molecular data and using weighted graphs, minimum-weight spanning trees, and rooted cube phylogenies to display the data. (DDR)
Image Segmentation Using Minimum Spanning Tree
NASA Astrophysics Data System (ADS)
Dewi, M. P.; Armiati, A.; Alvini, S.
2018-04-01
This research aims to segment digital images. The purpose of segmentation is to separate the object from the background so that the main object can be processed for other purposes. Along with the development of technology in digital image processing applications, the segmentation process becomes increasingly necessary. The segmented image, which is the result of the segmentation process, should be accurate, since the next process requires the interpretation of the information in the image. This article discusses the application of the minimum spanning tree of a graph to the segmentation process of digital images. This method is able to separate an object from the background, and the image is converted into a binary image. In this case, the object that is the focus is set to white, while the background is black, or vice versa.
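A hedged sketch of one way to realise MST-based segmentation in the spirit of this article, not necessarily the authors' exact construction: pixels become vertices of a 4-connected grid graph weighted by intensity differences, the MST is computed, and cutting its heaviest edges yields the object/background partition that can then be binarised.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_segment(img, n_cuts=1):
    """Label image pixels by cutting the n_cuts heaviest edges of the grid-graph MST."""
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    rows = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])   # right and down neighbours
    cols = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    v = img.astype(float).ravel()
    wts = np.abs(v[rows] - v[cols]) + 1e-6          # small offset so zero-difference edges are kept
    g = coo_matrix((wts, (rows, cols)), shape=(h * w, h * w))
    mst = minimum_spanning_tree(g).tocoo()
    keep = np.argsort(mst.data)[: max(len(mst.data) - n_cuts, 0)]       # drop the heaviest edges
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])), shape=mst.shape)
    _, labels = connected_components(pruned, directed=False)
    return labels.reshape(h, w)      # e.g. two labels -> object/background map to binarise
```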
Currency crises and the evolution of foreign exchange market: Evidence from minimum spanning tree
NASA Astrophysics Data System (ADS)
Jang, Wooseok; Lee, Junghoon; Chang, Woojin
2011-02-01
We examined the time series properties of the foreign exchange market for 1990-2008 in relation to the history of the currency crises using the minimum spanning tree (MST) approach and made several meaningful observations about the MST of currencies. First, around currency crises, the mean correlation coefficient between currencies decreased whereas the normalized tree length increased. The mean correlation coefficient dropped dramatically passing through the Asian crisis and remained at the lowered level after that. Second, the Euro and the US dollar showed a strong negative correlation after 1997, implying that the prices of the two currencies moved in opposite directions. Third, we observed that Asian countries and Latin American countries moved away from the cluster center (USA) passing through the Asian crisis and Argentine crisis, respectively.
Evolutionary Topology of a Currency Network in Asia
NASA Astrophysics Data System (ADS)
Feng, Xiaobing; Wang, Xiaofan
Although there has recently been extensive research on currency networks using the minimum spanning tree approach, knowledge about the actual evolution of the currency web in Asia is still limited. In this paper, we study the structural evolution of an Asian currency network using daily exchange rate data. It was found that the correlation between Asian currencies and the US Dollar, the previous regional key currency, has become weaker and that intra-Asia interactions have increased. This becomes more salient after the exchange rate reform of China. Different from previous studies, we further reveal that it is trade volume, the national wealth gap, and countries' growth cycles that have contributed to the evolutionary topology of the minimum spanning tree. These findings provide a valuable platform for theoretical modeling and further analysis.
NASA Astrophysics Data System (ADS)
Briceño, Raúl A.; Hansen, Maxwell T.; Monahan, Christopher J.
2017-07-01
Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Finally we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all order in perturbation theory.
Briceno, Raul A.; Hansen, Maxwell T.; Monahan, Christopher J.
2017-07-11
Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Lastly, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.
Anomalously Soft Non-Euclidean Springs
NASA Astrophysics Data System (ADS)
Levin, Ido; Sharon, Eran
2016-01-01
In this work we study the mechanical properties of a frustrated elastic ribbon spring—the non-Euclidean minimal spring. This spring belongs to the family of non-Euclidean plates: it has no spontaneous curvature, but its lateral intrinsic geometry is described by a non-Euclidean reference metric. The reference metric of the minimal spring is hyperbolic, and can be embedded as a minimal surface. We argue that the existence of a continuous set of such isometric minimal surfaces with different extensions leads to a complete degeneracy of the bulk elastic energy of the minimal spring under elongation. This degeneracy is removed only by boundary layer effects. As a result, the mechanical properties of the minimal spring are unusual: the spring is ultrasoft with a rigidity that depends on the thickness t as t^{7/2} and does not explicitly depend on the ribbon's width. Moreover, we show that as the ribbon is widened, the rigidity may even decrease. These predictions are confirmed by a numerical study of a constrained spring. This work is the first to address the unusual mechanical properties of constrained non-Euclidean elastic objects.
Modification of Prim’s algorithm on complete broadcasting graph
NASA Astrophysics Data System (ADS)
Dairina; Arif, Salmawaty; Munzir, Said; Halfiani, Vera; Ramli, Marwan
2017-09-01
Broadcasting is the dissemination of information from one object to another through communication between two objects in a network. Broadcasting among n objects can be accomplished with n - 1 communications and a minimum time of ⌈log₂ n⌉ units. In this paper, weighted graph broadcasting is considered, and the minimum weight of a complete broadcasting graph is determined. A broadcasting graph is said to be complete if every pair of vertices is connected. Thus, determining the minimum weight of a complete broadcasting graph is equivalent to finding a minimum spanning tree of a complete graph. Kruskal's and Prim's algorithms are used to determine the minimum weight of a complete broadcasting graph without regard to the minimum time unit ⌈log₂ n⌉, and a modified Prim's algorithm is developed for the problem with the minimum time unit ⌈log₂ n⌉. As an example case, the training-of-trainers problem is solved using these algorithms.
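For reference, a plain Prim's algorithm on a complete weighted graph given as an adjacency matrix; this is the unmodified starting point, and the paper's modification enforcing the ⌈log₂ n⌉ broadcasting-time constraint is not reproduced here. The function name is an assumption.

```python
import numpy as np

def prim_mst(W, start=0):
    """MST edges [(u, v, weight), ...] of a complete graph given by weight matrix W."""
    W = np.asarray(W, dtype=float)
    n = len(W)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[start] = True
    best_w = W[start].copy()                  # cheapest known edge from the tree to each vertex
    best_u = np.full(n, start)
    best_w[start] = np.inf
    edges = []
    for _ in range(n - 1):
        v = int(np.argmin(best_w))            # closest vertex not yet in the tree
        edges.append((int(best_u[v]), v, float(best_w[v])))
        in_tree[v] = True
        best_w[v] = np.inf
        closer = (W[v] < best_w) & ~in_tree   # vertices now reachable more cheaply via v
        best_w[closer] = W[v][closer]
        best_u[closer] = v
    return edges
```

On a connected graph, Kruskal's algorithm returns a tree of the same total weight, so either can serve as the unconstrained baseline.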
Euclideanization of Maxwell-Chern-Simons theory
NASA Astrophysics Data System (ADS)
Bowman, Daniel Alan
We quantize the theory of electromagnetism in 2 + 1-spacetime dimensions with the addition of the topological Chern-Simons term using an indefinite metric formalism. In the process, we also quantize the Proca and pure Maxwell theories, which are shown to be related to the Maxwell-Chern-Simons theory. Next, we Euclideanize these three theories, obtaining path space formulae and investigating Osterwalder-Schrader positivity in each case. Finally, we obtain a characterization of those Euclidean states that correspond to physical states in the relativistic theories.
Optimal control of multiplicative control systems arising from cancer therapy
NASA Technical Reports Server (NTRS)
Bahrami, K.; Kim, M.
1975-01-01
This study deals with ways of curtailing the rapid growth of cancer cell populations. A performance functional that measures the size of the population at the terminal time as well as the control effort is devised. With use of the discrete maximum principle, the Hamiltonian for this problem is determined and the conditions for optimal solutions are developed. The optimal strategy is shown to be a bang-bang control. It is shown that the optimal control for this problem must be on the vertices of an N-dimensional cube contained in the N-dimensional Euclidean space. An algorithm for obtaining a local minimum of the performance function in an orderly fashion is developed. Application of the algorithm to the design of antitumor drug and X-irradiation schedules is discussed.
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Motkova, A. V.
2018-01-01
A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.
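A minimal sketch of the solution criterion described above, assuming small NumPy arrays as input: it evaluates the weighted objective for a candidate two-cluster partition, with the first center fixed (given) and the second taken as the mean of its cluster. The function name and sample data are illustrative only; this is the objective, not the 2-approximation algorithm itself.

import numpy as np

def partition_cost(points, labels, fixed_center):
    # Weighted sum-of-squared-distances criterion from the abstract.
    # Cluster 0 uses the given fixed_center; cluster 1 uses its own mean.
    # Each cluster's sum of squared distances is weighted by its cardinality.
    c0 = points[labels == 0]
    c1 = points[labels == 1]
    center1 = c1.mean(axis=0)
    cost0 = len(c0) * np.sum((c0 - fixed_center) ** 2)
    cost1 = len(c1) * np.sum((c1 - center1) ** 2)
    return cost0 + cost1

# Toy example in the Euclidean plane (illustrative data)
pts = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [4.9, 5.0]])
lbl = np.array([0, 0, 1, 1])
print(partition_cost(pts, lbl, fixed_center=np.zeros(2)))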
Comparing minimum spanning trees of the Italian stock market using returns and volumes
NASA Astrophysics Data System (ADS)
Coletti, Paolo
2016-12-01
We have built the network of the top 100 Italian quoted companies in the decade 2001-2011 using four different methods, comparing the resulting minimum spanning trees across methods and industry sectors. Our starting method is based on Pearson's correlation of log-returns, used by several other authors in the last decade. The second is based on the correlation of symbolized log-returns, the third on the correlation of log-returns and traded money, and the fourth uses a combination of log-returns with traded money. We show that some sectors correspond to the network's clusters while others are scattered, in particular the trading and apparel sectors. We analyze the different graph measures for the four methods, showing that the introduction of volumes induces larger distances and more homogeneous trees without big clusters.
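The sketch below, under the common convention of mapping Pearson correlations rho to distances d = sqrt(2(1 - rho)), builds a minimum spanning tree from a log-return matrix with SciPy; the ticker count and data are hypothetical, and none of the volume-based variants described above are shown.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def correlation_mst(log_returns):
    # log_returns: (T days) x (N stocks) array. Returns an MST edge list.
    corr = np.corrcoef(log_returns, rowvar=False)     # Pearson correlation
    dist = np.sqrt(2.0 * (1.0 - corr))                # standard correlation-to-distance map
    mst = minimum_spanning_tree(dist).toarray()
    return [(i, j, mst[i, j]) for i in range(mst.shape[0])
            for j in range(mst.shape[1]) if mst[i, j] > 0]

# Hypothetical data: 250 trading days, 5 stocks
rng = np.random.default_rng(0)
r = rng.normal(size=(250, 5))
print(correlation_mst(r))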
Finding minimum spanning trees more efficiently for tile-based phase unwrapping
NASA Astrophysics Data System (ADS)
Sawaf, Firas; Tatam, Ralph P.
2006-06-01
The tile-based phase unwrapping method employs an algorithm for finding the minimum spanning tree (MST) in each tile. We first examine the properties of a tile's representation from a graph theory viewpoint, observing that it is possible to make use of a more efficient class of MST algorithms. We then describe a novel linear time algorithm which reduces the size of the MST problem by half at the least, and solves it completely at best. We also show how this algorithm can be applied to a tile using a sliding window technique. Finally, we show how the reduction algorithm can be combined with any other standard MST algorithm to achieve a more efficient hybrid, using Prim's algorithm for empirical comparison and noting that the reduction algorithm takes only 0.1% of the time taken by the overall hybrid.
A New Computational Method to Fit the Weighted Euclidean Distance Model.
ERIC Educational Resources Information Center
De Leeuw, Jan; Pruzansky, Sandra
1978-01-01
A computational method for weighted Euclidean distance scaling (a method of multidimensional scaling) which combines aspects of an "analytic" solution with an approach using loss functions is presented. (Author/JKS)
ERIC Educational Resources Information Center
Rogers, Pat
1972-01-01
Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)
NASA Astrophysics Data System (ADS)
de Wit, Bernard; Reys, Valentin
2017-12-01
Supergravity with eight supercharges in a four-dimensional Euclidean space is constructed at the full non-linear level by performing an off-shell time-like reduction of five-dimensional supergravity. The resulting four-dimensional theory is realized off-shell with the Weyl, vector and tensor supermultiplets and a corresponding multiplet calculus. Hypermultiplets are included as well, but they are themselves only realized with on-shell supersymmetry. We also briefly discuss the non-linear supermultiplet. The off-shell reduction leads to a full understanding of the Euclidean theory. A complete multiplet calculus is presented along the lines of the Minkowskian theory. Unlike in Minkowski space, chiral and anti-chiral multiplets are real and supersymmetric actions are generally unbounded from below. Precisely as in the Minkowski case, where one has different formulations of Poincaré supergravity upon introducing different compensating supermultiplets, one can also obtain different versions of Euclidean supergravity.
Flexible intuitions of Euclidean geometry in an Amazonian indigene group
Izard, Véronique; Pica, Pierre; Spelke, Elizabeth S.; Dehaene, Stanislas
2011-01-01
Kant argued that Euclidean geometry is synthesized on the basis of an a priori intuition of space. This proposal inspired much behavioral research probing whether spatial navigation in humans and animals conforms to the predictions of Euclidean geometry. However, Euclidean geometry also includes concepts that transcend the perceptible, such as objects that are infinitely small or infinitely large, or statements of necessity and impossibility. We tested the hypothesis that certain aspects of nonperceptible Euclidean geometry map onto intuitions of space that are present in all humans, even in the absence of formal mathematical education. Our tests probed intuitions of points, lines, and surfaces in participants from an indigene group in the Amazon, the Mundurucu, as well as adults and age-matched control children from the United States and France and younger US children without education in geometry. The responses of Mundurucu adults and children converged with those of mathematically educated adults and children and revealed an intuitive understanding of essential properties of Euclidean geometry. For instance, on a surface described to them as perfectly planar, the Mundurucu's estimations of the internal angles of triangles added up to ∼180 degrees, and when asked explicitly, they stated that there exists one single parallel line to any given line through a given point. These intuitions were also partially in place in the group of younger US participants. We conclude that, during childhood, humans develop geometrical intuitions that spontaneously accord with the principles of Euclidean geometry, even in the absence of training in mathematics. PMID:21606377
NASA Technical Reports Server (NTRS)
Lamar, J. E.
1976-01-01
A new subsonic method has been developed by which the mean camber surface can be determined for trimmed noncoplanar planforms with minimum vortex drag. This method uses a vortex lattice and overcomes previous difficulties with chord loading specification. A Trefftz plane analysis is utilized to determine the optimum span loading for minimum drag; the mean camber surface of the wing that provides the required loading is then solved for. Sensitivity studies, comparisons with other theories, and applications to configurations which include a tandem wing and a wing-winglet combination have been made and are presented.
NASA Technical Reports Server (NTRS)
Dowker, Fay; Gregory, Ruth; Traschen, Jennie
1991-01-01
We argue the existence of solutions of the Euclidean Einstein equations that correspond to a vortex sitting at the horizon of a black hole. We find the asymptotic behaviors, at the horizon and at infinity, of vortex solutions for the gauge and scalar fields in an abelian Higgs model on a Euclidean Schwarzschild background and interpolate between them by integrating the equations numerically. Calculating the backreaction shows that the effect of the vortex is to cut a slice out of the Schwarzschild geometry. Consequences of these solutions for black hole thermodynamics are discussed.
Authenticating concealed private data while maintaining concealment
Thomas, Edward V [Albuquerque, NM; Draelos, Timothy J [Albuquerque, NM
2007-06-26
A method of and system for authenticating concealed and statistically varying multi-dimensional data comprising: acquiring an initial measurement of an item, wherein the initial measurement is subject to measurement error; applying a transformation to the initial measurement to generate reference template data; acquiring a subsequent measurement of an item, wherein the subsequent measurement is subject to measurement error; applying the transformation to the subsequent measurement; and calculating a Euclidean distance metric between the transformed measurements; wherein the calculated Euclidean distance metric is identical to a Euclidean distance metric between the measurements prior to transformation.
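A minimal sketch of the property claimed in the last clause, assuming the transformation is an orthogonal (rotation) matrix, which preserves Euclidean distances; the matrix, data, and noise level are illustrative and this is not the patented method itself.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical orthogonal transformation (QR factor of a random matrix)
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))

reference = rng.normal(size=8)                          # initial (noisy) measurement
measurement = reference + 0.01 * rng.normal(size=8)     # subsequent measurement

d_plain = np.linalg.norm(reference - measurement)
d_hidden = np.linalg.norm(Q @ reference - Q @ measurement)

# Orthogonal maps are isometries, so the two distances agree
assert np.isclose(d_plain, d_hidden)
print(d_plain, d_hidden)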
Goldstein, R.M.; Meador, M.R.
2005-01-01
We used species traits to examine the variation in fish assemblages for 21 streams in the Northern Lakes and Forests Ecoregion along a gradient of habitat disturbance. Fish species were classified based on five species trait-classes (trophic ecology, substrate preference, geomorphic preference, locomotion morphology, and reproductive strategy) and 29 categories within those classes. We used a habitat quality index to define a reference stream and then calculated Euclidean distances between the reference and each of the other sites for the five traits. Three levels of species trait analyses were conducted: (1) a composite measure (the sum of Euclidean distances across all five species traits), (2) Euclidean distances for the five individual species trait-classes, and (3) frequencies of occurrence of individual trait categories. The composite Euclidean distance was significantly correlated to the habitat index (r = -0.81; P = 0.001), as were the Euclidean distances for four of the five individual species traits (substrate preference: r = -0.70, P = 0.001; geomorphic preference: r = -0.69, P = 0.001; trophic ecology: r = -0.73, P = 0.001; and reproductive strategy: r = -0.64, P = 0.002). Although Euclidean distances for locomotion morphology were not significantly correlated to habitat index scores (r = -0.21; P = 0.368), analysis of variance and principal components analysis indicated that Euclidean distances for locomotion morphology contributed to significant variation in the fish assemblages among sites. Examination of trait categories indicated that low habitat index scores (degraded streams) were associated with changes in frequency of occurrence within the categories of all five of the species traits. Though the objectives and spatial scale of a study will dictate the level of species trait information required, our results suggest that species traits can provide critical information at multiple levels of data analysis. © Copyright by the American Fisheries Society 2005.
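A small sketch of the composite measure described above, assuming each stream is summarized by a frequency-of-occurrence vector per trait class; the trait-class names and numbers below are invented for illustration.

import numpy as np

def composite_distance(reference, site):
    # Sum of Euclidean distances over the five species-trait classes.
    # reference, site: dicts mapping trait-class name -> frequency vector.
    return sum(np.linalg.norm(np.asarray(reference[k]) - np.asarray(site[k]))
               for k in reference)

# Illustrative two-category vectors for each of the five trait classes
ref = {"trophic": [0.6, 0.4], "substrate": [0.7, 0.3], "geomorphic": [0.5, 0.5],
       "locomotion": [0.8, 0.2], "reproductive": [0.4, 0.6]}
site = {"trophic": [0.3, 0.7], "substrate": [0.6, 0.4], "geomorphic": [0.5, 0.5],
        "locomotion": [0.7, 0.3], "reproductive": [0.2, 0.8]}
print(composite_distance(ref, site))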
Batchelder, Kendra A; Tanenbaum, Aaron B; Albert, Seth; Guimond, Lyne; Kestener, Pierre; Arneodo, Alain; Khalil, Andre
2014-01-01
The 2D Wavelet-Transform Modulus Maxima (WTMM) method was used to detect microcalcifications (MC) in human breast tissue seen in mammograms and to characterize the fractal geometry of benign and malignant MC clusters. This was done in the context of a preliminary analysis of a small dataset, via a novel way to partition the wavelet-transform space-scale skeleton. For the first time, the estimated 3D fractal structure of a breast lesion was inferred by pairing the information from two separate 2D projected mammographic views of the same breast, i.e. the cranial-caudal (CC) and mediolateral-oblique (MLO) views. As a novelty, we define the "CC-MLO fractal dimension plot", where a "fractal zone" and "Euclidean zones" (non-fractal) are defined. 118 images (59 cases, 25 malignant and 34 benign) obtained from a digital databank of mammograms with known radiologist diagnostics were analyzed to determine which cases would be plotted in the fractal zone and which cases would fall in the Euclidean zones. 92% of malignant breast lesions studied (23 out of 25 cases) were in the fractal zone while 88% of the benign lesions were in the Euclidean zones (30 out of 34 cases). Furthermore, a Bayesian statistical analysis shows that, with 95% credibility, the probability that fractal breast lesions are malignant is between 74% and 98%. Alternatively, with 95% credibility, the probability that Euclidean breast lesions are benign is between 76% and 96%. These results support the notion that the fractal structure of malignant tumors is more likely to be associated with an invasive behavior into the surrounding tissue compared to the less invasive, Euclidean structure of benign tumors. Finally, based on indirect 3D reconstructions from the 2D views, we conjecture that all breast tumors considered in this study, benign and malignant, fractal or Euclidean, restrict their growth to 2-dimensional manifolds within the breast tissue.
Fuzzy Euclidean wormholes in de Sitter space
NASA Astrophysics Data System (ADS)
Chen, Pisin; Hu, Yao-Chieh; Yeom, Dong-han
2017-07-01
We investigate Euclidean wormholes in Einstein gravity with a massless scalar field in de Sitter space. Euclidean wormholes are possible due to the analytic continuation of the time as well as complexification of fields, where we need to impose the classicality after the Wick-rotation to the Lorentzian signatures. For some parameters, wormholes are preferred than Hawking-Moss instantons, and hence wormholes can be more fundamental than Hawking-Moss type instantons. Euclidean wormholes can be interpreted in three ways: (1) classical big bounce, (2) either tunneling from a small to a large universe or a creation of a collapsing and an expanding universe from nothing, and (3) either a transition from a contracting to a bouncing phase or a creation of two expanding universes from nothing. These various interpretations shed some light on challenges of singularities. In addition, these will help to understand tensions between various kinds of quantum gravity theories.
Contracted time and expanded space: The impact of circumnavigation on judgements of space and time.
Brunec, Iva K; Javadi, Amir-Homayoun; Zisch, Fiona E L; Spiers, Hugo J
2017-09-01
The ability to estimate distance and time to spatial goals is fundamental for survival. In cases where a region of space must be navigated around to reach a location (circumnavigation), the distance along the path is greater than the straight-line Euclidean distance. To explore how such circumnavigation impacts on estimates of distance and time, we tested participants on their ability to estimate travel time and Euclidean distance to learned destinations in a virtual town. Estimates for approximately linear routes were compared with estimates for routes requiring circumnavigation. For all routes, travel times were significantly underestimated, and Euclidean distances overestimated. For routes requiring circumnavigation, travel time was further underestimated and the Euclidean distance further overestimated. Thus, circumnavigation appears to enhance existing biases in representations of travel time and distance. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
Rényi indices of financial minimum spanning trees
NASA Astrophysics Data System (ADS)
Nie, Chun-Xiao; Song, Fu-Tie; Li, Sai-Ping
2016-02-01
The Rényi index is used here to describe topological structures of minimum spanning trees (MSTs) of financial markets. We categorize the topological structures of MSTs as dragon, star and super-star types. The MST based on Geometric Brownian motion is of the dragon type, the MST constructed by the One-Factor Model is of the super-star type, and most MSTs based on real market data belong to the star type. The Rényi index of the MST corresponding to the S&P500 is evaluated, and the result shows that the Rényi index varies significantly in different time periods. In particular, it rose during crises and dropped when the S&P500 index rose significantly. A comparison study between the CSI300 index of the Chinese market and the S&P500 index shows that the MST structure of the CSI300 index varies more dramatically than the MST structure of the S&P500.
A tool for filtering information in complex systems
NASA Astrophysics Data System (ADS)
Tumminello, M.; Aste, T.; Di Matteo, T.; Mantegna, R. N.
2005-07-01
We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties. This paper was submitted directly (Track II) to the PNAS office. Abbreviations: MST, minimum spanning tree; PMFG, Planar Maximally Filtered Graph; r-clique, clique of r elements.
Pruning a minimum spanning tree
NASA Astrophysics Data System (ADS)
Sandoval, Leonidas
2012-04-01
This work employs various techniques in order to filter random noise from the information provided by minimum spanning trees obtained from the correlation matrices of international stock market indices prior to and during times of crisis. The first technique establishes a threshold above which connections are considered affected by noise, based on the study of random networks with the same probability density distribution of the original data. The second technique is to judge the strength of a connection by its survival rate, which is the amount of time a connection between two stock market indices endures. The idea is that true connections will survive for longer periods of time, and that random connections will not. That information is then combined with the information obtained from the first technique in order to create a smaller network, in which most of the connections are either strong or enduring in time.
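As a rough sketch of the two filters combined at the end of the abstract, the function below keeps an edge only if its correlation-based weight beats a noise threshold or it survives in enough of the rolling-window trees; the threshold, survival cutoff, and edge list are hypothetical illustrations rather than the study's calibrated values.

def filter_edges(edges, survival, noise_threshold=0.4, min_survival=0.7):
    # edges: list of (i, j, weight); survival: dict (i, j) -> fraction of
    # time windows in which the edge appeared in the MST (both hypothetical).
    kept = []
    for i, j, w in edges:
        strong = w <= noise_threshold            # below the random-network noise level
        enduring = survival.get((i, j), 0.0) >= min_survival
        if strong or enduring:
            kept.append((i, j, w))
    return kept

edges = [(0, 1, 0.2), (1, 2, 0.9), (2, 3, 0.5)]
survival = {(0, 1): 0.9, (1, 2): 0.1, (2, 3): 0.8}
print(filter_edges(edges, survival))   # drops the weak, short-lived edge (1, 2)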
Metrics in Keplerian orbits quotient spaces
NASA Astrophysics Data System (ADS)
Milanov, Danila V.
2018-03-01
Quotient spaces of Keplerian orbits are important instruments for the modelling of orbit samples of celestial bodies on a large time span. We suppose that variations of the orbital eccentricities, inclinations and semi-major axes remain sufficiently small, while arbitrary perturbations are allowed for the arguments of pericentres or longitudes of the nodes, or both. The distance between orbits or their images in quotient spaces serves as a numerical criterion for such problems of Celestial Mechanics as the search for a common origin of meteoroid streams, comets, and asteroids, the identification of asteroid families, and others. In this paper, we consider quotient sets of the non-rectilinear Keplerian orbits space H. Their elements are identified irrespective of the values of pericentre arguments or node longitudes. We prove that distance functions on the quotient sets, introduced in Kholshevnikov et al. (Mon Not R Astron Soc 462:2275-2283, 2016), satisfy metric space axioms and discuss the theoretical and practical importance of this result. Isometric embeddings of the quotient spaces into R^n, and a space of compact subsets of H with the Hausdorff metric, are constructed. The Euclidean representations of the orbit spaces find their applications in a problem of orbit averaging and in computational algorithms specific to Euclidean space. We also explore completions of H and its quotient spaces with respect to corresponding metrics and establish a relation between elements of the extended spaces and rectilinear trajectories. The distance between an orbit and subsets of elliptic and hyperbolic orbits is calculated. This quantity provides an upper bound for the metric value in a problem of close orbits identification. Finally, the invariance of the equivalence relations in H under coordinate changes is discussed.
A mixing timescale model for TPDF simulations of turbulent premixed flames
Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.; ...
2017-02-06
Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive scalar mixing across different flame regimes are appropriately accounted for.
A numerical study of mixing in stationary, nonpremixed, turbulent reacting flows
NASA Astrophysics Data System (ADS)
Overholt, Matthew Ryan
1998-10-01
In this work a detailed numerical study is made of a statistically-stationary, non-premixed, turbulent reacting model flow known as Periodic Reaction Zones. The mixture fraction-progress variable approach is used, with a mean gradient in the mixture fraction and a model, single-step, reversible, finite-rate thermochemistry, yielding both stationary and local extinction behavior. The passive scalar is studied first, using a statistical forcing scheme to achieve stationarity of the velocity field. Multiple independent direct numerical simulations (DNS) are performed for a wide range of Reynolds numbers with a number of results including a bilinear model for scalar mixing jointly conditioned on the scalar and x2-component of velocity, Gaussian scalar probability density function tails which were anticipated to be exponential, and the quantification of the dissipation of scalar flux. A new deterministic forcing scheme for DNS is then developed which yields reduced fluctuations in many quantities and a more natural evolution of the velocity fields. This forcing method is used for the final portion of this work. DNS results for Periodic Reaction Zones are compared with the Conditional Moment Closure (CMC) model, the Quasi-Equilibrium Distributed Reaction (QEDR) model, and full probability density function (PDF) simulations using the Euclidean Minimum Spanning Tree (EMST) and the Interaction by Exchange with the Mean (IEM) mixing models. It is shown that CMC and QEDR results based on the local scalar dissipation match DNS wherever local extinction is not present. However, due to the large spatial variations of scalar dissipation, and hence local Damkohler number, local extinction is present even when the global Damkohler number is twenty-five times the critical value for extinction. Finally, in the PDF simulations the EMST mixing model closely reproduces CMC and DNS results when local extinction is not present, whereas the IEM model results in large error.
Combined-probability space and certainty or uncertainty relations for a finite-level quantum system
NASA Astrophysics Data System (ADS)
Sehrawat, Arun
2017-08-01
The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d -level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.
Wind Observations of Anomalous Cosmic Rays from Solar Minimum to Maximum
NASA Technical Reports Server (NTRS)
Reames, D. V.; McDonald, F. B.
2003-01-01
We report the first observation near Earth of the time behavior of anomalous cosmic-ray N, O, and Ne ions through the period surrounding the maximum of the solar cycle. These observations were made by the Wind spacecraft during the 1995-2002 period spanning times from solar minimum through solar maximum. Comparison of anomalous and galactic cosmic rays provides a powerful tool for the study of the physics of solar modulation throughout the solar cycle.
Deficiency mapping of quantitative trait loci affecting longevity in Drosophila melanogaster.
Pasyukova, E G; Vieira, C; Mackay, T F
2000-01-01
In a previous study, sex-specific quantitative trait loci (QTL) affecting adult longevity were mapped by linkage to polymorphic roo transposable element markers, in a population of recombinant inbred lines derived from the Oregon and 2b strains of Drosophila melanogaster. Two life span QTL were each located on chromosomes 2 and 3, within sections 33E-46C and 65D-85F on the cytological map, respectively. We used quantitative deficiency complementation mapping to further resolve the locations of life span QTL within these regions. The Oregon and 2b strains were each crossed to 47 deficiencies spanning cytological regions 32F-44E and 64C-76B, and quantitative failure of the QTL alleles to complement the deficiencies was assessed. We initially detected a minimum of five and four QTL in the chromosome 2 and 3 regions, respectively, illustrating that multiple linked factors contribute to each QTL detected by recombination mapping. The QTL locations inferred from deficiency mapping did not generally correspond to those of candidate genes affecting oxidative and thermal stress or glucose metabolism. The chromosome 2 QTL in the 35B-E region was further resolved to a minimum of three tightly linked QTL, containing six genetically defined loci, 24 genes, and predicted genes that are positional candidates corresponding to life span QTL. This region was also associated with quantitative variation in life span in a sample of 10 genotypes collected from nature. Quantitative deficiency complementation is an efficient method for fine-scale QTL mapping in Drosophila and can be further improved by controlling the background genotype of the strains to be tested. PMID:11063689
NASA Technical Reports Server (NTRS)
Janich, Karl W.
2005-01-01
The At-Least version of the Generalized Minimum Spanning Tree Problem (L-GMST) is a problem in which the optimal solution connects all defined clusters of nodes in a given network at a minimum cost. The L-GMST is NP-hard; therefore, metaheuristic algorithms have been used to find reasonable solutions to the problem as opposed to computationally feasible exact algorithms, which many believe do not exist for such a problem. One such metaheuristic uses a swarm-intelligent Ant Colony System (ACS) algorithm, in which agents converge on a solution through the weighing of local heuristics, such as the shortest available path and the number of agents that recently used a given path. However, in a network using a solution derived from the ACS algorithm, some nodes may move around to different clusters and cause small changes in the network makeup. Rerunning the algorithm from the start would be somewhat inefficient due to the significance of the changes, so a genetic algorithm based on the top few solutions found in the ACS algorithm is proposed to quickly and efficiently adapt the network to these small changes.
The Effective Dynamics of the Volume Preserving Mean Curvature Flow
NASA Astrophysics Data System (ADS)
Chenn, Ilias; Fournodavlos, G.; Sigal, I. M.
2018-04-01
We consider the dynamics of small closed submanifolds (`bubbles') under the volume preserving mean curvature flow. We construct a map from (n+1)-dimensional Euclidean space into a given (n+1)-dimensional Riemannian manifold which characterizes the existence, stability and dynamics of constant mean curvature submanifolds. This is done in terms of a reduced area function on the Euclidean space, which is given constructively and can be computed perturbatively. This allows us to derive adiabatic and effective dynamics of the bubbles. The results can be mapped by rescaling to the dynamics of fixed size bubbles in almost Euclidean Riemannian manifolds.
Conroy-Beam, Daniel; Buss, David M.
2016-01-01
Prior mate preference research has focused on the content of mate preferences. Yet in real life, people must select mates among potentials who vary along myriad dimensions. How do people incorporate information on many different mate preferences in order to choose which partner to pursue? Here, in Study 1, we compare seven candidate algorithms for integrating multiple mate preferences in a competitive agent-based model of human mate choice evolution. This model shows that a Euclidean algorithm is the most evolvable solution to the problem of selecting fitness-beneficial mates. Next, across three studies of actual couples (Study 2: n = 214; Study 3: n = 259; Study 4: n = 294) we apply the Euclidean algorithm toward predicting mate preference fulfillment overall and preference fulfillment as a function of mate value. Consistent with the hypothesis that mate preferences are integrated according to a Euclidean algorithm, we find that actual mates lie close in multidimensional preference space to the preferences of their partners. Moreover, this Euclidean preference fulfillment is greater for people who are higher in mate value, highlighting theoretically-predictable individual differences in who gets what they want. These new Euclidean tools have important implications for understanding real-world dynamics of mate selection. PMID:27276030
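A minimal sketch of the Euclidean integration idea, assuming each person's preferences and each partner's traits are vectors on the same dimensions; preference fulfillment is taken, purely for illustration, as the negative Euclidean distance between them. The dimension names and ratings are invented.

import numpy as np

def preference_fulfillment(preferences, partner_traits):
    # Euclidean mate-preference integration: partners closer in
    # multidimensional preference space fulfil preferences better.
    return -np.linalg.norm(np.asarray(preferences) - np.asarray(partner_traits))

# Hypothetical ratings on four dimensions (e.g. kindness, intelligence, ...)
my_prefs = [6.0, 5.5, 4.0, 6.5]
partner_a = [5.8, 5.0, 4.5, 6.0]
partner_b = [3.0, 2.0, 6.0, 3.5]
# The partner at the smaller distance (higher score) is the better match
print(preference_fulfillment(my_prefs, partner_a),
      preference_fulfillment(my_prefs, partner_b))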
Orthogonal Array Testing for Transmit Precoding based Codebooks in Space Shift Keying Systems
NASA Astrophysics Data System (ADS)
Al-Ansi, Mohammed; Alwee Aljunid, Syed; Sourour, Essam; Mat Safar, Anuar; Rashidi, C. B. M.
2018-03-01
In Space Shift Keying (SSK) systems, transmit precoding based codebook approaches have been proposed to improve the performance in limited feedback channels. The receiver performs an exhaustive search in a predefined Full-Combination (FC) codebook to select the optimal codeword that maximizes the Minimum Euclidean Distance (MED) between the received constellations. This research aims to reduce the codebook size with the purpose of minimizing the selection time and the number of feedback bits. Therefore, we propose to construct the codebooks based on Orthogonal Array Testing (OAT) methods due to their powerful inherent properties. These methods make it possible to acquire a short codebook whose codewords are sufficient to cover almost all the possible effects included in the FC codebook. Numerical results show the effectiveness of the proposed OAT codebooks in terms of the system performance and complexity.
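A small sketch of the receiver-side selection step described above: for each candidate precoder it computes the minimum Euclidean distance between the resulting received SSK constellation points and returns the codeword with the largest MED. The channel, the diagonal phase codebook, and the dimensions are hypothetical, not the OAT construction itself.

import numpy as np

def select_codeword(H, codebook):
    # H: Nr x Nt channel matrix. codebook: list of Nt x Nt precoders.
    # In SSK, symbol k activates transmit antenna k, so the received
    # constellation points are the columns of H @ P.
    best, best_med = None, -np.inf
    for idx, P in enumerate(codebook):
        cols = (H @ P).T                 # one constellation point per antenna
        med = min(np.linalg.norm(cols[i] - cols[j])
                  for i in range(len(cols)) for j in range(i + 1, len(cols)))
        if med > best_med:
            best, best_med = idx, med
    return best, best_med

rng = np.random.default_rng(2)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
codebook = [np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, 4))) for _ in range(8)]
print(select_codeword(H, codebook))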
Door Security using Face Detection and Raspberry Pi
NASA Astrophysics Data System (ADS)
Bhutra, Venkatesh; Kumar, Harshav; Jangid, Santosh; Solanki, L.
2018-03-01
With the world moving towards advanced technologies, security forms a crucial part of daily life. Among the many techniques used for this purpose, Face Recognition stands as an effective means of authentication and security. This paper deals with the use of principal component analysis (PCA) for security. PCA is a statistical approach used to simplify a data set. The minimum Euclidean distance found from the PCA technique is used to recognize the face. Raspberry Pi, a low-cost ARM-based computer on a small circuit board, controls the servo motor and other sensors. The servo motor is in turn attached to the doors of the home and opens up when the face is recognized. The proposed work has been done using a self-made training database of students from B.K. Birla Institute of Engineering and Technology, Pilani, Rajasthan, India.
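A condensed sketch of the recognition step described above (PCA projection followed by a minimum-Euclidean-distance match), using scikit-learn's PCA on a hypothetical training set; the GPIO/servo side of the system is not shown, and the array shapes, label layout, and threshold are assumptions.

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical training data: 20 flattened face images of 32x32 pixels
rng = np.random.default_rng(3)
train_faces = rng.normal(size=(20, 32 * 32))
train_labels = np.repeat(np.arange(5), 4)      # 5 enrolled people, 4 images each

pca = PCA(n_components=10).fit(train_faces)
train_proj = pca.transform(train_faces)

def recognize(face, threshold=50.0):
    # Project the probe face and return the label of the nearest training
    # projection, or None if the minimum distance exceeds the threshold.
    probe = pca.transform(face.reshape(1, -1))
    dists = np.linalg.norm(train_proj - probe, axis=1)
    k = int(np.argmin(dists))
    return (train_labels[k] if dists[k] < threshold else None), dists[k]

print(recognize(train_faces[7] + 0.1 * rng.normal(size=32 * 32)))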
The depth estimation of 3D face from single 2D picture based on manifold learning constraints
NASA Astrophysics Data System (ADS)
Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia
2018-04-01
The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; reconstructing the 3D face depth information from the selected optimal subset greatly reduces the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the feature point information before the dimension reduction of each cluster center is calculated. The category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only in the optimal subset to estimate the depth value information of the 83 feature points of the 2D face images, achieving the final depth estimation results; thus the computational complexity is greatly reduced. Compared with the traditional traversal search estimation method, although the proposed method's error rate is reduced by 0.49, the number of searches decreases with the change of the category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
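A rough sketch of the subset-selection step only: the training set is partitioned with K-means and the cluster whose centre lies at minimum Euclidean distance from the query's feature vector is chosen, so that only that subset is searched later. The embedding and depth-estimation steps (t-SNE, the method of Kong D) are not reproduced, and all data below are synthetic.

import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in: 200 training faces, each with 83 2-D feature points
rng = np.random.default_rng(4)
train_features = rng.normal(size=(200, 83 * 2))

# Partition the training database into subsets
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(train_features)

def select_subset(query_features):
    # Pick the cluster whose centre is at minimum Euclidean distance
    # from the query; only that subset is searched in the depth step.
    d = np.linalg.norm(km.cluster_centers_ - query_features, axis=1)
    best = int(np.argmin(d))
    subset = np.where(km.labels_ == best)[0]
    return best, subset

query = rng.normal(size=83 * 2)
cluster_id, subset_indices = select_subset(query)
print(cluster_id, len(subset_indices))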
Stellar Magnetic Activity Cycles, and Hunting for Maunder Minimum-like Events among Sun-like Stars
NASA Astrophysics Data System (ADS)
Wright, J. T.
2016-12-01
Since 1966, astronomers have been making measurements of the chromospheric activity levels of Sun-like stars. Recently, the decades-long Mount Wilson data became public (spanning 1966-1995) complementing the published measurements from the California & Carnegie Planet Survey (1995-2011) and ongoing measurements ancillary to radial velocity planet searches at Keck Observatory. I will discuss what these long time series reveal about stellar magnetic activity cycles, and the prevalence of stars in states analogous to the Sun's Maunder Minimum.
Kim, Dajeong; Kyung, Jangbeen; Park, Dongsun; Choi, Ehn-Kyoung; Kim, Kwang Sei; Shin, Kyungha; Lee, Hangyoung; Shin, Il Seob; Kang, Sung Keun
2015-01-01
Aging brings about the progressive decline in cognitive function and physical activity, along with losses of stem cell population and function. Although transplantation of muscle-derived stem/progenitor cells extended the health span and life span of progeria mice, such effects in normal animals were not confirmed. Human amniotic membrane-derived mesenchymal stem cells (AMMSCs) or adipose tissue-derived mesenchymal stem cells (ADMSCs) (1 × 10^6 cells per rat) were intravenously transplanted to 10-month-old male F344 rats once a month throughout their lives. Transplantation of AMMSCs and ADMSCs improved cognitive and physical functions of naturally aging rats, extending life span by 23.4% and 31.3%, respectively. The stem cell therapy increased the concentration of acetylcholine and recovered neurotrophic factors in the brain and muscles, leading to restoration of microtubule-associated protein 2, cholinergic and dopaminergic nervous systems, microvessels, muscle mass, and antioxidative capacity. The results indicate that repeated transplantation of AMMSCs and ADMSCs elongate both health span and life span, which could be a starting point for antiaging or rejuvenation effects of allogeneic or autologous stem cells with minimum immune rejection. Significance: This study demonstrates that repeated treatment with stem cells in normal animals has antiaging potential, extending health span and life span. Because antiaging and prolonged life span are issues currently of interest, these results are significant for readers and investigators. PMID:26315571
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.
1987-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
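As an illustrative sketch only, the functions below run the polynomial extended Euclidean (Sugiyama-style) algorithm over a small prime field GF(p), stopping when the remainder's degree drops below t; the remainder then plays the role of the evaluator polynomial and the accumulated multiplier the role of the locator polynomial. Real RS decoders work over GF(2^m) and include the erasure initialization described above, which is not shown here; polynomials are coefficient lists, lowest degree first, and the syndrome values are hypothetical.

def poly_trim(a):
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def poly_divmod(a, b, p):
    # Polynomial division over GF(p); coefficient lists, lowest degree first.
    a, b = poly_trim(list(a)), poly_trim(list(b))
    inv_lead = pow(b[-1], -1, p)
    q, r = [0] * max(1, len(a) - len(b) + 1), list(a)
    while len(poly_trim(r)) >= len(b) and poly_trim(r) != [0]:
        r = poly_trim(r)
        shift = len(r) - len(b)
        coef = (r[-1] * inv_lead) % p
        q[shift] = coef
        for i, bc in enumerate(b):
            r[shift + i] = (r[shift + i] - coef * bc) % p
    return poly_trim(q), poly_trim(r)

def poly_mul(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return poly_trim(out)

def poly_sub(a, b, p):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return poly_trim([(x - y) % p for x, y in zip(a, b)])

def key_equation(syndrome, t, p):
    # Run the extended Euclidean algorithm on x^(2t) and S(x),
    # stopping once deg(remainder) < t.
    r_prev, r = [0] * (2 * t) + [1], poly_trim(list(syndrome))
    u_prev, u = [0], [1]              # tracks the multiplier of S(x)
    while len(r) - 1 >= t:
        quot, rem = poly_divmod(r_prev, r, p)
        r_prev, r = r, rem
        u_prev, u = u, poly_sub(u_prev, poly_mul(quot, u, p), p)
    return u, r                       # locator sigma(x), evaluator omega(x)

# Hypothetical syndrome over GF(7) with t = 2 (illustration only)
print(key_equation([3, 1, 4, 1], t=2, p=7))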
Approximability of the d-dimensional Euclidean capacitated vehicle routing problem
NASA Astrophysics Data System (ADS)
Khachay, Michael; Dubinin, Roman
2016-10-01
Capacitated Vehicle Routing Problem (CVRP) is the well known intractable combinatorial optimization problem, which remains NP-hard even in the Euclidean plane. Since the introduction of this problem in the middle of the 20th century, many researchers have been involved in the study of its approximability. Most of the results obtained in this field are based on the well known Iterated Tour Partition heuristic proposed by M. Haimovich and A. Rinnooy Kan in their celebrated paper, where they construct the first Polynomial Time Approximation Scheme (PTAS) for the single depot CVRP in ℝ^2. For decades, this result was extended by many authors to numerous useful modifications of the problem taking into account multiple depots, pick up and delivery options, time window restrictions, etc. But, to the best of our knowledge, almost none of these results go beyond the Euclidean plane. In this paper, we try to bridge this gap and propose an EPTAS for the Euclidean CVRP for any fixed dimension.
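A toy sketch of the Iterated Tour Partition idea referenced above: an existing TSP tour over the customers is cut into consecutive groups of at most q customers, and each group is served by one vehicle route closed through the depot. The tour ordering, capacity, and coordinates are invented, and no optimality or approximation claims are made for this sketch.

import math

def euclid(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def iterated_tour_partition(depot, customers, tour, q):
    # Split a given TSP tour (list of customer indices) into routes of at
    # most q customers each; each route starts and ends at the depot.
    routes, total = [], 0.0
    for start in range(0, len(tour), q):
        group = [customers[i] for i in tour[start:start + q]]
        route = [depot] + group + [depot]
        total += sum(euclid(route[k], route[k + 1]) for k in range(len(route) - 1))
        routes.append(group)
    return routes, total

depot = (0.0, 0.0)
customers = [(1, 2), (2, 1), (3, 3), (-1, 2), (-2, -1), (2, -2)]
tour = [0, 1, 2, 3, 4, 5]        # a hypothetical customer ordering
print(iterated_tour_partition(depot, customers, tour, q=2))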
Can rodents conceive hyperbolic spaces?
Urdapilleta, Eugenio; Troiani, Francesca; Stella, Federico; Treves, Alessandro
2015-01-01
The grid cells discovered in the rodent medial entorhinal cortex have been proposed to provide a metric for Euclidean space, possibly even hardwired in the embryo. Yet, one class of models describing the formation of grid unit selectivity is entirely based on developmental self-organization, and as such it predicts that the metric it expresses should reflect the environment to which the animal has adapted. We show that, according to self-organizing models, if raised in a non-Euclidean hyperbolic cage rats should be able to form hyperbolic grids. For a given range of grid spacing relative to the radius of negative curvature of the hyperbolic surface, such grids are predicted to appear as multi-peaked firing maps, in which each peak has seven neighbours instead of the Euclidean six, a prediction that can be tested in experiments. We thus demonstrate that a useful universal neuronal metric, in the sense of a multi-scale ruler and compass that remain unaltered when changing environments, can be extended to other than the standard Euclidean plane. PMID:25948611
Antipodal correlation on the meron wormhole and a bang-crunch universe
NASA Astrophysics Data System (ADS)
Betzios, Panagiotis; Gaddam, Nava; Papadoulaki, Olga
2018-06-01
We present a covariant Euclidean wormhole solution to the Einstein Yang-Mills system and study scalar perturbations analytically. The fluctuation operator has a positive definite spectrum. We compute the Euclidean Green's function, which displays maximal antipodal correlation on the smallest three sphere at the center of the throat. Upon analytic continuation, it corresponds to the Feynman propagator on a compact bang-crunch universe. We present the connection matrix that relates past and future modes. We thoroughly discuss the physical implications of the antipodal map in both the Euclidean and Lorentzian geometries and give arguments on how to assign a physical probability to such solutions.
What if? Exploring the multiverse through Euclidean wormholes.
Bouhmadi-López, Mariam; Krämer, Manuel; Morais, João; Robles-Pérez, Salvador
2017-01-01
We present Euclidean wormhole solutions describing possible bridges within the multiverse. The study is carried out in the framework of third quantisation. The matter content is modelled through a scalar field which supports the existence of a whole collection of universes. The instanton solutions describe Euclidean solutions that connect baby universes with asymptotically de Sitter universes. We compute the tunnelling probability of these processes. Considering the current bounds on the energy scale of inflation and assuming that all the baby universes are nucleated with the same probability, we draw some conclusions about which universes are more likely to tunnel and therefore undergo a standard inflationary era.
Using BMDP and SPSS for a Q factor analysis.
Tanner, B A; Koning, S M
1980-12-01
While Euclidean distances and Q factor analysis may sometimes be preferred to correlation coefficients and cluster analysis for developing a typology, commercially available software does not always facilitate their use. Commands are provided for using BMDP and SPSS in a Q factor analysis with Euclidean distances.
Exploring New Geometric Worlds
ERIC Educational Resources Information Center
Nirode, Wayne
2015-01-01
When students work with a non-Euclidean distance formula, geometric objects such as circles and segment bisectors can look very different from their Euclidean counterparts. Students and even teachers can experience the thrill of creative discovery when investigating these differences among geometric worlds. In this article, the author describes a…
Euclidean, Spherical, and Hyperbolic Shadows
ERIC Educational Resources Information Center
Hoban, Ryan
2013-01-01
Many classical problems in elementary calculus use Euclidean geometry. This article takes such a problem and solves it in hyperbolic and in spherical geometry instead. The solution requires only the ability to compute distances and intersections of points in these geometries. The dramatically different results we obtain illustrate the effect…
Extremal functions for singular Trudinger-Moser inequalities in the entire Euclidean space
NASA Astrophysics Data System (ADS)
Li, Xiaomeng; Yang, Yunyan
2018-04-01
In a previous work (Adimurthi and Yang, 2010 [2]), Adimurthi-Yang proved a singular Trudinger-Moser inequality in the entire Euclidean space R^N (N ≥ 2). Precisely, if 0 ≤ β < 1 and 0 < γ ≤ 1 - β, then there holds for any τ > 0,
Teaching Activity-Based Taxicab Geometry
ERIC Educational Resources Information Center
Ada, Tuba
2013-01-01
This study aimed on the process of teaching taxicab geometry, a non-Euclidean geometry that is easy to understand and similar to Euclidean geometry with its axiomatic structure. In this regard, several teaching activities were designed such as measuring taxicab distance, defining a taxicab circle, finding a geometric locus in taxicab geometry, and…
Project-Based Learning to Explore Taxicab Geometry
ERIC Educational Resources Information Center
Ada, Tuba; Kurtulus, Aytac
2012-01-01
In Turkey, the content of the geometry course in the Primary School Mathematics Education, which is developed by The Council of Higher Education (YOK), comprises Euclidean and non-Euclidean types of geometry. In this study, primary mathematics teacher candidates compared these two geometries by focusing on Taxicab geometry among non-Euclidean…
A Latent Class Approach to Fitting the Weighted Euclidean Model, CLASCAL.
ERIC Educational Resources Information Center
Winsberg, Suzanne; De Soete, Geert
1993-01-01
A weighted Euclidean distance model is proposed that incorporates a latent class approach (CLASCAL). The contribution to the distance function between two stimuli is per dimension weighted identically by all subjects in the same latent class. A model selection strategy is proposed and illustrated. (SLD)
A Minimum Spanning Forest Based Method for Noninvasive Cancer Detection with Hyperspectral Imaging
Pike, Robert; Lu, Guolan; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei
2016-01-01
Goal: The purpose of this paper is to develop a classification method that combines both spectral and spatial information for distinguishing cancer from healthy tissue on hyperspectral images in an animal model. Methods: An automated algorithm based on a minimum spanning forest (MSF) and optimal band selection has been proposed to classify healthy and cancerous tissue on hyperspectral images. A support vector machine (SVM) classifier is trained to create a pixel-wise classification probability map of cancerous and healthy tissue. This map is then used to identify markers that are used to compute mutual information for a range of bands in the hyperspectral image and thus select the optimal bands. An MSF is finally grown to segment the image using spatial and spectral information. Conclusion: The MSF based method with automatically selected bands proved to be accurate in determining the tumor boundary on hyperspectral images. Significance: Hyperspectral imaging combined with the proposed classification technique has the potential to provide a noninvasive tool for cancer detection. PMID:26285052
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. Cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
Surface Design Based on Discrete Conformal Transformations
NASA Astrophysics Data System (ADS)
Duque, Carlos; Santangelo, Christian; Vouga, Etienne
Conformal transformations are angle-preserving maps from one domain to another. Although angles are preserved, the lengths between arbitrary points are not generally conserved; as a consequence, every conformal map carries some amount of distortion. Such transformations find uses in various fields; we have used them to program non-uniformly swellable gel sheets to buckle into prescribed three-dimensional shapes. In this work we apply circle packings as a kind of discrete conformal map in order to find conformal maps from the sphere to the plane that can be used as nearly uniform swelling patterns to program non-Euclidean sheets to buckle into spheres. We explore the possibility of tuning the area distortion to fit the experimental range of minimum and maximum swelling by modifying the boundary of the planar domain through the introduction of different cutting schemes.
Lattice corrections to the quark quasidistribution at one loop
Carlson, Carl E.; Freid, Michael
2017-05-12
Here, we calculate radiative corrections to the quark quasidistribution in lattice perturbation theory at one loop to leading orders in the lattice spacing. We also consider one-loop corrections in continuum Euclidean space. We find that the infrared behavior of the corrections in Euclidean and Minkowski space is different. Furthermore, we explore features of momentum loop integrals and demonstrate why loop corrections from the lattice perturbation theory and Euclidean continuum do not correspond with their Minkowski brethren, and comment on a recent suggestion for transcending the differences in the results. Finally, we examine the role of the lattice spacing a and of the r parameter in the Wilson action in these radiative corrections.
Late-time structure of the Bunch-Davies FRW wavefunction
NASA Astrophysics Data System (ADS)
Konstantinidis, George; Mahajan, Raghu; Shaghoulian, Edgar
2016-10-01
In this short note we organize a perturbation theory for the Bunch-Davies wavefunction in flat, accelerating cosmologies. The calculational technique avoids the in-in formalism and instead uses an analytic continuation from Euclidean signature. We will consider both massless and conformally coupled self-interacting scalars. These calculations explicitly illustrate two facts. The first is that IR divergences get sharper as the acceleration slows. The second is that UV-divergent contact terms in the Euclidean computation can contribute to the absolute value of the wavefunction in Lorentzian signature. Here UV divergent refers to terms involving inverse powers of the radial cutoff in the Euclidean computation. In Lorentzian signature such terms encode physical time dependence of the wavefunction.
Modular assembly of synthetic proteins that span the plasma membrane in mammalian cells.
Qudrat, Anam; Truong, Kevin
2016-12-09
To achieve synthetic control over how a cell responds to other cells or the extracellular environment, it is important to reliably engineer proteins that can traffic and span the plasma membrane. Using a modular approach to assemble proteins, we identified the minimum necessary components required to engineer such membrane-spanning proteins with predictable orientation in mammalian cells. While a transmembrane domain (TM) fused to the N-terminus of a protein is sufficient to traffic it to the endoplasmic reticulum (ER), an additional signal peptidase cleavage site downstream of this TM enhanced sorting out of the ER. Next, a second TM in the synthetic protein helped anchor and accumulate the membrane-spanning protein on the plasma membrane. The orientation of the components of the synthetic protein was determined through measuring intracellular Ca2+ signaling using the R-GECO biosensor and through measuring extracellular quenching of yellow fluorescent protein variants by saturating acidic and salt conditions. This work forms the basis of engineering novel proteins that span the plasma membrane to potentially control intracellular responses to extracellular conditions.
Kim, Dajeong; Kyung, Jangbeen; Park, Dongsun; Choi, Ehn-Kyoung; Kim, Kwang Sei; Shin, Kyungha; Lee, Hangyoung; Shin, Il Seob; Kang, Sung Keun; Ra, Jeong Chan; Kim, Yun-Bae
2015-10-01
Aging brings about the progressive decline in cognitive function and physical activity, along with losses of stem cell population and function. Although transplantation of muscle-derived stem/progenitor cells extended the health span and life span of progeria mice, such effects in normal animals were not confirmed. Human amniotic membrane-derived mesenchymal stem cells (AMMSCs) or adipose tissue-derived mesenchymal stem cells (ADMSCs) (1×10(6) cells per rat) were intravenously transplanted to 10-month-old male F344 rats once a month throughout their lives. Transplantation of AMMSCs and ADMSCs improved cognitive and physical functions of naturally aging rats, extending life span by 23.4% and 31.3%, respectively. The stem cell therapy increased the concentration of acetylcholine and recovered neurotrophic factors in the brain and muscles, leading to restoration of microtubule-associated protein 2, cholinergic and dopaminergic nervous systems, microvessels, muscle mass, and antioxidative capacity. The results indicate that repeated transplantation of AMMSCs and ADMSCs elongate both health span and life span, which could be a starting point for antiaging or rejuvenation effects of allogeneic or autologous stem cells with minimum immune rejection. This study demonstrates that repeated treatment with stem cells in normal animals has antiaging potential, extending health span and life span. Because antiaging and prolonged life span are issues currently of interest, these results are significant for readers and investigators. ©AlphaMed Press.
Teaching Geometry According to Euclid.
ERIC Educational Resources Information Center
Hartshorne, Robin
2000-01-01
This essay contains some reflections and questions arising from encounters with the text of Euclid's Elements. The reflections arise out of the teaching of a course in Euclidean and non-Euclidean geometry to undergraduates. It is concluded that teachers of such courses should read Euclid and ask questions, then teach a course on Euclid and later…
Peripatetic and Euclidean theories of the visual ray.
Jones, A
1994-01-01
The visual ray of Euclid's Optica is endowed with properties that reveal the concept to be an abstraction of a specific physical account of vision. The evolution of a physical theory of vision compatible with the Euclidean model can be traced in Peripatetic writings of the late fourth and third centuries B.C.
Nearest Neighbor Classification Using a Density Sensitive Distance Measurement
2009-09-01
Both the proposed density-sensitive distance measurement and the Euclidean distance are compared on the Wisconsin Diagnostic Breast Cancer (WDBC) dataset and the MNIST dataset.
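A minimal sketch of the Euclidean baseline in that comparison, assuming scikit-learn: 1-nearest-neighbour classification of the WDBC data with the ordinary Euclidean metric. The density-sensitive distance itself is not reproduced here.

# Hedged sketch of the Euclidean 1-NN baseline on the WDBC dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=1, metric="euclidean").fit(X_tr, y_tr)
print("1-NN Euclidean accuracy:", knn.score(X_te, y_te))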
The Role of Structure in Learning Non-Euclidean Geometry
ERIC Educational Resources Information Center
Asmuth, Jennifer A.
2009-01-01
How do people learn novel mathematical information that contradicts prior knowledge? The focus of this thesis is the role of structure in the acquisition of knowledge about hyperbolic geometry, a non-Euclidean geometry. In a series of three experiments, I contrast a more holistic structure--training based on closed figures--with a mathematically…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briceno, Raul A.; Hansen, Maxwell T.; Monahan, Christopher J.
Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Lastly, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.
Fixed-topology Lorentzian triangulations: Quantum Regge Calculus in the Lorentzian domain
NASA Astrophysics Data System (ADS)
Tate, Kyle; Visser, Matt
2011-11-01
A key insight used in developing the theory of Causal Dynamical Triangulations (CDTs) is to use the causal (or light-cone) structure of Lorentzian manifolds to restrict the class of geometries appearing in the Quantum Gravity (QG) path integral. By exploiting this structure the models developed in CDTs differ from the analogous models developed in the Euclidean domain, models of (Euclidean) Dynamical Triangulations (DT), and the corresponding Lorentzian results are in many ways more "physical". In this paper we use this insight to formulate a Lorentzian signature model that is analogous to the Quantum Regge Calculus (QRC) approach to Euclidean Quantum Gravity. We exploit another crucial fact about the structure of Lorentzian manifolds, namely that certain simplices are not constrained by the triangle inequalities present in Euclidean signature. We show that this model is not related to QRC by a naive Wick rotation; this serves as another demonstration that the sum over Lorentzian geometries is not simply related to the sum over Euclidean geometries. By removing the triangle inequality constraints, there is more freedom to perform analytical calculations, and in addition numerical simulations are more computationally efficient. We first formulate the model in 1 + 1 dimensions, and derive scaling relations for the pure gravity path integral on the torus using two different measures. It appears relatively easy to generate "large" universes, both in spatial and temporal extent. In addition, loop-to-loop amplitudes are discussed, and a transfer matrix is derived. We then also discuss the model in higher dimensions.
INFORMATION-THEORETIC INEQUALITIES ON UNIMODULAR LIE GROUPS
Chirikjian, Gregory S.
2010-01-01
Classical inequalities used in information theory such as those of de Bruijn, Fisher, Cramér, Rao, and Kullback carry over in a natural way from Euclidean space to unimodular Lie groups. These are groups that possess an integration measure that is simultaneously invariant under left and right shifts. All commutative groups are unimodular. And even in noncommutative cases unimodular Lie groups share many of the useful features of Euclidean space. The rotation and Euclidean motion groups, which are perhaps the most relevant Lie groups to problems in geometric mechanics, are unimodular, as are the unitary groups that play important roles in quantum computing. The extension of core information theoretic inequalities defined in the setting of Euclidean space to this broad class of Lie groups is potentially relevant to a number of problems relating to information gathering in mobile robotics, satellite attitude control, tomographic image reconstruction, biomolecular structure determination, and quantum information theory. In this paper, several definitions are extended from the Euclidean setting to that of Lie groups (including entropy and the Fisher information matrix), and inequalities analogous to those in classical information theory are derived and stated in the form of fifteen small theorems. In all such inequalities, addition of random variables is replaced with the group product, and the appropriate generalization of convolution of probability densities is employed. An example from the field of robotics demonstrates how several of these results can be applied to quantify the amount of information gained by pooling different sensory inputs. PMID:21113416
Spanning trees and the Eurozone crisis
NASA Astrophysics Data System (ADS)
Dias, João
2013-12-01
The sovereign debt crisis in the euro area has not yet been solved and recent developments in Spain and Italy have further deteriorated the situation. In this paper we develop a new approach to analyze the ongoing Eurozone crisis. Firstly, we use Maximum Spanning Trees to analyze the topological properties of government bond rates’ dynamics. Secondly, we combine the information given by both Maximum and Minimum Spanning Trees to obtain a measure of market dissimilarity or disintegration. Thirdly, we extend this measure to include a convenient distance not limited to the interval [0, 2]. Our empirical results show that Maximum Spanning Tree gives an adequate description of the separation of the euro area into two distinct groups: those countries strongly affected by the crisis and those that have remained resilient during this period. The measures of market dissimilarity also reveal a persistent separation of these two groups and, according to our second measure, this separation strongly increased during the period July 2009-March 2012.
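A minimal sketch of the tree construction from bond-rate series, assuming NumPy and SciPy; the synthetic rates array, the correlation-based distance sqrt(2(1-rho)), and the weight-inversion trick used to obtain the maximum tree are assumptions of this illustration rather than a reproduction of the paper's exact procedure.

# Hedged sketch: minimum and maximum spanning trees from bond-rate correlations.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def spanning_trees(rates):
    """rates: (T, N) bond-rate time series, one column per country."""
    corr = np.corrcoef(rates, rowvar=False)
    dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))   # correlation-based distance matrix
    mst = minimum_spanning_tree(dist).toarray()              # Minimum Spanning Tree
    inverted = dist.max() + 1.0 - dist                       # invert weights to get the maximum tree
    np.fill_diagonal(inverted, 0.0)                          # no self-edges
    max_edges = minimum_spanning_tree(inverted).toarray() > 0
    max_st = np.where(max_edges, dist, 0.0)                  # Maximum Spanning Tree, original weights
    return mst, max_st

rates = np.random.default_rng(0).normal(size=(250, 8)).cumsum(axis=0)  # synthetic stand-in data
mst, max_st = spanning_trees(rates)
print("MST total length:", mst.sum(), " MaxST total length:", max_st.sum())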
NASA Astrophysics Data System (ADS)
Sanhouse-García, Antonio J.; Rangel-Peraza, Jesús Gabriel; Bustos-Terrones, Yaneth; García-Ferrer, Alfonso; Mesas-Carrascosa, Francisco J.
2016-02-01
Land cover classification is often based on differing characteristics between classes, but with great homogeneity within each one of them. This cover is obtained through field work or by means of processing satellite images. Field work involves high costs; therefore, digital image processing techniques have become an important alternative to perform this task. However, in some developing countries and particularly in Casacoima municipality in Venezuela, there is a lack of geographic information systems due to the lack of updated information and high costs in software license acquisition. This research proposes a low cost methodology to develop thematic mapping of local land use and types of coverage in areas with scarce resources. Thematic mapping was developed from CBERS-2 images and spatial information available on the network using open source tools. Supervised classification was applied per pixel and per region using different classification algorithms, which were compared among themselves. The per-pixel classification was based on the Maxver algorithm (maximum likelihood) and Euclidean distance (minimum distance), while the per-region classification was based on the Bhattacharya algorithm. Satisfactory results were obtained from the per-region classification, where an overall reliability of 83.93% and a kappa index of 0.81 were observed. The Maxver algorithm showed a reliability value of 73.36% and a kappa index of 0.69, while the Euclidean distance obtained values of 67.17% and 0.61 for reliability and kappa index, respectively. It was demonstrated that the proposed methodology was very useful in cartographic processing and updating, which in turn serves as a support for developing management plans and land management. Hence, open source tools proved to be an economically viable alternative not only for forestry organizations, but for the general public, allowing them to develop projects in economically depressed and/or environmentally threatened areas.
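A minimal sketch of the Euclidean minimum-distance classifier mentioned above, assuming NumPy; the class-mean signatures and the toy 4-band pixels stand in for real training data.

# Hedged sketch of per-pixel minimum-distance (Euclidean) classification against class means.
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """pixels: (N, B) spectra; class_means: (C, B) mean spectrum per land-cover class."""
    # Euclidean distance from every pixel to every class mean; the label is the nearest mean.
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

# Toy example with three classes in a 4-band image.
rng = np.random.default_rng(1)
means = rng.uniform(0, 255, size=(3, 4))
pixels = means[rng.integers(0, 3, size=100)] + rng.normal(0, 5, size=(100, 4))
print(minimum_distance_classify(pixels, means)[:10])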
Discrimination of different sub-basins on Tajo River based on water influence factor
NASA Astrophysics Data System (ADS)
Bermudez, R.; Gascó, J. M.; Tarquis, A. M.; Saa-Requejo, A.
2009-04-01
Numeric taxonomy has been applied to classify the water of the Tajo basin (Spain) down to the Portuguese border. A total of 52 stations, each estimating 15 water variables, have been used in this study. The different groups have been obtained by applying a Euclidean distance among stations (distance classification) and a Euclidean distance between each station and the centroid estimated among them (centroid classification), varying the number of parameters and with or without variable typification. In order to compare the classifications, a log-log relation has been established between the number of groups created and the distances, to select the best one. It has been observed that the centroid classification is more appropriate, following the natural constraints more logically than the minimum distance among stations. Variable typification does not improve the classification except when the centroid method is applied. Taking into consideration the ions and their sum as variables, the classification improved. Stations are grouped based on electric conductivity (CE), total anions (TA), total cations (TC) and ion ratios (Na/Ca and Mg/Ca). For a given classification, comparing the different groups created, a certain variation in ion concentrations and ion ratios is observed. However, the variation in each ion among groups differs depending on the case. For the last group, regardless of the classification, the increase in all ions is general. Comparing the dendrograms and the groups they originated, the Tajo river basin can be subdivided into five sub-basins differentiated by the main influence on their water: 1. With a higher ombrogenic influence (rain fed). 2. With ombrogenic and pedogenic influence (rain and groundwater fed). 3. With pedogenic influence. 4. With lithogenic influence (geological bedrock). 5. With a higher ombrogenic and lithogenic influence added.
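A minimal sketch of the two groupings compared above, assuming NumPy and SciPy; the 52-by-15 station matrix is synthetic, and the quantile split of the centroid distances is only one plausible reading of the centroid classification.

# Hedged sketch: distance classification (hierarchical clustering of station-to-station Euclidean
# distances) versus centroid classification (grouping by Euclidean distance to the centroid),
# with variable typification (standardisation).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
stations = rng.normal(size=(52, 15))                         # 52 stations x 15 water variables

typified = (stations - stations.mean(0)) / stations.std(0)   # variable typification

groups_distance = fcluster(linkage(typified, method="single"), t=5, criterion="maxclust")

centroid = typified.mean(axis=0)
dist_to_centroid = np.linalg.norm(typified - centroid, axis=1)
bins = np.quantile(dist_to_centroid, [0.2, 0.4, 0.6, 0.8])
groups_centroid = np.digitize(dist_to_centroid, bins)        # five groups by centroid distance

print(groups_distance[:10], groups_centroid[:10])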
Using P-Stat, BMDP and SPSS for a cross-products factor analysis.
Tanner, B A; Leiman, J M
1983-06-01
The major disadvantage of the Q factor analysis with Euclidean distances described by Tanner and Koning [Comput. Progr. Biomed. 12 (1980) 201-202] is the considerable editing required. An alternative procedure with commercially distributed software, and with cross-products in place of Euclidean distances is described. This procedure does not require any editing.
On the Partitioning of Squared Euclidean Distance and Its Applications in Cluster Analysis.
ERIC Educational Resources Information Center
Carter, Randy L.; And Others
1989-01-01
The partitioning of squared Euclidean (E²) distance between two vectors in M-dimensional space into the sum of squared lengths of vectors in mutually orthogonal subspaces is discussed. Applications to specific cluster analysis problems are provided (i.e., to design Monte Carlo studies for performance comparisons of several clustering methods…
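Stated compactly (a hedged restatement; the projection notation below is this note's assumption rather than the article's own): for mutually orthogonal projections $P_1, \dots, P_K$ with $P_1 + \dots + P_K = I$,

\[ \|x - y\|^2 \;=\; \sum_{k=1}^{K} \| P_k (x - y) \|^2 . \]

For example, with $M = 2$, the coordinate axes as the two subspaces, $x = (3, 4)$ and $y = (0, 0)$: $\|x - y\|^2 = 25 = 3^2 + 4^2$.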
Usability Evaluation of an Augmented Reality System for Teaching Euclidean Vectors
ERIC Educational Resources Information Center
Martin-Gonzalez, Anabel; Chi-Poot, Angel; Uc-Cetina, Victor
2016-01-01
Augmented reality (AR) is one of the emerging technologies that has demonstrated to be an efficient technological tool to enhance learning techniques. In this paper, we describe the development and evaluation of an AR system for teaching Euclidean vectors in physics and mathematics. The goal of this pedagogical tool is to facilitate user's…
ERIC Educational Resources Information Center
Tisdell, Christopher C.
2017-01-01
For over 50 years, the learning and teaching of "a priori" bounds on solutions to linear differential equations has involved a Euclidean approach to measuring the size of a solution. While the Euclidean approach to "a priori" bounds on solutions is somewhat manageable in the learning and teaching of the proofs involving…
Fusion And Inference From Multiple And Massive Disparate Distributed Dynamic Data Sets
2017-07-01
principled methodology for two-sample graph testing; designed a provably almost-surely perfect vertex clustering algorithm for block model graphs; proved… Semi-supervised clustering methodology… Robust hypothesis testing… Embedding in a …-dimensional Euclidean space allows the full arsenal of statistical and machine learning methodology for multivariate Euclidean data to be deployed for…
In a Class with Klein: Generating a Model of the Hyperbolic Plane
ERIC Educational Resources Information Center
Otten, Samuel; Zin, Christopher
2012-01-01
The emergence of non-Euclidean geometries in the 19th century rocked the foundations of mathematical knowledge and certainty. The tremors can still be felt in undergraduate mathematics today where encounters with non-Euclidean geometry are novel and often shocking to students. Because of its divergence from ordinary and comfortable notions of…
Complex networks: Effect of subtle changes in nature of randomness
NASA Astrophysics Data System (ADS)
Goswami, Sanchari; Biswas, Soham; Sen, Parongama
2011-03-01
In two different classes of network models, namely, the Watts-Strogatz type and the Euclidean type, subtle changes have been introduced in the randomness. In the Watts-Strogatz type network, rewiring has been done in different ways and although the qualitative results remain the same, finite differences in the exponents are observed. In the Euclidean type networks, where at least one finite phase transition occurs, two models differing in a similar way have been considered. The results show a possible shift in one of the phase transition points but no change in the values of the exponents. The WS and Euclidean type models are equivalent for extreme values of the parameters; we compare their behaviour for intermediate values.
Variational submanifolds of Euclidean spaces
NASA Astrophysics Data System (ADS)
Krupka, D.; Urban, Z.; Volná, J.
2018-03-01
Systems of ordinary differential equations (or dynamical forms in Lagrangian mechanics), induced by embeddings of smooth fibered manifolds over one-dimensional basis, are considered in the class of variational equations. For a given non-variational system, conditions assuring variationality (the Helmholtz conditions) of the induced system with respect to a submanifold of a Euclidean space are studied, and the problem of existence of these "variational submanifolds" is formulated in general and solved for second-order systems. The variational sequence theory on sheaves of differential forms is employed as a main tool for the analysis of local and global aspects (variationality and variational triviality). The theory is illustrated by examples of holonomic constraints (submanifolds of a configuration Euclidean space) which are variational submanifolds in geometry and mechanics.
Balancing Newtonian gravity and spin to create localized structures
NASA Astrophysics Data System (ADS)
Bush, Michael; Lindner, John
2015-03-01
Using geometry and Newtonian physics, we design localized structures that do not require electromagnetic or other forces to resist implosion or explosion. In two-dimensional Euclidean space, we find an equilibrium configuration of a rotating ring of massive dust whose inward gravity is the centripetal force that spins it. We find similar solutions in three-dimensional Euclidean and hyperbolic spaces, but only in the limit of vanishing mass. Finally, in three-dimensional Euclidean space, we generalize the two-dimensional result by finding an equilibrium configuration of a spherical shell of massive dust that supports itself against gravitational collapse by spinning isoclinically in four dimensions so its three-dimensional acceleration is everywhere inward. These Newtonian ``atoms'' illuminate classical physics and geometry.
40 CFR 60.2735 - Is there a minimum amount of monitoring data I must obtain?
Code of Federal Regulations, 2014 CFR
2014-07-01
... activities including, as applicable, calibration checks and required zero and span adjustments. A monitoring... monitoring system quality assurance or control activities in calculations used to report emissions or...-control periods, and required monitoring system quality assurance or quality control activities including...
40 CFR 60.2735 - Is there a minimum amount of monitoring data I must obtain?
Code of Federal Regulations, 2013 CFR
2013-07-01
... activities including, as applicable, calibration checks and required zero and span adjustments. A monitoring... monitoring system quality assurance or control activities in calculations used to report emissions or...-control periods, and required monitoring system quality assurance or quality control activities including...
Effect of multiple engine placement on aeroelastic trim and stability of flying wing aircraft
NASA Astrophysics Data System (ADS)
Mardanpour, Pezhman; Richards, Phillip W.; Nabipour, Omid; Hodges, Dewey H.
2014-01-01
Effects of multiple engine placement on flutter characteristics of a backswept flying wing resembling the HORTEN IV are investigated using the code NATASHA (Nonlinear Aeroelastic Trim And Stability of HALE Aircraft). Four identical engines with defined mass, inertia, and angular momentum are placed in different locations along the span with different offsets from the elastic axis while fixing the location of the aircraft c.g. The aircraft experiences body freedom flutter along with non-oscillatory instabilities that originate from flight dynamics. Multiple engine placement increases flutter speed particularly when the engines are placed in the outboard portion of the wing (60-70% span), forward of the elastic axis, while the lift to drag ratio is affected negligibly. The behavior of the sub- and supercritical eigenvalues is studied for two cases of engine placement. NATASHA captures a hump body-freedom flutter with low frequency for the clean wing case, which disappears as the engines are placed on the wings. In neither case is there any apparent coalescence between the unstable modes. NATASHA captures other non-oscillatory unstable roots with very small amplitude, apparently originating with flight dynamics. For the clean-wing case, in the absence of aerodynamic and gravitational forces, the regions of minimum kinetic energy density for the first and third bending modes are located around 60% span. For the second mode, this kinetic energy density has local minima around the 20% and 80% span. The regions of minimum kinetic energy of these modes are in agreement with calculations that show a noticeable increase in flutter speed if engines are placed forward of the elastic axis at these regions.
Wind adaptive modeling of transmission lines using minimum description length
NASA Astrophysics Data System (ADS)
Jaw, Yoonseok; Sohn, Gunho
2017-03-01
Transmission lines are moving objects whose positions are dynamically affected by wind-induced conductor motion while they are acquired by airborne laser scanners. This wind effect results in a noisy distribution of laser points, which often hinders accurate representation of transmission lines and thus leads to various types of modeling errors. This paper presents a new method for complete 3D transmission line model reconstruction in the framework of inner- and across-span analysis. Notably, the proposed method is capable of indirectly estimating, through a linear regression analysis, the noise scales that corrupt the quality of laser observations affected by different wind speeds. In the inner-span analysis, individual transmission line models of each span are evaluated based on Minimum Description Length theory, and erroneous transmission line segments are subsequently replaced by precise transmission line models with a wind-adaptive noise scale estimated. In the subsequent across-span analysis, detecting the precise start and end positions of the transmission line models, known as the Points of Attachment, is the key issue for correcting partial modeling errors as well as refining transmission line models. Finally, the geometric and topological completion of transmission line models is achieved over the entire network. A performance evaluation was conducted over 138.5 km of corridor data. In a modest wind condition, the results demonstrate that the proposed method improves on the average 48% success rate of the non-wind-adaptive initial models, producing complete transmission line models at rates between 85% and 99.5%, with root-mean-square positional accuracies of 9.55 cm for the transmission line models and 28 cm for the Points of Attachment.
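A minimal sketch of the Minimum Description Length comparison the inner-span analysis rests on, assuming NumPy; the crude two-part code length and the polynomial span models below are illustrative assumptions, not the paper's catenary model or its wind-adaptive noise-scale regression.

# Hedged sketch of MDL model selection for a span of laser points: the candidate with the
# shortest description length (model cost + data-misfit cost) is kept.
import numpy as np

def description_length(residuals, n_params):
    n = residuals.size
    data_cost = 0.5 * n * np.log(np.mean(residuals**2) + 1e-12)   # negative log-likelihood up to constants
    model_cost = 0.5 * n_params * np.log(n)                       # code length for the parameters
    return data_cost + model_cost

rng = np.random.default_rng(0)
x = np.linspace(0.0, 100.0, 200)
z = 0.002 * (x - 50.0) ** 2 + rng.normal(0.0, 0.1, x.size)        # sagging span plus noise

for degree in (1, 2, 5):                                          # candidate polynomial span models
    coeffs = np.polyfit(x, z, degree)
    dl = description_length(z - np.polyval(coeffs, x), degree + 1)
    print(f"degree {degree}: description length = {dl:.1f}")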
The remapping of space in motor learning and human-machine interfaces
Mussa-Ivaldi, F.A.; Danziger, Z.
2009-01-01
Studies of motor adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. One of the most fundamental elements of our environment is space itself. This article focuses on the notion of Euclidean space as it applies to common sensory motor experiences. Starting from the assumption that we interact with the world through a system of neural signals, we observe that these signals are not inherently endowed with metric properties of the ordinary Euclidean space. The ability of the nervous system to represent these properties depends on adaptive mechanisms that reconstruct the Euclidean metric from signals that are not Euclidean. Gaining access to these mechanisms will reveal the process by which the nervous system handles novel sophisticated coordinate transformation tasks, thus highlighting possible avenues to create functional human-machine interfaces that can make that task much easier. A set of experiments is presented that demonstrate the ability of the sensory-motor system to reorganize coordination in novel geometrical environments. In these environments multiple degrees of freedom of body motions are used to control the coordinates of a point in a two-dimensional Euclidean space. We discuss how practice leads to the acquisition of the metric properties of the controlled space. Methods of machine learning based on the reduction of reaching errors are tested as a means to facilitate learning by adaptively changing the map from body motions to the controlled device. We discuss the relevance of the results to the development of adaptive human-machine interfaces and optimal control. PMID:19665553
ERIC Educational Resources Information Center
Curtis, Charles W.; And Others
These materials were developed to help high school teachers to become familiar with the approach to tenth-grade Euclidean geometry which was adopted by the School Mathematics Study Group (SMSG). It is emphasized that the materials are unsuitable as a high school textbook. Each document contains material too difficult for most high school students.…
Feature Extraction of High-Dimensional Structures for Exploratory Analytics
2013-04-01
Figure: Comparison of Euclidean vs. geodesic distance. LDRs use a metric based on the Euclidean distance between two points, while NLDRs are based on geodesic distance; an NLDR successfully unrolls the curved manifold, whereas an LDR fails. Linear DR (LDR) methods, such as classical metric multidimensional scaling, are based on a linear combination of…
Euclidean Wilson loops and minimal area surfaces in Lorentzian AdS3
Irrgang, Andrew; Kruczenski, Martin
2015-12-14
The AdS/CFT correspondence relates Wilson loops in N=4 SYM theory to minimal area surfaces in AdS5 × S5 space. If the Wilson loop is Euclidean and confined to a plane (t, x) then the dual surface is Euclidean and lives in Lorentzian AdS3 ⊂ AdS5. In this paper we study such minimal area surfaces generalizing previous results obtained in the Euclidean case. Since the surfaces we consider have the topology of a disk, the holonomy of the flat current vanishes, which is equivalent to the condition that a certain boundary Schrödinger equation has all its solutions anti-periodic. If the potential for that Schrödinger equation is found, then reconstructing the surface and finding the area become simpler. In particular we write a formula for the area in terms of the Schwarzian derivative of the contour. Finally, an infinite-parameter family of analytical solutions using Riemann Theta functions is described. In this case, both the area and the shape of the surface are given analytically and used to check the previous results.
Principal Curves on Riemannian Manifolds.
Hauberg, Soren
2016-09-01
Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criteria of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.
Gravity dual for a model of perception
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakayama, Yu, E-mail: nakayama@berkeley.edu
2011-01-15
One of the salient features of human perception is its invariance under dilatation in addition to the Euclidean group, but its non-invariance under special conformal transformation. We investigate a holographic approach to the information processing in image discrimination with this feature. We claim that a strongly coupled analogue of the statistical model proposed by Bialek and Zee can be holographically realized in scale invariant but non-conformal Euclidean geometries. We identify the Bayesian probability distribution of our generalized Bialek-Zee model with the GKPW partition function of the dual gravitational system. We provide a concrete example of the geometric configuration based on a vector condensation model coupled with the Euclidean Einstein-Hilbert action. From the proposed geometry, we study sample correlation functions to compute the Bayesian probability distribution.
NASA Astrophysics Data System (ADS)
Durato, M. V.; Albano, A. M.; Rapp, P. E.; Nawang, S. A.
2015-06-01
The validity of ERPs as indices of stable neurophysiological traits is partially dependent on their stability over time. Previous studies on ERP stability, however, have reported diverse stability estimates despite using the same component scoring methods. The present study explores a novel approach to investigating the longitudinal stability of average ERPs—that is, by treating the ERP waveform as a time series and then applying Euclidean Distance and Kolmogorov-Smirnov analyses to evaluate the similarity or dissimilarity between the ERP time series of different sessions or run pairs. Nonlinear dynamical analysis shows that in the absence of a change in medical condition, the average ERPs of healthy human adults are highly longitudinally stable—as evaluated by both the Euclidean distance and the Kolmogorov-Smirnov test.
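A minimal sketch of the two session-to-session comparisons described, assuming NumPy and SciPy; the ERP waveforms below are synthetic placeholders for real averaged epochs.

# Hedged sketch: Euclidean distance between two ERP waveforms treated as vectors, plus a
# Kolmogorov-Smirnov test on their amplitude samples.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.8, 400)                                   # 800 ms epoch
erp_session1 = 5.0 * np.exp(-((t - 0.3) / 0.05) ** 2) + rng.normal(0, 0.3, t.size)
erp_session2 = 5.0 * np.exp(-((t - 0.3) / 0.05) ** 2) + rng.normal(0, 0.3, t.size)

euclidean = np.linalg.norm(erp_session1 - erp_session2)
ks_stat, p_value = ks_2samp(erp_session1, erp_session2)
print(f"Euclidean distance = {euclidean:.2f}, KS statistic = {ks_stat:.3f} (p = {p_value:.2f})")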
Molecular-Scale Description of SPAN80 Desorption from a Squalane-Water Interface.
Tan, L; Pratt, L R; Chaudhari, M I
2018-04-05
Extensive all-atom molecular dynamics calculations on the water-squalane interface for nine different loadings with sorbitan monooleate (SPAN80), at T = 300 K, are analyzed for the surface tension equation of state, desorption free-energy profiles as they depend on loading, and to evaluate escape times for adsorbed SPAN80 into the bulk phases. These results suggest that loading only weakly affects accommodation of a SPAN80 molecule by this squalane-water interface. Specifically, the surface tension equation of state is simple through the range of high tension to high loading studied, and the desorption free-energy profiles are weakly dependent on loading here. The perpendicular motion of the centroid of the SPAN80 headgroup ring is well-described by a diffusional model near the minimum of the desorption free-energy profile. Lateral diffusional motion is weakly dependent on loading. Escape times evaluated on the basis of a diffusional model and the desorption free energies are 7 × 10⁻² s (into the squalane) and 3 × 10² h (into the water). The latter value is consistent with desorption times of related lab-scale experimental work.
Balancing building and maintenance costs in growing transport networks
NASA Astrophysics Data System (ADS)
Bottinelli, Arianna; Louf, Rémi; Gherardi, Marco
2017-09-01
The costs associated with the length of links impose unavoidable constraints on the growth of natural and artificial transport networks. When future network developments cannot be predicted, the costs of building and maintaining connections cannot be minimized simultaneously, requiring competing optimization mechanisms. Here, we study a one-parameter nonequilibrium model driven by an optimization functional, defined as the convex combination of building cost and maintenance cost. By varying the coefficient of the combination, the model interpolates between global and local length minimization, i.e., between minimum spanning trees and a local version known as dynamical minimum spanning trees. We show that cost balance within this ensemble of dynamical networks is a sufficient ingredient for the emergence of tradeoffs between the network's total length and transport efficiency, and of optimal strategies of construction. At the transition between two qualitatively different regimes, the dynamics builds up power-law distributed waiting times between global rearrangements, indicating a point of nonoptimality. Finally, we use our model as a framework to analyze empirical ant trail networks, showing its relevance as a null model for cost-constrained network formation.
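A minimal sketch of the two limits the model interpolates between, assuming NumPy and SciPy: the global minimum spanning tree of the final node set versus a greedy tree that attaches each arriving node to its nearest existing node. The paper's convex-combination functional is not reproduced exactly.

# Hedged sketch: global MST length versus greedy (dynamical) tree length for one arrival order.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist, pdist, squareform

rng = np.random.default_rng(0)
nodes = rng.uniform(size=(60, 2))                       # arrival order = row order

# Global limit: minimum spanning tree of the final node set.
mst_length = minimum_spanning_tree(squareform(pdist(nodes))).sum()

# Local limit: each new node connects to its nearest already-present node.
greedy_length = 0.0
for k in range(1, len(nodes)):
    greedy_length += cdist(nodes[k:k + 1], nodes[:k]).min()

print(f"MST length = {mst_length:.3f}, greedy tree length = {greedy_length:.3f}")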
ERIC Educational Resources Information Center
Lynch, Beth Eloise
This study was conducted to determine whether the filmic coding elements of split screen, slow motion, generated line cues, the zoom of a camera, and rotation could aid in the development of the Euclidean space concepts of horizontality and verticality, and to explore presence and development of spatial skills involving these two concepts in…
Factorization approach to superintegrable systems: Formalism and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballesteros, Á., E-mail: angelb@ubu.es; Herranz, F. J., E-mail: fjherranz@ubu.es; Kuru, Ş., E-mail: kuru@science.ankara.edu.tr
2017-03-15
The factorization technique for superintegrable Hamiltonian systems is revisited and applied in order to obtain additional (higher-order) constants of the motion. In particular, the factorization approach to the classical anisotropic oscillator on the Euclidean plane is reviewed, and new classical (super) integrable anisotropic oscillators on the sphere are constructed. The Tremblay–Turbiner–Winternitz system on the Euclidean plane is also studied from this viewpoint.
Failure and recovery in dynamical networks.
Böttcher, L; Luković, M; Nagler, J; Havlin, S; Herrmann, H J
2017-02-03
Failure, damage spread and recovery crucially underlie many spatially embedded networked systems ranging from transportation structures to the human body. Here we study the interplay between spontaneous damage, induced failure and recovery in both embedded and non-embedded networks. In our model the network's components follow three realistic processes that capture these features: (i) spontaneous failure of a component independent of the neighborhood (internal failure), (ii) failure induced by failed neighboring nodes (external failure) and (iii) spontaneous recovery of a component. We identify a metastable domain in the global network phase diagram spanned by the model's control parameters where dramatic hysteresis effects and random switching between two coexisting states are observed. This dynamics depends on the characteristic link length of the embedded system. For the Euclidean lattice in particular, hysteresis and switching only occur in an extremely narrow region of the parameter space compared to random networks. We develop a unifying theory which links the dynamics of our model to contact processes. Our unifying framework may help to better understand controllability in spatially embedded and random networks where spontaneous recovery of components can mitigate spontaneous failure and damage spread in dynamical networks.
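A minimal sketch of the three processes on a small square lattice, assuming NumPy; the per-step probabilities, the neighbour threshold, and the lattice size are illustrative assumptions, not the paper's parameters.

# Hedged sketch: (i) spontaneous internal failure with probability p, (ii) external failure when
# at least m neighbours have failed (probability r), (iii) spontaneous recovery (probability q).
import numpy as np

def step(failed, p=0.001, r=0.8, q=0.1, m=2):
    nbr_failed = sum(np.roll(failed, s, axis=a) for s in (1, -1) for a in (0, 1))
    internal = np.random.random(failed.shape) < p
    external = (nbr_failed >= m) & (np.random.random(failed.shape) < r)
    recover = np.random.random(failed.shape) < q
    # Failed nodes stay failed unless they recover; active nodes fail internally or externally.
    return np.where(failed, ~recover, internal | external)

failed = np.zeros((100, 100), dtype=bool)
for t in range(500):
    failed = step(failed)
print("fraction failed after 500 steps:", failed.mean())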
Implicit Large-Eddy Simulations of Zero-Pressure Gradient, Turbulent Boundary Layer
NASA Technical Reports Server (NTRS)
Sekhar, Susheel; Mansour, Nagi N.
2015-01-01
A set of direct simulations of zero-pressure-gradient turbulent boundary layer flows is conducted using various span widths (62-630 wall units) to document their influence on the generated turbulence. The FDL3DI code, which solves the compressible Navier-Stokes equations using a high-order compact-difference scheme and filter with the standard recycling/rescaling method of turbulence generation, is used. Results are analyzed at two different Re values (500 and 1,400) and compared with spectral DNS data. They show that a minimum span width is required for the mere initiation of numerical turbulence. Narrower domains (< 100 w.u.) result in relaminarization. Wider spans (> 600 w.u.) are required for the turbulent statistics to match reference DNS. The upper-wall boundary condition for this setup spawns marginal deviations in the mean velocity and Reynolds stress profiles, particularly in the buffer region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomizawa, Shinya; Nozawa, Masato
2006-06-15
We study vacuum solutions of five-dimensional Einstein equations generated by the inverse scattering method. We reproduce the black ring solution which was found by Emparan and Reall by taking the Euclidean Levi-Civita metric plus one-dimensional flat space as a seed. This transformation consists of two successive processes; the first step is to perform the three-solitonic transformation of the Euclidean Levi-Civita metric with one-dimensional flat space as a seed. The resulting metric is the Euclidean C-metric with extra one-dimensional flat space. The second is to perform the two-solitonic transformation by taking it as a new seed. Our result may serve as a stepping stone to find new exact solutions in higher dimensions.
Tackling higher derivative ghosts with the Euclidean path integral
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fontanini, Michele; Department of Physics, Syracuse University, Syracuse, New York 13244; Trodden, Mark
2011-05-15
An alternative to the effective field theory approach to treat ghosts in higher derivative theories is to attempt to integrate them out via the Euclidean path integral formalism. It has been suggested that this method could provide a consistent framework within which we might tolerate the ghost degrees of freedom that plague, among other theories, the higher derivative gravity models that have been proposed to explain cosmic acceleration. We consider the extension of this idea to treating a class of terms with order six derivatives, and find that for a general term the Euclidean path integral approach works in the most trivial background, Minkowski. Moreover we see that even in de Sitter background, despite some difficulties, it is possible to define a probability distribution for tensorial perturbations of the metric.
A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications.
Revathy, M; Saravanan, R
2015-01-01
Low-density parity-check (LDPC) codes have been implemented in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient LDPC decoder architecture for low power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator, which can be incorporated between the check and variable node architectures, is used to reduce the error rate of the proposed LDPC architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using Modelsim, targeting 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with different conventional architectures.
Space-time topology and quantum gravity.
NASA Astrophysics Data System (ADS)
Friedman, J. L.
Characteristic features are discussed of a theory of quantum gravity that allows space-time with a non-Euclidean topology. The review begins with a summary of the manifolds that can occur as classical vacuum space-times and as space-times with positive energy. Local structures with non-Euclidean topology - topological geons - collapse, and one may conjecture that in asymptotically flat space-times non-Euclidean topology is hidden from view. In the quantum theory, large diffeos can act nontrivially on the space of states, leading to state vectors that transform as representations of the corresponding symmetry group π0(Diff). In particular, in a quantum theory that, at energies E < E_Planck, is a theory of the metric alone, there appear to be ground states with half-integral spin, and in higher-dimensional gravity, with the kinematical quantum numbers of fundamental fermions.
Equivalence Testing of Complex Particle Size Distribution Profiles Based on Earth Mover's Distance.
Hu, Meng; Jiang, Xiaohui; Absar, Mohammad; Choi, Stephanie; Kozak, Darby; Shen, Meiyu; Weng, Yu-Ting; Zhao, Liang; Lionberger, Robert
2018-04-12
Particle size distribution (PSD) is an important property of particulates in drug products. In the evaluation of generic drug products formulated as suspensions, emulsions, and liposomes, the PSD comparisons between a test product and the branded product can provide useful information regarding in vitro and in vivo performance. Historically, the FDA has recommended the population bioequivalence (PBE) statistical approach to compare the PSD descriptors D50 and SPAN from test and reference products to support product equivalence. In this study, the earth mover's distance (EMD) is proposed as a new metric for comparing PSDs, particularly when the PSD profile exhibits a complex distribution (e.g., multiple peaks) that is not accurately described by the D50 and SPAN descriptors. EMD is a statistical metric that measures the discrepancy (distance) between size distribution profiles without a prior assumption of the distribution. PBE is then adopted to perform a statistical test to establish equivalence based on the calculated EMD distances. Simulations show that the proposed EMD-based approach is effective in comparing test and reference profiles for equivalence testing and is superior compared to commonly used distance measures, e.g., Euclidean and Kolmogorov-Smirnov distances. The proposed approach was demonstrated by evaluating equivalence of cyclosporine ophthalmic emulsion PSDs that were manufactured under different conditions. Our results show that the proposed approach can effectively pass an equivalent product (e.g., reference product against itself) and reject an inequivalent product (e.g., reference product against negative control), thus suggesting its usefulness in supporting bioequivalence determination of a test product to the reference product when both possess multimodal PSDs.
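A minimal sketch of the proposed metric, assuming SciPy: the earth mover's distance between a bimodal reference and test PSD profile, computed with the one-dimensional Wasserstein distance. The synthetic profiles are placeholders, and the PBE test built on top of these distances is not reproduced.

# Hedged sketch: EMD between two particle size distribution profiles.
import numpy as np
from scipy.stats import wasserstein_distance

sizes = np.linspace(0.05, 2.0, 200)                      # particle diameters (um)
ref  = 0.7 * np.exp(-((sizes - 0.30) / 0.05) ** 2) + 0.3 * np.exp(-((sizes - 1.00) / 0.10) ** 2)
test = 0.7 * np.exp(-((sizes - 0.32) / 0.05) ** 2) + 0.3 * np.exp(-((sizes - 1.05) / 0.10) ** 2)

# Weighted 1-D Wasserstein distance = earth mover's distance between the two profiles.
emd = wasserstein_distance(sizes, sizes, u_weights=ref, v_weights=test)
print(f"EMD between reference and test PSD profiles: {emd:.4f} um")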
Parton physics on a Euclidean lattice.
Ji, Xiangdong
2013-06-28
I show that the parton physics related to correlations of quarks and gluons on the light cone can be studied through the matrix elements of frame-dependent, equal-time correlators in the large momentum limit. This observation allows practical calculations of parton properties on a Euclidean lattice. As an example, I demonstrate how to recover the leading-twist quark distribution by boosting an equal-time correlator to a large momentum.
Investigations into Novel Multi-Band Antenna Designs
2006-08-01
endeavouring to modify the designs to incorporate dual polarisation, building the antennas, as well as experimental work that will use the manufactured... based on the Koch, Minkowski and Hilbert curves. The merit in this approach is that non-Euclidean designs (i.e. fractals) are compared with Euclidean... polarisation. A number of possible changes to the current design need to be explored towards achieving the above objectives. Some of the suggested
Slow diffusion by Markov random flights
NASA Astrophysics Data System (ADS)
Kolesnik, Alexander D.
2018-06-01
We present a conception of the slow diffusion processes in the Euclidean spaces R^m, m ≥ 1, based on the theory of random flights with small constant speed that are driven by a homogeneous Poisson process of small rate. The slow diffusion condition that, on long time intervals, leads to the stationary distributions is given. The stationary distributions of slow diffusion processes in some Euclidean spaces of low dimensions are presented.
Quadratic String Method for Locating Instantons in Tunneling Splitting Calculations.
Cvitaš, Marko T
2018-03-13
The ring-polymer instanton (RPI) method is an efficient technique for calculating approximate tunneling splittings in high-dimensional molecular systems. In the RPI method, tunneling splitting is evaluated from the properties of the minimum action path (MAP) connecting the symmetric wells, whereby the extensive sampling of the full potential energy surface of the exact quantum-dynamics methods is avoided. Nevertheless, the search for the MAP is usually the most time-consuming step in the standard numerical procedures. Recently, nudged elastic band (NEB) and string methods, originally developed for locating minimum energy paths (MEPs), were adapted for the purpose of MAP finding with great efficiency gains [J. Chem. Theory Comput. 2016, 12, 787]. In this work, we develop a new quadratic string method for locating instantons. The Euclidean action is minimized by propagating the initial guess (a path connecting two wells) over the quadratic potential energy surface approximated by means of updated Hessians. This allows the algorithm to take many minimization steps between the potential/gradient calls with further reductions in the computational effort, exploiting the smoothness of potential energy surface. The approach is general, as it uses Cartesian coordinates, and widely applicable, with computational effort of finding the instanton usually lower than that of determining the MEP. It can be combined with expensive potential energy surfaces or on-the-fly electronic-structure methods to explore a wide variety of molecular systems.
Code of Federal Regulations, 2010 CFR
2010-07-01
... zero and span settings of the smokemeter. (If a recorder is used, a chart speed of approximately one... collection, it shall be run at a minimum chart speed of one inch per minute during the idle mode and... zero and full scale response may be rechecked and reset during the idle mode of each test sequence. (v...
40 CFR 60.3044 - Is there a minimum amount of operating parameter monitoring data I must obtain?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Emission Guidelines and Compliance Times for Other Solid Waste Incineration Units That Commenced... checks and required zero and span adjustments of the monitoring system), you must conduct all monitoring.... An operating day is any day the unit combusts any municipal or institutional solid waste. (c) If you...
40 CFR 60.3044 - Is there a minimum amount of operating parameter monitoring data I must obtain?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Emission Guidelines and Compliance Times for Other Solid Waste Incineration Units That Commenced... checks and required zero and span adjustments of the monitoring system), you must conduct all monitoring.... An operating day is any day the unit combusts any municipal or institutional solid waste. (c) If you...
40 CFR 60.3044 - Is there a minimum amount of operating parameter monitoring data I must obtain?
Code of Federal Regulations, 2012 CFR
2012-07-01
... Emission Guidelines and Compliance Times for Other Solid Waste Incineration Units That Commenced... checks and required zero and span adjustments of the monitoring system), you must conduct all monitoring.... An operating day is any day the unit combusts any municipal or institutional solid waste. (c) If you...
40 CFR 60.3044 - Is there a minimum amount of operating parameter monitoring data I must obtain?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Emission Guidelines and Compliance Times for Other Solid Waste Incineration Units That Commenced... checks and required zero and span adjustments of the monitoring system), you must conduct all monitoring.... An operating day is any day the unit combusts any municipal or institutional solid waste. (c) If you...
40 CFR 60.3044 - Is there a minimum amount of operating parameter monitoring data I must obtain?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Emission Guidelines and Compliance Times for Other Solid Waste Incineration Units That Commenced... checks and required zero and span adjustments of the monitoring system), you must conduct all monitoring.... An operating day is any day the unit combusts any municipal or institutional solid waste. (c) If you...
Banach spaces that realize minimal fillings
NASA Astrophysics Data System (ADS)
Bednov, B. B.; Borodin, P. A.
2014-04-01
It is proved that a real Banach space realizes minimal fillings for all its finite subsets (a shortest network spanning a fixed finite subset always exists and has the minimum possible length) if and only if it is a predual of L_1. The spaces L_1 are characterized in terms of Steiner points (medians). Bibliography: 25 titles.
ERIC Educational Resources Information Center
Karagiannis, P.; Markelis, I.; Paparrizos, K.; Samaras, N.; Sifaleras, A.
2006-01-01
This paper presents new web-based educational software (webNetPro) for "Linear Network Programming." It includes many algorithms for "Network Optimization" problems, such as shortest path problems, minimum spanning tree problems, maximum flow problems and other search algorithms. Therefore, webNetPro can assist the teaching process of courses such…
Code of Federal Regulations, 2010 CFR
2010-07-01
...—Requirements for Continuous Emission Monitoring Systems (CEMS). For the following pollutants, use the following span values for CEMS, use the following performance specifications in appendix B of this part for your CEMS, and, if needed to meet minimum data requirements, use the following alternate methods in appendix A of...
Code of Federal Regulations, 2011 CFR
2011-07-01
...—Requirements for Continuous Emission Monitoring Systems (CEMS). For the following pollutants, use the following span values for CEMS, use the following performance specifications in appendix B of this part for your CEMS, and, if needed to meet minimum data requirements, use the following alternate methods in appendix A of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unseren, M.A.
This report proposes a method for resolving the kinematic redundancy of a serial link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations which relate a new vector variable quantifying the redundant degrees of freedom (DOF) to the joint velocities. The resulting augmented system yields a well specified solution for the joint velocities. Methods for selecting the redundant DOF quantifying variable and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum Euclidean norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which, according to the model, decouples the Cartesian space DOF and the redundant DOF.
Achieving spectrum conservation for the minimum-span and minimum-order frequency assignment problems
NASA Technical Reports Server (NTRS)
Heyward, Ann O.
1992-01-01
Effective and efficient solutions of frequency assignment problems assume increasing importance as the radiofrequency spectrum experiences ever increasing utilization by diverse communications services, requiring that the most efficient use of this resource be achieved. The research presented explores a general approach to the frequency assignment problem, in which such problems are categorized by the appropriate spectrum-conserving objective function, and are each treated as an N-job, M-machine scheduling problem appropriate for the objective. Results obtained and presented illustrate that such an approach presents an effective means of achieving spectrum-conserving frequency assignments for communications systems in a variety of environments.
Aerodynamic design of a rotor blade for minimum noise radiation
NASA Technical Reports Server (NTRS)
Karamcheti, K.; Yu, Y. H.
1974-01-01
An analysis of the aerodynamic design of a hovering rotor blade for obtaining minimum aerodynamic rotor noise has been carried out. In this analysis, which is based on both acoustical and aerodynamic considerations, attention is given only to the rotational noise due to the pressure fluctuations on the blade surfaces. The lift distribution obtained in this analysis has different characteristics from those of the conventional distribution. The present distribution shows negative lift values over a quarter of the span from the blade tip, and a maximum lift at about the midspan. Results are presented to show that the noise field is considerably affected by the shape of the lift distribution along the blade and that noise reduction of about 5 dB may be obtained by designing the rotor blade to yield minimum noise.
Minimum trim drag design for interfering lifting surfaces using vortex-lattice methodology
NASA Technical Reports Server (NTRS)
Lamar, J. E.
1976-01-01
A new method has been developed by which the mean camber surface can be determined for trimmed noncoplanar planforms with minimum vortex drag under subsonic conditions. The method uses a vortex lattice and overcomes previous difficulties with chord loading specification; it uses a Trefftz plane analysis to determine the optimum span loading for minimum drag, then solves for the mean camber surface of the wing which will provide the required loading. Pitching-moment or root-bending-moment constraints can be employed as well at the design lift coefficient. Sensitivity studies of vortex-lattice arrangement have been made with this method and are presented. Comparisons with other theories show generally good agreement. The versatility of the method is demonstrated by applying it to (1) isolated wings, (2) wing-canard configurations, (3) a tandem wing, and (4) a wing-winglet configuration.
Improving the Cost Efficiency and Readiness of MC-130 Aircrew Training: A Case Study
2015-01-01
Jiang, Changbing, "A Reliable Solver of Euclidean Traveling Salesman Problems with Microsoft Excel Add-in Tools for Small-Size Systems" (JAWA). Figure 4.5: Training Resources Locations Traveling Salesperson Problem. In order to participate in training, aircrews must fly to the...
Molnár, Emil
2005-11-01
A new method, developed in previous works by the author (partly with co-authors), is presented which decides algorithmically, in principle by computer, whether a combinatorial space tiling (Tau, Gamma) is realizable in the d-dimensional Euclidean space E(d) (think of d = 2, 3, 4) or in other homogeneous spaces, e.g. in Thurston's 3-geometries: E(3), S(3), H(3), S(2) x R, H(2) x R, SL(2)R, Nil, Sol. Then our group Gamma will be an isometry group of a projective metric 3-sphere PiS(3) (R, < , >), acting discontinuously on its above tiling Tau. The method is illustrated by a plane example and by the well known rhombohedron tiling (Tau, Gamma), where Gamma = R3m is the Euclidean space group No. 166 in International Tables for Crystallography.
A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications
Revathy, M.; Saravanan, R.
2015-01-01
Low-density parity-check (LDPC) codes have been implemented in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient LDPC decoder architecture for low power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using Modelsim, targeted to 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with conventional architectures. PMID:26065017
Noncommutative products of Euclidean spaces
NASA Astrophysics Data System (ADS)
Dubois-Violette, Michel; Landi, Giovanni
2018-05-01
We present natural families of coordinate algebras on noncommutative products of Euclidean spaces R^{N_1} ×_R R^{N_2}. These coordinate algebras are quadratic ones associated with an R-matrix which is involutive and satisfies the Yang-Baxter equations. As a consequence, they enjoy a list of nice properties, being regular of finite global dimension. Notably, we have eight-dimensional noncommutative Euclidean spaces R^4 ×_R R^4. Among these, particularly well-behaved ones have deformation parameter u ∈ S^2. Quotients include seven-spheres S^7_u as well as noncommutative quaternionic tori T^H_u = S^3 ×_u S^3. There is invariance for an action of SU(2) × SU(2) on the torus T^H_u in parallel with the action of U(1) × U(1) on a 'complex' noncommutative torus T^2_θ, which allows one to construct quaternionic toric noncommutative manifolds. Additional classes of solutions are disjoint from the classical case.
NASA Astrophysics Data System (ADS)
Bracken, Paul
2007-05-01
The generalized Weierstrass (GW) system is introduced and its correspondence with the associated two-dimensional nonlinear sigma model is reviewed. The method of symmetry reduction is systematically applied to derive several classes of invariant solutions for the GW system. The solutions can be used to induce constant mean curvature surfaces in Euclidean three space. Some properties of the system for the case of nonconstant mean curvature are introduced as well.
Caracciolo, Sergio; Sicuro, Gabriele
2014-10-01
We discuss the equivalence relation between the Euclidean bipartite matching problem on the line and on the circumference and the Brownian bridge process on the same domains. The equivalence allows us to compute the correlation function and the optimal cost of the original combinatorial problem in the thermodynamic limit; moreover, we also solve the minimax problem on the line and on the circumference. The properties of the average cost and correlation functions are discussed.
Evaluation of Image Segmentation and Object Recognition Algorithms for Image Parsing
2013-09-01
generation of the features from the key points. OpenCV uses Euclidean distance to match the key points and has the option to use Manhattan distance...feature vector includes polarity and intensity information. Final step is matching the key points. In OpenCV, Euclidean distance or Manhattan...the code below is one way and OpenCV offers the function radiusMatch (a pair must have a distance less than a given maximum distance). OpenCV’s
Spectral asymptotics of Euclidean quantum gravity with diff-invariant boundary conditions
NASA Astrophysics Data System (ADS)
Esposito, Giampiero; Fucci, Guglielmo; Kamenshchik, Alexander Yu; Kirsten, Klaus
2005-03-01
A general method is known to exist for studying Abelian and non-Abelian gauge theories, as well as Euclidean quantum gravity, at 1-loop level on manifolds with boundary. In the latter case, boundary conditions on metric perturbations h can be chosen to be completely invariant under infinitesimal diffeomorphisms, to preserve the invariance group of the theory and BRST symmetry. In the de Donder gauge, however, the resulting boundary-value problem for the Laplace-type operator acting on h is known to be self-adjoint but not strongly elliptic. The latter is a technical condition ensuring that a unique smooth solution of the boundary-value problem exists, which implies, in turn, that the global heat-kernel asymptotics yielding 1-loop divergences and 1-loop effective action actually exists. The present paper shows that, on the Euclidean 4-ball, only the scalar part of perturbative modes for quantum gravity is affected by the lack of strong ellipticity. Further evidence for lack of strong ellipticity, from an analytic point of view, is therefore obtained. Interestingly, three sectors of the scalar-perturbation problem remain elliptic, while lack of strong ellipticity is 'confined' to the remaining fourth sector. The integral representation of the resulting ζ-function asymptotics on the Euclidean 4-ball is also obtained; this remains regular at the origin by virtue of a spectral identity here obtained for the first time.
ERIC Educational Resources Information Center
Forsman, Jonas; van den Bogaard, Maartje; Linder, Cedric; Fraser, Duncan
2015-01-01
This study uses multilayer minimum spanning tree analysis to develop a model for student retention from a complex system perspective, using data obtained from first-year engineering students at a large well-regarded institution in the European Union. The results show that the elements of the system of student retention are related to one another…
Health and disease phenotyping in old age using a cluster network analysis.
Valenzuela, Jesus Felix; Monterola, Christopher; Tong, Victor Joo Chuan; Ng, Tze Pin; Larbi, Anis
2017-11-15
Human ageing is a complex trait that involves the synergistic action of numerous biological processes that interact to form a complex network. Here we performed a network analysis to examine the interrelationships between physiological and psychological functions, disease, disability, quality of life, lifestyle and behavioural risk factors for ageing in a cohort of 3,270 subjects aged ≥55 years. We considered associations between numerical and categorical descriptors using effect-size measures for each variable pair and identified clusters of variables from the resulting pairwise effect-size network and minimum spanning tree. We show, by way of a correspondence analysis between the two sets of clusters, that they correspond to coarse-grained and fine-grained structure of the network relationships. The clusters obtained from the minimum spanning tree mapped to various conceptual domains and corresponded to physiological and syndromic states. Hierarchical ordering of these clusters identified six common themes based on interactions with physiological systems and common underlying substrates of age-associated morbidity and disease chronicity, functional disability, and quality of life. These findings provide a starting point for in-depth analyses of ageing that incorporate immunologic, metabolomic and proteomic biomarkers, and ultimately offer low-level-based typologies of healthy and unhealthy ageing.
Teixeira, Andreia Sofia; Monteiro, Pedro T; Carriço, João A; Ramirez, Mário; Francisco, Alexandre P
2015-01-01
Trees, including minimum spanning trees (MSTs), are commonly used in phylogenetic studies. But, for the research community, it may be unclear that the presented tree is just a hypothesis, chosen from among many possible alternatives. In this scenario, it is important to quantify our confidence in both the trees and the branches/edges included in such trees. In this paper, we address this problem for MSTs by introducing a new edge betweenness metric for undirected and weighted graphs. This spanning edge betweenness metric is defined as the fraction of equivalent MSTs where a given edge is present. The metric provides a per edge statistic that is similar to that of the bootstrap approach frequently used in phylogenetics to support the grouping of taxa. We provide methods for the exact computation of this metric based on the well known Kirchhoff's matrix tree theorem. Moreover, we implement and make available a module for the PHYLOViZ software and evaluate the proposed metric concerning both effectiveness and computational performance. Analysis of trees generated using multilocus sequence typing data (MLST) and the goeBURST algorithm revealed that the space of possible MSTs in real data sets is extremely large. Selection of the edge to be represented using bootstrap could lead to unreliable results since alternative edges are present in the same fraction of equivalent MSTs. The choice of the MST to be presented, results from criteria implemented in the algorithm that must be based in biologically plausible models.
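The spanning edge betweenness defined above is the fraction of equivalent minimum spanning trees that contain a given edge. The record computes it exactly via Kirchhoff's matrix-tree theorem; the Python sketch below instead brute-forces the quantity on a toy graph by enumerating every spanning tree, which is only feasible for very small graphs but makes the definition concrete.

```python
# Brute-force illustration of spanning edge betweenness: the fraction of
# minimum spanning trees that contain a given edge. The paper computes this
# exactly via Kirchhoff's matrix-tree theorem; this sketch simply enumerates
# all spanning trees of a small weighted graph.
from itertools import combinations

def spanning_edge_betweenness(n, edges):
    # edges: list of (u, v, weight); vertices are 0..n-1
    def is_spanning_tree(subset):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v, _ in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False                        # cycle
            parent[ru] = rv
        return len({find(i) for i in range(n)}) == 1  # connected

    trees = [s for s in combinations(edges, n - 1) if is_spanning_tree(s)]
    best = min(sum(w for *_, w in t) for t in trees)
    msts = [t for t in trees if sum(w for *_, w in t) == best]
    return {(u, v): sum(e in t for t in msts) / len(msts)
            for e in edges for u, v, _ in [e]}

edges = [(0, 1, 1), (1, 2, 1), (0, 2, 1), (2, 3, 2)]
print(spanning_edge_betweenness(4, edges))
```

On this toy graph the three weight-1 edges each appear in two of the three equivalent MSTs, while the bridge edge appears in all of them.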
Peak-Seeking Optimization of Spanwise Lift Distribution for Wings in Formation Flight
NASA Technical Reports Server (NTRS)
Hanson, Curtis E.; Ryan, Jack
2012-01-01
A method is presented for the in-flight optimization of the lift distribution across the wing for minimum drag of an aircraft in formation flight. The usual elliptical distribution that is optimal for a given wing with a given span is no longer optimal for the trailing wing in a formation due to the asymmetric nature of the encountered flow field. Control surfaces along the trailing edge of the wing can be configured to obtain a non-elliptical profile that is more optimal in terms of minimum combined induced and profile drag. Due to the difficult-to-predict nature of formation flight aerodynamics, a Newton-Raphson peak-seeking controller is used to identify in real time the best aileron and flap deployment scheme for minimum total drag. Simulation results show that the peak-seeking controller correctly identifies an optimal trim configuration that provides additional drag savings above those achieved with conventional anti-symmetric aileron trim.
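As a rough illustration of the peak-seeking idea described above (not the flight controller itself), the Python sketch below runs Newton-Raphson steps on a toy quadratic drag model with measurement noise, estimating the gradient and curvature by finite differences of perturbed measurements; the drag model, perturbation size, and iteration count are arbitrary assumptions.

```python
# Hedged sketch of Newton-Raphson peak seeking: perturb the trim parameter,
# estimate the local gradient and curvature of measured drag by finite
# differences, and step toward the minimum. The real controller, trim
# surfaces, and aerodynamics are not reproduced; the drag model is a toy.
import numpy as np

rng = np.random.default_rng(0)

def measured_drag(u):
    return 0.02 * (u - 1.3) ** 2 + 0.5 + 1e-4 * rng.normal()  # toy model

def peak_seek(u0, delta=0.2, iterations=20):
    u = u0
    for _ in range(iterations):
        f_minus = measured_drag(u - delta)
        f_0 = measured_drag(u)
        f_plus = measured_drag(u + delta)
        grad = (f_plus - f_minus) / (2 * delta)
        curv = (f_plus - 2 * f_0 + f_minus) / delta ** 2
        if curv > 0:
            u -= grad / curv                  # Newton-Raphson step
    return u

print(peak_seek(u0=0.0))   # should settle near the minimum-drag trim (~1.3)
```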
On the measure of conformal difference between Euclidean and Lobachevsky spaces
NASA Astrophysics Data System (ADS)
Zorich, Vladimir A.
2011-12-01
Euclidean space R^n and Lobachevsky space H^n are known to be not equivalent either conformally or quasiconformally. In this work we give exact asymptotics of the critical order of growth at infinity for the quasiconformality coefficient of a diffeomorphism f: R^n → H^n for which such a mapping f is possible. We also consider the general case of immersions f: M^n → N^n of conformally parabolic Riemannian manifolds. Bibliography: 17 titles.
Euclidean scalar field theory in the bilocal approximation
NASA Astrophysics Data System (ADS)
Nagy, S.; Polonyi, J.; Steib, I.
2018-04-01
The blocking step of the renormalization group method is usually carried out by restricting it to fluctuations and to local blocked action. The tree-level, bilocal saddle point contribution to the blocking, defined by the infinitesimal decrease of the sharp cutoff in momentum space, is followed within the three dimensional Euclidean ϕ6 model in this work. The phase structure is changed, new phases and relevant operators are found, and certain universality classes are restricted by the bilocal saddle point.
Ultrametric properties of the attractor spaces for random iterated linear function systems
NASA Astrophysics Data System (ADS)
Buchovets, A. G.; Moskalev, P. V.
2018-03-01
We investigate attractors of random iterated linear function systems as independent spaces embedded in the ordinary Euclidean space. The introduction on the set of attractor points of a metric that satisfies the strengthened triangle inequality makes this space ultrametric. The properties of disconnectedness and hierarchical self-similarity inherent in ultrametric spaces then make it possible to define an attractor as a fractal. We note that a rigorous proof of these properties in the case of an ordinary Euclidean space is very difficult.
NASA Technical Reports Server (NTRS)
Johnston, J. F.
1979-01-01
Active wing load alleviation to extend the wing span by 5.8 percent, giving a 3 percent reduction in cruise drag, is covered. The active wing load alleviation used symmetric motions of the outboard ailerons for maneuver load control (MLC) and elastic mode suppression (EMS), and stabilizer motions for gust load alleviation (GLA). Slow maneuvers verified the MLC, and open and closed-loop flight frequency response tests verified the aircraft dynamic response to symmetric aileron and stabilizer drives as well as the active system performance. Flight tests in turbulence verified the effectiveness of the active controls in reducing gust-induced wing loads. It is concluded that active wing load alleviation/extended span is proven in the L-1011 and is ready for application to airline service; it is a very practical way to obtain the increased efficiency of a higher aspect ratio wing with minimum structural impact.
NASA Technical Reports Server (NTRS)
Rinsland, Curtis P.; Mahieu, Emmanuel; Chiou, Linda; Herbin, Herve
2009-01-01
Atmospheric CH3OH (methanol) free tropospheric (2.09-14-km altitude) time series spanning 22 years has been analyzed on the basis of high-spectral resolution infrared solar absorption spectra of the strong ν8 band recorded from the U.S. National Solar Observatory on Kitt Peak (latitude 31.9degN, 111.6degW, 2.09-km altitude) with a 1-m Fourier transform spectrometer (FTS). The measurements span October 1981 to December 2003 and are the first long time series of CH3OH measurements obtained from the ground. The results were analyzed with SFIT2 version 3.93 and show a factor of three variations with season, a maximum at the beginning of July, a winter minimum, and no statistically significant long-term trend over the measurement time span.
NASA Technical Reports Server (NTRS)
Rinsland, Curtis P.; Mahieu, Emmanuel; Chiou, Linda; Herbin, Herve
2009-01-01
Atmospheric CH3OH (methanol) free tropospheric (2.09-14-km altitude) time series spanning 22 years has been analyzed on the basis of high-spectral resolution infrared solar absorption spectra of the strong ν8 band recorded from the U.S. National Solar Observatory on Kitt Peak (latitude 31.9degN, 111.6degW, 2.09-km altitude) with a 1-m Fourier transform spectrometer (FTS). The measurements span October 1981 to December 2003 and are the first long time series of CH3OH measurements obtained from the ground. The results were analyzed with SFIT2 version 3.93 and show a factor of three variations with season, a maximum at the beginning of July, a winter minimum, and no statistically significant long-term trend over the measurement time span.
Polyhedra and packings from hyperbolic honeycombs.
Pedersen, Martin Cramer; Hyde, Stephen T
2018-06-20
We derive more than 80 embeddings of 2D hyperbolic honeycombs in Euclidean 3 space, forming 3-periodic infinite polyhedra with cubic symmetry. All embeddings are "minimally frustrated," formed by removing just enough isometries of the (regular, but unphysical) 2D hyperbolic honeycombs [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] to allow embeddings in Euclidean 3 space. Nearly all of these triangulated "simplicial polyhedra" have symmetrically identical vertices, and most are chiral. The most symmetric examples include 10 infinite "deltahedra," with equilateral triangular faces, 6 of which were previously unknown and some of which can be described as packings of Platonic deltahedra. We describe also related cubic crystalline packings of equal hyperbolic discs in 3 space that are frustrated analogues of optimally dense hyperbolic disc packings. The 10-coordinated packings are the least "loosened" Euclidean embeddings, although frustration swells all of the hyperbolic disc packings to give less dense arrays than the flat penny-packing even though their unfrustrated analogues in [Formula: see text] are denser.
Generalising Ward's Method for Use with Manhattan Distances.
Strauss, Trudie; von Maltitz, Michael Johan
2017-01-01
The claim that Ward's linkage algorithm in hierarchical clustering is limited to use with Euclidean distances is investigated. In this paper, Ward's clustering algorithm is generalised to use with l1 norm or Manhattan distances. We argue that the generalisation of Ward's linkage method to incorporate Manhattan distances is theoretically sound and provide an example of where this method outperforms the method using Euclidean distances. As an application, we perform statistical analyses on languages using methods normally applied to biology and genetic classification. We aim to quantify differences in character traits between languages and use a statistical language signature based on relative bi-gram (sequence of two letters) frequencies to calculate a distance matrix between 32 Indo-European languages. We then use Ward's method of hierarchical clustering to classify the languages, using the Euclidean distance and the Manhattan distance. Results obtained from using the different distance metrics are compared to show that the Ward's algorithm characteristic of minimising intra-cluster variation and maximising inter-cluster variation is not violated when using the Manhattan metric.
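To make the data pipeline concrete, the hedged Python sketch below builds relative bi-gram frequency signatures for a few short sample texts and compares their Euclidean and Manhattan distance matrices; the hierarchical step shown uses SciPy's standard (Euclidean) Ward linkage, since the Manhattan generalisation of Ward's criterion is the paper's own contribution and is not reproduced here.

```python
# Sketch: relative bi-gram frequency "signature" for a text and pairwise
# distances under the Euclidean and Manhattan metrics. The clustering step
# shown is SciPy's standard (Euclidean) Ward linkage; the Manhattan
# generalisation of Ward's criterion is not reproduced here.
import numpy as np
from itertools import product
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
BIGRAMS = ["".join(p) for p in product(ALPHABET, repeat=2)]
INDEX = {b: i for i, b in enumerate(BIGRAMS)}

def bigram_signature(text):
    text = "".join(c for c in text.lower() if c in ALPHABET)
    counts = np.zeros(len(BIGRAMS))
    for a, b in zip(text, text[1:]):
        counts[INDEX[a + b]] += 1
    return counts / max(counts.sum(), 1)        # relative frequencies

texts = {"en": "the quick brown fox jumps over the lazy dog",
         "nl": "de snelle bruine vos springt over de luie hond",
         "de": "der schnelle braune fuchs springt ueber den faulen hund"}
X = np.array([bigram_signature(t) for t in texts.values()])

print(squareform(pdist(X, metric="euclidean")))
print(squareform(pdist(X, metric="cityblock")))
print(linkage(X, method="ward"))                # Euclidean Ward dendrogram data
```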
NASA Astrophysics Data System (ADS)
Celenk, Mehmet; Song, Yinglei; Ma, Limin; Zhou, Min
2003-05-01
A new algorithm that can be used to automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes the morphological watershed to extract boundaries of cells from their grey-level images. It generates a sequence of Euclidean distances by selecting pixels in clockwise direction on the boundary of the cell and calculating the Euclidean distances of the selected pixels from the centroid of the cell. A feature vector associated with each cell is then obtained by applying the auto-regressive moving-average (ARMA) model to the generated sequence of Euclidean distances. The clustering measure J3 = trace{Sw^(-1) Sm}, involving the within-class (Sw) and mixed (Sm) class-scattering matrices, is computed for both cell classes to provide an insight into the extent to which different cell classes in the training data are separated. Our test results suggest that the algorithm is highly accurate for the development of an interactive, computer-assisted diagnosis (CAD) tool.
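The following Python sketch illustrates two ingredients of the method in simplified form: the clockwise boundary-to-centroid Euclidean distance signature of a contour and the J3 = trace{Sw^(-1) Sm} separability measure. The watershed segmentation and the ARMA feature extraction of the record are not reproduced, and the contour and feature data below are synthetic.

```python
# Hedged sketch: (1) boundary-to-centroid Euclidean distance signature of a
# contour, and (2) the J3 = trace{Sw^(-1) Sm} class separability measure.
# Watershed segmentation and ARMA feature fitting are not reproduced.
import numpy as np

def radial_signature(boundary_xy):
    """boundary_xy: (N, 2) boundary points ordered clockwise."""
    centroid = boundary_xy.mean(axis=0)
    return np.linalg.norm(boundary_xy - centroid, axis=1)

def j3_measure(class_a, class_b):
    """class_a, class_b: (n_i, d) feature vectors of the two cell classes."""
    mean_a, mean_b = class_a.mean(axis=0), class_b.mean(axis=0)
    global_mean = np.vstack([class_a, class_b]).mean(axis=0)
    sw = np.cov(class_a, rowvar=False) + np.cov(class_b, rowvar=False)
    sb = (np.outer(mean_a - global_mean, mean_a - global_mean)
          + np.outer(mean_b - global_mean, mean_b - global_mean))
    sm = sw + sb                                  # mixed scatter
    return np.trace(np.linalg.inv(sw) @ sm)

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
contour = np.c_[np.cos(theta) * (1 + 0.1 * np.sin(5 * theta)),
                np.sin(theta) * (1 + 0.1 * np.sin(5 * theta))]
print(radial_signature(contour)[:5])

rng = np.random.default_rng(0)
print(j3_measure(rng.normal(0, 1, (50, 3)), rng.normal(2, 1, (50, 3))))
```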
Gravitational decoupling and the Picard-Lefschetz approach
NASA Astrophysics Data System (ADS)
Brown, Jon; Cole, Alex; Shiu, Gary; Cottrell, William
2018-01-01
In this work, we consider tunneling between nonmetastable states in gravitational theories. Such processes arise in various contexts, e.g., in inflationary scenarios where the inflaton potential involves multiple fields or multiple branches. They are also relevant for bubble wall nucleation in some cosmological settings. However, we show that the transition amplitudes computed using the Euclidean method generally do not approach the corresponding field theory limit as M_p → ∞. This implies that in the Euclidean framework, there is no systematic expansion in powers of G_N for such processes. Such considerations also carry over directly to no-boundary scenarios involving Hawking-Turok instantons. In this note, we illustrate this failure of decoupling in the Euclidean approach with a simple model of axion monodromy and then argue that the situation can be remedied with a Lorentzian prescription such as the Picard-Lefschetz theory. As a proof of concept, we illustrate with a simple model how tunneling transition amplitudes can be calculated using the Picard-Lefschetz approach.
t-topology on the n-dimensional Minkowski space
NASA Astrophysics Data System (ADS)
Agrawal, Gunjan; Shrivastava, Sampada
2009-05-01
In this paper, a topological study of the n-dimensional Minkowski space, n >1, with t-topology, denoted by Mt, has been carried out. This topology, unlike the usual Euclidean one, is more physically appealing being defined by means of the Lorentzian metric. It shares many topological properties with similar candidate topologies and it has the advantage of being first countable. Compact sets of Mt and continuous maps into Mt are studied using the notion of Zeno sequences besides characterizing those sets that have the same subspace topologies induced from the Euclidean and t-topologies on n-dimensional Minkowski space. A necessary and sufficient condition for a compact set in the Euclidean n-space to be compact in Mt is obtained, thereby proving that the n-cube, n >1, as a subspace of Mt, is not compact, while a segment on a timelike line is compact in Mt. This study leads to the nonsimply connectedness of Mt, for n =2. Further, Minkowski space with s-topology has also been dealt with.
Rating of Dynamic Coefficient for Simple Beam Bridge Design on High-Speed Railways
NASA Astrophysics Data System (ADS)
Diachenko, Leonid; Benin, Andrey; Smirnov, Vladimir; Diachenko, Anastasia
2018-06-01
The aim of this work is to improve the methodology for the dynamic computation of simple beam spans under the impact of high-speed trains. The research uses mathematical simulation based on numerical and analytical methods of structural mechanics. The article analyses the parameters of the effect of high-speed trains on simple beam bridge spans and suggests a technique for determining the dynamic coefficient applied to the live load. The reliability of the proposed methodology is confirmed by numerical simulation of high-speed train passage over spans at different speeds. The proposed algorithm of dynamic computation is based on a connection between the maximum acceleration of the span in the resonance mode of vibration and the main factors of the stress-strain state. The methodology allows determining both the maximum and minimum values of the main internal forces in the structure, which makes it possible to perform endurance tests. It is noted that the dynamic additions for the components of the stress-strain state (bending moments, transverse force and vertical deflections) differ. This condition necessitates a differentiated approach to evaluating the dynamic coefficients when performing design verification for limit states of groups I and II. Practical importance: the methodology for determining the dynamic coefficients allows carrying out the dynamic calculation and determining the main internal forces in simple beam spans without numerical simulation and direct dynamic analysis, which significantly reduces design labour costs.
Assessment of gene order computing methods for Alzheimer's disease
2013-01-01
Background Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher quality gene clustering patterns than most other clustering methods. However, there are few available gene order computing methods, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Further, their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performances of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods Using different distance formulas (Pearson distance, Euclidean distance, and squared Euclidean distance) and other conditions, gene orders were calculated by the ACO and GA (including standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results Compared to the GA methods tested in this study, ACO fits the AD microarray data the best when calculating gene order. In addition, the following features were revealed: different distance formulas generated a different quality of gene order, and the commonly used Pearson distance was not the best distance formula when used with both GA and ACO methods for AD microarray data. Conclusion Compared with Pearson distance and Euclidean distance, the squared Euclidean distance generated the best quality gene order computed by GA and ACO methods. PMID:23369541
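For reference, the short Python sketch below spells out the three pairwise distance formulas compared in the study (Pearson, Euclidean, and squared Euclidean) on toy expression profiles; the GA/ACO gene-order search itself is not reproduced.

```python
# Sketch of the three pairwise distance formulas compared in the study
# (Pearson, Euclidean, squared Euclidean); the GA/ACO gene-order search
# itself is not reproduced here.
import numpy as np

def pearson_distance(x, y):
    # 1 - Pearson correlation, a common "Pearson distance"
    return 1.0 - np.corrcoef(x, y)[0, 1]

def euclidean_distance(x, y):
    return float(np.linalg.norm(x - y))

def squared_euclidean_distance(x, y):
    return float(np.sum((x - y) ** 2))

rng = np.random.default_rng(1)
gene_a, gene_b = rng.normal(size=20), rng.normal(size=20)  # toy expression profiles
print(pearson_distance(gene_a, gene_b),
      euclidean_distance(gene_a, gene_b),
      squared_euclidean_distance(gene_a, gene_b))
```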
Exact and heuristic algorithms for Space Information Flow.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng
2018-01-01
Space Information Flow (SIF) is a new promising research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute the optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example besides the Pentagram network where SIF is strictly better than Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. Delaunay triangulation technique helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes, to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidate relay nodes and the flow rates on the connection links. The heuristic algorithm design is also based on Delaunay triangulation and linear programming techniques. The exact algorithm can achieve the optimal SIF solution with an exponential computational complexity, while the heuristic algorithm can achieve the sub-optimal SIF solution with a polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
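The sketch below illustrates only the candidate-relay generation step described above: a Delaunay triangulation of terminal nodes in 2-D Euclidean space, with triangle centroids taken as candidate relay locations (a simple stand-in heuristic); the min-cost multicast linear program that selects among the candidates is not reproduced.

```python
# Sketch of the candidate-relay generation step: Delaunay triangulation of
# the terminal nodes in 2-D Euclidean space, with triangle centroids taken
# as candidate relay locations (a simple stand-in heuristic). The min-cost
# multicast linear program that selects among them is not reproduced here.
import numpy as np
from scipy.spatial import Delaunay

terminals = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9], [0.5, -0.9]])
tri = Delaunay(terminals)

# one candidate relay per triangle: its centroid
candidates = terminals[tri.simplices].mean(axis=1)
print(tri.simplices)
print(candidates)
```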
Hydropathic self-organized criticality: a magic wand for protein physics.
Phillips, J C
2012-10-01
Self-organized criticality (SOC) is a popular concept that has been the subject of more than 3000 articles in the last 25 years. The characteristic signature of SOC is the appearance of self-similarity (power-law scaling) in observable properties. A characteristic observable protein property that describes protein-water interactions is the water-accessible (hydropathic) interfacial area of compacted globular protein networks. Here we show that hydropathic power-law (size- or length-scale-dependent) exponents derived from SOC enable theory to connect standard Web-based (BLAST) short-range amino acid (aa) sequence similarities to long-range aa sequence hydropathic roughening form factors that hierarchically describe evolutionary trends in water - membrane protein interactions. Our method utilizes hydropathic aa exponents that define a non-Euclidean metric realistically rooted in the atomic coordinates of 5526 protein segments. These hydropathic aa exponents thereby encapsulate universal (but previously only implicit) non-Euclidean long-range differential geometrical features of the Protein Data Bank. These hydropathic aa exponents easily organize small mutated aa sequence differences between human and proximate species proteins. For rhodopsin, the most studied transmembrane signaling protein associated with night vision, analysis shows that this approach separates Euclidean short- and non-Euclidean long-range aa sequence properties, and shows that they correlate with 96% success for humans, monkeys, cats, mice and rabbits. Proper application of SOC using hydropathic aa exponents promises unprecedented simplifications of exponentially complex protein sequence-structure-function problems, both conceptual and practical.
Kim, Hanvit; Minh Phuong Nguyen; Se Young Chun
2017-07-01
Biometrics such as ECG provides a convenient and powerful security tool to verify or identify an individual. However, one important drawback of biometrics is that it is irrevocable. In other words, biometrics cannot be re-used practically once it is compromised. Cancelable biometrics has been investigated to overcome this drawback. In this paper, we propose a cancelable ECG biometrics by deriving a generalized likelihood ratio test (GLRT) detector from a composite hypothesis testing in randomly projected domain. Since it is common to observe performance degradation for cancelable biometrics, we also propose a guided filtering (GF) with irreversible guide signal that is a non-invertibly transformed signal of ECG authentication template. We evaluated our proposed method using ECG-ID database with 89 subjects. Conventional Euclidean detector with original ECG template yielded 93.9% PD1 (detection probability at 1% FAR) while Euclidean detector with 10% compressed ECG (1/10 of the original data size) yielded 90.8% PD1. Our proposed GLRT detector with 10% compressed ECG yielded 91.4%, which is better than Euclidean with the same compressed ECG. GF with our proposed irreversible ECG template further improved the performance of our GLRT with 10% compressed ECG up to 94.3%, which is higher than Euclidean detector with original ECG. Lastly, we showed that our proposed cancelable ECG biometrics practically met cancelable biometrics criteria such as efficiency, re-usability, diversity and non-invertibility.
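As a simplified illustration of the cancelable-template idea (not the paper's GLRT detector or guided filtering), the Python sketch below enrolls a randomly projected ECG segment and verifies a probe with a plain Euclidean-distance test; the projection seed plays the role of the revocable key, and the threshold and signal sizes are arbitrary assumptions.

```python
# Sketch of a cancelable template via random projection and a plain
# Euclidean-distance verifier. The GLRT detector and guided filtering with
# an irreversible guide signal are not reproduced; threshold is arbitrary.
import numpy as np

def random_projection_matrix(out_dim, in_dim, seed):
    rng = np.random.default_rng(seed)        # the seed acts as the revocable key
    return rng.normal(size=(out_dim, in_dim)) / np.sqrt(out_dim)

def verify(enrolled_template, probe_beat, projection, threshold=1.0):
    probe_template = projection @ probe_beat
    return np.linalg.norm(enrolled_template - probe_template) < threshold

rng = np.random.default_rng(0)
beat = rng.normal(size=500)                       # toy single-beat ECG segment
P = random_projection_matrix(50, 500, seed=42)    # 10% "compressed" domain
enrolled = P @ beat
print(verify(enrolled, beat + 0.01 * rng.normal(size=500), P))
```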
Combined trellis coding with asymmetric MPSK modulation: An MSAT-X report
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1985-01-01
Traditionally, symmetric multiple phase-shift-keyed (MPSK) signal constellations, i.e., those with uniformly spaced signal points around the circle, have been used for both uncoded and coded systems. Although symmetric MPSK signal constellations are optimum for systems with no coding, the same is not necessarily true for coded systems. It appears that by designing the signal constellations to be asymmetric, one can, in many instances, obtain a significant performance improvement over the traditional symmetric MPSK constellations combined with trellis coding. The joint design of n/(n + 1) trellis codes and asymmetric 2^(n+1)-point MPSK is considered, which has a unity bandwidth expansion relative to uncoded 2^n-point symmetric MPSK. The asymptotic performance gains due to coding and asymmetry are evaluated in terms of the minimum free Euclidean distance of the trellis. A comparison of the maximum value of this performance measure with the minimum distance d_min of the uncoded system is an indication of the maximum reduction in required E_b/N_0 that can be achieved for arbitrarily small system bit-error rates. It is to be emphasized that the introduction of asymmetry into the signal set does not affect the bandwidth or power requirements of the system; hence, the above-mentioned improvements in performance come at little or no cost. MPSK signal sets in coded systems appear in the work of Divsalar.
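The numeric Python sketch below compares the minimum squared Euclidean distance between points of a symmetric 8-PSK constellation and of an arbitrarily chosen asymmetric one; it does not reproduce the report's quantity of interest, the minimum free Euclidean distance of the trellis code, but shows how constellation asymmetry changes the uncoded distance structure.

```python
# Numeric sketch: minimum squared Euclidean distance between points of a
# symmetric 8-PSK constellation and of an (arbitrarily chosen) asymmetric
# one. The report's quantity of interest is the minimum free Euclidean
# distance of the trellis code, which this uncoded comparison does not give.
import numpy as np

def min_squared_distance(phases):
    pts = np.exp(1j * np.asarray(phases))              # unit-energy MPSK points
    d2 = [abs(a - b) ** 2 for i, a in enumerate(pts) for b in pts[i + 1:]]
    return min(d2)

symmetric = 2 * np.pi * np.arange(8) / 8
asymmetric = symmetric + np.tile([0.0, 0.15], 4)       # hypothetical asymmetry
print(min_squared_distance(symmetric), min_squared_distance(asymmetric))
```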
NASA Astrophysics Data System (ADS)
Martucci, M.; Munini, R.; Boezio, M.; Di Felice, V.; Adriani, O.; Barbarino, G. C.; Bazilevskaya, G. A.; Bellotti, R.; Bongi, M.; Bonvicini, V.; Bottai, S.; Bruno, A.; Cafagna, F.; Campana, D.; Carlson, P.; Casolino, M.; Castellini, G.; De Santis, C.; Galper, A. M.; Karelin, A. V.; Koldashov, S. V.; Koldobskiy, S.; Krutkov, S. Y.; Kvashnin, A. N.; Leonov, A.; Malakhov, V.; Marcelli, L.; Marcelli, N.; Mayorov, A. G.; Menn, W.; Mergè, M.; Mikhailov, V. V.; Mocchiutti, E.; Monaco, A.; Mori, N.; Osteria, G.; Panico, B.; Papini, P.; Pearce, M.; Picozza, P.; Ricci, M.; Ricciarini, S. B.; Simon, M.; Sparvoli, R.; Spillantini, P.; Stozhkov, Y. I.; Vacchi, A.; Vannuccini, E.; Vasilyev, G.; Voronov, S. A.; Yurkin, Y. T.; Zampa, G.; Zampa, N.; Potgieter, M. S.; Raath, J. L.
2018-02-01
Precise measurements of the time-dependent intensity of the low-energy (<50 GeV) galactic cosmic rays (GCRs) are fundamental to test and improve the models that describe their propagation inside the heliosphere. In particular, data spanning different solar activity periods, i.e., from minimum to maximum, are needed to achieve comprehensive understanding of such physical phenomena. The minimum phase between solar cycles 23 and 24 was peculiarly long, extending up to the beginning of 2010 and followed by the maximum phase, reached during early 2014. In this Letter, we present proton differential spectra measured from 2010 January to 2014 February by the PAMELA experiment. For the first time the GCR proton intensity was studied over a wide energy range (0.08–50 GeV) by a single apparatus from a minimum to a maximum period of solar activity. The large statistics allowed the time variation to be investigated on a nearly monthly basis. Data were compared and interpreted in the context of a state-of-the-art three-dimensional model describing the GCRs propagation through the heliosphere.
Oversampling the Minority Class in the Feature Space.
Perez-Ortiz, Maria; Gutierrez, Pedro Antonio; Tino, Peter; Hervas-Martinez, Cesar
2016-09-01
The imbalanced nature of some real-world data is one of the current challenges for machine learning researchers. One common approach oversamples the minority class through convex combination of its patterns. We explore the general idea of synthetic oversampling in the feature space induced by a kernel function (as opposed to input space). If the kernel function matches the underlying problem, the classes will be linearly separable and synthetically generated patterns will lie on the minority class region. Since the feature space is not directly accessible, we use the empirical feature space (EFS) (a Euclidean space isomorphic to the feature space) for oversampling purposes. The proposed method is framed in the context of support vector machines, where the imbalanced data sets can pose a serious hindrance. The idea is investigated in three scenarios: 1) oversampling in the full and reduced-rank EFSs; 2) a kernel learning technique maximizing the data class separation to study the influence of the feature space structure (implicitly defined by the kernel function); and 3) a unified framework for preferential oversampling that spans some of the previous approaches in the literature. We support our investigation with extensive experiments over 50 imbalanced data sets.
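A hedged sketch of the oversampling idea follows: minority patterns are mapped into an empirical feature space (here obtained with kernel PCA, one Euclidean space isomorphic to a subspace of the kernel-induced feature space) and synthetic patterns are generated as convex combinations; the paper's exact EFS construction and SVM pipeline are not reproduced, and the data are synthetic.

```python
# Hedged sketch: convex-combination oversampling of the minority class in an
# empirical feature space approximated here with kernel PCA. The paper's
# exact EFS construction and SVM pipeline are not reproduced.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
majority = rng.normal(0.0, 1.0, size=(200, 2))
minority = rng.normal(2.5, 0.5, size=(20, 2))
X = np.vstack([majority, minority])

efs = KernelPCA(n_components=10, kernel="rbf", gamma=0.5).fit(X)
Z_min = efs.transform(minority)                 # minority patterns in the EFS

def oversample(Z, n_new, rng):
    i = rng.integers(0, len(Z), size=n_new)
    j = rng.integers(0, len(Z), size=n_new)
    lam = rng.uniform(size=(n_new, 1))
    return lam * Z[i] + (1.0 - lam) * Z[j]      # convex combinations

synthetic = oversample(Z_min, n_new=180, rng=rng)
print(synthetic.shape)
```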
NASA Technical Reports Server (NTRS)
Lamar, J. E.
1994-01-01
This program represents a subsonic aerodynamic method for determining the mean camber surface of trimmed noncoplanar planforms with minimum vortex drag. With this program, multiple surfaces can be designed together to yield a trimmed configuration with minimum induced drag at some specified lift coefficient. The method uses a vortex-lattice and overcomes previous difficulties with chord loading specification. A Trefftz plane analysis is used to determine the optimum span loading for minimum drag. The program then solves for the mean camber surface of the wing associated with this loading. Pitching-moment or root-bending-moment constraints can be employed at the design lift coefficient. Sensitivity studies of vortex-lattice arrangements have been made with this program and comparisons with other theories show generally good agreement. The program is very versatile and has been applied to isolated wings, wing-canard configurations, a tandem wing, and a wing-winglet configuration. The design problem solved with this code is essentially an optimization one. A subsonic vortex-lattice is used to determine the span load distribution(s) on bent lifting line(s) in the Trefftz plane. A Lagrange multiplier technique determines the required loading which is used to calculate the mean camber slopes, which are then integrated to yield the local elevation surface. The problem of determining the necessary circulation matrix is simplified by having the chordwise shape of the bound circulation remain unchanged across each span, though the chordwise shape may vary from one planform to another. The circulation matrix is obtained by calculating the spanwise scaling of the chordwise shapes. A chordwise summation of the lift and pitching-moment is utilized in the Trefftz plane solution on the assumption that the trailing wake does not roll up and that the general configuration has specifiable chord loading shapes. VLMD is written in FORTRAN for IBM PC series and compatible computers running MS-DOS. This program requires 360K of RAM for execution. The Ryan McFarland FORTRAN compiler and PLINK86 are required to recompile the source code; however, a sample executable is provided on the diskette. The standard distribution medium for VLMD is a 5.25 inch 360K MS-DOS format diskette. VLMD was originally developed for use on CDC 6000 series computers in 1976. It was originally ported to the IBM PC in 1986, and, after minor modifications, the IBM PC port was released in 1993.
Eigenvalues of the Wentzell-Laplace operator and of the fourth order Steklov problems
NASA Astrophysics Data System (ADS)
Xia, Changyu; Wang, Qiaoling
2018-05-01
We prove a sharp upper bound and a lower bound for the first nonzero eigenvalue of the Wentzell-Laplace operator on compact manifolds with boundary, and an isoperimetric inequality for the same eigenvalue in the case where the manifold is a bounded domain in a Euclidean space. We study some fourth order Steklov problems and obtain an isoperimetric upper bound for the first eigenvalue of them. We also find all the eigenvalues and eigenfunctions for two kinds of fourth order Steklov problems on a Euclidean ball.
Bayesian extraction of the parton distribution amplitude from the Bethe-Salpeter wave function
NASA Astrophysics Data System (ADS)
Gao, Fei; Chang, Lei; Liu, Yu-xin
2017-07-01
We propose a new numerical method to compute the parton distribution amplitude (PDA) from the Euclidean Bethe-Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe-Salpeter wave function in Euclidean space, which is an ill-posed inversion problem, via the maximum entropy method (MEM). The Nakanishi weight function as well as the corresponding light-front parton distribution amplitude (PDA) can be well determined. We confirm prior work on PDA computations, which was based on different methods.
Loop-quantum-gravity vertex amplitude.
Engle, Jonathan; Pereira, Roberto; Rovelli, Carlo
2007-10-19
Spin foam models are hoped to provide the dynamics of loop-quantum gravity. However, the most popular of these, the Barrett-Crane model, does not have the good boundary state space and there are indications that it fails to yield good low-energy n-point functions. We present an alternative dynamics that can be derived as a quantization of a Regge discretization of Euclidean general relativity, where second class constraints are imposed weakly. Its state space matches the SO(3) loop gravity one and it yields an SO(4)-covariant vertex amplitude for Euclidean loop gravity.
Mass-Related Dynamical Barriers in Triatomic Reactions
NASA Astrophysics Data System (ADS)
Yanao, T.; Koon, W. S.; Marsden, J. E.
2006-06-01
A methodology is given to determine the effect of different mass distributions for triatomic reactions using the geometry of shape space. Atomic masses are incorporated into the non-Euclidean shape space metric after the separation of rotations. Using the equations of motion in this non-Euclidean shape space, an averaged field of velocity-dependent fictitious forces is determined. This force field, as opposed to the force arising from the potential, dominates branching ratios of isomerization dynamics of a triatomic molecule. This methodology may be useful for qualitative prediction of branching ratios in general triatomic reactions.
Trading spaces: building three-dimensional nets from two-dimensional tilings
Castle, Toen; Evans, Myfanwy E.; Hyde, Stephen T.; Ramsden, Stuart; Robins, Vanessa
2012-01-01
We construct some examples of finite and infinite crystalline three-dimensional nets derived from symmetric reticulations of homogeneous two-dimensional spaces: elliptic (S2), Euclidean (E2) and hyperbolic (H2) space. Those reticulations are edges and vertices of simple spherical, planar and hyperbolic tilings. We show that various projections of the simplest symmetric tilings of those spaces into three-dimensional Euclidean space lead to topologically and geometrically complex patterns, including multiple interwoven nets and tangled nets that are otherwise difficult to generate ab initio in three dimensions. PMID:24098839
40 CFR Appendix - Tables to Subpart DDDDD of Part 63
Code of Federal Regulations, 2012 CFR
2012-07-01
.... Mercury 2.1E-07 lb per MMBtu of heat input 0.2E-06 Collect enough volume to meet an in-stack detection... time, use a span value of 20 ppmv. e. Dioxins/Furans 4 ng/dscm (TEQ) corrected to 7 percent oxygen 9.2E....2E-09 (TEQ) Collect a minimum of 1 dscm per run. 12. Units designed to burn gas 2 (other) gases a...
40 CFR Appendix - Tables to Subpart DDDDD of Part 63
Code of Federal Regulations, 2011 CFR
2011-07-01
.... Mercury 2.1E-07 lb per MMBtu of heat input 0.2E-06 Collect enough volume to meet an in-stack detection... time, use a span value of 20 ppmv. e. Dioxins/Furans 4 ng/dscm (TEQ) corrected to 7 percent oxygen 9.2E....2E-09 (TEQ) Collect a minimum of 1 dscm per run. 12. Units designed to burn gas 2 (other) gases a...
Maunder, E W (1851-1928) and Maunder, Mrs A S D
NASA Astrophysics Data System (ADS)
Murdin, P.
2000-11-01
Solar astronomers. Maunder became assistant for spectroscopic and solar observations at the Royal Observatory, Greenwich under GEORGE AIRY, aided by his wife. In 1890, while studying the numbers of sunspots over a 300 year time-span he noticed the scarcity of spots in the period 1645-1715. This so-called Maunder minimum was confirmed by Jack Eddy (1976) to be a real effect rather than simply a...
Extreme values and the level-crossing problem: An application to the Feller process
NASA Astrophysics Data System (ADS)
Masoliver, Jaume
2014-04-01
We review the question of the extreme values attained by a random process. We relate it to level crossings to one boundary (first-passage problems) as well as to two boundaries (escape problems). The extremes studied are the maximum, the minimum, the maximum absolute value, and the range or span. We specialize in diffusion processes and present detailed results for the Wiener and Feller processes.
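For a concrete feel of the quantities reviewed, the Monte Carlo sketch below estimates the mean maximum, minimum, maximum absolute value, and span (range) of a standard Wiener process on [0, 1]; the analytic level-crossing results for the Feller process are not reproduced, and the known value E[max] = sqrt(2/pi), about 0.798, serves as a sanity check.

```python
# Monte Carlo sketch of the extreme-value quantities discussed (maximum,
# minimum, maximum absolute value, and span/range) for a standard Wiener
# process on [0, T]; the analytic Feller-process results are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 1000, 20000
dt = T / n_steps

increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

maxima = paths.max(axis=1)
minima = paths.min(axis=1)
span = maxima - minima
max_abs = np.abs(paths).max(axis=1)

# E[max] for a Wiener process on [0, 1] is sqrt(2/pi), about 0.798
print(maxima.mean(), minima.mean(), span.mean(), max_abs.mean())
```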
Analyzing systemic risk using non-linear marginal expected shortfall and its minimum spanning tree
NASA Astrophysics Data System (ADS)
Song, Jae Wook; Ko, Bonggyun; Chang, Woojin
2018-02-01
The aim of this paper is to propose a new theoretical framework for analyzing the systemic risk using the marginal expected shortfall (MES) and its correlation-based minimum spanning tree (MST). First, we develop two parametric models of MES with their closed-form solutions based on the Capital Asset Pricing Model. Our models are derived from the non-symmetric quadratic form, which allows them to consolidate the non-linear relationship between the stock and market returns. Second, we discover evidence related to the utility of our models and the possible association between the non-linear relationship and the emergence of severe systemic risk by considering the US financial system as a benchmark. In this context, the evolution of MES can also be regarded as a reasonable proxy of systemic risk. Lastly, we analyze the structural properties of the systemic risk using the MST based on the computed series of MES. The topology of the MST conveys the presence of sectoral clustering and strong co-movements of systemic risk led by a few hubs during the crisis. Specifically, we discover that the Depositories are the majority sector leading the connections during the Non-Crisis period, whereas the Broker-Dealers are the majority during the Crisis period.
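The MST step of the framework can be sketched as follows: given a panel of MES series, correlations are converted to the usual metric d_ij = sqrt(2(1 - rho_ij)) and a minimum spanning tree is extracted. The Python sketch below does exactly that on synthetic data; the closed-form MES models of the paper are not reproduced.

```python
# Sketch of the MST step: given a panel of MES series (rows = time,
# columns = institutions), convert correlations to the metric
# d_ij = sqrt(2 * (1 - rho_ij)) and extract a minimum spanning tree.
# The data below are synthetic placeholders.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
mes = rng.normal(size=(500, 6))              # toy MES series for 6 institutions
rho = np.corrcoef(mes, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - rho))

g = nx.Graph()
n = dist.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        g.add_edge(i, j, weight=dist[i, j])

mst = nx.minimum_spanning_tree(g)
print(sorted(mst.edges(data="weight")))
```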
NASA Technical Reports Server (NTRS)
Pfenninger, Werner; Vemuru, Chandra S.
1988-01-01
The achievement of 70 percent laminar flow using modest boundary layer suction on the wings, empennage, nacelles, and struts of long-range LFC transports, combined with larger wing spans and lower span loadings, could make possible an unrefuelled range halfway around the world up to near sonic cruise speeds with large payloads. It is shown that supercritical LFC airfoils with undercut front and rear lower surfaces, an upper surface static pressure coefficient distribution with an extensive low supersonic flat rooftop, a far upstream supersonic pressure minimum, and a steep subsonic rear pressure rise with suction or a slotted cruise flap could alleviate sweep-induced crossflow and attachment-line boundary-layer instability. Wing-mounted superfans can reduce fuel consumption and engine tone noise.
Leaf seal for inner and outer casings of a turbine
Schroder, Mark Stewart; Leach, David
2002-01-01
A plurality of arcuate, circumferentially extending leaf seal segments form an annular seal spanning between annular sealing surfaces of inner and outer casings of a turbine. The ends of the adjoining seal segments have circumferential gaps to enable circumferential expansion and contraction of the segments. The end of a first segment includes a tab projecting into a recess of a second end of a second segment. Edges of the tab seal against the sealing surfaces of the inner and outer casings have a narrow clearance with opposed edges of the recess. An overlying cover plate spans the joint. Leakage flow is maintained at a minimum because of the reduced gap between the radially spaced edges of the tab and recess, while the seal segments retain the capacity to expand and contract circumferentially.
Topology for efficient information dissemination in ad-hoc networking
NASA Technical Reports Server (NTRS)
Jennings, E.; Okino, C. M.
2002-01-01
In this paper, we explore the information dissemination problem in ad-hoc wireless networks. First, we analyze the probability of successful broadcast, assuming that the nodes are uniformly distributed, the available area has a lower bound relative to the total number of nodes, and there is zero knowledge of the overall topology of the network. By showing that the probability of such events is small, we are motivated to extract good graph topologies to minimize the overall transmissions. Three algorithms are used to generate topologies of the network with guaranteed connectivity: the minimum radius graph, the relative neighborhood graph and the minimum spanning tree. Our simulation shows that the relative neighborhood graph has certain good graph properties, which makes it suitable for efficient information dissemination.
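The Python sketch below builds two of the topologies compared above, the Euclidean minimum spanning tree and the relative neighborhood graph (RNG), on random node positions; an edge (u, v) belongs to the RNG when no third node is closer to both u and v than they are to each other, and the check at the end confirms the known property that the MST is a subgraph of the RNG.

```python
# Sketch of two of the topologies compared: the Euclidean minimum spanning
# tree and the relative neighborhood graph (RNG) on random node positions.
# An edge (u, v) belongs to the RNG if no third node w is closer to both u
# and v than they are to each other; the MST is always a subgraph of the RNG.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
pts = rng.uniform(size=(30, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
n = len(pts)

rng_graph = nx.Graph()
complete = nx.Graph()
for u in range(n):
    for v in range(u + 1, n):
        complete.add_edge(u, v, weight=d[u, v])
        blocked = any(max(d[u, w], d[v, w]) < d[u, v]
                      for w in range(n) if w not in (u, v))
        if not blocked:
            rng_graph.add_edge(u, v, weight=d[u, v])

mst = nx.minimum_spanning_tree(complete)
print(rng_graph.number_of_edges(), mst.number_of_edges())
print(all(rng_graph.has_edge(u, v) for u, v in mst.edges()))   # MST is in RNG
```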
The Wall Interference of a Wind Tunnel of Elliptic Cross Section
NASA Technical Reports Server (NTRS)
Tani, Itiro; Sanuki, Matao
1944-01-01
The wall interference is obtained for a wind tunnel of elliptic section for the two cases of closed and open working sections. The approximate and exact methods used gave results in practically good agreement. Corresponding to the result given by Glauert for the case of the closed rectangular section, the interference is found to be a minimum for a ratio of minor to major axis of 1:√6. This, however, is true only for the case where the span of the airfoil is small in comparison with the width of the tunnel. For a longer airfoil the favorable ellipse is flatter. In the case of the open working section the circular shape gives the minimum interference.
On decoding of multi-level MPSK modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Gupta, Alok Kumar
1990-01-01
The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD reduces the decoding complexity drastically and is suboptimum. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.
Input relegation control for gross motion of a kinematically redundant manipulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unseren, M.A.
1992-10-01
This report proposes a method for resolving the kinematic redundancy of a serial link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations which relate a new vector variable quantifying the redundant degrees of freedom (DOF) to the joint velocities. The resulting augmented system yields a well specified solution for the joint velocities. Methods for selecting the redundant DOF quantifying variable and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum Euclidean norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which, according to the model, decouples the Cartesian space DOF and the redundant DOF.
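As a minimal illustration of a minimum Euclidean norm resolution of redundancy at the velocity level (the pseudoinverse solution, not the report's augmentation with a redundant-DOF variable), the sketch below computes qdot = J^+ xdot for an arbitrary numeric Jacobian with one redundant degree of freedom.

```python
# Minimal sketch of a minimum Euclidean norm resolution of kinematic
# redundancy at the velocity level: qdot = J^+ xdot, with J^+ the
# Moore-Penrose pseudoinverse. The report's augmentation-based method is
# not reproduced; the Jacobian below is an arbitrary numeric example.
import numpy as np

J = np.array([[1.0, 0.5, 0.2, 0.0],
              [0.0, 1.0, 0.3, 0.1],
              [0.2, 0.0, 1.0, 0.4]])      # 3 task DOF, 4 joints (1 redundant DOF)
xdot = np.array([0.1, -0.2, 0.05])        # desired end-effector velocity

qdot = np.linalg.pinv(J) @ xdot           # minimum-norm joint velocities
print(qdot, np.allclose(J @ qdot, xdot))
```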
Using optimal transport theory to estimate transition probabilities in metapopulation dynamics
Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.
2017-01-01
This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
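The hedged Python sketch below mirrors the constraint described above: counts at two time points are linked by a transportation plan that minimises the expected squared Euclidean cost, solved here as a small linear program, and transition probabilities are read off as the normalised rows of the optimal plan; the site coordinates and counts are made up, and equal total abundance at the two times is assumed.

```python
# Sketch: estimate transition probabilities between spatial sites from counts
# at two time points by solving a discrete optimal transport problem with a
# squared Euclidean cost (SciPy's linear programming solver). This mirrors
# the constraint described in the abstract but is not the authors' code.
import numpy as np
from scipy.optimize import linprog

sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
counts_t0 = np.array([30.0, 50.0, 20.0])
counts_t1 = np.array([40.0, 25.0, 35.0])      # same total abundance assumed

cost = ((sites[:, None, :] - sites[None, :, :]) ** 2).sum(-1).ravel()
n = len(sites)

# equality constraints: row sums = counts_t0, column sums = counts_t1
a_eq = np.zeros((2 * n, n * n))
for i in range(n):
    a_eq[i, i * n:(i + 1) * n] = 1.0          # flow leaving site i
    a_eq[n + i, i::n] = 1.0                   # flow arriving at site i
b_eq = np.concatenate([counts_t0, counts_t1])

res = linprog(cost, A_eq=a_eq, b_eq=b_eq, bounds=(0, None), method="highs")
plan = res.x.reshape(n, n)
transition_probs = plan / plan.sum(axis=1, keepdims=True)
print(np.round(transition_probs, 3))
```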
Interspecific utilisation of wax in comb building by honeybees
NASA Astrophysics Data System (ADS)
Hepburn, H. Randall; Radloff, Sarah E.; Duangphakdee, Orawan; Phaincharoen, Mananya
2009-06-01
Beeswaxes of honeybee species share some homologous neutral lipids, but species-specific differences remain. We analysed behavioural variation for wax choice in honeybees, calculated the Euclidean distances for different beeswaxes and assessed the relationship of Euclidean distances to wax choice. We tested the beeswaxes of Apis mellifera capensis, Apis florea, Apis cerana and Apis dorsata and the plant and mineral waxes Japan, candelilla, bayberry and ozokerite as sheets placed in colonies of A. m. capensis, A. florea and A. cerana. A. m. capensis accepted the four beeswaxes but removed Japan and bayberry wax and ignored candelilla and ozokerite. A. cerana colonies accepted the wax of A. cerana, A. florea and A. dorsata but rejected or ignored the wax of A. m. capensis and the plant and mineral waxes. A. florea colonies accepted A. cerana, A. dorsata and A. florea wax but rejected that of A. m. capensis. The Euclidean distances for the beeswaxes are consistent with currently prevailing phylogenies for Apis. Despite post-speciation chemical differences in the beeswaxes, they remain largely acceptable interspecifically while the plant and mineral waxes are not chemically close enough to beeswax for their acceptance.
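To make the distance computation concrete, the sketch below builds a Euclidean distance matrix and an average-linkage clustering from hypothetical wax composition vectors; the numbers are invented purely for illustration and are not the measured chemical profiles:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage

# Hypothetical composition vectors (fractions of a few lipid classes) for four
# beeswaxes; values are made up to show how Euclidean distances could be used.
waxes = {
    "A. mellifera": [0.35, 0.14, 0.27, 0.24],
    "A. cerana":    [0.33, 0.16, 0.28, 0.23],
    "A. dorsata":   [0.30, 0.18, 0.29, 0.23],
    "A. florea":    [0.31, 0.17, 0.30, 0.22],
}
names = list(waxes)
X = np.array([waxes[n] for n in names])
D = squareform(pdist(X, metric="euclidean"))   # pairwise Euclidean distance matrix
print(names)
print(np.round(D, 3))
tree = linkage(pdist(X), method="average")     # UPGMA-style clustering of the waxes
```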
On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.
Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba
2013-01-01
Existing dictionary learning algorithms are based on the assumption that the data are vectors in an Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L2-metric. However, in many applications, features and data often originated from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold that is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.
Texture classification using non-Euclidean Minkowski dilation
NASA Astrophysics Data System (ADS)
Florindo, Joao B.; Bruno, Odemir M.
2018-03-01
This study presents a new method to extract meaningful descriptors of gray-scale texture images using Minkowski morphological dilation based on the Lp metric. The proposed approach is motivated by the success previously achieved by Bouligand-Minkowski fractal descriptors on texture classification. In essence, such descriptors are directly derived from the morphological dilation of a three-dimensional representation of the gray-level pixels using the classical Euclidean metric. In this way, we generalize the dilation for different values of p in the Lp metric (Euclidean is a particular case when p = 2) and obtain the descriptors from the cumulated distribution of the distance transform computed over the texture image. The proposed method is compared to other state-of-the-art approaches (such as local binary patterns and textons for example) in the classification of two benchmark data sets (UIUC and Outex). The proposed descriptors outperformed all the other approaches in terms of rate of images correctly classified. The interesting results suggest the potential of these descriptors in this type of task, with a wide range of possible applications to real-world problems.
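A minimal sketch of the underlying operations, assuming a small binary mask instead of the three-dimensional gray-level surface used in the paper: a brute-force Lp distance transform followed by the cumulative dilation counts that play the role of descriptors. The function names are ours.

```python
import numpy as np

def lp_distance_transform(mask, p=2.0):
    """Brute-force distance transform of a small binary mask under the Lp
    (Minkowski) metric: distance from every pixel to the nearest foreground
    pixel.  Only a sketch of the idea; a practical implementation would need a
    far faster algorithm and the 3-D gray-level representation of the paper."""
    fg = np.argwhere(mask)                                  # foreground coordinates
    coords = np.indices(mask.shape).reshape(2, -1).T
    diff = np.abs(coords[:, None, :] - fg[None, :, :]).astype(float)
    dist = (diff ** p).sum(-1) ** (1.0 / p)                 # Lp distance to each foreground pixel
    return dist.min(axis=1).reshape(mask.shape)

def dilation_descriptors(mask, radii, p=2.0):
    """Cumulative count of pixels reached by Minkowski dilation at each radius."""
    dt = lp_distance_transform(mask, p)
    return np.array([(dt <= r).sum() for r in radii])

mask = np.zeros((32, 32), bool)
mask[8, 8] = mask[20, 24] = True
print(dilation_descriptors(mask, radii=[1, 2, 4, 8], p=1.5))
```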
New descriptor for skeletons of planar shapes: the calypter
NASA Astrophysics Data System (ADS)
Pirard, Eric; Nivart, Jean-Francois
1994-05-01
The mathematical definition of the skeleton as the locus of centers of maximal inscribed discs is a nondigitizable one. The idea presented in this paper is to incorporate the skeleton information and the chain-code of the contour into a single descriptor by associating to each point of a contour the center and radius of the maximal inscribed disc tangent at that point. This new descriptor is called the calypter. The encoding of a calypter is a three-stage algorithm: (1) chain coding of the contour; (2) Euclidean distance transformation; (3) climbing on the distance relief from each point of the contour towards the corresponding maximal inscribed disc center. Here we introduce an integer Euclidean distance transform called the holodisc distance transform. The major interest of this holodisc transform is to confer 8-connectivity to the isolevels of the generated distance relief, thereby allowing a climbing algorithm to proceed step by step towards the centers of the maximal inscribed discs. The calypter has a cyclic structure delivering high-speed access to the skeleton data. Its potential uses are in high-speed Euclidean mathematical morphology, shape processing, and analysis.
Translational Symmetry-Breaking for Spiral Waves
NASA Astrophysics Data System (ADS)
LeBlanc, V. G.; Wulff, C.
2000-10-01
Spiral waves are observed in numerous physical situations, ranging from Belousov-Zhabotinsky (BZ) chemical reactions, to cardiac tissue, to slime-mold aggregates. Mathematical models with Euclidean symmetry have recently been developed to describe the dynamic behavior (for example, meandering) of spiral waves in excitable media. However, no physical experiment is ever infinite in spatial extent, so the Euclidean symmetry is only approximate. Experiments on spiral waves show that inhomogeneities can anchor spirals and that boundary effects (for example, boundary drifting) become very important when the size of the spiral core is comparable to the size of the reacting medium. Spiral anchoring and boundary drifting cannot be explained by the Euclidean model alone. In this paper, we investigate the effects on spiral wave dynamics of breaking the translation symmetry while keeping the rotation symmetry. This is accomplished by introducing a small perturbation in the five-dimensional center bundle equations (describing Hopf bifurcation from one-armed spiral waves) which is SO(2)-equivariant but not equivariant under translations. We then study the effects of this perturbation on rigid spiral rotation, on quasi-periodic meandering and on drifting.
2011-01-01
Background The Prospective Space-Time scan statistic (PST) is widely used for the evaluation of space-time clusters of point event data. Usually a window of cylindrical shape is employed, with a circular or elliptical base in the space domain. Recently, the concept of Minimum Spanning Tree (MST) was applied to specify the set of potential clusters, through the Density-Equalizing Euclidean MST (DEEMST) method, for the detection of arbitrarily shaped clusters. The original map is cartogram transformed, such that the control points are spread uniformly. That method is quite effective, but the cartogram construction is computationally expensive and complicated. Results A fast method for the detection and inference of point data set space-time disease clusters is presented, the Voronoi Based Scan (VBScan). A Voronoi diagram is built for points representing population individuals (cases and controls). The number of Voronoi cell boundaries intercepted by the line segment joining two case points defines the Voronoi distance between those points. That distance is used to approximate the density of the heterogeneous population and build the Voronoi distance MST linking the cases. The successive removal of edges from the Voronoi distance MST generates sub-trees which are the potential space-time clusters. Finally, those clusters are evaluated through the scan statistic. Monte Carlo replications of the original data are used to evaluate the significance of the clusters. An application for dengue fever in a small Brazilian city is presented. Conclusions The ability to promptly detect space-time clusters of disease outbreaks, when the number of individuals is large, was shown to be feasible, due to the reduced computational load of VBScan. Instead of changing the map, VBScan modifies the metric used to define the distance between cases, without requiring the cartogram construction. Numerical simulations showed that VBScan has higher power of detection, sensitivity and positive predictive value than the Elliptic PST. Furthermore, as VBScan also incorporates topological information from the point neighborhood structure, in addition to the usual geometric information, it is more robust than purely geometric methods such as the elliptic scan. Those advantages were illustrated in a real setting for dengue fever space-time clusters. PMID:21513556
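The Voronoi distance can be approximated numerically by counting how often the nearest population point changes along the segment joining two cases; the following sketch (our own approximation, not the VBScan implementation) illustrates the idea:

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_distance(a, b, population_points, n_samples=200):
    """Approximate 'Voronoi distance' between two case points: the number of
    Voronoi cell boundaries crossed by the segment joining them, estimated by
    counting changes of the nearest population point along a dense sampling of
    the segment (sketch only)."""
    tree = cKDTree(population_points)
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    samples = (1 - t) * np.asarray(a, float) + t * np.asarray(b, float)
    _, owner = tree.query(samples)               # index of the nearest population point
    return int(np.count_nonzero(np.diff(owner)))

rng = np.random.default_rng(0)
pop = rng.uniform(0, 10, size=(300, 2))          # cases and controls together
print(voronoi_distance([1.0, 1.0], [9.0, 9.0], pop))
```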
Gravitational instantons, self-duality, and geometric flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourliot, F.; Estes, J.; Petropoulos, P. M.
2010-05-15
We discuss four-dimensional 'spatially homogeneous' gravitational instantons. These are self-dual solutions of Euclidean vacuum Einstein equations. They are endowed with a product structure R × M_3 leading to a foliation into three-dimensional subspaces evolving in Euclidean time. For a large class of homogeneous subspaces, the dynamics coincides with a geometric flow on the three-dimensional slice, driven by the Ricci tensor plus an so(3) gauge connection. The flowing metric is related to the vielbein of the subspace, while the gauge field is inherited from the anti-self-dual component of the four-dimensional Levi-Civita connection.
The Formalism of Quantum Mechanics Specified by Covariance Properties
NASA Astrophysics Data System (ADS)
Nisticò, G.
2009-03-01
The known methods, due for instance to G.W. Mackey and T.F. Jordan, which exploit the transformation properties with respect to the Euclidean and Galilean groups to determine the formalism of the Quantum Theory of a localizable particle, fail in the case that the considered transformations are not symmetries of the physical system. In the present work we show that the formalism of standard Quantum Mechanics for a particle without spin can be completely recovered by exploiting the covariance properties with respect to the group of Euclidean transformations, without requiring that these transformations are symmetries of the physical system.
Phylogenetic trees and Euclidean embeddings.
Layer, Mark; Rhodes, John A
2017-01-01
It was recently observed by de Vienne et al. (Syst Biol 60(6):826-832, 2011) that a simple square root transformation of distances between taxa on a phylogenetic tree allowed for an embedding of the taxa into Euclidean space. While the justification for this was based on a diffusion model of continuous character evolution along the tree, here we give a direct and elementary explanation for it that provides substantial additional insight. We use this embedding to reinterpret the differences between the NJ and BIONJ tree building algorithms, providing one illustration of how this embedding reflects tree structures in data.
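A numerical illustration of the square-root embedding, assuming a small additive (tree) distance matrix with branch lengths chosen by hand: classical multidimensional scaling applied to the square-rooted distances recovers Euclidean coordinates whose squared pairwise distances reproduce the tree distances.

```python
import numpy as np

def euclidean_embedding_from_tree_distances(D):
    """Classical MDS applied to the square roots of tree distances; a sketch of
    the embedding discussed in the abstract, performed numerically for a given
    additive distance matrix D."""
    S = np.sqrt(np.asarray(D, float))            # square-root transform
    n = len(S)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (S ** 2) @ J                  # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    w, V = w[::-1], V[:, ::-1]
    keep = w > 1e-10
    return V[:, keep] * np.sqrt(w[keep])         # coordinates of the taxa

# additive distances on a tiny 4-taxon tree (hand-picked branch lengths)
D = np.array([[0, 3, 6, 7],
              [3, 0, 7, 8],
              [6, 7, 0, 5],
              [7, 8, 5, 0]], float)
X = euclidean_embedding_from_tree_distances(D)
# squared distances between embedded taxa reproduce the tree distances
print(np.round(np.linalg.norm(X[:, None] - X[None, :], axis=-1) ** 2, 3))
```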
Constant curvature black holes in Einstein AdS gravity: Euclidean action and thermodynamics
NASA Astrophysics Data System (ADS)
Guilleminot, Pablo; Olea, Rodrigo; Petrov, Alexander N.
2018-03-01
We compute the Euclidean action for constant curvature black holes (CCBHs), as an attempt to associate thermodynamic quantities to these solutions of Einstein anti-de Sitter (AdS) gravity. CCBHs are gravitational configurations obtained by identifications along isometries of a D-dimensional globally AdS space, such that the Riemann tensor remains constant. Here, these solutions are interpreted as extended objects, which contain a (D−2)-dimensional de Sitter brane as a subspace. Nevertheless, the computation of the free energy for these solutions shows that they do not obey standard thermodynamic relations.
Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms
NASA Astrophysics Data System (ADS)
Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.
2017-09-01
Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed for problems concerning the development of methods and algorithms for solving reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.
Minimum spanning tree filtering of correlations for varying time scales and size of fluctuations
NASA Astrophysics Data System (ADS)
Kwapień, Jarosław; Oświecimka, Paweł; Forczek, Marcin; Drożdż, Stanisław
2017-05-01
Based on a recently proposed q-dependent detrended cross-correlation coefficient, ρ_q [J. Kwapień, P. Oświęcimka, and S. Drożdż, Phys. Rev. E 92, 052815 (2015), 10.1103/PhysRevE.92.052815], we generalize the concept of the minimum spanning tree (MST) by introducing a family of q-dependent minimum spanning trees (qMSTs) that are selective to cross-correlations between different fluctuation amplitudes and different time scales of multivariate data. They inherit this ability directly from the coefficients ρ_q, which are processed here to construct a distance matrix being the input to the MST-constructing Kruskal's algorithm. The conventional MST with detrending corresponds in this context to q = 2. In order to illustrate their performance, we apply the qMSTs to sample empirical data from the American stock market and discuss the results. We show that the qMST graphs can complement ρ_q in disentangling "hidden" correlations that cannot be observed in the MST graphs based on ρ_DCCA, and therefore, they can be useful in many areas where the multivariate cross-correlations are of interest. As an example, we apply this method to empirical data from the stock market and show that by constructing the qMSTs for a spectrum of q values we obtain more information about the correlation structure of the data than by using q = 2 only. More specifically, we show that two sets of signals that differ from each other statistically can give comparable trees for q = 2, while only by using the trees for q ≠ 2 do we become able to distinguish between these sets. We also show that a family of qMSTs for a range of q expresses the diversity of correlations in a manner resembling the multifractal analysis, where one computes a spectrum of the generalized fractal dimensions, the generalized Hurst exponents, or the multifractal singularity spectra: the more diverse the correlations are, the more variable the tree topology is for different q's. As regards the correlation structure of the stock market, our analysis exhibits that the stocks belonging to the same or similar industrial sectors are correlated via the fluctuations of moderate amplitudes, while the largest fluctuations often happen to synchronize in those stocks that do not necessarily belong to the same industry.
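Assuming the coefficients ρ_q have already been computed, the qMST construction reduces to converting them into distances and extracting a minimum spanning tree; the sketch below uses the common d = sqrt(2(1 − ρ)) mapping (an assumption on our part) and SciPy's MST routine:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def qmst_from_rho(rho_q):
    """Build a minimum spanning tree from a matrix of q-dependent cross-correlation
    coefficients rho_q, assumed to be pre-computed elsewhere.  Correlations are
    mapped to distances with d = sqrt(2 (1 - rho)) and passed to SciPy's MST
    routine; a sketch of the pipeline, not the authors' code."""
    rho = np.clip(np.asarray(rho_q, float), -1.0, 1.0)
    dist = np.sqrt(2.0 * (1.0 - rho))
    np.fill_diagonal(dist, 0.0)
    mst = minimum_spanning_tree(dist)                     # sparse matrix of tree edges
    rows, cols = mst.nonzero()
    weights = np.asarray(mst[rows, cols]).ravel()
    return list(zip(rows, cols, weights))

# toy 4-asset example with one strongly coupled pair
rho_example = np.array([[1.0, 0.8, 0.2, 0.1],
                        [0.8, 1.0, 0.3, 0.2],
                        [0.2, 0.3, 1.0, 0.6],
                        [0.1, 0.2, 0.6, 1.0]])
print(qmst_from_rho(rho_example))
```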
MOEs for Drug Interdiction: Simple Tests Expose Critical Flaws
1991-09-01
operations against illegal !rugs flowing into the U.S. Six candidate measures of effective (MOEs) are subjected to a structured assessment prcess that tests...supports spanning the decision space with a minimum number of MOEs. A small suite of MOEs reflecting relatively pure effects is preferred to long and...responds to changing consumer fashion--this year’s drug of choice may be overtaken by a new fad. Patterns of preference can vary widely by location as
Guo, Hao; Qin, Mengna; Chen, Junjie; Xu, Yong; Xiang, Jie
2017-01-01
High-order functional connectivity networks are rich in time information that can reflect dynamic changes in functional connectivity between brain regions. Accordingly, such networks are widely used to classify brain diseases. However, traditional methods for processing high-order functional connectivity networks generally include the clustering method, which reduces data dimensionality. As a result, such networks cannot be effectively interpreted in the context of neurology. Additionally, due to the large scale of high-order functional connectivity networks, it can be computationally very expensive to use complex network or graph theory to calculate certain topological properties. Here, we propose a novel method of generating a high-order minimum spanning tree functional connectivity network. This method increases the neurological significance of the high-order functional connectivity network, reduces network computing consumption, and produces a network scale that is conducive to subsequent network analysis. To ensure the quality of the topological information in the network structure, we used frequent subgraph mining technology to capture the discriminative subnetworks as features and combined this with quantifiable local network features. Then we applied a multikernel learning technique to the corresponding selected features to obtain the final classification results. We evaluated our proposed method using a data set containing 38 patients with major depressive disorder and 28 healthy controls. The experimental results showed a classification accuracy of up to 97.54%.
NASA Astrophysics Data System (ADS)
Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi
2018-01-01
On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of the matching is one of the main factors restricting the popularization of this method. To make the whole matching process more efficient, we propose a preprocessing method before the matching process: (1) we first construct a random k-d forest with the large-scale scale-invariant feature transform features in the images and combine this with the pHash method to obtain a value of relatedness, (2) we then construct a connected weighted graph based on the relatedness value, and (3) we finally obtain a planned sequence of adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points, with only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
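A sketch of the final planning step, assuming the pairwise relatedness values from the k-d forest/pHash stage are already available: relatedness is converted to edge weights and a minimum spanning tree selects a reduced set of image pairs to match, in order of decreasing relatedness. Function and variable names are ours.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def matching_order_from_relatedness(relatedness):
    """Turn a symmetric matrix of image-pair relatedness values (higher means
    more likely to overlap) into MST edges, giving a reduced set of pairs to
    match and a natural order for adding images.  How the relatedness values
    are computed (k-d forest + pHash) is not reproduced here."""
    R = np.asarray(relatedness, float)
    weights = 1.0 / (R + 1e-9)                   # strong relatedness -> small weight
    np.fill_diagonal(weights, 0.0)
    mst = minimum_spanning_tree(weights)
    i, j = mst.nonzero()
    return sorted(zip(i, j), key=lambda e: weights[e[0], e[1]])

R = np.array([[0, 9, 1, 1],
              [9, 0, 8, 2],
              [1, 8, 0, 7],
              [1, 2, 7, 0]], float)
print(matching_order_from_relatedness(R))       # only 3 pairs to match instead of 6
```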
Exact Boson-Fermion Duality on a 3D Euclidean Lattice
Chen, Jing-Yuan; Son, Jun Ho; Wang, Chao; ...
2018-01-05
The idea of statistical transmutation plays a crucial role in descriptions of the fractional quantum Hall effect. However, a recently conjectured duality between a critical boson and a massless two-component Dirac fermion extends this notion to gapless systems. This duality sheds light on highly nontrivial problems such as the half-filled Landau level, the superconductor-insulator transition, and surface states of strongly coupled topological insulators. Although this boson-fermion duality has undergone many consistency checks, it has remained unproven. Here, we describe the duality in a nonperturbative fashion using an exact UV mapping of partition functions on a 3D Euclidean lattice.
Supersymmetry and the rotation group
NASA Astrophysics Data System (ADS)
McKeon, D. G. C.
2018-04-01
A model invariant under a supersymmetric extension of the rotation group O(3) is mapped, using a stereographic projection, from the spherical surface S2 to two-dimensional Euclidean space. The resulting model is not translation invariant. This has the consequence that fields that are supersymmetric partners no longer have a degenerate mass. This degeneracy is restored once the radius of S2 goes to infinity, and the resulting supersymmetry transformation for the fields is now mass dependent. An analogous model on the surface S4 is introduced and its projection onto four-dimensional Euclidean space is examined. This model in turn suggests a supersymmetric model on (3 + 1)-dimensional Minkowski space.
Multi-stability in folded shells: non-Euclidean origami
NASA Astrophysics Data System (ADS)
Evans, Arthur
2015-03-01
Both natural and man-made structures benefit from having multiple mechanically stable states, from the quick snapping motion of hummingbird beaks to micro-textured surfaces with tunable roughness. Rather than discuss special fabrication techniques for creating bi-stability through material anisotropy, in this talk I will present several examples of how folding a structure can modify the energy landscape and thus lead to multiple stable states. Using ideas from origami and differential geometry, I will discuss how deforming a non-Euclidean surface can be done either continuously or discontinuously, and explore the effects that global constraints have on the ultimate stability of the surface.
NASA Astrophysics Data System (ADS)
Hou, Boyu; Song, Xingchang
1998-04-01
By compactifying the four-dimensional Euclidean space into the S2 × S2 manifold and introducing two topologically relevant Wess-Zumino terms to the Hn ≡ SL(n,C)/SU(n) nonlinear sigma model, we construct a Lagrangian form for the SU(n) self-dual Yang-Mills field, from which the self-dual equations follow as the Euler-Lagrange equations. The project was supported in part by the NSF Contract No. PHY-81-09110-A-01. One of the authors (X.C. SONG) was supported by a Fung King-Hey Fellowship through the Committee for Educational Exchange with China.
Combinatorial quantisation of the Euclidean torus universe
NASA Astrophysics Data System (ADS)
Meusburger, C.; Noui, K.
2010-12-01
We quantise the Euclidean torus universe via a combinatorial quantisation formalism based on its formulation as a Chern-Simons gauge theory and on the representation theory of the Drinfel'd double DSU(2). The resulting quantum algebra of observables is given by two commuting copies of the Heisenberg algebra, and the associated Hilbert space can be identified with the space of square integrable functions on the torus. We show that this Hilbert space carries a unitary representation of the modular group and discuss the role of modular invariance in the theory. We derive the classical limit of the theory and relate the quantum observables to the geometry of the torus universe.
Constructing financial network based on PMFG and threshold method
NASA Astrophysics Data System (ADS)
Nie, Chun-Xiao; Song, Fu-Tie
2018-04-01
Based on planar maximally filtered graph (PMFG) and threshold method, we introduced a correlation-based network named PMFG-based threshold network (PTN). We studied the community structure of PTN and applied ISOMAP algorithm to represent PTN in low-dimensional Euclidean space. The results show that the community corresponds well to the cluster in the Euclidean space. Further, we studied the dynamics of the community structure and constructed the normalized mutual information (NMI) matrix. Based on the real data in the market, we found that the volatility of the market can lead to dramatic changes in the community structure, and the structure is more stable during the financial crisis.
Absence of even-integer ζ-function values in Euclidean physical quantities in QCD
NASA Astrophysics Data System (ADS)
Jamin, Matthias; Miravitllas, Ramon
2018-04-01
At order α_s^4 in perturbative quantum chromodynamics, even-integer ζ-function values are present in Euclidean physical correlation functions like the scalar quark correlation function or the scalar gluonium correlator. We demonstrate that these contributions cancel when the perturbative expansion is expressed in terms of the so-called C-scheme coupling α̂_s, which has recently been introduced in Ref. [1]. It is furthermore conjectured that a ζ_4 term should arise in the Adler function at order α_s^5 in the MS-bar scheme, and that this term is expected to disappear in the C-scheme as well.
NASA Astrophysics Data System (ADS)
Nutku, Y.; Sheftel, M. B.
2014-02-01
This is a corrected and essentially extended version of the unpublished manuscript by Y Nutku and M Sheftel which contains new results. It is proposed to be published in honour of Y Nutku’s memory. All corrections and new results in sections 1, 2 and 4 are due to M Sheftel. We present new anti-self-dual exact solutions of the Einstein field equations with Euclidean and neutral (ultra-hyperbolic) signatures that admit only one rotational Killing vector. Such solutions of the Einstein field equations are determined by non-invariant solutions of Boyer-Finley (BF) equation. For the case of Euclidean signature such a solution of the BF equation was first constructed by Calderbank and Tod. Two years later, Martina, Sheftel and Winternitz applied the method of group foliation to the BF equation and reproduced the Calderbank-Tod solution together with new solutions for the neutral signature. In the case of Euclidean signature we obtain new metrics which asymptotically locally look like a flat space and have a non-removable singular point at the origin. In the case of ultra-hyperbolic signature there exist three inequivalent forms of metric. Only one of these can be obtained by analytic continuation from the Calderbank-Tod solution whereas the other two are new.
Symmetric log-domain diffeomorphic Registration: a demons-based approach.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2008-01-01
Modern morphometric studies use non-linear image registration to compare anatomies and perform group analysis. Recently, log-Euclidean approaches have contributed to promote the use of such computational anatomy tools by permitting simple computations of statistics on a rather large class of invertible spatial transformations. In this work, we propose a non-linear registration algorithm perfectly fit for log-Euclidean statistics on diffeomorphisms. Our algorithm works completely in the log-domain, i.e. it uses a stationary velocity field. This implies that we guarantee the invertibility of the deformation and have access to the true inverse transformation. This also means that our output can be directly used for log-Euclidean statistics without relying on the heavy computation of the log of the spatial transformation. As it is often desirable, our algorithm is symmetric with respect to the order of the input images. Furthermore, we use an alternate optimization approach related to Thirion's demons algorithm to provide a fast non-linear registration algorithm. First results show that our algorithm outperforms both the demons algorithm and the recently proposed diffeomorphic demons algorithm in terms of accuracy of the transformation while remaining computationally efficient.
Silent initial conditions for cosmological perturbations with a change of spacetime signature
NASA Astrophysics Data System (ADS)
Mielczarek, Jakub; Linsefors, Linda; Barrau, Aurelien
Recent calculations in loop quantum cosmology suggest that a transition from a Lorentzian to a Euclidean spacetime might take place in the very early universe. The transition point leads to a state of silence, characterized by a vanishing speed of light. This behavior can be interpreted as a decoupling of different space points, similar to the one characterizing the BKL phase. In this study, we address the issue of imposing initial conditions for the cosmological perturbations at the transition point between the Lorentzian and Euclidean phases. Motivated by the decoupling of space points, initial conditions characterized by a lack of correlations are investigated. We show that the “white noise” gains some support from analysis of the vacuum state in the deep Euclidean regime. Furthermore, the possibility of imposing the silent initial conditions at the trans-Planckian surface, characterized by a vanishing speed for the propagation of modes with wavelengths of the order of the Planck length, is studied. Such initial conditions might result from the loop deformations of the Poincaré algebra. The conversion of the silent initial power spectrum to a scale-invariant one is also examined.
L(2,1)-Labeling of the Strong Product of Paths and Cycles
2014-01-01
An L(2,1)-labeling of a graph G = (V, E) is a function f from the vertex set V(G) to the set of nonnegative integers such that the labels on adjacent vertices differ by at least two and the labels on vertices at distance two differ by at least one. The span of f is the difference between the largest and the smallest numbers in f(V). The λ-number of G, denoted by λ(G), is the minimum span over all L(2,1)-labelings of G. We consider the λ-number of P_n ⊠ C_m and, for n ≤ 11, the λ-number of C_n ⊠ C_m. We determine λ-numbers of graphs of interest with the exception of a finite number of graphs, and we improve the bounds on the λ-number of C_n ⊠ C_m, m ≥ 24 and n ≥ 26. PMID:24711734
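For concreteness, the following brute-force sketch checks the two L(2,1) conditions and finds the λ-number of a tiny graph; it is exponential in the number of vertices and only meant to illustrate the definition, with the cycle C_4 as the example (λ(C_4) = 4).

```python
from itertools import product

def is_L21_labeling(adj, labels):
    """Check the L(2,1) conditions: adjacent vertices differ by at least 2 and
    vertices at distance two differ by at least 1 (adj: vertex -> set of
    neighbours, labels: vertex -> nonnegative integer)."""
    for u in adj:
        if any(abs(labels[u] - labels[v]) < 2 for v in adj[u]):
            return False
        two_away = set().union(*(adj[v] for v in adj[u])) - adj[u] - {u}
        if any(labels[u] == labels[w] for w in two_away):
            return False
    return True

def lambda_number_bruteforce(adj):
    """Exhaustively search for the minimum span of an L(2,1)-labeling;
    usable only for very small graphs."""
    verts = sorted(adj)
    for span in range(0, 3 * len(verts)):
        for assignment in product(range(span + 1), repeat=len(verts)):
            if is_L21_labeling(adj, dict(zip(verts, assignment))):
                return span
    return None

# cycle C_4: the search returns 4
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(lambda_number_bruteforce(c4))
```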
Evolution and selection of river networks: Statics, dynamics, and complexity
Rinaldo, Andrea; Rigon, Riccardo; Banavar, Jayanth R.; Maritan, Amos; Rodriguez-Iturbe, Ignacio
2014-01-01
Moving from the exact result that drainage network configurations minimizing total energy dissipation are stationary solutions of the general equation describing landscape evolution, we review the static properties and the dynamic origins of the scale-invariant structure of optimal river patterns. Optimal channel networks (OCNs) are feasible optimal configurations of a spanning network mimicking landscape evolution and network selection through imperfect searches for dynamically accessible states. OCNs are spanning loopless configurations, however, only under precise physical requirements that arise under the constraints imposed by river dynamics—every spanning tree is exactly a local minimum of total energy dissipation. It is remarkable that dynamically accessible configurations, the local optima, stabilize into diverse metastable forms that are nevertheless characterized by universal statistical features. Such universal features explain very well the statistics of, and the linkages among, the scaling features measured for fluvial landforms across a broad range of scales regardless of geology, exposed lithology, vegetation, or climate, and differ significantly from those of the ground state, known exactly. Results are provided on the emergence of criticality through adaptative evolution and on the yet-unexplored range of applications of the OCN concept. PMID:24550264
Biolithography: Slime mould patterning of polyaniline
NASA Astrophysics Data System (ADS)
Berzina, Tatiana; Dimonte, Alice; Adamatzky, Andrew; Erokhin, Victor; Iannotta, Salvatore
2018-03-01
Slime mould Physarum polycephalum develops intricate patterns of protoplasmic networks when foraging on non-nutrient substrates. The networks are optimised for spanning larger spaces with minimum body mass and for quick transfer of nutrients and metabolites inside the slime mould's body. We hybridise the slime mould's networks with the conductive polymer polyaniline and thus produce micro-patterns of conductive networks. This unconventional lithographic method opens new perspectives in the development of living technology devices, biocompatible non-silicon hardware for applications in integrated circuits, bioelectronics, and biosensing.
Weighted network analysis of high-frequency cross-correlation measures
NASA Astrophysics Data System (ADS)
Iori, Giulia; Precup, Ovidiu V.
2007-03-01
In this paper we implement a Fourier method to estimate high-frequency correlation matrices from small data sets. The Fourier estimates are shown to be considerably less noisy than the standard Pearson correlation measures and thus capable of detecting subtle changes in correlation matrices with just a month of data. The evolution of correlation at different time scales is analyzed from the full correlation matrix and its minimum spanning tree representation. The analysis is performed by implementing measures from the theory of random weighted networks.
PANDA2: Program for Minimum Weight Design of Stiffened, Composite, Locally Buckled Panels
1986-09-01
a flat panel or a panel that spans less than about 45 degrees of circumference. However, in PANDA2 complete cylindrical shells can be treated by the...compression and that corresponding to maximum in-plane shear. It is usually best to treat complete cylindrical shells in this way rather than try to set up a...to treat panels, not complete cylindrical shells. Therefore, it is best applied to panels. In PANDA2 the curved edges of a cylindrical panel lie in
A Catalog of Visual Double and Multiple Stars With Eclipsing Components
2009-08-01
astrometric data were analyzed, resulting in new orbits for eight systems and new times of minimum light for a number of the eclipsing binaries. Some...analyses; one especially productive source is the study of the long- time behav- ior of the period of an EB. As might be expected, the longer the time ...span of conjunction time measurements, or times of min- imum light, the greater the chance of detecting a long-period orbit due to an additional
Optimal steering for kinematic vehicles with applications to spatially distributed agents
NASA Astrophysics Data System (ADS)
Brown, Scott; Praeger, Cheryl E.; Giudici, Michael
While there is no universal method to address control problems involving networks of autonomous vehicles, there exist a few promising schemes that apply to different specific classes of problems, which have attracted the attention of many researchers from different fields. In particular, one way to extend techniques that address problems involving a single autonomous vehicle to those involving teams of autonomous vehicles is to use the concept of Voronoi diagram. The Voronoi diagram provides a spatial partition of the environment the team of vehicles operate in, where each element of this partition is associated with a unique vehicle from the team. The partition induces a graph abstraction of the operating space that is in a one-to-one correspondence with the network abstraction of the team of autonomous vehicles; a fact that can provide both conceptual and analytical advantages during mission planning and execution. In this dissertation, we propose the use of a new class of Voronoi-like partitioning schemes with respect to state-dependent proximity (pseudo-) metrics rather than the Euclidean distance or other generalized distance functions, which are typically used in the literature. An important nuance here is that, in contrast to the Euclidean distance, state-dependent metrics can succinctly capture system theoretic features of each vehicle from the team (e.g., vehicle kinematics), as well as the environment-vehicle interactions, which are induced, for example, by local winds/currents. We subsequently illustrate how the proposed concept of state-dependent Voronoi-like partition can induce local control schemes for problems involving networks of spatially distributed autonomous vehicles by examining a sequential pursuit problem of a maneuvering target by a group of pursuers distributed in the plane. The construction of generalized Voronoi diagrams with respect to state-dependent metrics poses some significant challenges. First, the generalized distance metric may be a function of the direction of motion of the vehicle (anisotropic pseudo-distance function) and/or may not be expressible in closed form. Second, such problems fall under the general class of partitioning problems for which the vehicles' dynamics must be taken into account. The topology of the vehicle's configuration space may be non-Euclidean, for example, it may be a manifold embedded in a Euclidean space. In other words, these problems may not be reducible to generalized Voronoi diagram problems for which efficient construction schemes, analytical and/or computational, exist in the literature. This research effort pursues three main objectives. First, we present the complete solution of different steering problems involving a single vehicle in the presence of motion constraints imposed by the maneuverability envelope of the vehicle and/or the presence of a drift field induced by winds/currents in its vicinity. The analysis of each steering problem involving a single vehicle provides us with a state-dependent generalized metric, such as the minimum time-to-go/come. We subsequently use these state-dependent generalized distance functions as the proximity metrics in the formulation of generalized Voronoi-like partitioning problems. The characterization of the solutions of these state-dependent Voronoi-like partitioning problems using either analytical or computational techniques constitutes the second main objective of this dissertation.
The third objective of this research effort is to illustrate the use of the proposed concept of state-dependent Voronoi-like partition as a means for passing from control techniques that apply to problems involving a single vehicle to problems involving networks of spatially distributed autonomous vehicles. To this aim, we formulate the problem of sequential/relay pursuit of a maneuvering target by a group of spatially distributed pursuers and subsequently propose a distributed group pursuit strategy that directly derives from the solution of a state-dependent Voronoi-like partitioning problem. (Abstract shortened by UMI.)
Luo, He; Liang, Zhengzheng; Zhu, Moning; Hu, Xiaoxuan; Wang, Guoqiang
2018-01-01
Wind has a significant effect on the control of fixed-wing unmanned aerial vehicles (UAVs), resulting in changes in their ground speed and direction, which has an important influence on the results of integrated optimization of UAV task allocation and path planning. The objective of this integrated optimization problem changes from minimizing flight distance to minimizing flight time. In this study, the Euclidean distance between any two targets is expanded to the Dubins path length, considering the minimum turning radius of fixed-wing UAVs. According to the vector relationship between wind speed, UAV airspeed, and UAV ground speed, a method is proposed to calculate the flight time of a UAV between targets. On this basis, a variable-speed Dubins path vehicle routing problem (VS-DP-VRP) model is established with the purpose of minimizing the time required for UAVs to visit all the targets and return to the starting point. By designing a crossover operator and a mutation operator, the genetic algorithm is used to solve the model; the results show that an effective UAV task allocation and path planning solution under steady wind can be provided.
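The ground-speed calculation that underlies the flight-time objective can be sketched as follows, assuming straight-line segments (the Dubins turning arcs of the paper are ignored) and a steady two-dimensional wind; the function name and numbers are illustrative only.

```python
import numpy as np

def flight_time(p_from, p_to, airspeed, wind):
    """Time to fly the straight segment from p_from to p_to at a fixed airspeed
    in a steady 2-D wind.  The heading is chosen so that airspeed plus wind
    points along the desired track; a sketch of the vector relationship the
    abstract refers to, not the authors' model."""
    p_from, p_to, wind = (np.asarray(v, float) for v in (p_from, p_to, wind))
    d = p_to - p_from
    dist = np.linalg.norm(d)
    t_hat = d / dist                               # unit vector along the track
    w_along = wind @ t_hat                         # wind component along the track
    w_cross = wind - w_along * t_hat               # crosswind component
    w_cross_mag = np.linalg.norm(w_cross)
    if w_cross_mag > airspeed:
        raise ValueError("airspeed too low to hold this track in this wind")
    # ground speed = along-track wind + along-track component of the airspeed
    ground_speed = w_along + np.sqrt(airspeed**2 - w_cross_mag**2)
    if ground_speed <= 0:
        raise ValueError("cannot make progress along the track")
    return dist / ground_speed

# 10 km leg, 25 m/s airspeed, 5 m/s headwind with 3 m/s crosswind
print(flight_time([0.0, 0.0], [10_000.0, 0.0], airspeed=25.0, wind=[-5.0, 3.0]))
```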
Some constructions of biharmonic maps and Chen’s conjecture on biharmonic hypersurfaces
NASA Astrophysics Data System (ADS)
Ou, Ye-Lin
2012-04-01
We give several construction methods and use them to produce many examples of proper biharmonic maps including biharmonic tori of any dimension in Euclidean spheres (Theorem 2.2, Corollaries 2.3, 2.4 and 2.6), biharmonic maps between spheres (Theorem 2.9) and into spheres (Theorem 2.10) via orthogonal multiplications and eigenmaps. We also study biharmonic graphs of maps, derive the equation for a function whose graph is a biharmonic hypersurface in a Euclidean space, and give an equivalent formulation of Chen's conjecture on biharmonic hypersurfaces by using the biharmonic graph equation (Theorem 4.1) which paves a way for the analytic study of the conjecture.
Modified fuzzy c-means applied to a Bragg grating-based spectral imager for material clustering
NASA Astrophysics Data System (ADS)
Rodríguez, Aida; Nieves, Juan Luis; Valero, Eva; Garrote, Estíbaliz; Hernández-Andrés, Javier; Romero, Javier
2012-01-01
We have modified the Fuzzy C-Means algorithm for an application related to segmentation of hyperspectral images. The classical fuzzy c-means algorithm uses Euclidean distance for computing sample membership to each cluster. We have introduced a different distance metric, Spectral Similarity Value (SSV), in order to have a more convenient similarity measure for reflectance information. The SSV distance metric considers both magnitude difference (by the use of Euclidean distance) and spectral shape (by the use of Pearson correlation). Experiments confirmed that the introduction of this metric improves the quality of hyperspectral image segmentation, creating spectrally more dense clusters and increasing the number of correctly classified pixels.
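A sketch of an SSV-style distance combining the two ingredients named above; the exact way the Euclidean and correlation terms are combined below is one common convention and is assumed here rather than taken from the paper.

```python
import numpy as np

def spectral_similarity_value(x, y):
    """Distance combining magnitude difference (normalized Euclidean distance)
    and spectral shape (1 - r^2 from the Pearson correlation); an assumed,
    commonly used form of the SSV, shown for illustration."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    d_e = np.linalg.norm(x - y) / np.sqrt(len(x))        # magnitude term
    r = np.corrcoef(x, y)[0, 1]                          # shape term (Pearson)
    return np.sqrt(d_e**2 + (1.0 - r**2))

a = np.array([0.10, 0.20, 0.40, 0.30])
b = np.array([0.12, 0.22, 0.38, 0.28])   # similar shape and magnitude -> small SSV
c = np.array([0.40, 0.30, 0.20, 0.10])   # reversed shape -> large SSV
print(spectral_similarity_value(a, b), spectral_similarity_value(a, c))
```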
Spontaneous PT-Symmetry Breaking for Systems of Noncommutative Euclidean Lie Algebraic Type
NASA Astrophysics Data System (ADS)
Dey, Sanjib; Fring, Andreas; Mathanaranjan, Thilagarajah
2015-11-01
We propose a noncommutative version of the Euclidean Lie algebra E_2. Several types of non-Hermitian Hamiltonian systems expressed in terms of generic combinations of the generators of this algebra are investigated. Using the breakdown of the explicitly constructed Dyson maps as a criterion, we identify the domains in the parameter space in which the Hamiltonians have real energy spectra and determine the exceptional points signifying the crossover into the different types of spontaneously broken PT-symmetric regions with pairs of complex conjugate eigenvalues. We find exceptional points which remain invariant under the deformation as well as exceptional points becoming dependent on the deformation parameter of the algebra.
Hadronic vacuum polarization in QCD and its evaluation in Euclidean spacetime
NASA Astrophysics Data System (ADS)
de Rafael, Eduardo
2017-07-01
We discuss a new technique to evaluate integrals of QCD Green's functions in the Euclidean based on their Mellin-Barnes representation. We present as a first application the evaluation of the lowest-order hadronic vacuum polarization (HVP) contribution to the anomalous magnetic moment of the muon, (1/2)(g_μ − 2)_HVP ≡ a_μ^HVP. It is shown that with a precise determination of the slope and curvature of the HVP function at the origin from lattice QCD (LQCD), one can already obtain a result for a_μ^HVP which may serve as a test of the determinations based on experimental measurements of the e+e− annihilation cross section into hadrons.
Querying databases of trajectories of differential equations: Data structures for trajectories
NASA Technical Reports Server (NTRS)
Grossman, Robert
1989-01-01
One approach to qualitative reasoning about dynamical systems is to extract qualitative information by searching or making queries on databases containing very large numbers of trajectories. The efficiency of such queries depends crucially upon finding an appropriate data structure for trajectories of dynamical systems. Suppose that a large number of parameterized trajectories γ of a dynamical system evolving in R^N are stored in a database. Let η ⊂ R^N denote a parameterized path in Euclidean space, and let ‖·‖ denote a norm on the space of paths. A data structure is defined to represent trajectories of dynamical systems, and an algorithm is sketched which answers queries.
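A minimal sketch of such a data structure, assuming trajectories sampled on a common parameter grid and an L2-type norm on paths approximated pointwise; the class and method names are ours, not the author's.

```python
import numpy as np

class TrajectoryDB:
    """Toy trajectory database: each trajectory is an array of samples in R^N
    taken at common parameter values, and queries return the stored trajectories
    closest to a query path under a root-mean-square pointwise Euclidean norm."""
    def __init__(self):
        self._trajectories = []                  # list of (id, samples) pairs

    def insert(self, traj_id, samples):
        self._trajectories.append((traj_id, np.asarray(samples, float)))

    def nearest(self, query, k=1):
        q = np.asarray(query, float)
        dists = [(np.sqrt(np.mean(np.sum((s - q) ** 2, axis=1))), tid)
                 for tid, s in self._trajectories]
        return sorted(dists)[:k]

# trajectories of x' = a*x sampled on a common grid, for a few parameter values a
t = np.linspace(0.0, 1.0, 50)
db = TrajectoryDB()
for a in (-1.0, -0.5, 0.5, 1.0):
    db.insert(f"a={a}", np.column_stack([t, np.exp(a * t)]))
print(db.nearest(np.column_stack([t, np.exp(0.4 * t)]), k=2))
```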
Numerical analysis of interface debonding detection in bonded repair with Rayleigh waves
NASA Astrophysics Data System (ADS)
Xu, Ying; Li, BingCheng; Lu, Miaomiao
2017-01-01
This paper studied how to use the variation of the dispersion curves of Rayleigh wave group velocity to detect interfacial debonding damage between an FRP plate and a steel beam. Since an FRP-strengthened steel beam is a two-layer medium, Rayleigh wave velocity dispersion will occur. The interface debonding damage of an FRP-strengthened steel beam has an obvious effect on the Rayleigh wave velocity dispersion curve. The paper first puts forward the average Euclidean distance and the angle separation degree to describe the relationship between the different dispersion curves. Numerical results indicate that there is an approximate linear mapping relationship between the average Euclidean distance of the dispersion curves and the length of the interfacial debonding damage.
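The two curve-comparison measures named above can be sketched as follows, with made-up dispersion curves standing in for the computed ones; the exact definitions and normalizations in the paper may differ.

```python
import numpy as np

def average_euclidean_distance(curve_a, curve_b):
    """Average pointwise distance between two group-velocity dispersion curves
    sampled at the same frequencies (a sketch of the measure named above)."""
    a, b = np.asarray(curve_a, float), np.asarray(curve_b, float)
    return float(np.mean(np.abs(a - b)))

def angle_separation_degree(curve_a, curve_b):
    """Angle (in degrees) between the two curves viewed as vectors; small angles
    mean similar curve shapes."""
    a, b = np.asarray(curve_a, float), np.asarray(curve_b, float)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

freq = np.linspace(0.1, 2.0, 100)              # frequency axis (arbitrary units)
intact = 2900 - 150 * freq                     # invented dispersion curves (m/s)
debonded = 2900 - 180 * freq
print(average_euclidean_distance(intact, debonded),
      angle_separation_degree(intact, debonded))
```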
NASA Astrophysics Data System (ADS)
Jiang, Yicheng; Cheng, Ping; Ou, Yangkui
2001-09-01
A new method for target classification with high-range-resolution radar is proposed. It tries to use neural learning to obtain invariant subclass features of training range profiles. A modified Euclidean metric based on the Box-Cox transformation technique is investigated to improve Nearest Neighbor target classification. Classification experiments using real radar data from three different aircraft have demonstrated that the classification error can be reduced by 8% if the method proposed in this paper is chosen instead of the conventional method. The results of this paper have shown that, by choosing an optimized metric, it is indeed possible to reduce the classification error without increasing the number of samples.
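A generic sketch of the idea, assuming a Box-Cox transform applied to positive range-profile features followed by ordinary Euclidean nearest-neighbour classification; the synthetic data and the value of the Box-Cox parameter are purely illustrative and not the optimized metric of the paper.

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transform applied element-wise to positive features."""
    x = np.asarray(x, float)
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def nearest_neighbor_classify(train_X, train_y, test_X, lam=0.5):
    """1-NN classification where the Euclidean metric is computed on Box-Cox
    transformed range profiles; lam is a tunable parameter."""
    tr = box_cox(train_X, lam)
    te = box_cox(test_X, lam)
    d = np.linalg.norm(te[:, None, :] - tr[None, :, :], axis=-1)
    return np.asarray(train_y)[np.argmin(d, axis=1)]

rng = np.random.default_rng(1)
profiles_a = rng.gamma(2.0, 1.0, size=(20, 64))   # synthetic range profiles, class 0
profiles_b = rng.gamma(2.0, 2.0, size=(20, 64))   # synthetic range profiles, class 1
X = np.vstack([profiles_a, profiles_b])
y = np.array([0] * 20 + [1] * 20)
print(nearest_neighbor_classify(X, y, rng.gamma(2.0, 2.0, size=(3, 64))))
```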
Sexual dimorphism in the human face assessed by Euclidean distance matrix analysis.
Ferrario, V F; Sforza, C; Pizzini, G; Vogel, G; Miani, A
1993-01-01
The form of any object can be viewed as a combination of size and shape. A recently proposed method (Euclidean distance matrix analysis) can differentiate between size and shape differences. It has been applied to analyse the sexual dimorphism in facial form in a sample of 108 healthy young adults (57 men, 51 women). The face was wider and longer in men than in women. A global shape difference was demonstrated, the male face being more rectangular and the female face more square. Gender variations involved especially the lower third of the face and, in particular, the position of the pogonion relative to the other structures. PMID:8300436
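In outline, Euclidean distance matrix analysis compares all pairwise inter-landmark distances between group mean forms; the sketch below uses hypothetical landmark coordinates purely to show the mechanics, not the facial data of the study.

```python
import numpy as np
from scipy.spatial.distance import pdist

def form_matrix(landmarks):
    """All pairwise Euclidean distances among landmarks (the 'form matrix')."""
    return pdist(np.asarray(landmarks, float))

def form_difference(mean_form_a, mean_form_b):
    """Element-wise ratios of two form matrices; ratios far from 1 point to the
    inter-landmark distances that differ most between groups."""
    return form_matrix(mean_form_a) / form_matrix(mean_form_b)

# hypothetical 2-D mean landmark configurations (e.g. jaw corners, nose, forehead)
male_mean   = [[0.0, 0.0], [6.2, 0.1], [3.1, 4.3], [3.0, 7.4]]
female_mean = [[0.0, 0.0], [5.8, 0.1], [2.9, 4.0], [2.9, 6.8]]
print(np.round(form_difference(male_mean, female_mean), 3))
```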
Superintegrable three-body systems on the line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chanu, Claudia; Degiovanni, Luca; Rastelli, Giovanni
2008-11-15
We consider classical three-body interactions on a Euclidean line depending on the reciprocal distance of the particles and admitting four functionally independent quadratic in the momentum first integrals. These systems are multiseparable, superintegrable, and equivalent (up to rescalings) to a one-particle system in the three-dimensional Euclidean space. Common features of the dynamics are discussed. We show how to determine quantum symmetry operators associated with the first integrals considered here but do not analyze the corresponding quantum dynamics. The conformal multiseparability is discussed and examples of conformal first integrals are given. The systems considered here in generality include the Calogero, Wolfes, and other three-body interactions widely studied in mathematical physics.
NASA Astrophysics Data System (ADS)
Khadjiev, Djavvat; Ören, İdris; Pekşen, Ömer
Let E2 be the 2-dimensional Euclidean space, LSim(2) be the group of all linear similarities of E2 and LSim+(2) be the group of all orientation-preserving linear similarities of E2. The present paper is devoted to solutions of problems of global G-equivalence of paths and curves in E2 for the groups G = LSim(2),LSim+(2). Complete systems of global G-invariants of a path and a curve in E2 are obtained. Existence and uniqueness theorems are given. Evident forms of a path and a curve with the given global invariants are obtained.
Gliding flight in a jackdaw: a wind tunnel study.
Rosén, M; Hedenström, A
2001-03-01
We examined the gliding flight performance of a jackdaw Corvus monedula in a wind tunnel. The jackdaw was able to glide steadily at speeds between 6 and 11 m s(-1). The bird changed its wingspan and wing area over this speed range, and we measured the so-called glide super-polar, which is the envelope of fixed-wing glide polars over a range of forward speeds and sinking speeds. The glide super-polar was an inverted U-shape with a minimum sinking speed (V(ms)) at 7.4 m s(-1) and a speed for best glide (V(bg)) at 8.3 m s(-1). At the minimum sinking speed, the associated vertical sinking speed was 0.62 m s(-1). The relationship between the ratio of lift to drag (L:D) and airspeed showed an inverted U-shape with a maximum of 12.6 at 8.5 m s(-1). Wingspan decreased linearly with speed over the whole speed range investigated. The tail was spread extensively at low and moderate speeds; at speeds between 6 and 9 m s(-1), the tail area decreased linearly with speed, and at speeds above 9 m s(-1) the tail was fully furled. Reynolds number calculated with the mean chord as the reference length ranged from 38 000 to 76 000 over the speed range 6-11 m s(-1). Comparisons of the jackdaw flight performance were made with existing theory of gliding flight. We also re-analysed data on span ratios with respect to speed in two other bird species previously studied in wind tunnels. These data indicate that an equation for calculating the span ratio, which minimises the sum of induced and profile drag, does not predict the actual span ratios observed in these birds. We derive an alternative equation on the basis of the observed span ratios for calculating wingspan and wing area with respect to forward speed in gliding birds from information about body mass, maximum wingspan, maximum wing area and maximum coefficient of lift. These alternative equations can be used in combination with any model of gliding flight where wing area and wingspan are considered to calculate sinking rate with respect to forward speed.
Video-based face recognition via convolutional neural networks
NASA Astrophysics Data System (ADS)
Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming
2017-06-01
Face recognition has been widely studied recently, while video-based face recognition still remains a challenging task because of the low quality and large intra-class variation of video-captured face images. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; 2) Video-to-Still (V2S) face recognition, in contrast to the S2V scenario. A novel method is proposed in this paper to transfer still and video face images to a Euclidean space by a carefully designed convolutional neural network, after which Euclidean metrics are used to measure the distance between still and video images. Identities of still and video images that group as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed due to the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.
Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas
NASA Astrophysics Data System (ADS)
Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.
2017-12-01
Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of the SECoS is built up from the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using the normalized Hamming distance, is also calculated using the normalized Manhattan distance and the normalized Euclidean distance in order to compare the smallest error value and the best learning rate obtained. The accuracies obtained with the three distance formulas are evaluated using the mean absolute percentage error. In the training phase with several parameters, such as sensitivity threshold, error threshold, first learning rate, and second learning rate, it was found that the normalized Euclidean distance is more accurate than both the normalized Hamming distance and the normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error value is obtained with the normalized Manhattan distance, compared to the normalized Euclidean distance and the normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
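The distance and error measures being compared can be illustrated with a short sketch. The exact normalizations used in SECoS are not reproduced here; the definitions below (mean absolute difference, root-mean-square difference, and mean absolute percentage error, assuming inputs scaled to [0, 1]) are common choices and serve only as an illustration.

```python
import numpy as np

def normalized_manhattan(a, b):
    # Mean absolute difference; assumes inputs scaled to [0, 1].
    return np.abs(a - b).mean()

def normalized_euclidean(a, b):
    # Root-mean-square difference; assumes inputs scaled to [0, 1].
    return np.sqrt(((a - b) ** 2).mean())

def mape(actual, predicted):
    # Mean absolute percentage error, in percent.
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.abs((actual - predicted) / actual).mean()

x = np.array([0.2, 0.5, 0.9])   # input vector
w = np.array([0.1, 0.6, 0.7])   # stored connection weights
print(normalized_manhattan(x, w), normalized_euclidean(x, w))
print(mape([10, 20, 30], [11, 19, 33]))   # about 8.33
```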
Ichikawa, Kazuki; Morishita, Shinichi
2014-01-01
K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
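The equivalence claimed above holds because, once each vector is standardized to zero mean and unit variance, the squared Euclidean distance becomes an affine (hence monotone) function of the Pearson correlation distance: ||z(a) - z(b)||² = 2d(1 - r). A short numerical check of this identity, assuming population (1/d) standardization:

```python
import numpy as np

def zscore(v):
    # Standardize to zero mean and unit (population) variance.
    return (v - v.mean()) / v.std()

rng = np.random.default_rng(1)
a, b = rng.normal(size=50), rng.normal(size=50)

r = np.corrcoef(a, b)[0, 1]
pearson_dist = 1.0 - r
euclid_sq = np.sum((zscore(a) - zscore(b)) ** 2)

# ||z(a) - z(b)||^2 == 2 * d * (1 - r), so both distances rank pairs identically
# and yield the same k-means assignments for identical initial centroids.
print(np.isclose(euclid_sq, 2 * a.size * pearson_dist))   # True
```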
NASA Astrophysics Data System (ADS)
Tisdell, Christopher C.
2017-11-01
For over 50 years, the learning of teaching of a priori bounds on solutions to linear differential equations has involved a Euclidean approach to measuring the size of a solution. While the Euclidean approach to a priori bounds on solutions is somewhat manageable in the learning and teaching of the proofs involving second-order, linear problems with constant co-efficients, we believe it is not pedagogically optimal. Moreover, the Euclidean method becomes pedagogically unwieldy in the proofs involving higher-order cases. The purpose of this work is to propose a simpler pedagogical approach to establish a priori bounds on solutions by considering a different way of measuring the size of a solution to linear problems, which we refer to as the Uber size. The Uber form enables a simplification of pedagogy from the literature and the ideas are accessible to learners who have an understanding of the Fundamental Theorem of Calculus and the exponential function, both usually seen in a first course in calculus. We believe that this work will be of mathematical and pedagogical interest to those who are learning and teaching in the area of differential equations or in any of the numerous disciplines where linear differential equations are used.
Jat, Prahlad; Serre, Marc L
2016-12-01
Widespread contamination of surface water chloride is an emerging environmental concern. Consequently accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R 2 by 23.67% over Euclidean BME, and river BME maps are significantly different than Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles. Copyright © 2016. Published by Elsevier Ltd.
Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.
Zhang, Ying-Ying; Yang, Cai; Zhang, Ping
2017-05-01
In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on Riemannian Manifolds. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given background dictionary on image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on foreground dictionary, which is extracted from the first stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods-highlight-inhibition function, context-based saliency weighting, and the graph cut-are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
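The Log-Euclidean machinery underlying the kernels can be sketched as follows: the distance between two region covariance matrices is the Frobenius norm of the difference of their matrix logarithms, and a kernel can be built on top of it. The Gaussian kernel form and bandwidth below are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

def spd_log(A):
    # Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(A, B):
    # Log-Euclidean distance: Frobenius norm between the matrix logarithms.
    return np.linalg.norm(spd_log(A) - spd_log(B), ord="fro")

def log_euclidean_kernel(A, B, sigma=5.0):
    # A Gaussian kernel on the Log-Euclidean distance (one common choice).
    return np.exp(-log_euclidean_dist(A, B) ** 2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
def random_spd(d=5):
    X = rng.normal(size=(d, d))
    return X @ X.T + d * np.eye(d)   # guarantees positive definiteness

A, B = random_spd(), random_spd()
print(log_euclidean_dist(A, B), log_euclidean_kernel(A, B))
```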
Kim, Won Hwa; Singh, Vikas; Chung, Moo K.; Hinrichs, Chris; Pachauri, Deepti; Okonkwo, Ozioma C.; Johnson, Sterling C.
2014-01-01
Statistical analysis on arbitrary surface meshes such as the cortical surface is an important approach to understanding brain diseases such as Alzheimer’s disease (AD). Surface analysis may be able to identify specific cortical patterns that relate to certain disease characteristics or exhibit differences between groups. Our goal in this paper is to make group analysis of signals on surfaces more sensitive. To do this, we derive multi-scale shape descriptors that characterize the signal around each mesh vertex, i.e., its local context, at varying levels of resolution. In order to define such a shape descriptor, we make use of recent results from harmonic analysis that extend traditional continuous wavelet theory from the Euclidean to a non-Euclidean setting (i.e., a graph, mesh or network). Using this descriptor, we conduct experiments on two different datasets, the Alzheimer’s Disease NeuroImaging Initiative (ADNI) data and images acquired at the Wisconsin Alzheimer’s Disease Research Center (W-ADRC), focusing on individuals labeled as having Alzheimer’s disease (AD), mild cognitive impairment (MCI) and healthy controls. In particular, we contrast traditional univariate methods with our multi-resolution approach, which shows increased sensitivity and improved statistical power to detect group-level effects. We also provide an open source implementation. PMID:24614060
MISTICA: Minimum Spanning Tree-based Coarse Image Alignment for Microscopy Image Sequences
Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T.
2016-01-01
Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to re-order the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries. PMID:26415193
MISTICA: Minimum Spanning Tree-Based Coarse Image Alignment for Microscopy Image Sequences.
Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T
2016-11-01
Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to reorder the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by the way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries.
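A simplified sketch of the MST-based reordering idea (not the published MISTICA pipeline) is given below: pairwise dissimilarities between frames are computed, a minimum spanning tree is built over them, and frames are visited by traversing the tree from an automatically chosen anchor. The dissimilarity measure and the anchor heuristic used here are assumptions for illustration only.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

def mst_order(frames):
    """Order a stack of frames along a minimum spanning tree of dissimilarities.

    frames: (n, h, w) array. Returns an index ordering (a simplified sketch)."""
    n = frames.shape[0]
    flat = frames.reshape(n, -1).astype(float)
    flat -= flat.mean(axis=1, keepdims=True)
    flat /= np.linalg.norm(flat, axis=1, keepdims=True)
    dissim = 1.0 - flat @ flat.T                 # 1 - normalized correlation
    mst = minimum_spanning_tree(dissim)          # sparse (n, n) tree
    sym = mst + mst.T
    # Hypothetical anchor choice: node with smallest summed tree-edge weight.
    anchor = int(np.argmin(np.asarray(sym.sum(axis=1)).ravel()))
    order, _ = breadth_first_order(sym, anchor, directed=False)
    return order

rng = np.random.default_rng(0)
stack = rng.random((6, 32, 32))                  # toy image sequence
print(mst_order(stack))
```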
Aerodynamics of gliding flight in common swifts.
Henningsson, P; Hedenström, A
2011-02-01
Gliding flight performance and wake topology of a common swift (Apus apus L.) were examined in a wind tunnel at speeds between 7 and 11 m s(-1). The tunnel was tilted to simulate descending flight at different sink speeds. The swift varied its wingspan, wing area and tail span over the speed range. Wingspan decreased linearly with speed, whereas tail span decreased in a nonlinear manner. For each airspeed, the minimum glide angle was found. The corresponding sink speeds showed a curvilinear relationship with airspeed, with a minimum sink speed at 8.1 m s(-1) and a speed of best glide at 9.4 m s(-1). Lift-to-drag ratio was calculated for each airspeed and tilt angle combinations and the maximum for each speed showed a curvilinear relationship with airspeed, with a maximum of 12.5 at an airspeed of 9.5 m s(-1). Wake was sampled in the transverse plane using stereo digital particle image velocimetry (DPIV). The main structures of the wake were a pair of trailing wingtip vortices and a pair of trailing tail vortices. Circulation of these was measured and a model was constructed that showed good weight support. Parasite drag was estimated from the wake defect measured in the wake behind the body. Parasite drag coefficient ranged from 0.30 to 0.22 over the range of airspeeds. Induced drag was calculated and used to estimate profile drag coefficient, which was found to be in the same range as that previously measured on a Harris' hawk.
Minimum spanning tree analysis of the human connectome
Sommer, Iris E.; Bohlken, Marc M.; Tewarie, Prejaas; Draaisma, Laurijn; Zalesky, Andrew; Di Biase, Maria; Brown, Jesse A.; Douw, Linda; Otte, Willem M.; Mandl, René C.W.; Stam, Cornelis J.
2018-01-01
Abstract One of the challenges of brain network analysis is to directly compare network organization between subjects, irrespective of the number or strength of connections. In this study, we used minimum spanning tree (MST; a unique, acyclic subnetwork with a fixed number of connections) analysis to characterize the human brain network to create an empirical reference network. Such a reference network could be used as a null model of connections that form the backbone structure of the human brain. We analyzed the MST in three diffusion‐weighted imaging datasets of healthy adults. The MST of the group mean connectivity matrix was used as the empirical null‐model. The MST of individual subjects matched this reference MST for a mean 58%–88% of connections, depending on the analysis pipeline. Hub nodes in the MST matched with previously reported locations of hub regions, including the so‐called rich club nodes (a subset of high‐degree, highly interconnected nodes). Although most brain network studies have focused primarily on cortical connections, cortical–subcortical connections were consistently present in the MST across subjects. Brain network efficiency was higher when these connections were included in the analysis, suggesting that these tracts may be utilized as the major neural communication routes. Finally, we confirmed that MST characteristics index the effects of brain aging. We conclude that the MST provides an elegant and straightforward approach to analyze structural brain networks, and to test network topological features of individual subjects in comparison to empirical null models. PMID:29468769
Partial photoionization cross sections of NH4 and H3O Rydberg radicals
NASA Astrophysics Data System (ADS)
Velasco, A. M.; Lavín, C.; Martín, I.; Melin, J.; Ortiz, J. V.
2009-07-01
Photoionization cross sections for various Rydberg series that correspond to ionization channels of ammonium and oxonium Rydberg radicals from the outermost, occupied orbitals of their respective ground states are reported. These properties are known to be relevant in photoelectron dynamics studies. For the present calculations, the molecular-adapted quantum defect orbital method has been employed. A Cooper minimum has been found in the 3sa1-kpt2 Rydberg channel of NH4 beyond the ionization threshold, which provides the main contribution to the photoionization of this radical. However, no net minimum is found in the partial cross section of H3O despite the presence of minima in the 3sa1-kpe and 3sa1-kpa1 Rydberg channels. The complete oscillator strength distributions spanning the discrete and continuous regions of both radicals exhibit the expected continuity across the ionization threshold.
Annealing Ant Colony Optimization with Mutation Operator for Solving TSP.
Mohsen, Abdulqader M
2016-01-01
Ant Colony Optimization (ACO) has been successfully applied to solve a wide range of combinatorial optimization problems such as the minimum spanning tree, the traveling salesman problem, and the quadratic assignment problem. Basic ACO has the drawbacks of becoming trapped in local minima and a low convergence rate. Simulated annealing (SA) and the mutation operator provide the ability to escape local minima and global convergence, while local search speeds up convergence. Therefore, this paper proposes a hybrid ACO algorithm integrating the advantages of ACO, SA, the mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO. SA and the mutation operator were used to increase the ant population diversity from time to time, and the local search was used to exploit the current search area efficiently. The comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality.
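As an illustration of the annealing ingredient only (the full hybrid also involves pheromone updates, mutation and local search, which are not reproduced here), the sketch below applies a Metropolis acceptance rule to random 2-opt moves on a tour.

```python
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def annealed_two_opt(dist, T0=1.0, cooling=0.999, iters=5000, seed=0):
    """Simulated-annealing 2-opt for the TSP: a sketch of the annealing
    ingredient, not the full ACO+SA+mutation hybrid of the paper."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len, T = tour[:], cur_len, T0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]      # 2-opt reversal
        cand_len = tour_length(cand, dist)
        # Metropolis rule: always accept improvements, sometimes accept worse tours.
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / T):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        T = max(T * cooling, 1e-9)    # cool down, with a floor to avoid division by zero
    return best, best_len

pts = [(random.random(), random.random()) for _ in range(20)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
print(annealed_two_opt(dist)[1])
```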
A multifunctional force microscope for soft matter with in situ imaging
NASA Astrophysics Data System (ADS)
Roberts, Paul; Pilkington, Georgia A.; Wang, Yumo; Frechette, Joelle
2018-04-01
We present the multifunctional force microscope (MFM), a normal and lateral force-measuring instrument with in situ imaging. In the MFM, forces are calculated from the normal and lateral deflection of a cantilever as measured via fiber optic sensors. The motion of the cantilever is controlled normally by a linear micro-translation stage and a piezoelectric actuator, while the lateral motion of the sample is controlled by another linear micro-translation stage. The micro-translation stages allow for travel distances that span 25 mm with a minimum step size of 50 nm, while the piezo has a minimum step size of 0.2 nm, but a 100 μm maximum range. Custom-designed cantilevers allow for the forces to be measured over 4 orders of magnitude (from 50 μN to 1 N). We perform probe tack, friction, and hydrodynamic drainage experiments to demonstrate the sensitivity, versatility, and measurable force range of the instrument.
Gravitational instantons from minimal surfaces
NASA Astrophysics Data System (ADS)
Aliev, A. N.; Hortaçsu, M.; Kalayci, J.; Nutku, Y.
1999-02-01
Physical properties of gravitational instantons which are derivable from minimal surfaces in three-dimensional Euclidean space are examined using the Newman-Penrose formalism for Euclidean signature. The gravitational instanton that corresponds to the helicoid minimal surface is investigated in detail. This is a metric of Bianchi type VII0, or E(2), which admits a hidden symmetry due to the existence of a quadratic Killing tensor. It leads to a complete separation of variables in the Hamilton-Jacobi equation for geodesics, as well as in Laplace's equation for a massless scalar field. The scalar Green function can be obtained in closed form, which enables us to calculate the vacuum fluctuations of a massless scalar field in the background of this instanton.
Twistor Geometry of Null Foliations in Complex Euclidean Space
NASA Astrophysics Data System (ADS)
Taghavi-Chabert, Arman
2017-01-01
We give a detailed account of the geometric correspondence between a smooth complex projective quadric hypersurface Q^n of dimension n ≥ 3, and its twistor space PT, defined to be the space of all linear subspaces of maximal dimension of Q^n. Viewing complex Euclidean space CE^n as a dense open subset of Q^n, we show how local foliations tangent to certain integrable holomorphic totally null distributions of maximal rank on CE^n can be constructed in terms of complex submanifolds of PT. The construction is illustrated by means of two examples, one involving conformal Killing spinors, the other, conformal Killing-Yano 2-forms. We focus on the odd-dimensional case, and we treat the even-dimensional case only tangentially for comparison.
Canonical Drude Weight for Non-integrable Quantum Spin Chains
NASA Astrophysics Data System (ADS)
Mastropietro, Vieri; Porta, Marcello
2018-03-01
The Drude weight is a central quantity for the transport properties of quantum spin chains. The canonical definition of Drude weight is directly related to Kubo formula of conductivity. However, the difficulty in the evaluation of such expression has led to several alternative formulations, accessible to different methods. In particular, the Euclidean, or imaginary-time, Drude weight can be studied via rigorous renormalization group. As a result, in the past years several universality results have been proven for such quantity at zero temperature; remarkably, the proofs work for both integrable and non-integrable quantum spin chains. Here we establish the equivalence of Euclidean and canonical Drude weights at zero temperature. Our proof is based on rigorous renormalization group methods, Ward identities, and complex analytic ideas.
Duality of caustics in Minkowski billiards
NASA Astrophysics Data System (ADS)
Artstein-Avidan, S.; Florentin, D. I.; Ostrover, Y.; Rosen, D.
2018-04-01
In this paper we study convex caustics in Minkowski billiards. We show that for the Euclidean billiard dynamics in a planar smooth, centrally symmetric, strictly convex body K, for every convex caustic which K possesses, the ‘dual’ billiard dynamics in which the table is the Euclidean unit ball and the geometry that governs the motion is induced by the body K, possesses a dual convex caustic. Such a pair of caustics are dual in a strong sense, and in particular they have the same perimeter, Lazutkin parameter (both measured with respect to the corresponding geometries), and rotation number. We show moreover that for general Minkowski billiards this phenomenon fails, and one can construct a smooth caustic in a Minkowski billiard table which possesses no dual convex caustic.
Action with Acceleration II: Euclidean Hamiltonian and Jordan Blocks
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.
2013-10-01
The Euclidean action with acceleration has been analyzed in Ref. 1, and referred to henceforth as Paper I, for its Hamiltonian and path integral. In this paper, the state space of the Hamiltonian is analyzed for the case when it is pseudo-Hermitian (equivalent to a Hermitian Hamiltonian), as well as the case when it is inequivalent. The propagator is computed using both creation and destruction operators as well as the path integral. A state space calculation of the propagator shows the crucial role played by the dual state vectors that yields a result impossible to obtain from a Hermitian Hamiltonian. When it is not pseudo-Hermitian, the Hamiltonian is shown to be a direct sum of Jordan blocks.
The Facespan-the perceptual span for face recognition.
Papinutto, Michael; Lao, Junpeng; Ramon, Meike; Caldara, Roberto; Miellet, Sébastien
2017-05-01
In reading, the perceptual span is a well-established concept that refers to the amount of information that can be read in a single fixation. Surprisingly, despite extensive empirical interest in determining the perceptual strategies deployed to process faces and an ongoing debate regarding the factors or mechanism(s) underlying efficient face processing, the perceptual span for faces-the Facespan-remains undetermined. To address this issue, we applied the gaze-contingent Spotlight technique implemented in an old-new face recognition paradigm. This procedure allowed us to parametrically vary the amount of facial information available at a fixated location in order to determine the minimal aperture size at which face recognition performance plateaus. As expected, accuracy increased nonlinearly with spotlight size apertures. Analyses of Structural Similarity comparing the available information during spotlight and natural viewing conditions indicate that the Facespan-the minimum spatial extent of preserved facial information leading to comparable performance as in natural viewing-encompasses 7° of visual angle in our viewing conditions (size of the face stimulus: 15.6°; viewing distance: 70 cm), which represents 45% of the face. The present findings provide a benchmark for future investigations that will address if and how the Facespan is modulated by factors such as cultural, developmental, idiosyncratic, or task-related differences.
Technical and Economic Assessment of Span-Distributed Loading Cargo Aircraft Concepts
NASA Technical Reports Server (NTRS)
Johnston, W. M.; Muehlbauer, J. C.; Eudaily, R. R.; Farmer, B. T.; Monrath, J. F.; Thompson, S. G.
1976-01-01
A 700,000 kg (1,540,000-lb) aircraft with a cruise Mach number of 0.75 was found to be optimum for the specified mission parameters of a 272,155-kg (600,000-lb) payload, a 5560-km (3000-n.mi.) range, and an annual productivity of 113 billion revenue-ton km (67 billion revenue-ton n. mi.). The optimum 1990 technology level spanloader aircraft exhibited the minimum 15-year life-cycle costs, direct operating costs, and fuel consumption of all candidate versions. Parametric variations of wing sweep angle, thickness ratio, rows of cargo, and cargo density were investigated. The optimum aircraft had two parallel rows of 2.44 x 2.44-m (8 x 8-ft) containerized cargo with a density of 160 kg/cu m (10 lb/ft³) carried throughout the entire 101-m (331-ft) span of the constant chord, 22-percent thick, supercritical wing. Additional containers or outsized equipment were carried in the 24.4-m (80-ft) long fuselage compartment preceding the wing. Six 284,000-N (64,000-lb) thrust engines were mounted beneath the 0.7-rad (40-deg) swept wing. Flight control was provided by a 36.6-m (120-ft) span canard surface mounted atop the forward fuselage, by rudders on the wingtip verticals and by outboard wing flaperons.
Noninductively Driven Tokamak Plasmas at Near-Unity Toroidal Beta
Schlossberg, David J.; Bodner, Grant M.; Bongard, Michael W.; ...
2017-07-01
Access to and characterization of sustained, toroidally confined plasmas with a very high plasma-to-magnetic pressure ratio (β_t), low internal inductance, high elongation, and nonsolenoidal current drive is a central goal of present tokamak plasma research. Stable access to this desirable parameter space is demonstrated in plasmas with ultralow aspect ratio and high elongation. Local helicity injection provides nonsolenoidal sustainment, low internal inductance, and ion heating. Equilibrium analyses indicate β_t up to ~100% with a minimum |B| well spanning up to ~50% of the plasma volume.
Maintenance of a Minimum Spanning Forest in a Dynamic Planar Graph
1990-01-18
...(v): Delete the edge from v to its parent, thereby dividing the tree containing v into two trees. evert(v): Make v the root of its tree by reversing ... the path from v to the original root. find parent(v): Return the parent of v, or null if v is the root of its tree. find lca(u, v): Return the least ... given node (including the parent edge). The ordered set of edges adjacent to node v is called the edge list for v. For example, in our application we ...
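A naive parent-pointer sketch of the operations listed in this excerpt is given below; it illustrates their semantics only and runs in linear time per operation, unlike the logarithmic dynamic-tree structures such a report relies on.

```python
class RootedForest:
    """Naive parent-pointer forest illustrating cut-to-parent, evert,
    find_parent and find_lca. O(n) per operation; a semantic sketch only."""

    def __init__(self, n):
        self.parent = [None] * n       # parent[v] is None if v is a root

    def cut(self, v):
        # Delete the edge from v to its parent, splitting the tree in two.
        self.parent[v] = None

    def find_parent(self, v):
        return self.parent[v]

    def evert(self, v):
        # Make v the root by reversing parent pointers along the path to the root.
        prev, cur = None, v
        while cur is not None:
            nxt = self.parent[cur]
            self.parent[cur] = prev
            prev, cur = cur, nxt

    def _ancestors(self, v):
        path = []
        while v is not None:
            path.append(v)
            v = self.parent[v]
        return path

    def find_lca(self, u, v):
        # Least common ancestor, or None if u and v are in different trees.
        anc = set(self._ancestors(u))
        for w in self._ancestors(v):
            if w in anc:
                return w
        return None

f = RootedForest(6)
f.parent = [None, 0, 0, 1, 1, 2]              # a small tree rooted at 0
print(f.find_lca(3, 4), f.find_lca(3, 5))     # 1 0
f.evert(3)
print(f.find_parent(0))                       # 1, since the path 3-1-0 was reversed
```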
3D Visualization of Machine Learning Algorithms with Astronomical Data
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2016-01-01
We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
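A minimal sketch of the clustering step (pairwise Euclidean distances of catalog positions, followed by an MST whose edges can then be handed to a renderer such as Blender) might look as follows; the toy coordinates are placeholders for a real catalog.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def catalog_mst_edges(xyz):
    """Return the (i, j, length) edges of the Euclidean MST of a 3D catalog.

    xyz: (n, 3) array of positions; rendering is a separate concern."""
    dists = squareform(pdist(xyz))               # dense pairwise Euclidean distances
    tree = minimum_spanning_tree(dists).tocoo()
    return list(zip(tree.row.tolist(), tree.col.tolist(), tree.data.tolist()))

rng = np.random.default_rng(42)
galaxies = rng.uniform(0, 100, size=(50, 3))     # toy catalog positions
edges = catalog_mst_edges(galaxies)
print(len(edges), "edges; longest:", max(e[2] for e in edges))
```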
Noninductively Driven Tokamak Plasmas at Near-Unity Toroidal Beta.
Schlossberg, D J; Bodner, G M; Bongard, M W; Burke, M G; Fonck, R J; Perry, J M; Reusch, J A
2017-07-21
Access to and characterization of sustained, toroidally confined plasmas with a very high plasma-to-magnetic pressure ratio (β_{t}), low internal inductance, high elongation, and nonsolenoidal current drive is a central goal of present tokamak plasma research. Stable access to this desirable parameter space is demonstrated in plasmas with ultralow aspect ratio and high elongation. Local helicity injection provides nonsolenoidal sustainment, low internal inductance, and ion heating. Equilibrium analyses indicate β_{t} up to ∼100% with a minimum |B| well spanning up to ∼50% of the plasma volume.
1978-04-26
Geometry; Shipboard Heavy Weather Tiedown; Nose & Main Gear Load Deflection Curves; Main Wheel Tire Span vs Aircraft ... sustained taxi roll under conditions of a 40-knot headwind and for wheel roll over a 1-1/2 inch cable immediately after initial forward motion? 9. Planform ... rolling/roll-o... vertical takeoff versus VTO. Discuss various methods of approach (e.g., stern, offset, cross axial). Define minimum wheel-to-deck ...
Bessell, Paul R; Shaw, Darren J; Savill, Nicholas J; Woolhouse, Mark E J
2008-10-03
Models of Foot and Mouth Disease (FMD) transmission have assumed a homogeneous landscape across which Euclidean distance is a suitable measure of the spatial dependency of transmission. This paper investigated features of the landscape and their impact on transmission during the period of predominantly local spread which followed the implementation of the national movement ban during the 2001 UK FMD epidemic. In this study 113 farms diagnosed with FMD which had a known source of infection within 3 km (cases) were matched to 188 control farms which were either uninfected or infected at a later timepoint. Cases were matched to controls by Euclidean distance to the source of infection and farm size. Intervening geographical features and connectivity between the source of infection and case and controls were compared. Road distance between holdings, access to holdings, presence of forest, elevation change between holdings and the presence of intervening roads had no impact on the risk of local FMD transmission (p > 0.2). However the presence of linear features in the form of rivers and railways acted as barriers to FMD transmission (odds ratio = 0.507, 95% CI = 0.297-0.887, p = 0.018). This paper demonstrated that although FMD spread can generally be modelled using Euclidean distance and numbers of animals on susceptible holdings, the presence of rivers and railways has an additional protective effect reducing the probability of transmission between holdings.
An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion
Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng
2015-01-01
The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, nowadays the accuracy of WiFi indoor positioning cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: the offline acquisition and the online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition, and it forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints to shorten the positioning time. After that, it uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and further calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation and the improved joint probability introduces the logarithmic calculation to reduce the difference between probability values. Comparing the proposed algorithm, the Euclidean distance based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy. PMID:26334278
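A rough sketch of the three ingredients described above follows. The precise weighting, likelihood and fusion formulas of the paper are not reproduced; the standard-deviation weighting, Gaussian log-likelihood and convex combination below are assumptions chosen only to illustrate the idea.

```python
import numpy as np

def std_weighted_euclidean(rss, fp_mean, fp_std, eps=1.0):
    """Distance between an observed RSS vector and one fingerprint,
    down-weighting access points with noisy (high-std) signals (assumed form)."""
    w = 1.0 / (fp_std + eps)
    return np.sqrt(np.sum(w * (rss - fp_mean) ** 2))

def log_joint_probability(rss, fp_mean, fp_std, eps=1.0):
    # Sum of log Gaussian likelihoods over access points (assumed form).
    var = (fp_std + eps) ** 2
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (rss - fp_mean) ** 2 / (2 * var))

def weighted_fusion(pos_a, pos_b, w_a=0.5):
    # Final location as a convex combination of the two intermediate estimates.
    return w_a * np.asarray(pos_a) + (1 - w_a) * np.asarray(pos_b)

print(weighted_fusion((3.0, 4.0), (3.6, 4.4), w_a=0.6))   # [3.24 4.16]
```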
An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion.
Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng
2015-08-31
The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, nowadays the accuracy of WiFi indoor positioning cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: the offline acquisition and the online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition, and it forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints to shorten the positioning time. After that, it uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and further calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation and the improved joint probability introduces the logarithmic calculation to reduce the difference between probability values. Comparing the proposed algorithm, the Euclidean distance based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy.
The distance function effect on k-nearest neighbor classification for medical datasets.
Hu, Li-Yu; Huang, Min-Wei; Ke, Shih-Wen; Tsai, Chih-Fong
2016-01-01
K-nearest neighbor (k-NN) classification is a conventional non-parametric classifier, which has been used as the baseline classifier in many pattern classification problems. It is based on measuring the distances between the test data and each of the training data to decide the final classification output. Although the Euclidean distance function is the most widely used distance metric in k-NN, few studies have examined the classification performance of k-NN with different distance functions, especially for various medical domain problems. Therefore, the aim of this paper is to investigate whether the distance function can affect the k-NN performance over different medical datasets. Our experiments are based on three different types of medical datasets containing categorical, numerical, and mixed types of data, and four different distance functions, including Euclidean, cosine, Chi square, and Minkowski, are used during k-NN classification individually. The experimental results show that using the Chi square distance function is the best choice for the three different types of datasets. However, the cosine and Euclidean (and Minkowski) distance functions perform the worst over the mixed type of datasets. In this paper, we demonstrate that the chosen distance function can affect the classification accuracy of the k-NN classifier. For the medical domain datasets including the categorical, numerical, and mixed types of data, k-NN based on the Chi square distance function performs the best.
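The four distance functions and the k-NN decision rule can be sketched compactly; the chi-square form below is one common variant for non-negative features and may differ from the paper's exact definition.

```python
import numpy as np
from collections import Counter

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def minkowski(a, b, p=3):
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

def cosine(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def chi_square(a, b, eps=1e-12):
    # One common chi-square form for non-negative features (an assumption).
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

def knn_predict(X_train, y_train, x, k=3, metric=euclidean):
    d = np.array([metric(xt, x) for xt in X_train])
    nearest = y_train[np.argsort(d)[:k]]
    return Counter(nearest.tolist()).most_common(1)[0][0]   # majority vote

X = np.array([[1.0, 1.0], [1.2, 0.9], [8.0, 8.0], [7.8, 8.2]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([1.1, 1.0]), k=3, metric=chi_square))   # 0
```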
Zhang, Ying-Ying; Yang, Cai; Zhang, Ping
2017-08-01
In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on Riemannian Manifolds. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given background dictionary on image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on foreground dictionary, which is extracted from the first stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods-highlight-inhibition function, context-based saliency weighting, and the graph cut-are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
THREE PLANETS ORBITING WOLF 1061
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, D. J.; Wittenmyer, R. A.; Tinney, C. G.
We use archival HARPS spectra to detect three planets orbiting the M3 dwarf Wolf 1061 (GJ 628). We detect a 1.36 M⊕ minimum-mass planet with an orbital period P = 4.888 days (Wolf 1061b), a 4.25 M⊕ minimum-mass planet with orbital period P = 17.867 days (Wolf 1061c), and a likely 5.21 M⊕ minimum-mass planet with orbital period P = 67.274 days (Wolf 1061d). All of the planets are of sufficiently low mass that they may be rocky in nature. The 17.867 day planet falls within the habitable zone for Wolf 1061 and the 67.274 day planet falls just outside the outer boundary of the habitable zone. There are no signs of activity observed in the bisector spans, cross-correlation FWHMs, calcium H and K indices, NaD indices, or Hα indices near the planetary periods. We use custom methods to generate a cross-correlation template tailored to the star. The resulting velocities do not suffer the strong annual variation observed in the HARPS DRS velocities. This differential technique should deliver better exploitation of the archival HARPS data for the detection of planets at extremely low amplitudes.
ERIC Educational Resources Information Center
Eperson, D. B.
1985-01-01
Presents six mathematical problems (with answers) which focus on: (1) chess moves; (2) patterned numbers; (3) quadratics with rational roots; (4) number puzzles; (5) Euclidean geometry; and (6) Carrollian word puzzles. (JN)
Tessellating the Sphere with Regular Polygons
ERIC Educational Resources Information Center
Soto-Johnson, Hortensia; Bechthold, Dawn
2004-01-01
Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
Cannistraci, Carlo Vittorio; Ravasi, Timothy; Montevecchi, Franco Maria; Ideker, Trey; Alessio, Massimo
2010-09-15
Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology, and pose problems in their investigation. Unsupervised hybrid-two-phase (H2P) procedures-specifically dimension reduction (DR), coupled with clustering-provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. 'Minimum Curvilinearity' (MC) is a principle that-for small datasets-suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. https://sites.google.com/site/carlovittoriocannistraci/home.
NASA Astrophysics Data System (ADS)
Toohey, M.; Quine, B. M.; Strong, K.; Bernath, P. F.; Boone, C. D.; Jonsson, A. I.; McElroy, C. T.; Walker, K. A.; Wunch, D.
2007-12-01
Low-resolution atmospheric thermal emission spectra collected by balloon-borne radiometers over the time span of 1990-2002 are used to retrieve vertical profiles of HNO3, CFC-11 and CFC-12 volume mixing ratios between approximately 10 and 35 km altitude. All of the data analyzed have been collected from launches from a Northern Hemisphere mid-latitude site, during late summer, when stratospheric dynamic variability is at a minimum. The retrieval technique incorporates detailed forward modeling of the instrument and the radiative properties of the atmosphere, and obtains a best fit between modeled and measured spectra through a combination of onion-peeling and optimization steps. The retrieved HNO3 profiles are consistent over the 12-year period, and are consistent with recent measurements by the Atmospheric Chemistry Experiment-Fourier transform spectrometer satellite instrument. We therefore find no evidence of long-term changes in the HNO3 summer mid-latitude profile, although the uncertainty of our measurements precludes a conclusive trend analysis.
NASA Astrophysics Data System (ADS)
Toohey, M.; Quine, B. M.; Strong, K.; Bernath, P. F.; Boone, C. D.; Jonsson, A. I.; McElroy, C. T.; Walker, K. A.; Wunch, D.
2007-08-01
Low-resolution atmospheric thermal emission spectra collected by balloon-borne radiometers over the time span of 1990-2002 are used to retrieve vertical profiles of HNO3, CFC-11 and CFC-12 volume mixing ratios between approximately 10 and 35 km altitude. All of the data analyzed have been collected from launches from a Northern Hemisphere mid-latitude site, during late summer, when stratospheric dynamic variability is at a minimum. The retrieval technique incorporates detailed forward modeling of the instrument and the radiative properties of the atmosphere, and obtains a best fit between modeled and measured spectra through a combination of onion-peeling and global optimization steps. The retrieved HNO3 profiles are consistent over the 12-year period, and are consistent with recent measurements by the Atmospheric Chemistry Experiment-Fourier transform spectrometer satellite instrument. This suggests that, to within the errors of the 1990 measurements, there has been no significant change in the HNO3 summer mid-latitude profile.
Upper bound for the span of pencil graph
NASA Astrophysics Data System (ADS)
Parvathi, N.; Vimala Rani, A.
2018-04-01
An L(2,1)-coloring, also called a radio coloring or λ-coloring, of a graph is a function f from the vertex set V(G) to the set of all nonnegative integers such that |f(x) - f(y)| ≥ 2 if d(x,y) = 1 and |f(x) - f(y)| ≥ 1 if d(x,y) = 2, where d(x,y) denotes the distance between x and y in G. The L(2,1)-coloring number or span number λ(G) of G is the smallest number k such that G has an L(2,1)-coloring with max{f(v) : v ∈ V(G)} = k [2]. The minimum number of colors used in an L(2,1)-coloring is called the radio number rn(G) of G (a positive integer). Griggs and Yeh conjectured that λ(G) ≤ Δ² for any simple graph with maximum degree Δ ≥ 2. In this article, we consider some special graphs, such as the n-sunlet graph and the pencil graph families, and derive upper bounds for λ(G) and rn(G).
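A small greedy labeler makes the definition concrete; it returns a valid L(2,1)-labeling and hence an upper bound on λ(G), not necessarily the exact span, and the vertex ordering heuristic is an arbitrary choice.

```python
from itertools import count

def l21_greedy(adj):
    """Greedy L(2,1)-labeling: labels differ by >= 2 across edges and by >= 1
    between vertices at distance two. Gives an upper bound on lambda(G)."""
    labels = {}
    for v in sorted(adj, key=lambda u: -len(adj[u])):      # high degree first
        dist1 = adj[v]
        dist2 = {w for u in adj[v] for w in adj[u]} - {v} - set(dist1)
        for c in count(0):
            if all(abs(c - labels[u]) >= 2 for u in dist1 if u in labels) and \
               all(abs(c - labels[u]) >= 1 for u in dist2 if u in labels):
                labels[v] = c
                break
    return labels, max(labels.values())

# 3-sunlet (6 vertices): a triangle {0,1,2} with a pendant vertex on each corner.
sunlet = {0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 5], 3: [0], 4: [1], 5: [2]}
labels, span = l21_greedy(sunlet)
print(labels, "span upper bound:", span)
```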
Aerodynamic Comparison of Hyper-Elliptic Cambered Span (HECS) Wings with Conventional Configurations
NASA Technical Reports Server (NTRS)
Lazos, Barry S.; Visser, Kenneth D.
2006-01-01
An experimental study was conducted to examine the aerodynamic and flow field characteristics of hyper-elliptic cambered span (HECS) wings and compare results with more conventional configurations used for induced drag reduction. Previous preliminary studies, indicating improved L/D characteristics when compared to an elliptical planform prompted this more detailed experimental investigation. Balance data were acquired on a series of swept and un-swept HECS wings, a baseline elliptic planform, two winglet designs and a raked tip configuration. Seven-hole probe wake surveys were also conducted downstream of a number of the configurations. Wind tunnel results indicated aerodynamic performance levels of all but one of the HECS wings exceeded that of the other configurations. The flow field data surveys indicate the HECS configurations displaced the tip vortex farther outboard of the wing than the Baseline configuration. Minimum drag was observed on the raked tip configuration and it was noted that the winglet wake lacked the cohesive vortex structure present in the wakes of the other configurations.
Gau, Susan Shur-Fen; Shang, Chi-Yung
2010-07-01
Little is known about executive functions among unaffected siblings of children with attention deficit/hyperactivity disorder (ADHD), and there is lack of such information from non-Western countries. We examined verbal and nonverbal executive functions in adolescents with ADHD, unaffected siblings and controls to test whether executive functions could be potential endophenotypes for ADHD. We assessed 279 adolescents (age range: 11-17 years) with a childhood diagnosis of DSM-IV ADHD, 136 biological siblings (108 unaffected, 79.4%), and 173 unaffected controls by using psychiatric interviews, the Wechsler Intelligence Scale for Children - 3rd edition (WISC-III), including digit spans, and the tasks involving executive functions of the Cambridge Neuropsychological Test Automated Battery (CANTAB): Intra-dimensional/Extra-dimensional Shifts (IED), Spatial Span (SSP), Spatial Working Memory (SWM), and Stockings of Cambridge (SOC). Compared with the controls, adolescents with ADHD and unaffected siblings had a significantly shorter backward digit span, more extra-dimensional shift errors in the IED, shorter spatial span length in the SSP, more total errors and poorer strategy use in the SWM, and fewer problems solved in the minimum number of moves and shorter initial thinking time in the SOC. The magnitudes of the differences in the SWM and SOC increased with increased task difficulties. In general, neither persistent ADHD nor comorbidity was associated with increased deficits in executive functions among adolescents with ADHD. The lack of much difference in executive dysfunctions between unaffected siblings and ADHD adolescents suggests that executive dysfunctions may be useful cognitive endophenotypes for ADHD genetic studies.
NASA Astrophysics Data System (ADS)
Zhao, L.; Zhang, H.
2014-12-01
Anomalous cosmic rays (ACRs) carry crucial information on the coupling between solar wind and interstellar medium, as well as cosmic ray modulation within the heliosphere. Due to the distinct origins and modulation processes, the spectra and abundance of ACRs are significantly different from that of galactic cosmic rays (GCRs). Since the launch of NASA's ACE spacecraft in 1997, its CRIS and SIS instruments have continuously recorded GCR and ACR intensities of several elemental heavy-ions, spanning the whole cycle 23 and the cycle 24 maximum. Here we present a statistical comparison of ACR and GCR observed by ACE spacecraft and their possible relation to solar activity. While the differential flux of ACR also exhibits apparent anti-correlation with solar activity level, the flux of the latest prolonged solar minimum (year 2009) is approximately 5% lower than its previous solar minimum (year 1997). And the minimal level of ACR flux appears in year 2004, instead of year 2001 with the strongest solar activities. The negative indexes of the power law spectra within the energy range from 5 to 30 MeV/nuc also vary with time. The spectra get harder during the solar minimum but softer during the solar maximum. The approaching solar minimum of cycle 24 is believed to resemble the Dalton or Gleissberg Minimum with extremely low solar activity (Zolotova and Ponyavin, 2014). Therefore, the different characteristics of ACRs between the coming solar minimum and the previous minimum are also of great interest. Finally, we will also discuss the possible solar-modulation processes which is responsible for different modulation of ACR and GCR, especially the roles played by diffusion and drifts. The comparative analysis will provide valuable insights into the physical modulation process within the heliosphere under opposite solar polarity and variable solar activity levels.
Unstable spiral waves and local Euclidean symmetry in a model of cardiac tissue.
Marcotte, Christopher D; Grigoriev, Roman O
2015-06-01
This paper investigates the properties of unstable single-spiral wave solutions arising in the Karma model of two-dimensional cardiac tissue. In particular, we discuss how such solutions can be computed numerically on domains of arbitrary shape and study how their stability, rotational frequency, and spatial drift depend on the size of the domain as well as the position of the spiral core with respect to the boundaries. We also discuss how the breaking of local Euclidean symmetry due to finite size effects as well as the spatial discretization of the model is reflected in the structure and dynamics of spiral waves. This analysis allows identification of a self-sustaining process responsible for maintaining the state of spiral chaos featuring multiple interacting spirals.
Discrimination of malignant lymphomas and leukemia using Radon transform based-higher order spectra
NASA Astrophysics Data System (ADS)
Luo, Yi; Celenk, Mehmet; Bejai, Prashanth
2006-03-01
A new algorithm that can be used to automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes the morphological watersheds to obtain boundaries of cells from cell images and isolate them from the surrounding background. The areas of cells are extracted from cell images after background subtraction. The Radon transform and higher-order spectra (HOS) analysis are utilized as an image processing tool to generate class feature vectors of different type cells and to extract testing cells' feature vectors. The testing cells' feature vectors are then compared with the known class feature vectors for a possible match by computing the Euclidean distances. The cell in question is classified as belonging to one of the existing cell classes in the least Euclidean distance sense.
A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.
1988-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
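The recursion at the heart of this approach is the extended Euclidean algorithm. The integer version below shows the structure; RS decoding applies the same iteration to polynomials over a finite field, initialized as described above and stopped once the remainder degree falls below a threshold, at which point (up to scaling) the cofactor gives the errata locator and the remainder the errata evaluator.

```python
def extended_euclid(a, b):
    """Iterative extended Euclidean algorithm over the integers.
    Returns (gcd, s, t) with a*s + b*t == gcd."""
    r0, r1 = a, b
    s0, s1 = 1, 0
    t0, t1 = 0, 1
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1     # remainder sequence
        s0, s1 = s1, s0 - q * s1     # cofactors of a
        t0, t1 = t1, t0 - q * t1     # cofactors of b
    return r0, s0, t0

g, s, t = extended_euclid(240, 46)
print(g, s, t, 240 * s + 46 * t == g)   # 2 -9 47 True
```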
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Khandeev, V. I.
2016-02-01
The strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters of given sizes (cardinalities) minimizing the sum (over both clusters) of the intracluster sums of squared distances from the elements of the clusters to their centers is considered. It is assumed that the center of one of the sought clusters is specified at the desired (arbitrary) point of space (without loss of generality, at the origin), while the center of the other one is unknown and determined as the mean value over all elements of this cluster. It is shown that, unless P = NP, there is no fully polynomial-time approximation scheme for this problem in general, whereas such a scheme is constructed for the case of a fixed space dimension.
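In symbols (a restatement of the description above, not a quotation from the paper), with Y ⊂ R^d the input set and M the prescribed size of the cluster whose center is its own mean, the problem reads:

\[
\min_{\mathcal{C} \subset \mathcal{Y},\; |\mathcal{C}| = M}
\;\sum_{y \in \mathcal{C}} \bigl\lVert y - \overline{y}(\mathcal{C}) \bigr\rVert^{2}
\;+\; \sum_{y \in \mathcal{Y} \setminus \mathcal{C}} \lVert y \rVert^{2},
\qquad
\overline{y}(\mathcal{C}) = \frac{1}{|\mathcal{C}|} \sum_{y \in \mathcal{C}} y,
\]

where the second sum reflects the cluster whose center is fixed at the origin.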
Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs
NASA Astrophysics Data System (ADS)
Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur
2018-03-01
A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in the d-dimensional Euclidean space with d ≥ 2. Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.
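A direct Monte Carlo sketch of the Euclidean process being modeled (equal disks in a 2D box; the box size, disk radius and attempt count below are arbitrary) is straightforward, even though its analysis is not:

```python
import numpy as np

def rsa_fraction(n_attempts=5000, box=20.0, radius=0.5, seed=0):
    """Random sequential adsorption of equal disks in a 2D box.
    Returns the fraction of attempted disks that were accepted."""
    rng = np.random.default_rng(seed)
    centers = np.empty((0, 2))
    accepted = 0
    for _ in range(n_attempts):
        p = rng.uniform(0, box, size=2)
        # Accept only if the new disk overlaps no previously deposited disk.
        if centers.shape[0] == 0 or np.min(np.linalg.norm(centers - p, axis=1)) >= 2 * radius:
            centers = np.vstack([centers, p])
            accepted += 1
    return accepted / n_attempts

print(rsa_fraction())   # empirical acceptance fraction for this toy setting
```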
Boersma, Maria; Smit, Dirk J A; Boomsma, Dorret I; De Geus, Eco J C; Delemarre-van de Waal, Henriette A; Stam, Cornelis J
2013-01-01
The child brain is a small-world network, which is hypothesized to change toward more ordered configurations with development. In graph theoretical studies, comparing network topologies under different conditions remains a critical point. Constructing a minimum spanning tree (MST) might present a solution, since it does not require setting a threshold and uses a fixed number of nodes and edges. In this study, the MST method is introduced to examine developmental changes in functional brain network topology in young children. Resting-state electroencephalography was recorded from 227 children twice at 5 and 7 years of age. Synchronization likelihood (SL) weighted matrices were calculated in three different frequency bands from which MSTs were constructed, which represent constructs of the most important routes for information flow in a network. From these trees, several parameters were calculated to characterize developmental change in network organization. The MST diameter and eccentricity significantly increased, while the leaf number and hierarchy significantly decreased in the alpha band with development. Boys showed significant higher leaf number, betweenness, degree and hierarchy and significant lower SL, diameter, and eccentricity than girls in the theta band. The developmental changes indicate a shift toward more decentralized line-like trees, which supports the previously hypothesized increase toward regularity of brain networks with development. Additionally, girls showed more line-like decentralized configurations, which is consistent with the view that girls are ahead of boys in brain development. MST provides an elegant method sensitive to capture subtle developmental changes in network organization without the bias of network comparison.
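A sketch of how such tree metrics can be extracted from a connectivity matrix follows; converting synchronization likelihood to a distance as 1/SL is an assumption here, and other conventions (e.g., 1 - SL) are also in use.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

def mst_metrics(sl):
    """Leaf fraction, diameter and mean eccentricity of the MST derived from a
    synchronization-likelihood matrix (similarity converted to distance as 1/SL)."""
    with np.errstate(divide="ignore"):
        dist = np.where(sl > 0, 1.0 / sl, 0.0)
    np.fill_diagonal(dist, 0.0)
    tree = minimum_spanning_tree(dist).toarray()
    adj = ((tree + tree.T) > 0).astype(float)        # unweighted tree topology
    degrees = adj.sum(axis=1)
    hops = shortest_path(adj, directed=False, unweighted=True)
    ecc = hops.max(axis=1)                           # eccentricity per node
    return {"leaf_fraction": float(np.mean(degrees == 1)),
            "diameter": int(ecc.max()),
            "mean_eccentricity": float(ecc.mean())}

rng = np.random.default_rng(0)
A = rng.uniform(0.05, 1.0, size=(10, 10))
sl = (A + A.T) / 2.0                                 # toy symmetric SL matrix
print(mst_metrics(sl))
```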
NASA Astrophysics Data System (ADS)
Xie, ChengJun; Xu, Lin
2008-03-01
This paper presents an algorithm based on a mixing transform with wave-band grouping to eliminate spectral redundancy. The algorithm adapts to the differences in correlation between different spectral images and still works well when the number of bands is not a power of 2. It uses a non-boundary-extension CDF(2,2) DWT and a subtraction mixing transform to eliminate spectral redundancy, a CDF(2,2) DWT to eliminate spatial redundancy, and SPIHT+CABAC for compression coding; the experiments show that a satisfactory lossless compression result can be achieved. Using the hyperspectral image Canal from the American JPL laboratory as the lossless compression test data set, when the number of bands is not a power of 2 the lossless compression results of this algorithm are much better than those obtained by JPEG-LS, WinZip, ARJ, DPCM, the method of a research team of the Chinese Academy of Sciences, Minimum Spanning Tree, and Near Minimum Spanning Tree; on average, the compression ratio of this algorithm exceeds these algorithms by 41%, 37%, 35%, 29%, 16%, 10%, and 8%, respectively. When the number of bands is a power of 2, for 128 frames of the image Canal, groupings of 8, 16, and 32 bands were tested; considering factors such as compression storage complexity, the type of wave band, and the compression effect, we suggest using 8 bands per group to achieve a better compression effect. The algorithm also has advantages in operation speed and ease of hardware implementation.
Source clustering in the Hi-GAL survey determined using a minimum spanning tree method
NASA Astrophysics Data System (ADS)
Beuret, M.; Billot, N.; Cambrésy, L.; Eden, D. J.; Elia, D.; Molinari, S.; Pezzuto, S.; Schisano, E.
2017-01-01
Aims: The aims are to investigate the clustering of the far-infrared sources from the Herschel infrared Galactic Plane Survey (Hi-GAL) in the Galactic longitude range of -71 to 67 deg. These clumps, and their spatial distribution, are an imprint of the original conditions within a molecular cloud. This will produce a catalogue of over-densities. Methods: The minimum spanning tree (MST) method was used to identify the over-densities in two dimensions. The catalogue was further refined by folding in heliocentric distances, resulting in more reliable over-densities, which are cluster candidates. Results: We found 1633 over-densities with more than ten members. Of these, 496 are defined as cluster candidates because of the reliability of the distances, with a further 1137 potential cluster candidates. The spatial distributions of the cluster candidates are different in the first and fourth quadrants, with all clusters following the spiral structure of the Milky Way. The cluster candidates are fractally distributed. The clump mass functions of the clustered and isolated clumps are statistically indistinguishable from each other and are consistent with Kroupa's initial mass function. Hi-GAL is a key project of the Herschel Space Observatory (Pilbratt et al. 2010) and uses the PACS (Poglitsch et al. 2010) and SPIRE (Griffin et al. 2010) cameras in parallel mode. The catalogues of cluster candidates and potential clusters are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/597/A114
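A minimal sketch of MST-based over-density extraction in two dimensions (an illustration under assumptions, not the Hi-GAL pipeline): build the Euclidean MST of the source positions, cut edges longer than a chosen critical length, and keep connected groups with more than ten members. The instance and the cut length are assumed values.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
pos = np.vstack([rng.normal((0, 0), 0.05, size=(40, 2)),
                 rng.normal((1, 1), 0.05, size=(40, 2)),
                 rng.uniform(-1, 2, size=(60, 2))])            # two clusters plus background

dist = squareform(pdist(pos))
mst = minimum_spanning_tree(dist).toarray()
cut_length = 0.1                                               # critical MST edge length (assumed)
mst[mst > cut_length] = 0                                      # remove long MST edges
n_groups, labels = connected_components(mst, directed=False)
sizes = np.bincount(labels)
candidates = [g for g in range(n_groups) if sizes[g] > 10]
print(len(candidates), sizes[candidates])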
NASA Astrophysics Data System (ADS)
Yu, Jincheng; Puzia, Thomas H.; Lin, Congping; Zhang, Yiwei
2017-05-01
We compare existing methods, including the minimum spanning tree (MST) based method and the local stellar density based method, for measuring mass segregation in star clusters. We find that the MST method mainly reflects compactness, which represents the global spatial distribution of massive stars, while the local stellar density method mainly reflects crowdedness, which provides the local gravitational potential information. We therefore suggest measuring the local and the global mass segregation simultaneously. We also develop a hybrid method that takes both aspects into account. This hybrid method balances the local and the global mass segregation, in the sense that the predominant one may be caused either by dynamical evolution or purely by chance, especially when such information is unknown a priori. In addition, we test our prescriptions with numerical models and show the impact of binaries on estimates of the mass segregation value. As an application, we use these methods on observations of the Orion Nebula Cluster (ONC) and on the Taurus cluster. We find that the ONC is significantly mass segregated down to the 20th most massive star. In contrast, the massive stars of the Taurus cluster are sparsely distributed over many different subclusters, showing a low degree of compactness. The massive stars of Taurus are, however, found in the high-density regions of the subclusters, showing significant mass segregation at subcluster scales. We also apply these methods to discuss possible mechanisms for the dynamical evolution of the simulated substructured star clusters.
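A minimal sketch of a commonly used MST length-ratio diagnostic of mass segregation, of the kind compared above (an illustration under assumptions, not the authors' hybrid method): the MST length of the N most massive stars is compared with that of randomly drawn sets of N stars.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_length(points):
    return minimum_spanning_tree(squareform(pdist(points))).sum()

rng = np.random.default_rng(3)
pos = rng.uniform(size=(200, 2))                  # projected stellar positions (simulated)
mass = rng.pareto(2.0, size=200)                  # stand-in stellar masses

n_massive = 20
massive = pos[np.argsort(mass)[-n_massive:]]
l_massive = mst_length(massive)
l_random = np.mean([mst_length(pos[rng.choice(200, size=n_massive, replace=False)])
                    for _ in range(100)])
print(l_random / l_massive)                       # ratios well above 1 indicate mass segregation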
Minimum spanning tree analysis of the human connectome.
van Dellen, Edwin; Sommer, Iris E; Bohlken, Marc M; Tewarie, Prejaas; Draaisma, Laurijn; Zalesky, Andrew; Di Biase, Maria; Brown, Jesse A; Douw, Linda; Otte, Willem M; Mandl, René C W; Stam, Cornelis J
2018-06-01
One of the challenges of brain network analysis is to directly compare network organization between subjects, irrespective of the number or strength of connections. In this study, we used minimum spanning tree (MST; a unique, acyclic subnetwork with a fixed number of connections) analysis to characterize the human brain network to create an empirical reference network. Such a reference network could be used as a null model of connections that form the backbone structure of the human brain. We analyzed the MST in three diffusion-weighted imaging datasets of healthy adults. The MST of the group mean connectivity matrix was used as the empirical null-model. The MST of individual subjects matched this reference MST for a mean 58%-88% of connections, depending on the analysis pipeline. Hub nodes in the MST matched with previously reported locations of hub regions, including the so-called rich club nodes (a subset of high-degree, highly interconnected nodes). Although most brain network studies have focused primarily on cortical connections, cortical-subcortical connections were consistently present in the MST across subjects. Brain network efficiency was higher when these connections were included in the analysis, suggesting that these tracts may be utilized as the major neural communication routes. Finally, we confirmed that MST characteristics index the effects of brain aging. We conclude that the MST provides an elegant and straightforward approach to analyze structural brain networks, and to test network topological features of individual subjects in comparison to empirical null models. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
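A minimal sketch of the kind of overlap comparison described, computing the fraction of an individual's MST connections that coincide with a group-level reference MST; the connectivity matrices below are simulated stand-ins, and the similarity-to-distance conversion is an assumption.

import numpy as np
import networkx as nx

def mst_edges(conn):
    G = nx.Graph()
    n = conn.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=1.0 / conn[i, j])          # strong connection -> short edge
    return set(frozenset(e) for e in nx.minimum_spanning_tree(G).edges())

rng = np.random.default_rng(4)
group_mean = rng.uniform(0.1, 1.0, size=(30, 30))
group_mean = (group_mean + group_mean.T) / 2                   # group mean connectivity matrix
subject = np.clip(group_mean + rng.normal(0, 0.1, size=group_mean.shape), 0.05, None)
subject = (subject + subject.T) / 2                            # one subject's connectivity matrix

ref, ind = mst_edges(group_mean), mst_edges(subject)
print(len(ref & ind) / len(ref))                               # fraction of reference-MST edges matched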
NASA Astrophysics Data System (ADS)
Pryor, Sara C.; Sullivan, Ryan C.; Schoof, Justin T.
2017-12-01
The static energy content of the atmosphere is increasing on a global scale, but exhibits important subglobal and subregional scales of variability and is a useful parameter for integrating the net effect of changes in the partitioning of energy at the surface and for improving understanding of the causes of so-called warming holes
(i.e., locations with decreasing daily maximum air temperatures (T) or increasing trends of lower magnitude than the global mean). Further, measures of the static energy content (herein the equivalent potential temperature, θe) are more strongly linked to excess human mortality and morbidity than air temperature alone, and have great relevance in understanding causes of past heat-related excess mortality and making projections of possible future events that are likely to be associated with negative human health and economic consequences. New nonlinear statistical models for summertime daily maximum and minimum θe are developed and used to advance understanding of drivers of historical change and variability over the eastern USA. The predictor variables are an index of the daily global mean temperature, daily indices of the synoptic-scale meteorology derived from T and specific humidity (Q) at 850 and 500 hPa geopotential heights (Z), and spatiotemporally averaged soil moisture (SM). SM is particularly important in determining the magnitude of θe over regions that have previously been identified as exhibiting warming holes, confirming the key importance of SM in dictating the partitioning of net radiation into sensible and latent heat and dictating trends in near-surface T and θe. Consistent with our a priori expectations, models built using artificial neural networks (ANNs) out-perform linear models that do not permit interaction of the predictor variables (global T, synoptic-scale meteorological conditions and SM). This is particularly marked in regions with high variability in minimum and maximum θe, where more complex models built using ANN with multiple hidden layers are better able to capture the day-to-day variability in θe and the occurrence of extreme maximum θe. Over the entire domain, the ANN with three hidden layers exhibits high accuracy in predicting maximum θe > 347 K. The median hit rate for maximum θe > 347 K is > 0.60, while the median false alarm rate is ≈ 0.08.
Snowball gouge-aggregates formed in experimental fault gouges at seismic slip rates
NASA Astrophysics Data System (ADS)
Kim, J. H.; Ree, J. H.; Hirose, T.; Yang, K.; Kim, J. W.
2015-12-01
Clay-clast aggregates (CCA) have commonly been reported from experimental and natural fault gouges, but their formation process and mechanical meaning are not so clear. We call CCA snowball gouge aggregate (SGA) since its formation process is similar to that of snowball (see below) and CCA-like structure has been reported also from pure quartz and pure calcite gouges. Here, we discuss the formation process of SGA and its implication for faulting from experimental results of simulated gouges. We conducted high-velocity rotary shear experiments on Ca-bentonite gouges at a normal stress of 1 MPa, slip rate of 1.31 m/s, room temperature and room humidity conditions. Ca-bentonite gouge consists of montmorillonite (>95%) and other minor minerals including quartz and plagioclase. Upon displacement, the friction abruptly increases to the 1st peak (friction coefficient μ≈ 0.7) followed by slip weakening to reach a steady state (μ≈ 0.25~0.3). The simulated fault zone can be divided into slip-localization zone (SLZ) and low-slip-rate zone (LSZ) based on grain size. Spherical SGAs with their size ranging from 1 to 100 μm occur only in LSZ, and their proportion is more than 90%. Two types of SGA occur; SGA with and without a central clast. Both types of SGA show a concentric layering defined by the alternation of pore-rich (1-1.5 μm thick) and pore-poor layers (1.5-2 μm thick). Clay minerals locally exhibit a preferred orientation with their basal plane parallel to the layer boundary. We interpret that the pore-poor layers are clay-accumulated layers formed by rolling of SGA nuclei, and pore-rich layers correspond to the boundary between accumulated clay layers. Water produced from dehydration of clays due to frictional heating presumably acts as an adhesion agent of clay minerals during rolling of SGA. Since the number of layers within each SGA represents the number of rolling, the minimum displacement estimated from the number of layers and layer thickness of the largest SGA (with a diameter of 100 μm) is about 2.7 mm (slip rate≈ 170 μm/s) which is much less than the total displacement of 20 m, suggesting that most of the displacement occurred along the SLZ. Our results imply that SGA can be formed only in subseismic slip-rate zones and that minimum displacement and slip rate can be estimated from SGA.
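A rough back-of-the-envelope sketch of the displacement estimate quoted for the largest aggregate, under the assumptions that one pore-rich/pore-poor layer pair is added to the radius per rolling revolution, that each revolution rolls a distance equal to the current circumference, and that the nucleus size is negligible:

import numpy as np

pair_thickness = 3.0e-6          # ~1-1.5 um pore-rich plus ~1.5-2 um pore-poor layer (m)
final_radius = 50.0e-6           # largest SGA diameter ~100 um (m)

radii = np.arange(pair_thickness, final_radius, pair_thickness)
rolled_distance = np.sum(2 * np.pi * radii)                    # sum of circumferences, one per revolution
duration = 20.0 / 1.31                                         # total displacement / imposed slip rate (s)
print(rolled_distance * 1e3, "mm;", rolled_distance / duration * 1e6, "um/s")
# roughly 2.6 mm rolled and ~170 um/s, of the same order as the abstract's estimate (2.7 mm, ~170 um/s)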
Newton's Experimentum Crucis Reconsidered
ERIC Educational Resources Information Center
Holtsmark, Torger
1970-01-01
Certain terminological inconsistencies in the teaching of optical theory at the elementary level are traced back to Newton who derived them from Euclidean geometrical optics. Discusses this terminological ambiguity which influenced later textbooks. (LS)
Ab initio nanostructure determination
NASA Astrophysics Data System (ADS)
Gujarathi, Saurabh
Reconstruction of complex structures is an inverse problem arising in virtually all areas of science and technology, from protein structure determination to bulk heterostructure solar cells and the structure of nanoparticles. The problem is cast as a complex network problem in which the edges of a network have weights equal to the Euclidean distance between their endpoints. A method, called Tribond, is presented for reconstructing the locations of the nodes of the network given only the edge weights of the Euclidean network. Timing results indicate that, in two dimensions, the algorithm is a low-order polynomial in the number of nodes. With this implementation, Euclidean networks of about one thousand nodes are reconstructed in two dimensions in approximately twenty-four hours on a desktop computer. In three dimensions the computational cost of the reconstruction is a higher-order polynomial in the number of nodes, and reconstruction of small three-dimensional Euclidean networks is demonstrated. If a starting network of size five is assumed to be given, then for a network of size 100 the remaining reconstruction can be done in about two hours on a desktop computer. In situations with less precise data, modifications of the method may be necessary; these are discussed. A related problem in one dimension, the optimal Golomb ruler (OGR), is also studied. A statistical physics Hamiltonian describing the OGR problem is introduced, and the first-order phase transition from a symmetric low-constraint phase to a complex symmetry-broken phase at high constraint is studied. Despite the fact that the Hamiltonian is not disordered, the asymmetric phase is highly irregular with geometric frustration. The phase diagram is obtained, and it is seen that even at very low temperature T there is a phase transition at a finite, nonzero value of the constraint parameter γ/μ. Analytic calculations for the scaling of the density and free energy of the ruler are carried out and compared with those from the mean-field approach. A scaling law is also derived for the length of the OGR, which is consistent with the Erdős conjecture and with numerical results.
NASA Astrophysics Data System (ADS)
Pace, Phillip Eric; Tan, Chew Kung; Ong, Chee K.
2018-02-01
Direction finding (DF) systems are fundamental electronic support measures for electronic warfare. A number of DF techniques have been developed over the years; however, these systems are limited in bandwidth and resolution and suffer from complex frequency-downconversion designs. The design of a photonic DF technique for the detection and direction finding of low probability of intercept (LPI) signals is investigated. Key advantages of this design include a small baseline, wide bandwidth, high resolution, and minimal space, weight, and power requirements. A robust postprocessing algorithm that utilizes a minimum Euclidean distance detector provides consistent and accurate estimation of the angle of arrival (AoA) for a wide range of LPI waveforms. Experimental tests using frequency-modulated continuous wave (FMCW) and P4 modulation signals were conducted in an anechoic chamber to verify the system design. Test results showed that the photonic DF system is capable of measuring the AoA of the LPI signals with 1-deg resolution over a 180-deg field of view. For an FMCW signal, the AoA was determined with an RMS error of 0.29 deg at 1-deg resolution. For a P4 coded signal, the RMS error in estimating the AoA is 0.32 deg at 1-deg resolution.
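A minimal sketch of a minimum-Euclidean-distance detector for angle of arrival: a measured feature vector is compared against a library of reference responses, one per candidate angle, and the closest entry is selected. The feature model and reference library below are purely illustrative assumptions, not the photonic system's actual measurement vectors.

import numpy as np

rng = np.random.default_rng(5)
angles = np.arange(-90, 91, 1)                                 # 1-deg grid over a 180-deg field of view
library = np.stack([[np.cos(np.deg2rad(a)), np.sin(np.deg2rad(a))] for a in angles])

true_aoa = 37.0
measurement = np.array([np.cos(np.deg2rad(true_aoa)), np.sin(np.deg2rad(true_aoa))])
measurement = measurement + rng.normal(0, 0.01, size=2)        # measurement noise

estimate = angles[np.argmin(np.linalg.norm(library - measurement, axis=1))]
print(estimate)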
COSMIC monthly progress report
NASA Technical Reports Server (NTRS)
1993-01-01
Activities of the Computer Software Management and Information Center (COSMIC) are summarized for the month of August, 1993. Tables showing the current inventory of programs available from COSMIC are presented and program processing and evaluation activities are discussed. Ten articles were prepared for publication in the NASA Tech Brief Journal. These articles (included in this report) describe the following software items: (1) MOM3D - A Method of Moments Code for Electromagnetic Scattering (UNIX Version); (2) EM-Animate - Computer Program for Displaying and Animating the Steady-State Time-Harmonic Electromagnetic Near Field and Surface-Current Solutions; (3) MOM3D - A Method of Moments Code for Electromagnetic Scattering (IBM PC Version); (4) M414 - MIL-STD-414 Variable Sampling Procedures Computer Program; (5) MEDOF - Minimum Euclidean Distance Optimal Filter; (6) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (Macintosh Version); (7) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (IBM PC Version); (8) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (UNIX Version); (9) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (DEC VAX VMS Version); and (10) TFSSRA - Thick Frequency Selective Surface with Rectangular Apertures. Activities in the areas of marketing, customer service, benefits identification, maintenance and support, and dissemination are also described along with a budget summary.
Walking economy is predictably determined by speed, grade, and gravitational load.
Ludlow, Lindsay W; Weyand, Peter G
2017-11-01
The metabolic energy that human walking requires can vary by more than 10-fold, depending on the speed, surface gradient, and load carried. Although the mechanical factors determining economy are generally considered to be numerous and complex, we tested a minimum mechanics hypothesis that only three variables are needed for broad, accurate prediction: speed, surface grade, and total gravitational load. We first measured steady-state rates of oxygen uptake in 20 healthy adult subjects during unloaded treadmill trials from 0.4 to 1.6 m/s on six gradients: -6, -3, 0, 3, 6, and 9°. Next, we tested a second set of 20 subjects under three torso-loading conditions (no-load, +18, and +31% body weight) at speeds from 0.6 to 1.4 m/s on the same six gradients. Metabolic rates spanned a 14-fold range from supine rest to the greatest single-trial walking mean (3.1 ± 0.1 to 43.3 ± 0.5 ml O₂·kg body⁻¹·min⁻¹, respectively). As theorized, the walking portion (V̇O₂-walk = V̇O₂-gross − V̇O₂-supine-rest) of the body's gross metabolic rate increased in direct proportion to load and largely in accordance with support force requirements across both speed and grade. Consequently, a single minimum-mechanics equation was derived from the data of 10 unloaded-condition subjects to predict the pooled mass-specific economy (V̇O₂-gross, ml O₂·kg body+load⁻¹·min⁻¹) of all the remaining loaded and unloaded trials combined (n = 1,412 trials from 90 speed/grade/load conditions). The accuracy of prediction achieved (r² = 0.99, SEE = 1.06 ml O₂·kg⁻¹·min⁻¹) leads us to conclude that human walking economy is predictably determined by the minimum mechanical requirements present across a broad range of conditions. NEW & NOTEWORTHY Introduced is a "minimum mechanics" model that predicts human walking economy across a broad range of conditions from only three variables: speed, surface grade, and body-plus-load mass. The derivation/validation data set includes steady-state loaded and unloaded walking trials (n = 3,414) that span a fourfold range of walking speeds on each of six different surface gradients (-6 to +9°). The accuracy of our minimum mechanics model (r² = 0.99; SEE = 1.06 ml O₂·kg⁻¹·min⁻¹) appreciably exceeds that of currently used standards. Copyright © 2017 the American Physiological Society.
Area distortion under certain classes of quasiconformal mappings.
Hernández-Montes, Alfonso; Reséndis O, Lino F
2017-01-01
In this paper we study the hyperbolic and Euclidean area distortion of measurable sets under some classes of K -quasiconformal mappings from the upper half-plane and the unit disk onto themselves, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baleanu, Dumitru; Institute of Space Sciences, P.O. Box MG-6, Magurele-Bucharest
The geodesic motion of pseudo-classical spinning particles in extended Euclidean Taub-NUT space was analyzed. The non-generic symmetries of Taub-NUT space were investigated. We found new non-generic symmetries in the presence of an electromagnetic field, such as that of a monopole.
Why conventional detection methods fail in identifying the existence of contamination events.
Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han
2016-04-15
Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and the linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variation. This analysis revealed why the conventional MED and LPF methods failed to identify the existence of contamination events. The analysis also revealed that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.
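A minimal sketch of a Euclidean-distance style detector of the kind evaluated above (an illustration under assumptions, not the authors' implementation): the current window of multi-parameter water-quality readings is compared against a baseline window, and an event is flagged when the distance exceeds a threshold. The window length, threshold, and injected spike are assumed.

import numpy as np

def euclidean_event_detector(signal, window=24, threshold=3.0):
    flags = np.zeros(len(signal), dtype=bool)
    for t in range(2 * window, len(signal)):
        baseline = signal[t - 2 * window:t - window].mean(axis=0)
        current = signal[t - window:t].mean(axis=0)
        flags[t] = np.linalg.norm(current - baseline) > threshold
    return flags

rng = np.random.default_rng(6)
data = rng.normal(size=(500, 4))                 # four routine water-quality parameters
data[300:340] += 5.0                             # injected spike-like contamination signature
print(np.where(euclidean_event_detector(data))[0][:5])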
Linear time relational prototype based learning.
Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara
2012-10-01
Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike the Euclidean counterparts, the techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are infeasible already for medium sized data sets. The contribution of this article is twofold: On the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ), on the other hand we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
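A minimal sketch of the Nyström idea used above to obtain linear-time variants: a large (dis)similarity matrix is approximated from a small set of landmark columns as K ≈ C W⁺ Cᵀ. The data, Gaussian similarity, and matrix sizes below are illustrative assumptions.

import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 5))
K = np.exp(-cdist(X, X) ** 2)                    # full similarity matrix (built only to check the error)

m = 50                                           # number of landmark points
idx = rng.choice(len(X), size=m, replace=False)
C = K[:, idx]                                    # n x m block of sampled columns
W = K[np.ix_(idx, idx)]                          # m x m block among the landmarks
K_approx = C @ np.linalg.pinv(W) @ C.T

print(np.linalg.norm(K - K_approx) / np.linalg.norm(K))   # relative approximation error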
Thermal dynamics on the lattice with exponentially improved accuracy
NASA Astrophysics Data System (ADS)
Pawlowski, Jan M.; Rothkopf, Alexander
2018-03-01
We present a novel simulation prescription for thermal quantum fields on a lattice that operates directly in imaginary frequency space. By distinguishing initial conditions from quantum dynamics it provides access to correlation functions also outside of the conventional Matsubara frequencies ω_n = 2πnT. In particular it resolves their frequency dependence between ω = 0 and ω_1 = 2πT, where the thermal physics ω ∼ T of e.g. transport phenomena is dominantly encoded. Real-time spectral functions are related to these correlators via an integral transform with rational kernel, so that their unfolding from the novel simulation data is exponentially improved compared to standard Euclidean simulations. We demonstrate this improvement within a non-trivial 0+1-dimensional quantum mechanical toy model and show that spectral features inaccessible in standard Euclidean simulations are quantitatively captured.
Experimental Non-Violation of the Bell Inequality
NASA Astrophysics Data System (ADS)
Palmer, Tim
2018-05-01
A finite non-classical framework for physical theory is described which challenges the conclusion that the Bell Inequality has been shown to have been violated experimentally, even approximately. This framework postulates the universe as a deterministic locally causal system evolving on a measure-zero fractal-like geometry $I_U$ in cosmological state space. Consistent with the assumed primacy of $I_U$, and $p$-adic number theory, a non-Euclidean (and hence non-classical) metric $g_p$ is defined on cosmological state space, where $p$ is a large but finite Pythagorean prime. Using number-theoretic properties of spherical triangles, the inequalities violated experimentally are shown to be $g_p$-distant from the CHSH inequality, whose violation would rule out local realism. This result fails in the singular limit $p=\\infty$, at which $g_p$ is Euclidean. Broader implications are discussed.
NASA Astrophysics Data System (ADS)
González-Díaz, Pedro F.
We re-explore the effects of multiply-connected wormholes on ordinary matter at low energies. It is obtained that the path integral that describes these effects is given in terms of a Planckian probability distribution for the Coleman α-parameters, rather than a classical Gaussian distribution law. This implies that the path integral over all low-energy fields with the wormhole effective interactions can no longer vary continuously, and that the quantities α2 are interpretable as the momenta of a quantum field. Using the new result that, rather than being given in terms of the Coleman-Hawking probability, the Euclidean action must equal negative entropy, the model predicts a very small but still nonzero cosmological constant and quite reasonable values for the pion and neutrino masses. The divergence problems of Euclidean quantum gravity are also discussed in the light of the above results.
From Glass Formation to Icosahedral Ordering by Curving Three-Dimensional Space.
Turci, Francesco; Tarjus, Gilles; Royall, C Patrick
2017-05-26
Geometric frustration describes the inability of a local molecular arrangement, such as icosahedra found in metallic glasses and in model atomic glass formers, to tile space. Local icosahedral order, however, is strongly frustrated in Euclidean space, which obscures any causal relationship with the observed dynamical slowdown. Here we relieve frustration in a model glass-forming liquid by curving three-dimensional space onto the surface of a 4-dimensional hypersphere. For sufficient curvature, frustration vanishes and the liquid "freezes" in a fully icosahedral structure via a sharp "transition." Frustration increases upon reducing the curvature, and the transition to the icosahedral state smoothens while glassy dynamics emerge. Decreasing the curvature leads to decoupling between dynamical and structural length scales and the decrease of kinetic fragility. This sheds light on the observed glass-forming behavior in Euclidean space.
Emotion-independent face recognition
NASA Astrophysics Data System (ADS)
De Silva, Liyanage C.; Esther, Kho G. P.
2000-12-01
Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, back propagation neural network and generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image representing the peak expression for each emotion of each person apart from the neutral expression. The feature vectors used for comparison in the Euclidean distance method and for training the neural network must be all the feature vectors of the training set. These results are obtained for a face database consisting of only four persons.
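A minimal sketch of the eigenface pipeline with a Euclidean-distance classifier: images are projected onto the principal components of the training set, and a probe is assigned the label of the closest training feature vector. The synthetic "images" below are stand-ins for a real face database (an assumption), and the neural-network classifiers mentioned above are not included.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
n_people, imgs_per_person, n_pixels = 4, 6, 32 * 32
prototypes = rng.normal(size=(n_people, n_pixels))
X = np.vstack([p + 0.3 * rng.normal(size=(imgs_per_person, n_pixels)) for p in prototypes])
y = np.repeat(np.arange(n_people), imgs_per_person)

pca = PCA(n_components=10).fit(X)                # "eigenfaces" from the training set
train_feats = pca.transform(X)

probe = prototypes[2] + 0.3 * rng.normal(size=n_pixels)        # unseen image of person 2
probe_feat = pca.transform(probe.reshape(1, -1))
pred = y[np.argmin(np.linalg.norm(train_feats - probe_feat, axis=1))]
print(pred)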
Renormalized vacuum polarization of rotating black holes
NASA Astrophysics Data System (ADS)
Ferreira, Hugo R. C.
2015-04-01
Quantum field theory on rotating black hole spacetimes is plagued with technical difficulties. Here, we describe a general method to renormalize and compute the vacuum polarization of a quantum field in the Hartle-Hawking state on rotating black holes. We exemplify the technique with a massive scalar field on the warped AdS3 black hole solution to topologically massive gravity, a deformation of (2 + 1)-dimensional Einstein gravity. We use a "quasi-Euclidean" technique, which generalizes the Euclidean techniques used for static spacetimes, and we subtract the divergences by matching to a sum over mode solutions on Minkowski spacetime. This allows us, for the first time, to have a general method to compute the renormalized vacuum polarization, for a given quantum state, on a rotating black hole, such as the physically relevant case of the Kerr black hole in four dimensions.
Solution for a bipartite Euclidean traveling-salesman problem in one dimension
NASA Astrophysics Data System (ADS)
Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.
2018-05-01
The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
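A minimal brute-force sketch checking, for tiny instances, the bound stated above: with the squared Euclidean distance as cost, the optimal alternating (bipartite) cycle costs at least twice the optimal assignment of the same red/blue points. The instance size and the bipartite setup are assumptions made for illustration.

from itertools import permutations
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(9)
n = 5
red, blue = np.sort(rng.uniform(size=n)), np.sort(rng.uniform(size=n))
cost = (red[:, None] - blue[None, :]) ** 2       # squared Euclidean distance in one dimension

rows, cols = linear_sum_assignment(cost)
assignment_cost = cost[rows, cols].sum()

best_cycle = np.inf
for red_order in permutations(range(1, n)):      # fix red point 0 as the cycle start
    r = (0,) + red_order
    for blue_order in permutations(range(n)):    # a blue point is visited after each red point
        c = sum(cost[r[i], blue_order[i]] + cost[r[(i + 1) % n], blue_order[i]] for i in range(n))
        best_cycle = min(best_cycle, c)

print(best_cycle, 2 * assignment_cost, best_cycle >= 2 * assignment_cost - 1e-12)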
Random topologies and the emergence of cooperation: the role of short-cuts
NASA Astrophysics Data System (ADS)
Vilone, D.; Sánchez, A.; Gómez-Gardeñes, J.
2011-04-01
We study in detail the role of short-cuts in promoting the emergence of cooperation in a network of agents playing the Prisoner's Dilemma game (PDG). We introduce a model whose topology interpolates between the one-dimensional Euclidean lattice (a ring) and the complete graph by changing the value of one parameter (the probability p of adding a link between two nodes not already connected in the Euclidean configuration). We show that there is a region of values of p in which cooperation is greatly enhanced, whilst for smaller values of p only a few cooperators are present in the final state, and for p → 1⁻ cooperation is totally suppressed. We present analytical arguments that provide a very plausible interpretation of the simulation results, thus unveiling the mechanism by which short-cuts contribute to promoting (or suppressing) cooperation.
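A minimal sketch of the interpolating topology described above: starting from a one-dimensional ring, a shortcut is added between each non-adjacent pair with probability p, so that p = 0 recovers the Euclidean ring and p = 1 the complete graph. The network size and the diagnostics printed are illustrative assumptions; the game dynamics are not simulated here.

import itertools, random
import networkx as nx

def ring_with_shortcuts(n, p, seed=10):
    rng = random.Random(seed)
    G = nx.cycle_graph(n)                        # the one-dimensional Euclidean lattice (ring)
    for u, v in itertools.combinations(range(n), 2):
        if not G.has_edge(u, v) and rng.random() < p:
            G.add_edge(u, v)                     # random short-cut
    return G

for p in (0.0, 0.05, 1.0):
    G = ring_with_shortcuts(200, p)
    print(p, G.number_of_edges(), nx.average_shortest_path_length(G))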
Modified multidimensional scaling approach to analyze financial markets.
Yin, Yi; Shang, Pengjian
2014-06-01
Detrended cross-correlation coefficients (σDCCA) and dynamic time warping (DTW) are introduced as dissimilarity measures, while multidimensional scaling (MDS) is employed to map the dissimilarities between the daily price returns of 24 stock markets. We first propose MDS based on σDCCA dissimilarity and MDS based on DTW dissimilarity, while MDS based on Euclidean dissimilarity is also employed to provide a reference for comparison. We apply these methods in order to further visualize the clustering between stock markets. Moreover, we confront MDS with an alternative visualization method, the "Unweighted Average" clustering method, applied to the same dissimilarity. Through the results, we find that MDS gives a more intuitive mapping for observing stable or emerging clusters of stock markets with similar behavior, and that the MDS analysis based on σDCCA dissimilarity provides clearer, more detailed, and more accurate information on the classification of the stock markets than the MDS analysis based on Euclidean dissimilarity. The MDS analysis based on DTW dissimilarity is particularly informative about the correlations between stock markets; it yields richer results on the clustering of stock markets and is much more informative than the MDS analysis based on Euclidean dissimilarity. In addition, the graphs obtained by applying the MDS methods based on σDCCA dissimilarity and DTW dissimilarity may also guide the construction of multivariate econometric models.
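A minimal sketch of embedding markets from a precomputed dissimilarity matrix with metric MDS; here a simple correlation-based distance stands in, by assumption, for the σDCCA and DTW dissimilarities used in the paper, and the return series are simulated.

import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(11)
returns = rng.normal(size=(24, 500))             # daily returns of 24 hypothetical markets
corr = np.corrcoef(returns)
dissimilarity = np.sqrt(np.clip(2 * (1 - corr), 0, None))     # a common correlation-based distance
np.fill_diagonal(dissimilarity, 0.0)

coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissimilarity)
print(coords[:3])                                 # 2-D map coordinates of the first three markets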
A Riemannian framework for orientation distribution function computing.
Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid
2009-01-01
Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Imaging (HARDI) can better explore the complex microstructure of white matter. Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. Fisher information metric has been constructed for probability density family in Information Geometry theory and it has been successfully applied for tensor computing in DTI. In this paper, we present a state of the art Riemannian framework for ODF computing based on Information Geometry and sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map and geodesic have closed forms. And the weighted Frechet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Renyi entropy H1/2 of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in an Euclidean space. As an application, Lagrange interpolation on ODF field is proposed based on weighted Frechet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks on ODF, our framework is model-free. The estimation of the parameters, i.e. Riemannian coordinates, is robust and linear. Moreover it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation.
Constraint algebra in Smolin's G → 0 limit of 4D Euclidean gravity
NASA Astrophysics Data System (ADS)
Varadarajan, Madhavan
2018-05-01
Smolin's generally covariant G_Newton → 0 limit of 4D Euclidean gravity is a useful toy model for the study of the constraint algebra in loop quantum gravity (LQG). In particular, the commutator between its Hamiltonian constraints has a metric-dependent structure function. While a prior LQG-like construction of nontrivial anomaly-free constraint commutators for the model exists, that work suffers from two defects. First, Smolin's remarks on the inability of the quantum dynamics to generate propagation effects apply. Second, the construction only yields the action of a single Hamiltonian constraint together with the action of its commutator through a continuum limit of corresponding discrete approximants; the continuum limit of a product of two or more constraints does not exist. Here, we incorporate changes in the quantum dynamics through structural modifications in the choice of discrete approximants to the quantum Hamiltonian constraint. The new structure is motivated by that responsible for propagation in an LQG-like quantization of parametrized field theory and significantly alters the space of physical states. We study the off-shell constraint algebra of the model in the context of these structural changes and show that the continuum limit action of multiple products of Hamiltonian constraints (a) is supported on an appropriate domain of states, (b) yields anomaly-free commutators between pairs of Hamiltonian constraints, and (c) is diffeomorphism covariant. Many of our considerations seem robust enough to be applied to the setting of 4D Euclidean gravity.
Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.
Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko
2017-12-01
Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the kth principal component in Euclidean space: the locus of the weighted Fréchet mean of k+1 vertex trees when the weights vary over the k-simplex. We establish some basic properties of these objects, in particular showing that they have dimension k, and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.
NASA Astrophysics Data System (ADS)
Kreymer, E. L.
2018-06-01
The model of Euclidean space with imaginary time used in sub-hadron physics uses only part of that space, since this part is isomorphic to Minkowski space and has the velocity limit 0 ≤ ||v_Ei|| ≤ 1. A model of four-dimensional Euclidean space with real time (E space), in which 0 ≤ ||v_E|| ≤ ∞, is investigated. The vectors of this space have E-invariants equal or analogous to the invariants of Minkowski space. All relations between physical quantities in E space, after they are mapped into Minkowski space, satisfy the principles of SRT and are Lorentz invariant, and the velocity of light corresponds to infinite velocity. Results obtained in the model differ from the physical laws in Minkowski space. Thus, from the model of the Lagrangian mechanics of quarks in a centrally symmetric attractive potential it follows that the energy-mass of a quark decreases with increasing velocity and is equal to zero for v = ∞. This makes it possible to establish the conditions for the emission and absorption of gluons by quarks. The effect of gluon emission by high-energy quarks was discovered experimentally considerably earlier. The model describes for the first time the dynamic coupling of the masses of constituent and current quarks and reveals new possibilities in the study of intrahadron space. The classical trajectory of the oscillation of quarks in protons is described.
DNA methylation intratumor heterogeneity in localized lung adenocarcinomas.
Quek, Kelly; Li, Jun; Estecio, Marcos; Zhang, Jiexin; Fujimoto, Junya; Roarty, Emily; Little, Latasha; Chow, Chi-Wan; Song, Xingzhi; Behrens, Carmen; Chen, Taiping; William, William N; Swisher, Stephen; Heymach, John; Wistuba, Ignacio; Zhang, Jianhua; Futreal, Andrew; Zhang, Jianjun
2017-03-28
Cancers are composed of cells with distinct molecular and phenotypic features within a given tumor, a phenomenon termed intratumor heterogeneity (ITH). Previously, we have demonstrated genomic ITH in localized lung adenocarcinomas; however, the nature of methylation ITH in lung cancers has not been well investigated. In this study, we generated methylation profiles of 48 spatially separated tumor regions from 11 localized lung adenocarcinomas and their matched normal lung tissues using the Illumina Infinium Human Methylation 450K BeadChip array. We observed methylation ITH within the same tumors, but to a much lesser extent than inter-individual heterogeneity. On average, 25% of all differentially methylated probes compared to matched normal lung tissues were shared by all regions from the same tumors. This is in contrast to somatic mutations, of which approximately 77% were shared events amongst all regions of individual tumors, suggesting that while the majority of somatic mutations were early clonal events, the tumor-specific DNA methylation might be associated with later branched evolution of these 11 tumors. Furthermore, our data showed that a higher extent of DNA methylation ITH was associated with larger tumor size (average Euclidean distance of 35.64 (>3 cm, median size) versus 27.24 (≤3 cm), p = 0.014), advanced age (average Euclidean distance of 34.95 (above 65) versus 28.06 (below 65), p = 0.046), and increased risk of postsurgical recurrence (average Euclidean distance of 35.65 (relapsed patients) versus 29.03 (patients without relapse), p = 0.039).
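A minimal sketch of the heterogeneity measure used in the comparisons above: the average pairwise Euclidean distance between the methylation profiles of spatially separated regions of one tumor. The simulated beta-value profiles below are stand-ins for real array data (an assumption).

import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(12)
n_regions, n_probes = 5, 1000
tumor_profile = rng.uniform(0, 1, size=n_probes)               # shared (clonal) methylation signal
regions = np.clip(tumor_profile + rng.normal(0, 0.05, size=(n_regions, n_probes)), 0, 1)

ith = pdist(regions, metric="euclidean").mean()                # average over all region pairs
print(ith)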
Analysis of Trajectory Parameters for Probe and Round-Trip Missions to Venus
NASA Technical Reports Server (NTRS)
Dugan, James F., Jr.; Simsic, Carl R.
1960-01-01
For one-way transfers between Earth and Venus, charts are obtained that show velocity, time, and angle parameters as functions of the eccentricity and semilatus rectum of the Sun-focused vehicle conic. From these curves, others are obtained that are useful in planning one-way and round-trip missions to Venus. The analysis is characterized by circular coplanar planetary orbits, successive two-body approximations, impulsive velocity changes, and circular parking orbits at 1.1 planet radii. For round trips the mission time considered ranges from 65 to 788 days, while wait time spent in the parking orbit at Venus ranges from 0 to 467 days. Individual velocity increments, one-way travel times, and departure dates are presented for round trips requiring the minimum total velocity increment. For both single-pass and orbiting Venusian probes, the time span available for launch becomes appreciable with only a small increase in velocity-increment capability above the minimum requirement. Velocity-increment increases are much more effective in reducing travel time for single-pass probes than they are for orbiting probes. Round trips composed of a direct route along an ellipse tangent to Earth's orbit and an aphelion route result in the minimum total velocity increment for wait times less than 100 days and mission times ranging from 145 to 612 days. Minimum-total-velocity-increment trips may be taken along perihelion-perihelion routes for wait times ranging from 300 to 467 days. These wait times occur during missions lasting from 640 to 759 days.
NASA Technical Reports Server (NTRS)
Ponchak, George E.; Papapolymerou, John; Tentzeris, Emmanouil M.; Williams, W. O. (Technical Monitor)
2002-01-01
Measured propagation characteristics of Finite Ground Coplanar (FGC) waveguide on silicon substrates with resistivities spanning 3 orders of magnitude (0.1 to 15.5 Ohm cm) and a 20 micron thick polyimide interface layer is presented as a function of the FGC geometry. Results show that there is an optimum FGC geometry for minimum loss, and silicon with a resistivity of 0.1 Ohm cm has greater loss than substrates with higher and lower resistivity. Lastly, substrates with a resistivity of 10 Ohm cm or greater have acceptable loss.
Statistical indicators of collective behavior and functional clusters in gene networks of yeast
NASA Astrophysics Data System (ADS)
Živković, J.; Tadić, B.; Wick, N.; Thurner, S.
2006-03-01
We analyze gene expression time-series data of yeast (S. cerevisiae) measured along two full cell-cycles. We quantify these data by using q-exponentials, gene expression ranking and a temporal mean-variance analysis. We construct gene interaction networks based on correlation coefficients and study the formation of the corresponding giant components and minimum spanning trees. By coloring genes according to their cell function we find functional clusters in the correlation networks and functional branches in the associated trees. Our results suggest that a percolation point of functional clusters can be identified on these gene expression correlation networks.
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Astrophysics Data System (ADS)
Akbari, D.
2017-11-01
In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction including decision boundary feature extraction (DBFE), discriminate analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both SVM and watershed segmentation algorithm. To evaluate the proposed approach, the Pavia University hyperspectral data is tested. Experimental results show that the proposed approach using GA achieves an approximately 8 % overall accuracy higher than the original MSF-based algorithm.
Comprehensive Understanding for Vegetated Scene Radiance Relationships
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Deering, D. W.
1984-01-01
The improvement of our fundamental understanding of the dynamics of directional scattering properties of vegetation canopies through analysis of field data and model simulation data is discussed. Directional reflectance distributions spanning the entire exitance hemisphere were measured in two field studies; one used a Mark III 3-band radiometer and one used a rapid-scanning bidirectional field instrument called PARABOLA. Surfaces measured included corn, soybeans, bare soils, grass lawn, orchard grass, alfalfa, cotton row crops, plowed field, annual grassland, stipa grass, hard wheat, salt plain shrubland, and irrigated wheat. Some structural and optical measurements were taken. Field data show unique reflectance distributions ranging from bare soil to complete vegetation canopies. Physical mechanisms causing these trends are proposed based on the scattering properties of soil and vegetation. Soil exhibited a strong backscattering peak toward the Sun. Complete vegetation exhibited a bowl-shaped distribution with the minimum reflectance near nadir. Incomplete vegetation canopies show a shift of the minimum reflectance away from nadir in the forward scattering direction, because the scattering properties of both the vegetation and the soil are observed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulfan, R.M.; Vachal, J.D.
1978-02-01
A Preliminary Design Study of large turbulent flow military transport aircraft has been made. The study airplanes were designed to carry a heavy payload (350,000 lb) for a long range (10,000 nmi). The study tasks included: Wing geometry/cruise speed optimization of a large cantilever wing military transport airplane; Preliminary design and performance evaluation of a strut-braced wing transport airplane; and Structural analyses of large-span cantilever and strut-braced wings of graphite/epoxy sandwich construction (1985 technology). The best cantilever wing planform for minimum takeoff gross weight, and minimum fuel requirements, as determined using statistical weight evaluations, has a high aspect ratio, low sweep, low thickness/chord ratio, and a cruise Mach number of 0.76. A near optimum wing planform with greater speed capability (M = 0.78) has an aspect ratio = 12, quarter chord sweep = 20 deg, and thickness/chord ratio of 0.14/0.08 (inboard/outboard).
On streak spacing in wall-bounded turbulent flows
NASA Technical Reports Server (NTRS)
Hamilton, James M.; Kim, John J.
1993-01-01
The present study is a continuation of the examination by Hamilton et al. of the regeneration mechanisms of near-wall turbulence and an attempt to investigate the conjecture of Waleffe et al. The basis of this study is an extension of the 'minimal channel' approach of Jimenez and Moin that emphasizes the near-wall region and reduces the complexity of the turbulent flow by considering a plane Couette flow of near minimum Reynolds number and stream-wise and span-wise extent. Reduction of the flow Reynolds number to the minimum value which will allow turbulence to be sustained has the effect of reducing the ratio of the largest scales to the smallest scales or, equivalently, of causing the near-wall region to fill more of the area between the channel walls. A plane Couette flow was chosen for study since this type of flow has a mean shear of a single sign, and at low Reynolds numbers, the two wall regions are found to share a single set of structures.
Annealing Ant Colony Optimization with Mutation Operator for Solving TSP
2016-01-01
Ant Colony Optimization (ACO) has been successfully applied to solve a wide range of combinatorial optimization problems such as the minimum spanning tree, the traveling salesman problem, and the quadratic assignment problem. Basic ACO has the drawbacks of being trapped in local minima and of a low convergence rate. Simulated annealing (SA) and a mutation operator provide the ability to escape local minima and to converge globally, while local search can speed up convergence. Therefore, this paper proposes a hybrid ACO algorithm integrating the advantages of ACO, SA, a mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO; SA and the mutation operator are used to increase the diversity of the ant population from time to time, and local search is used to exploit the current search area efficiently. Comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality. PMID:27999590
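A minimal sketch of the basic ACO core on which such hybrids build, without the simulated-annealing, mutation, and local-search extensions described above; the parameter values and the random instance are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(13)
n = 15
pts = rng.uniform(size=(n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)   # eye avoids /0 on the diagonal

alpha, beta, rho, Q, n_ants, n_iters = 1.0, 3.0, 0.5, 1.0, 20, 100
pheromone = np.ones((n, n))
best_len, best_tour = np.inf, None

for _ in range(n_iters):
    tours = []
    for _ in range(n_ants):
        tour = [int(rng.integers(n))]
        while len(tour) < n:
            i = tour[-1]
            mask = np.ones(n, dtype=bool)
            mask[tour] = False                                  # cities not yet visited
            weights = (pheromone[i, mask] ** alpha) * ((1.0 / dist[i, mask]) ** beta)
            nxt = np.flatnonzero(mask)[rng.choice(mask.sum(), p=weights / weights.sum())]
            tour.append(int(nxt))
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        tours.append((length, tour))
        if length < best_len:
            best_len, best_tour = length, tour
    pheromone *= (1 - rho)                                      # pheromone evaporation
    for length, tour in tours:                                  # deposit proportional to tour quality
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            pheromone[a, b] += Q / length
            pheromone[b, a] += Q / length

print(best_len, best_tour)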
Photosynthetic thermotolerance of woody savanna species in China is correlated with leaf life span
Zhang, Jiao-Lin; Poorter, L.; Hao, Guang-You; Cao, Kun-Fang
2012-01-01
Background and Aims Photosynthetic thermotolerance (PT) is important for plant survival in tropical and sub-tropical savannas. However, little is known about thermotolerance of tropical and sub-tropical wild plants and its association with leaf phenology and persistence. Longer-lived leaves of savanna plants may experience a higher risk of heat stress. Foliar Ca is related to cell integrity of leaves under stresses. In this study it is hypothesized that (1) species with leaf flushing in the hot-dry season have greater PT than those with leaf flushing in the rainy season; and (2) PT correlates positively with leaf life span, leaf mass per unit area (LMA) and foliar Ca concentration ([Ca]) across woody savanna species. Methods The temperature-dependent increase in minimum fluorescence was measured to assess PT, together with leaf dynamics, LMA and [Ca] for a total of 24 woody species differing in leaf flushing time in a valley-type savanna in south-west China. Key Results The PT of the woody savanna species with leaf flushing in the hot-dry season was greater than that of those with leaf flushing in the rainy season. Thermotolerance was positively associated with leaf life span and [Ca] for all species irrespective of the time of flushing. The associations of PT with leaf life span and [Ca] were evolutionarily correlated. Thermotolerance was, however, independent of LMA. Conclusions Chinese savanna woody species are adapted to hot-dry habitats. However, the current maximum leaf temperature during extreme heat stress (44·3 °C) is close to the critical temperature of photosystem II (45·2 °C); future global warming may increase the risk of heat damage to the photosynthetic apparatus of Chinese savanna species. PMID:22875810
Great geomagnetic storm of 9 November 1991: Association with a disappearing solar filament
NASA Astrophysics Data System (ADS)
Cliver, E. W.; Balasubramaniam, K. S.; Nitta, N. V.; Li, X.
2009-02-01
We attribute the great geomagnetic storm on 8-10 November 1991 to a large-scale eruption that encompassed the disappearance of a ~25° solar filament in the southern solar hemisphere. The resultant soft X-ray arcade spanned ~90° of solar longitude. The rapid growth of an active region lying at one end of the X-ray arcade appears to have triggered the eruption. This is the largest geomagnetic storm yet associated with the eruption of a quiescent filament. The minimum hourly Dst value of -354 nT on 9 November 1991 compares with a minimum Dst value of -161 nT for the largest 27-day recurrent (coronal hole) storm observed from 1972 to 2005 and the minimum -559 nT value observed during the flare-associated storm of 14 March 1989, the greatest magnetic storm recorded during the space age. Overall, the November 1991 storm ranks 15th on a list of Dst storms from 1905 to 2004, surpassing in intensity such well-known storms as 14 July 1982 (-310 nT) and 15 July 2000 (-317 nT). We used the Cliver et al. and Gopalswamy et al. empirical models of coronal mass ejection propagation in the solar wind to provide consistency checks on the eruption/storm association.
[Survey on menopausal age and menstruation span in women in Pudong district of Shanghai].
Chen, Hua; Feng, You-ji; Shu, Hui-min; Lu, Tian-mei; Zhu, Hong-mei; Yang, Bin-lie; Xiong, Miao
2010-06-01
To investigate natural spontaneous menopausal age, menstruation span, and their relationship with menarche age and parity in women in the Pudong district of Shanghai. From Jan 2007 to Jul 2008, 15 083 spontaneously menopausal women undergoing cervical cancer screening were enrolled in this study. The questionnaire covered menarche age, parity, spontaneous menopausal age, and menstruation span. The women were divided into four age groups: 56 - 60, 61 - 65, 66 - 70, and more than 70 years. Analysis of variance (ANOVA) was used to compare differences in menopausal age and menstruation span, and multiple-factor regression was used to analyze the relationship of menarche age and parity with menopausal age and menstruation span. (1) Spontaneous menopausal age: the minimum was 29 years, the maximum was 61 years, and the mean was (50.6 ± 3.7) years. The mean spontaneous menopausal ages were (50.9 ± 3.4), (50.7 ± 3.7), (50.0 ± 4.1), and (49.6 ± 4.0) years in the groups of 56 - 60, 61 - 65, 66 - 70, and more than 70 years, respectively. Menopausal age showed an increasing trend from the oldest to the youngest age group, with a difference of 1.36 years between the 56 - 60 and more-than-70 groups. (2) Menstruation span: the mean menstruation span was (34.3 ± 4.1) years, with a recorded minimum of 12 years and maximum of 48 years. Means of (34.6 ± 3.8), (34.3 ± 4.1), (33.9 ± 4.6), and (33.2 ± 4.5) years were observed in the groups of 56 - 60, 61 - 65, 66 - 70, and more than 70 years, respectively. Menstruation span likewise showed an increasing trend from the oldest to the youngest age group, with a difference of 1.41 years between the 56 - 60 and more-than-70 groups. (3) Impact of menarche age on menopausal age and menstruation span: there was no correlation between menarche age and menopausal age (r = 0.02); however, menstruation span was negatively correlated with menarche age (r = -0.43). (4) Impact of parity on menopausal age and menstruation span: the mean menopausal age of women with 1 - 2 deliveries was significantly higher than that of women with no delivery or more than 3 deliveries (P < 0.05), whereas there was no difference in menopausal age between women with 1 and 2 deliveries or between women without delivery and those with more than 3 deliveries (P > 0.05). The menstruation span of women with 1 delivery was significantly longer than that of women with more than 1 delivery (P < 0.05); similarly, women with 2 deliveries had a longer menstruation span than women with no delivery or more than 3 deliveries (P < 0.05). There was no difference in menstruation span between women with more than 3 deliveries and those without delivery (P > 0.05). (5) Multiple-factor regression analysis of menstruation span: menarche age was negatively correlated with menstruation span (r = -0.97, P < 0.001). Menstruation span differed significantly between the groups of 61 - 65, 66 - 70, and more than 70 years and the group of 56 - 60 years (r = -0.18, P = 0.020; r = -0.78, P < 0.001; and r = -1.23, P < 0.001, respectively). Menstruation span in women with 1 - 2 deliveries was significantly longer than that of women without delivery or with more than 3 deliveries.
(6) Multifactor logistic analysis of menopausal age: there was no association between menarche age and menopausal age; however, significant differences in mean menopausal age were found between groups, with the 56 - 60 group having a significantly higher menopausal age than the 61 - 65, 66 - 70, and more-than-70 groups (r = -0.18, P = 0.020; r = -0.78, P < 0.001; r = -1.23, P < 0.001). Menopausal age in women with 1 - 2 deliveries was significantly higher than in women without delivery or with more than 3 deliveries, whereas no difference was observed between women with 1 and 2 deliveries or between women without delivery and those with more than 3 deliveries. Conclusions: (1) Menopausal age and menstruation span exhibited increasing trends in the Pudong district of Shanghai. (2) Menarche age and parity were important factors influencing menopausal age and menstruation span. (3) The younger the age at menarche, the longer the menstruation span. (4) Having 1 - 2 deliveries significantly delayed menopause and prolonged the menstruation span, whereas multiple deliveries (≥ 3) had no significant impact on menopausal age or menstruation span.
Metric and geometric morphometric analysis of new hominin fossils from Maba (Guangdong, China).
Xiao, Dongfang; Bae, Christopher J; Shen, Guanjun; Delson, Eric; Jin, Jennie J H; Webb, Nicole M; Qiu, Licheng
2014-09-01
We present an analysis of a set of previously unreported hominin fossils from Maba (Guangdong, China), a cave site that is best known for the presence of a partial hominin cranium currently assigned as mid-Pleistocene Homo and that has been traditionally dated to around the Middle-Late Pleistocene transition. A more recent set of Uranium series dates indicate that the Maba travertine may date to >237 ka (thousands of years ago), as opposed to the original U-series date, which placed Maba at 135-129 ka. The fossils under study include five upper first and second molars and a partial left mandible with a socketed m3, all recovered from different parts of the site than the cranium or the dated sediments. The results of our metric and 2D geometric morphometric ('GM') study suggest that the upper first molars are likely from modern humans, suggesting a more recent origin. The upper second molars align more closely with modern humans, though the minimum spanning tree from the 2D GM analysis also connects Maba to Homo neanderthalensis. The patterning in the M2s is not as clear as with the M1s. The m3 and partial mandible are morphometrically intermediate between Holocene modern humans and older Homo sapiens. However, a minimum spanning tree indicates that both the partial mandible and m3 align most closely with Holocene modern humans, and they also may be substantially younger than the cranium. Because questions exist regarding the context and the relationship of the dated travertine with the hominin fossils, we suggest caution is warranted in interpreting the Maba specimens. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Ke; Testi, Leonardo; Burkert, Andreas; Walmsley, C. Malcolm; Beuther, Henrik; Henning, Thomas
2016-09-01
Large-scale gaseous filaments with lengths up to the order of 100 pc are on the upper end of the filamentary hierarchy of the Galactic interstellar medium (ISM). Their association with respect to the Galactic structure and their role in Galactic star formation are of great interest from both an observational and theoretical point of view. Previous "by-eye" searches, combined together, have started to uncover the Galactic distribution of large filaments, yet inherent bias and small sample size prevent conclusive statistical results from being drawn. Here, we present (1) a new, automated method for identifying large-scale velocity-coherent dense filaments, and (2) the first statistics and the Galactic distribution of these filaments. We use a customized minimum spanning tree algorithm to identify filaments by connecting voxels in position-position-velocity space, using the Bolocam Galactic Plane Survey spectroscopic catalog. In the range 7.5° ≤ l ≤ 194°, we have identified 54 large-scale filaments and derived their mass (~10^3-10^5 M_⊙), length (10-276 pc), linear mass density (54-8625 M_⊙ pc^-1), aspect ratio, linearity, velocity gradient, temperature, fragmentation, Galactic location, and orientation angle. The filaments concentrate along major spiral arms. They are widely distributed across the Galactic disk, with 50% located within ±20 pc of the Galactic mid-plane and 27% running along the centers of spiral arms. On the order of 1% of the molecular ISM is confined in large filaments. Massive star formation is more favorable in large filaments compared to elsewhere. This is the first comprehensive catalog of large filaments that can be useful for a quantitative comparison with spiral structures and numerical simulations.
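A minimal sketch of the general idea of MST-based grouping in position-position-velocity space is given below; the velocity-to-angle scaling, the edge-length cut, and the synthetic catalog are assumptions and do not reproduce the paper's customized algorithm or criteria.

```python
# Sketch of MST-based grouping of catalog entries in position-position-velocity
# space. The velocity scaling and the edge-length cut are illustrative
# assumptions, not the customized criteria used in the paper.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
l = rng.uniform(10, 60, 200)      # Galactic longitude [deg]
b = rng.uniform(-1, 1, 200)       # Galactic latitude [deg]
v = rng.uniform(-20, 80, 200)     # LSR velocity [km/s]

v_scale = 0.5                     # assumed: 1 deg "equals" 2 km/s
X = np.column_stack([l, b, v * v_scale])

W = squareform(pdist(X))          # pairwise Euclidean distances in scaled PPV
mst = minimum_spanning_tree(W).toarray()

edge_cut = 1.0                    # assumed maximum edge length [deg-equivalent]
mst[mst > edge_cut] = 0           # cut long edges to split the tree into groups

n_groups, labels = connected_components(mst, directed=False)
sizes = np.bincount(labels)
print("candidate filaments (>= 5 members):", np.sum(sizes >= 5))
```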
Removing the Shackles of Euclid: 1 Classification.
ERIC Educational Resources Information Center
Fielker, David S.
1981-01-01
A new approach to classifying quadrilaterals is presented that tries to classify all possible combinations of shapes and angles by removing the traditional Euclidean viewpoints. The document concludes with a brief look at other types of polygons. (MP)
A Log-Euclidean polyaffine registration for articulated structures in medical images.
Martín-Fernández, Miguel Angel; Martín-Fernández, Marcos; Alberola-López, Carlos
2009-01-01
In this paper we generalize the Log-Euclidean polyaffine registration framework of Arsigny et al. to deal with articulated structures. This framework has very useful properties, as it guarantees the invertibility of smooth geometric transformations. In articulated registration, a skeleton model is defined for rigid structures such as bones. The final transformation is affine for the bones and elastic for the other tissues in the image. We extend Arsigny et al.'s method to deal with locally-affine registration of pairs of wires, which makes it possible to use this registration framework for articulated structures. In this context, the design of the weighting functions, which merge the affine transformations defined for each pair of wires, has a great impact not only on the final result of the registration algorithm, but also on the invertibility of the global elastic transformation. Several experiments, using both synthetic images and hand radiographs, are also presented.
Extrinsic local regression on manifold-valued data
Lin, Lizhen; St Thomas, Brian; Zhu, Hongtu; Dunson, David B.
2017-01-01
We propose an extrinsic regression framework for modeling data with manifold-valued responses and Euclidean predictors. Regression with manifold responses has wide applications in shape analysis, neuroscience, medical imaging and many other areas. Our approach embeds the manifold where the responses lie into a higher-dimensional Euclidean space, obtains a local regression estimate in that space, and then projects this estimate back onto the image of the manifold. Outside the regression setting, both intrinsic and extrinsic approaches have been proposed for modeling i.i.d. manifold-valued data. However, to our knowledge our work is the first to take an extrinsic approach to the regression problem. The proposed extrinsic regression framework is general, computationally efficient and theoretically appealing. Asymptotic distributions and convergence rates of the extrinsic regression estimates are derived, and a large class of examples is considered, indicating the wide applicability of our approach. PMID:29225385
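As a rough illustration of the embed/regress/project idea (not the authors' estimator), the sketch below performs extrinsic kernel regression for responses on the unit sphere: the local average is computed in the ambient R^3 and then projected back onto the sphere by normalization. The bandwidth and the toy data model are assumptions.

```python
# Minimal illustration of extrinsic local (kernel) regression for responses on
# the unit sphere S^2 embedded in R^3: regress in the ambient Euclidean space,
# then project the estimate back onto the manifold (here, by normalization).
# Bandwidth and the toy data-generating model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def sphere_point(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# toy data: Euclidean predictor x, sphere-valued response y
x = rng.uniform(0, 1, 200)
y = np.array([sphere_point(0.3 + 2.0 * xi + 0.05 * rng.normal(),
                           1.0 * xi + 0.05 * rng.normal()) for xi in x])

def extrinsic_regression(x0, x, y, h=0.05):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)        # Gaussian kernel weights
    m = (w[:, None] * y).sum(axis=0) / w.sum()    # local average in R^3
    return m / np.linalg.norm(m)                  # project back onto S^2

y_hat = extrinsic_regression(0.5, x, y)
print(y_hat, np.linalg.norm(y_hat))               # unit-norm prediction
```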
Sensor Network Localization by Eigenvector Synchronization Over the Euclidean Group
CUCURINGU, MIHAI; LIPMAN, YARON; SINGER, AMIT
2013-01-01
We present a new approach to localization of sensors from noisy measurements of a subset of their Euclidean distances. Our algorithm starts by finding, embedding, and aligning uniquely realizable subsets of neighboring sensors called patches. In the noise-free case, each patch agrees with its global positioning up to an unknown rigid motion of translation, rotation, and possibly reflection. The reflections and rotations are estimated using the recently developed eigenvector synchronization algorithm, while the translations are estimated by solving an overdetermined linear system. The algorithm is scalable as the number of nodes increases and can be implemented in a distributed fashion. Extensive numerical experiments show that it compares favorably to other existing algorithms in terms of robustness to noise, sparse connectivity, and running time. While our approach is applicable to higher dimensions, in the current article, we focus on the two-dimensional case. PMID:23946700
Quantum entanglement in de Sitter space with a wall and the decoherence of bubble universes
NASA Astrophysics Data System (ADS)
Albrecht, Andreas; Kanno, Sugumi; Sasaki, Misao
2018-04-01
We study the effect of a bubble wall on the entanglement entropy of a free massive scalar field between two causally disconnected open charts in de Sitter space. We assume there is a delta-functional wall between the open charts. This can be thought of as a model of pair creation of bubble universes in de Sitter space. We first derive the Euclidean vacuum mode functions of the scalar field in the presence of the wall in the coordinates that respect the open charts. We then derive the Bogoliubov transformation between the Euclidean vacuum and the open chart vacua that makes the reduced density matrix diagonal. We find that larger walls lead to less entanglement. Our result may be regarded as evidence of decoherence of bubble universes from each other. We also note an interesting relationship between our results and discussions of the black hole firewall problem.
Artificial immune system via Euclidean Distance Minimization for anomaly detection in bearings
NASA Astrophysics Data System (ADS)
Montechiesi, L.; Cocconcelli, M.; Rubini, R.
2016-08-01
In recent years new diagnostic methodologies have emerged, with particular interest in machinery operating in non-stationary conditions. Continuous speed changes and variable loads make spectrum analysis non-trivial: a variable speed means a variable characteristic fault frequency related to the damage, which is no longer recognizable in the spectrum. To overcome this problem the scientific community has proposed different approaches falling into two main categories: model-based approaches and expert systems. In this context the paper presents a simple expert system derived from the mechanisms of the immune system, called Euclidean Distance Minimization, and its application to a real case of bearing fault recognition. The proposed method is a simplification of the original process, adapted from the class of Artificial Immune Systems, which proved to be useful and promising in different application fields. Comparative results are provided, with a complete explanation of the algorithm and its functioning aspects.
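To make the idea of classification by Euclidean distance minimization concrete, here is a bare-bones nearest-prototype sketch with an anomaly threshold; the feature choices, prototypes, and threshold are illustrative assumptions, and this is not the immune-system-derived procedure of the paper.

```python
# Bare-bones nearest-prototype classification by Euclidean distance
# minimization, in the spirit of distance-based fault recognition.
# Features, prototypes, and the anomaly threshold are illustrative assumptions.
import numpy as np

# training feature vectors (e.g., RMS, kurtosis, crest factor) per condition
train = {
    "healthy":     np.array([[0.9, 3.0, 3.2], [1.0, 3.1, 3.4]]),
    "inner_fault": np.array([[1.8, 6.5, 5.0], [1.9, 7.0, 5.3]]),
    "outer_fault": np.array([[1.5, 5.0, 4.2], [1.6, 5.4, 4.5]]),
}
prototypes = {k: v.mean(axis=0) for k, v in train.items()}

def classify(x, prototypes, anomaly_threshold=2.0):
    dists = {k: np.linalg.norm(x - p) for k, p in prototypes.items()}
    label = min(dists, key=dists.get)        # Euclidean distance minimization
    if dists[label] > anomaly_threshold:     # too far from every prototype
        return "anomaly", dists[label]
    return label, dists[label]

print(classify(np.array([1.85, 6.8, 5.1]), prototypes))
print(classify(np.array([4.0, 12.0, 9.0]), prototypes))
```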
Asymptotically locally Euclidean/Kaluza-Klein stationary vacuum black holes in five dimensions
NASA Astrophysics Data System (ADS)
Khuri, Marcus; Weinstein, Gilbert; Yamada, Sumio
2018-05-01
We produce new examples, both explicit and analytical, of bi-axisymmetric stationary vacuum black holes in five dimensions. A novel feature of these solutions is that they are asymptotically locally Euclidean, in which spatial cross-sections at infinity have lens space L(p,q) topology, or asymptotically Kaluza-Klein, so that spatial cross-sections at infinity are topologically S^1 × S^2. These are nondegenerate black holes of cohomogeneity 2, with any number of horizon components, where the horizon cross-section topology is any one of the three admissible types: S^3, S^1 × S^2, or L(p,q). Uniqueness of these solutions is also established. Our method is to solve the relevant harmonic map problem with prescribed singularities, having target symmetric space SL(3,R)/SO(3). In addition, we analyze the possibility of conical singularities and find a large family for which geometric regularity is guaranteed.
Wormholes and the cosmological constant problem.
NASA Astrophysics Data System (ADS)
Klebanov, I.
The author reviews the cosmological constant problem and the recently proposed wormhole mechanism for its solution. Summation over wormholes in the Euclidean path integral for gravity turns all the coupling parameters into dynamical variables, sampled from a probability distribution. A formal saddle point analysis results in a distribution with a sharp peak at the cosmological constant equal to zero, which appears to solve the cosmological constant problem. He discusses the instabilities of the gravitational Euclidean path integral and the difficulties with its interpretation. He presents an alternate formalism for baby universes, based on the "third quantization" of the Wheeler-De Witt equation. This approach is analyzed in a minisuperspace model for quantum gravity, where it reduces to simple quantum mechanics. Once again, the coupling parameters become dynamical. Unfortunately, the a priori probability distribution for the cosmological constant and other parameters is typically a smooth function, with no sharp peaks.
An ensemble of dissimilarity based classifiers for Mackerel gender determination
NASA Astrophysics Data System (ADS)
Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.
2014-03-01
Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify specimens by sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrozen) to find differences between the sexes. Several linear and non-linear classifiers such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA) can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect the sample proximities accurately. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers, with diversity induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.
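The sketch below illustrates one way such an ensemble can be assembled: one k-NN classifier per dissimilarity measure, combined by majority vote. The choice of metrics, the synthetic data, and scikit-learn's k-NN are assumptions, not the paper's exact models.

```python
# Sketch of an ensemble of dissimilarity-based classifiers: one k-NN model per
# dissimilarity measure, combined by majority vote. Metrics and synthetic data
# are illustrative assumptions, not the paper's exact ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

metrics = ["euclidean", "manhattan", "chebyshev", "canberra", "braycurtis"]
preds = []
for m in metrics:
    clf = KNeighborsClassifier(n_neighbors=5, metric=m).fit(X_tr, y_tr)
    preds.append(clf.predict(X_te))

# majority vote across the five dissimilarity-based classifiers (binary labels)
vote = (np.vstack(preds).mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", (vote == y_te).mean())
```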
Curvature-driven morphing of non-Euclidean shells
NASA Astrophysics Data System (ADS)
Pezzulla, Matteo; Stoop, Norbert; Jiang, Xin; Holmes, D. P.
2017-05-01
We investigate how thin structures change their shape in response to non-mechanical stimuli that can be interpreted as variations in the structure's natural curvature. Starting from the theory of non-Euclidean plates and shells, we derive an effective model that reduces a three-dimensional stimulus to the natural fundamental forms of the mid-surface of the structure, incorporating expansion, or growth, in the thickness. Then, we apply the model to a variety of thin bodies, from flat plates to spherical shells, obtaining excellent agreement between theory and numerics. We show how cylinders and cones can either bend more or unroll, and eventually snap and rotate. We also study the nearly isometric deformations of a spherical shell and describe how this shape change is ruled by the geometry of a spindle. As the derived results stem from a purely geometrical model, they are general and scalable.
Mass effects and internal space geometry in triatomic reaction dynamics
NASA Astrophysics Data System (ADS)
Yanao, Tomohiro; Koon, Wang S.; Marsden, Jerrold E.
2006-05-01
The effect of the distribution of mass in triatomic reaction dynamics is analyzed using the geometry of the associated internal space. Atomic masses are appropriately incorporated into internal coordinates as well as the associated non-Euclidean internal space metric tensor after a separation of the rotational degrees of freedom. Because of the non-Euclidean nature of the metric in the internal space, terms such as connection coefficients arise in the internal equations of motion, which act as velocity-dependent forces in a coordinate chart. By statistically averaging these terms, an effective force field is deduced, which accounts for the statistical tendency of geodesics in the internal space. This force field is shown to play a crucial role in determining mass-related branching ratios of isomerization and dissociation dynamics of a triatomic molecule. The methodology presented can be useful for qualitatively predicting branching ratios in general triatomic reactions, and may be applied to the study of isotope effects.
Pion distribution amplitude from Euclidean correlation functions
NASA Astrophysics Data System (ADS)
Bali, Gunnar S.; Braun, Vladimir M.; Gläßle, Benjamin; Göckeler, Meinulf; Gruber, Michael; Hutzler, Fabian; Korcyl, Piotr; Lang, Bernhard; Schäfer, Andreas; Wein, Philipp; Zhang, Jian-Hui
2018-03-01
Following the proposal in (Braun and Müller. Eur Phys J C55:349, 2008), we study the feasibility to calculate the pion distribution amplitude (DA) from suitably chosen Euclidean correlation functions at large momentum. In our lattice study we employ the novel momentum smearing technique (Bali et al. Phys Rev D93:094515, 2016; Bali et al. Phys Lett B774:91, 2017). This approach is complementary to the calculations of the lowest moments of the DA using the Wilson operator product expansion and avoids mixing with lower dimensional local operators on the lattice. The theoretical status of this method is similar to that of quasi-distributions (Ji. Phys Rev Lett 110:262002, 2013) that have recently been used in (Zhang et al. Phys Rev D95:094514, 2017) to estimate the twist two pion DA. The similarities and differences between these two techniques are highlighted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezerra de Mello, E.R.
2006-01-15
In this paper we present, in integral form, the Euclidean Green function associated with a massless scalar field in the five-dimensional Kaluza-Klein magnetic monopole superposed to a global monopole, admitting a nontrivial coupling between the field and the geometry. This Green function is expressed as the sum of two contributions: the first, related to the uncharged component of the field, is similar to the Green function associated with a scalar field in a four-dimensional global monopole space-time; the second contains the information of all the other components. Using this Green function it is possible to study the vacuum polarization effects on this space-time. Explicitly we calculate the renormalized vacuum expectation value ⟨φ*(x)φ(x)⟩_Ren, which in turn is also expressed as the sum of two contributions.
NASA Astrophysics Data System (ADS)
Jonsson, Rickard M.
2005-03-01
I present a way to visualize the concept of curved spacetime. The result is a curved surface with local coordinate systems (Minkowski systems) living on it, giving the local directions of space and time. Relative to these systems, special relativity holds. The method can be used to visualize gravitational time dilation, the horizon of black holes, and cosmological models. The idea underlying the illustrations is first to specify a field of timelike four-velocities uμ. Then, at every point, one performs a coordinate transformation to a local Minkowski system comoving with the given four-velocity. In the local system, the sign of the spatial part of the metric is flipped to create a new metric of Euclidean signature. The new positive definite metric, called the absolute metric, can be covariantly related to the original Lorentzian metric. For the special case of a two-dimensional original metric, the absolute metric may be embedded in three-dimensional Euclidean space as a curved surface.
NASA Astrophysics Data System (ADS)
Kragh, Helge
2012-12-01
The idea that space is not Euclidean by necessity, and that there are other kinds of "curved" spaces, diffused slowly to the physical and astronomical sciences. Until Einstein's general theory of relativity, only a handful of astronomers contemplated a connection between non-Euclidean geometry and real space. One of them, the German astrophysicist Johann Carl Friedrich Zöllner (1834-1882), suggested in 1872 a remarkable cosmological model describing a finite universe in closed space. I examine Zöllner's little-known contribution to cosmology and also his even more unorthodox speculations of a four-dimensional space including both physical and spiritual phenomena. I provide an overview of Zöllner's scientific work, of his status in the German scientific community, and of the controversies caused by his polemical style of science. Zöllner's cosmology was effectively forgotten, but there is no reason why it should remain an unwritten chapter in the history of science.
Cataractogenic potential of ionizing radiations in animal models that simulate man
NASA Technical Reports Server (NTRS)
Lett, J. T.; Cox, A. B.; Lee, A. C.
1986-01-01
Aspects of experiments on radiation-induced lenticular opacification during the life spans of two animal models, the New Zealand white rabbit and the rhesus monkey, are compared and contrasted with published results from a life-span study of another animal model, the beagle dog, and the most recent data from the ongoing study of the survivors from radiation exposure at Hiroshima and Nagasaki. An important connection among the three animal studies is that all the measurements of cataract indices were made by one of the authors (Lee), so variation from personal subjectivity was reduced to a minimum. The primary objective of the rabbit experiments (radiations involved: Fe-56, Ar-40, and Ne-20 ions and Co-60 gamma photons) is an evaluation of hazards to astronauts from Galactic particulate radiations. An analogous evaluation of hazards from solar flares during space flight is being made with monkeys exposed to 32, 55, 138 and 400-MeV protons. Conclusions are drawn about the proper use of animal models to simulate radiation responses in man and the levels of radiation-induced lenticular opacification that pose risks to man in space.
NASA Astrophysics Data System (ADS)
Hamprecht, Fred A.; Peter, Christine; Daura, Xavier; Thiel, Walter; van Gunsteren, Wilfred F.
2001-02-01
We propose an approach for summarizing the output of long simulations of complex systems, affording a rapid overview and interpretation. First, multidimensional scaling techniques are used in conjunction with dimension reduction methods to obtain a low-dimensional representation of the configuration space explored by the system. A nonparametric estimate of the density of states in this subspace is then obtained using kernel methods. The free energy surface is calculated from that density, and the configurations produced in the simulation are then clustered according to the topography of that surface, such that all configurations belonging to one local free energy minimum form one class. This topographical cluster analysis is performed using basin spanning trees, which we introduce as subgraphs of Delaunay triangulations. Free energy surfaces obtained in dimensions lower than four can be visualized directly using iso-contours and -surfaces. Basin spanning trees also afford a glimpse of higher-dimensional topographies. The procedure is illustrated using molecular dynamics simulations on the reversible folding of peptide analogues. Finally, we emphasize the intimate relation of density estimation techniques to modern enhanced sampling algorithms.
Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul
2013-01-01
Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces, and a covariate such as elevation (Digital Elevation Model, DEM) is often used to improve the interpolation accuracy. One key area that little research has addressed is determining which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (e.g., precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude). In addition, we examined the uncertainty of the interpolated climate surfaces. Specifically, Thin Plate Spline (TPS) was used as the interpolation method since it is one of the most popular techniques for generating climate surfaces. We considered several covariates, including DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). A tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while DEM was the leading covariate for maximum, mean, and minimum temperatures. A comparison with other products such as PRISM and WorldClim showed strong agreement across large geographic areas, but the climate surfaces generated in this study (ClimSurf) had greater variability in high-elevation regions, such as the Sierra Nevada Mountains.
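A hedged sketch of the evaluation loop is given below: thin-plate-spline interpolation from station coordinates plus one covariate (elevation), scored by 10-fold cross-validation. The synthetic data, the covariate rescaling, and the use of scipy's RBFInterpolator as a stand-in for the study's TPS implementation are assumptions.

```python
# Hedged sketch: thin-plate-spline interpolation of a climate variable from
# station coordinates plus one covariate (elevation), with 10-fold
# cross-validation of the error. Synthetic data; scipy's RBFInterpolator with
# the 'thin_plate_spline' kernel stands in for the study's TPS implementation.
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
n = 500
lon = rng.uniform(-124.7, -112.9, n)
lat = rng.uniform(32.0, 49.0, n)
elev = rng.uniform(0, 3000, n)                       # covariate (m)
temp = 30 - 0.0065 * elev - 0.8 * (lat - 32) + rng.normal(0, 0.5, n)

X = np.column_stack([lon, lat, elev / 1000.0])       # rescaled covariate (assumption)

rmses = []
for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    tps = RBFInterpolator(X[tr], temp[tr], kernel="thin_plate_spline", smoothing=1.0)
    pred = tps(X[te])
    rmses.append(np.sqrt(np.mean((pred - temp[te]) ** 2)))
print("10-fold CV RMSE:", np.mean(rmses))
```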
Alarcon Falconi, Tania M; Kulinkina, Alexandra V; Mohan, Venkata Raghava; Francis, Mark R; Kattula, Deepthi; Sarkar, Rajiv; Ward, Honorine; Kang, Gagandeep; Balraj, Vinohar; Naumova, Elena N
2017-01-01
Municipal water sources in India have been found to be highly contaminated, with further water quality deterioration occurring during household storage. Quantifying water quality deterioration requires knowledge about the exact source tap and length of water storage at the household, which is not usually known. This study presents a methodology to link source and household stored water, and explores the effects of spatial assumptions on the association between tap-to-household water quality deterioration and enteric infections in two semi-urban slums of Vellore, India. To determine a possible water source for each household sample, we paired household and tap samples collected on the same day using three spatial approaches implemented in GIS: minimum Euclidean distance; minimum network distance; and inverse network-distance weighted average. Logistic and Poisson regression models were used to determine associations between water quality deterioration and household-level characteristics, and between diarrheal cases and water quality deterioration. On average, 60% of households had higher fecal coliform concentrations in household samples than at source taps. Only the weighted average approach detected a higher risk of water quality deterioration for households that do not purify water and that have animals in the home (RR=1.50 [1.03, 2.18], p=0.033); and showed that households with water quality deterioration were more likely to report diarrheal cases (OR=3.08 [1.21, 8.18], p=0.02). Studies to assess contamination between source and household are rare due to methodological challenges and high costs associated with collecting paired samples. Our study demonstrated it is possible to derive useful spatial links between samples post hoc; and that the pairing approach affects the conclusions related to associations between enteric infections and water quality deterioration. Copyright © 2016 Elsevier GmbH. All rights reserved.
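Two of the three pairing approaches can be sketched with synthetic coordinates as below: (a) minimum Euclidean distance to a tap and (b) an inverse-distance weighted average over taps. Projected coordinates in meters and the weighting form are assumptions; network (road) distances are omitted.

```python
# Sketch of two spatial pairing approaches: (a) minimum Euclidean distance to a
# tap and (b) an inverse-distance weighted average of tap water quality.
# Coordinates are assumed projected (meters); data are synthetic.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(9)
taps = rng.uniform(0, 1000, size=(25, 2))            # tap locations [m]
tap_fc = rng.lognormal(2.0, 1.0, size=25)             # fecal coliform at taps
homes = rng.uniform(0, 1000, size=(100, 2))            # household locations [m]

# (a) minimum Euclidean distance: nearest tap per household
dist, idx = cKDTree(taps).query(homes)
nearest_fc = tap_fc[idx]

# (b) inverse-distance weighted average over all taps
d = np.linalg.norm(homes[:, None, :] - taps[None, :, :], axis=-1)
w = 1.0 / np.maximum(d, 1.0)                           # avoid division by zero
idw_fc = (w * tap_fc).sum(axis=1) / w.sum(axis=1)

print(nearest_fc[:3], idw_fc[:3])
```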
Numerical methods for comparing fresh and weathered oils by their FTIR spectra.
Li, Jianfeng; Hibbert, D Brynn; Fuller, Stephen
2007-08-01
Four comparison statistics ('similarity indices') for identifying the source of a petroleum oil spill, based on the ASTM standard test method D3414, were investigated: (1) the first-difference correlation coefficient squared, (2) the correlation coefficient squared, (3) the first-difference Euclidean cosine squared, and (4) the Euclidean cosine squared. For numerical comparison, an FTIR spectrum is divided into three regions, described as fingerprint (900-700 cm(-1)), generic (1350-900 cm(-1)) and supplementary (1770-1685 cm(-1)), which are the same as the three major regions recommended by the ASTM standard. For fresh oil samples, each similarity index was able to distinguish between replicate independent spectra of the same sample and between different samples. In general, the two first-difference-based indices worked better than their parent indices. To provide samples that reveal relationships between weathered and fresh oils, a simple artificial weathering procedure was carried out. The Euclidean cosine and correlation coefficient both worked well to maintain identification of a match in the fingerprint region, and the two first-difference indices were better in the generic region. Receiver operating characteristic curves (true positive rate versus false positive rate) for decisions on matching using the fingerprint region showed that two samples could be matched when the difference in weathering time was up to 7 days. Beyond this time the true positive rate falls and samples cannot be reliably matched. However, artificial weathering of a fresh source sample can aid the matching of a weathered sample to its real source from a pool of very similar candidates.
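A small numpy sketch of the four indices on a pair of synthetic spectra is given below; the band selection into fingerprint/generic/supplementary regions is omitted, and the synthetic Gaussian bands are assumptions.

```python
# Numpy sketch of the four similarity indices on a pair of absorbance spectra:
# squared correlation coefficient, squared Euclidean cosine, and their
# first-difference variants. Synthetic spectra; region selection omitted.
import numpy as np

def corr_sq(a, b):
    return np.corrcoef(a, b)[0, 1] ** 2

def cos_sq(a, b):
    return (np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))) ** 2

def first_diff(a):
    return np.diff(a)

rng = np.random.default_rng(4)
wavenumber = np.linspace(700, 900, 400)
spec_a = np.exp(-((wavenumber - 810) / 15) ** 2) + 0.02 * rng.normal(size=400)
spec_b = np.exp(-((wavenumber - 812) / 15) ** 2) + 0.02 * rng.normal(size=400)

print("corr^2          :", corr_sq(spec_a, spec_b))
print("cos^2           :", cos_sq(spec_a, spec_b))
print("1st-diff corr^2 :", corr_sq(first_diff(spec_a), first_diff(spec_b)))
print("1st-diff cos^2  :", cos_sq(first_diff(spec_a), first_diff(spec_b)))
```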
High-Order Local Pooling and Encoding Gaussians Over a Dictionary of Gaussians.
Li, Peihua; Zeng, Hui; Wang, Qilong; Shiu, Simon C K; Zhang, Lei
2017-07-01
Local pooling (LP) in configuration (feature) space proposed by Boureau et al. explicitly restricts similar features to be aggregated, which can preserve as much discriminative information as possible. At the time it appeared, this method combined with sparse coding achieved competitive classification results with only a small dictionary. However, its performance lags far behind the state-of-the-art results as only the zero-order information is exploited. Inspired by the success of high-order statistical information in existing advanced feature coding or pooling methods, we make an attempt to address the limitation of LP. To this end, we present a novel method called high-order LP (HO-LP) to leverage the information higher than the zero-order one. Our idea is intuitively simple: we compute the first- and second-order statistics per configuration bin and model them as a Gaussian. Accordingly, we employ a collection of Gaussians as visual words to represent the universal probability distribution of features from all classes. Our problem is naturally formulated as encoding Gaussians over a dictionary of Gaussians as visual words. This problem, however, is challenging since the space of Gaussians is not a Euclidean space but forms a Riemannian manifold. We address this challenge by mapping Gaussians into the Euclidean space, which enables us to perform coding with common Euclidean operations rather than complex and often expensive Riemannian operations. Our HO-LP preserves the advantages of the original LP: pooling only similar features and using a small dictionary. Meanwhile, it achieves very promising performance on standard benchmarks, with either conventional, hand-engineered features or deep learning-based features.
Oppugning the assumptions of spatial averaging of segment and joint orientations.
Pierrynowski, Michael Raymond; Ball, Kevin Arthur
2009-02-09
Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing low dispersion, an isotropic distribution, and second and third angle parameters of less than 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
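The contrast can be sketched as follows: naive Euler-angle averaging versus a matrix-based Euclidean (chordal) mean obtained by averaging rotation matrices and projecting back onto SO(3) with an SVD. The Riemannian (geodesic) mean is not shown, scipy's Rotation class is used only for conversions, and the perturbation model is an assumption.

```python
# Sketch contrasting naive Euler-angle averaging with a matrix-based
# (Euclidean/chordal) mean: average the rotation matrices and re-project onto
# SO(3) with an SVD. The Riemannian (geodesic) mean is omitted.
import numpy as np
from scipy.spatial.transform import Rotation as R

rng = np.random.default_rng(5)
# a cluster of orientations: small random perturbations of a base rotation
base = R.from_euler("xyz", [40, 25, 10], degrees=True)
rots = [base * R.from_rotvec(0.1 * rng.normal(size=3)) for _ in range(50)]

# (a) naive arithmetic average of Euler angles (the flawed approach)
eulers = np.array([r.as_euler("xyz", degrees=True) for r in rots])
naive = eulers.mean(axis=0)

# (b) matrix-based Euclidean (chordal) mean: average matrices, re-orthogonalize
M = np.mean([r.as_matrix() for r in rots], axis=0)
U, _, Vt = np.linalg.svd(M)
D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # ensure det = +1
mean_rot = R.from_matrix(U @ D @ Vt)

print("naive Euler mean  :", naive)
print("matrix-based mean :", mean_rot.as_euler("xyz", degrees=True))
```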
Févotte, Cédric; Bertin, Nancy; Durrieu, Jean-Louis
2009-03-01
This letter presents theoretical, algorithmic, and experimental results about nonnegative matrix factorization (NMF) with the Itakura-Saito (IS) divergence. We describe how IS-NMF is underlaid by a well-defined statistical model of superimposed gaussian components and is equivalent to maximum likelihood estimation of variance parameters. This setting can accommodate regularization constraints on the factors through Bayesian priors. In particular, inverse-gamma and gamma Markov chain priors are considered in this work. Estimation can be carried out using a space-alternating generalized expectation-maximization (SAGE) algorithm; this leads to a novel type of NMF algorithm, whose convergence to a stationary point of the IS cost function is guaranteed. We also discuss the links between the IS divergence and other cost functions used in NMF, in particular, the Euclidean distance and the generalized Kullback-Leibler (KL) divergence. As such, we describe how IS-NMF can also be performed using a gradient multiplicative algorithm (a standard algorithm structure in NMF) whose convergence is observed in practice, though not proven. Finally, we report a furnished experimental comparative study of Euclidean-NMF, KL-NMF, and IS-NMF algorithms applied to the power spectrogram of a short piano sequence recorded in real conditions, with various initializations and model orders. Then we show how IS-NMF can successfully be employed for denoising and upmix (mono to stereo conversion) of an original piece of early jazz music. These experiments indicate that IS-NMF correctly captures the semantics of audio and is better suited to the representation of music signals than NMF with the usual Euclidean and KL costs.
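For orientation, the sketch below implements the multiplicative-update form of IS-NMF mentioned in the letter (the gradient-based multiplicative algorithm, not the SAGE algorithm), applied to a random nonnegative matrix; the data, initialization, and iteration count are assumptions.

```python
# Sketch of multiplicative-update IS-NMF: V ~ WH is fit under the
# Itakura-Saito divergence. Random data and initialization are illustrative.
import numpy as np

rng = np.random.default_rng(6)
F, N, K = 64, 100, 5                      # frequency bins, frames, components
V = rng.random((F, N)) + 1e-3             # stand-in for a power spectrogram
W = rng.random((F, K)) + 1e-3
H = rng.random((K, N)) + 1e-3

def is_divergence(V, Vhat):
    Q = V / Vhat
    return np.sum(Q - np.log(Q) - 1.0)

for it in range(200):
    Vhat = W @ H
    H *= (W.T @ (V * Vhat ** -2)) / (W.T @ Vhat ** -1)   # update activations
    Vhat = W @ H
    W *= ((V * Vhat ** -2) @ H.T) / (Vhat ** -1 @ H.T)   # update dictionary

print("IS divergence after 200 iterations:", is_divergence(V, W @ H))
```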
Geometrical and quantum mechanical aspects in observers' mathematics
NASA Astrophysics Data System (ADS)
Khots, Boris; Khots, Dmitriy
2013-10-01
When we create mathematical models for Quantum Mechanics, we assume that the mathematical apparatus used in modeling, at least the simplest mathematical apparatus, is infallible. In particular, this relates to the use of "infinitely small" and "infinitely large" quantities in arithmetic and the use of the Newton and Cauchy definitions of a limit and derivative in analysis. We believe that is where the main problem lies in the contemporary study of nature. We have introduced a new concept of Observer's Mathematics (see www.mathrelativity.com). Observer's Mathematics creates new arithmetic, algebra, geometry, topology, analysis and logic which do not contain the concept of the continuum, but locally coincide with the standard fields. We prove that Euclidean geometry works in a sufficiently small neighborhood of a given line, but when we enlarge the neighborhood, non-Euclidean geometry takes over. We prove that the physical speed is a random variable, cannot exceed some constant, and that this constant does not depend on an inertial coordinate system. We prove the following theorems: Theorem A (Lagrangian). Let L be the Lagrange function of a free material point with mass m and speed v. Then the probability that L = (m/2)v^2 is less than 1: P(L = (m/2)v^2) < 1. Theorem B (Nadezhda effect). On the plane (x, y), every line y = kx contains a point (x0, y0) for which no Euclidean distance exists between the origin (0, 0) and this point. Conjecture (Black Hole). Our space-time nature is a black hole: light cannot travel infinitely far from the origin.
NASA Astrophysics Data System (ADS)
Holst, Michael; Meier, Caleb
2015-01-01
In this article we further develop the solution theory for the Einstein constraint equations on an n-dimensional, asymptotically Euclidean manifold M with interior boundary Σ. Building on recent results for both the asymptotically Euclidean and compact-with-boundary settings, we show the existence of far-from-CMC and near-CMC solutions to the conformal formulation of the Einstein constraints when nonlinear Robin boundary conditions are imposed on Σ, similar to those analyzed previously by Dain (2004 Class. Quantum Grav. 21 555-73), by Maxwell (2004, 2005 Commun. Math. Phys. 253 561-83), and by Holst and Tsogtgerel (2013 Class. Quantum Grav. 30 205011) as a model of black holes in various CMC settings, and by Holst et al (2013 Non-CMC solutions to the Einstein constraint equations with apparent horizon boundaries arXiv:1310.2302v1) in the setting of far-from-CMC solutions on compact manifolds with boundary. These 'marginally trapped surface' Robin conditions ensure that the expansion scalars along null geodesics perpendicular to the boundary region Σ are non-positive, which is considered the correct mathematical model for black holes in the context of the Einstein constraint equations. Assuming a suitable form of weak cosmic censorship, the results presented in this article guarantee the existence of initial data that will evolve into a space-time containing an arbitrary number of black holes. A particularly important feature of our results is the minimal restrictions we place on the mean curvature, giving both near- and far-from-CMC results that are new.
Building Intuitive Arguments for the Triangle Congruence Conditions
ERIC Educational Resources Information Center
Piatek-Jimenez, Katrina
2008-01-01
The triangle congruence conditions are a central focus to nearly any course in Euclidean geometry. The author presents a hands-on activity that uses straws and pipe cleaners to explore and justify the triangle congruence conditions. (Contains 4 figures.)
Fluegge, Kyle; Malone, LaShaunda L; Nsereko, Mary; Okware, Brenda; Wejse, Christian; Kisingo, Hussein; Mupere, Ezekiel; Boom, W Henry; Stein, Catherine M
2018-06-26
Appraisal delay is the time a patient takes to consider a symptom as not only noticeable, but a sign of illness. The study's objective was to determine the association between appraisal delay in seeking tuberculosis (TB) treatment and geographic distance measured by network travel (driving and pedestrian) time (in minutes) and distance (Euclidean and self-reported) (in kilometers) and to identify other risk factors from selected covariates and how they modify the core association between delay and distance. This was part of a longitudinal cohort study known as the Kawempe Community Health Study based in Kampala, Uganda. The study enrolled households from April 2002 to July 2012. Multivariable interval regression with multiplicative heteroscedasticity was used to assess the impact of time and distance on delay. The delay interval outcome was defined using a comprehensive set of 28 possible self-reported symptoms. The main independent variables were network travel time (in minutes) and Euclidean distance (in kilometers). Other covariates were organized according to the Andersen utilization conceptual framework. A total of 838 patients with both distance and delay data were included in the network analysis. Bivariate analyses did not reveal a significant association of any distance metric with the delay outcome. However, adjusting for patient characteristics and cavitary disease status, the multivariable model indicated that each minute of driving time to the clinic significantly (p = 0.02) and positively predicted 0.25 days' delay. At the median distance value of 47 min, this represented an additional delay of about 12 (95% CI: [3, 21]) days to the mean of 40 days (95% CI: [25, 56]). Increasing Euclidean distance significantly predicted (p = 0.02) reduced variance in the delay outcome, thereby increasing precision of the mean delay estimate. At the median Euclidean distance of 2.8 km, the variance in the delay was reduced by more than 25%. Of the four geographic distance measures, network travel driving time was a better and more robust predictor of mean delay in this setting. Including network travel driving time with other risk factors may be important in identifying populations especially vulnerable to delay.
Universality in the nonlinear leveling of capillary films
NASA Astrophysics Data System (ADS)
Zheng, Zhong; Fontelos, Marco A.; Shin, Sangwoo; Stone, Howard A.
2018-03-01
Many material science, coating, and manufacturing problems involve liquid films where defects that span the film thickness must be removed. Here, we study the surface-tension-driven leveling dynamics of a thin viscous film following closure of an initial hole. The dynamics of the film shape is described by a nonlinear evolution equation, for which we obtain a self-similar solution. The analytical results are verified using time-dependent numerical and experimental results for the profile shapes and the minimum film thickness at the center. The universal behavior we identify can be useful for characterizing the time evolution of the leveling process and estimating material properties from experiments.
Convergence of Mayer and Virial expansions and the Penrose tree-graph identity
NASA Astrophysics Data System (ADS)
Procacci, Aldo; Yuhjtman, Sergio A.
2017-01-01
We establish new lower bounds for the convergence radius of the Mayer series and the Virial series of a continuous particle system interacting via a stable and tempered pair potential. Our bounds considerably improve those given by Penrose (J Math Phys 4:1312, 1963) and Ruelle (Ann Phys 5:109-120, 1963) for the Mayer series and by Lebowitz and Penrose (J Math Phys 7:841-847, 1964) for the Virial series. To get our results, we exploit the tree-graph identity given by Penrose (Statistical mechanics: foundations and applications. Benjamin, New York, 1967) using a new partition scheme based on minimum spanning trees.
Continuum Theory of Retroviral Capsids
NASA Astrophysics Data System (ADS)
Nguyen, T. T.; Bruinsma, R. F.; Gelbart, W. M.
2006-02-01
We present a self-assembly phase diagram for the shape of retroviral capsids, based on continuum elasticity theory. The spontaneous curvature of the capsid proteins drives a weakly first-order transition from spherical to spherocylindrical shapes. The conical capsid shape which characterizes the HIV-1 retrovirus is never stable under unconstrained energy minimization. Only under conditions of fixed volume and/or fixed spanning length can the conical shape be a minimum energy structure. Our results indicate that, unlike the capsids of small viruses, retrovirus capsids are not uniquely determined by the molecular structure of the constituent proteins but depend in an essential way on physical constraints present during assembly.
2-µm wavelength-range low-loss inhibited-coupling hollow-core PCF
NASA Astrophysics Data System (ADS)
Maurel, M.; Chafer, M.; Delahaye, F.; Amrani, F.; Debord, B.; Gerome, F.; Benabid, F.
2018-02-01
We report on the design and fabrication of an inhibited-coupling guiding hollow-core photonic crystal fiber with a transmission band optimized for low-loss guidance around 2 μm. Two fiber designs based on a Kagome-lattice cladding have been studied, demonstrating a minimum loss figure of 25 dB/km at 2 μm associated with an ultra-broad transmission band spanning from the visible to our detection limit of 3.4 μm. Such fibers could be an excellent tool for delivering and compressing pulses from ultra-short pulse laser systems, especially in the emerging 2-3 μm spectral region.
NASA Astrophysics Data System (ADS)
Song, Y.; Gurney, K. R.; Rayner, P. J.; Asefi-Najafabady, S.
2012-12-01
High resolution quantification of global fossil fuel CO2 emissions has become essential in research aimed at understanding the global carbon cycle and supporting the verification of international agreements on greenhouse gas emission reductions. The Fossil Fuel Data Assimilation System (FFDAS) was used to estimate global fossil fuel carbon emissions at 0.25 degree from 1992 to 2010. FFDAS quantifies CO2 emissions based on areal population density, per capita economic activity, energy intensity and carbon intensity. A critical constraint to this system is the estimation of national-scale fossil fuel CO2 emissions disaggregated into economic sectors. Furthermore, prior uncertainty estimation is an important aspect of the FFDAS, and objective techniques to quantify uncertainty for the national emissions are essential. There are several institutional datasets that quantify national carbon emissions, including British Petroleum (BP), the International Energy Agency (IEA), the Energy Information Administration (EIA), and the Carbon Dioxide Information and Analysis Center (CDIAC). These four datasets have been "harmonized" by Jordan Macknick for inter-comparison purposes (Macknick, Carbon Management, 2011). The harmonization attempted to generate consistency among the different institutional datasets via a variety of techniques such as reclassifying into consistent emitting categories, recalculating based on consistent emission factors, and converting into consistent units. These harmonized data form the basis of our uncertainty estimation. We summarized the maximum, minimum and mean national carbon emissions for all the datasets from 1992 to 2010 and calculated key statistics highlighting the remaining differences among the harmonized datasets. We combine the span (max - min) of the datasets for each country and year with the standard deviation of the national spans over time, and we utilize the economic sectoral definitions from the IEA to disaggregate the national total emissions into the specific sectors required by FFDAS. Our results indicate that although the harmonization performed by Macknick generates better agreement among datasets, significant differences remain at the national total level. For example, the CO2 emission span for most countries ranges from 10% to 12%; BP is generally the highest of the four datasets while IEA is typically the lowest; and the US and China had the highest absolute span values but lower percentage span values compared to other countries. However, the US and China make up nearly one-half of the total global absolute span quantity. The absolute span value for the summation of national differences approaches 1 GtC/year in 2007, almost one-half of the biological "missing sink". The span value is used as a potential bias in a recalculation of global and regional carbon budgets to highlight the importance of fossil fuel CO2 emissions in calculating the missing sink. We conclude that if the harmonized span represents potential bias, calculations of the missing sink through forward budget or inverse approaches may be biased by nearly a factor of two.
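A toy illustration of the span statistic (max - min across inventories per country and year, and its percentage of the mean) might look like the following; the numbers are made up and do not reflect the BP/IEA/EIA/CDIAC values.

```python
# Toy illustration of the span (max - min) statistic across harmonized
# inventories for each country and year; numbers are made up.
import pandas as pd

data = pd.DataFrame({
    "country":  ["USA", "USA", "CHN", "CHN"],
    "year":     [2007, 2007, 2007, 2007],
    "dataset":  ["BP", "IEA", "BP", "IEA"],
    "emis_GtC": [1.65, 1.55, 1.90, 1.75],
})

g = data.groupby(["country", "year"])["emis_GtC"]
span = (g.max() - g.min()).rename("span_GtC")
pct_span = (100 * span / g.mean()).rename("span_pct")
print(pd.concat([span, pct_span], axis=1))
```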
3d Abelian dualities with boundaries
NASA Astrophysics Data System (ADS)
Aitken, Kyle; Baumgartner, Andrew; Karch, Andreas; Robinson, Brandon
2018-03-01
We establish the action of three-dimensional bosonization and particle-vortex duality in the presence of a boundary, which supports a non-anomalous two-dimensional theory. We confirm our prescription using a microscopic realization of the duality in terms of a Euclidean lattice.
A fast estimation of shock wave pressure based on trend identification
NASA Astrophysics Data System (ADS)
Yao, Zhenjian; Wang, Zhongyu; Wang, Chenchen; Lv, Jing
2018-04-01
In this paper, a fast method based on trend identification is proposed to accurately estimate the shock wave pressure in a dynamic measurement. Firstly, the collected output signal of the pressure sensor is reconstructed by discrete cosine transform (DCT) to reduce the computational complexity for the subsequent steps. Secondly, the empirical mode decomposition (EMD) is applied to decompose the reconstructed signal into several components with different frequency-bands, and the last few low-frequency components are chosen to recover the trend of the reconstructed signal. In the meantime, the optimal component number is determined based on the correlation coefficient and the normalized Euclidean distance between the trend and the reconstructed signal. Thirdly, with the areas under the gradient curve of the trend signal, the stable interval that produces the minimum can be easily identified. As a result, the stable value of the output signal is achieved in this interval. Finally, the shock wave pressure can be estimated according to the stable value of the output signal and the sensitivity of the sensor in the dynamic measurement. A series of shock wave pressure measurements are carried out with a shock tube system to validate the performance of this method. The experimental results show that the proposed method works well in shock wave pressure estimation. Furthermore, comparative experiments also demonstrate the superiority of the proposed method over the existing approaches in both estimation accuracy and computational efficiency.
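The processing chain can be sketched roughly as below: DCT truncation for reconstruction, EMD decomposition, and a trend built from the last few (low-frequency) components chosen by correlation and normalized Euclidean distance. The PyEMD package, the way the two criteria are combined into a single score, the DCT cutoff, and the synthetic step signal are all assumptions rather than the paper's exact procedure.

```python
# Rough sketch: DCT-based reconstruction, EMD decomposition, and a trend built
# from the last few low-frequency components, chosen by correlation and
# normalized Euclidean distance. PyEMD ("EMD-signal") is assumed available;
# thresholds and the synthetic step signal are illustrative.
import numpy as np
from scipy.fft import dct, idct
from PyEMD import EMD

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 2000)
signal = 1.0 * (t > 0.2) + 0.05 * rng.normal(size=t.size)   # noisy step "output"

# 1) DCT reconstruction: keep only the lowest-order coefficients
c = dct(signal, norm="ortho")
c[200:] = 0.0
recon = idct(c, norm="ortho")

# 2) EMD, then 3) trend from the last n (low-frequency) components
imfs = EMD()(recon)

def score(trend, ref):
    corr = np.corrcoef(trend, ref)[0, 1]
    ned = np.linalg.norm(trend - ref) / np.linalg.norm(ref)
    return corr - ned                        # assumed combination of the criteria

best_n, best_s = 1, -np.inf
for n in range(1, imfs.shape[0] + 1):
    s = score(imfs[-n:].sum(axis=0), recon)
    if s > best_s:
        best_n, best_s = n, s

trend = imfs[-best_n:].sum(axis=0)
print("components in trend:", best_n, "stable value ~", trend[-500:].mean())
```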
Microevolution in Perugia: isonymy 1890-1990.
Rodríguez-Larralde, A; Formica, G; Scapoli, C; Beretta, M; Mamolini, E; Barrai, I
1993-01-01
The distribution of surnames in the population of the Comune of Perugia, as it existed in the memory banks of the Municipality Computer in autumn 1991, was studied by age and place of birth. Fisher's alpha and Karlin-McGregor's v were estimated in the total population, in persons born before 1901, and in persons born in the nine decades thereafter, ending with the period 1981-1991, for immigrants and for natives of Perugia, respectively. The wealth of surnames was significantly higher in immigrants than in natives of Perugia, as detected by alpha, v and by the log-log regression of the corresponding distributions. Among residents born in Perugia, Fisher's alpha shows a minimum value during 1921-1930, explained as a consequence of the First World War. The relationship between all possible combinations of cohorts born in the 10 different decades was studied through the Euclidean distance and through Lasker's coefficient of relationship, and a significant correlation between the former and time was revealed, both in immigrants and in natives of Perugia. When compared with the Province of Ferrara, Perugia was far richer in surnames, as measured by Fisher's alpha, for the total population and for each of the 10 decades analysed. Recent immigration, measured by Karlin-McGregor's v, was significantly higher in Perugia until the 1960s, equal in both Provinces during the 1970s, and higher in Ferrara during the 1980s.
Arrieta-Bolaños, Esteban; Maldonado-Torres, Hazael; Dimitriu, Oana; Hoddinott, Michael A; Fowles, Finnuala; Shah, Anila; Orlich-Pérez, Priscilla; McWhinnie, Alasdair J; Alfaro-Bourrouet, Wilbert; Buján-Boza, Willem; Little, Ann-Margaret; Salazar-Sánchez, Lizbeth; Madrigal, J Alejandro
2011-01-01
The human leukocyte antigen (HLA) system is the most polymorphic in humans. Its allele, genotype, and haplotype frequencies vary significantly among different populations. Molecular typing data on HLA are necessary for the development of stem cell donor registries, cord blood banks, HLA-disease association studies, and anthropology studies. The Costa Rica Central Valley Population (CCVP) is the major population in this country. No previous study has characterized HLA frequencies in this population. Allele group and haplotype frequencies of HLA genes in the CCVP were determined by means of molecular typing in a sample of 130 unrelated blood donors from one of the country's major hospitals. A comparison between these frequencies and those of 126 populations worldwide was also carried out. A minimum variance dendrogram based on squared Euclidean distances was constructed to assess the relationship between the CCVP sample and populations from all over the world. Allele group and haplotype frequencies observed in this study are consistent with a profile of a dynamic and diverse population, with a hybrid ethnic origin, predominantly Caucasian-Amerindian. Results showed that populations genetically closest to the CCVP are a Mestizo urban population from Venezuela, and another one from Guadalajara, Mexico. Copyright © 2011 American Society for Histocompatibility and Immunogenetics. All rights reserved.
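The dendrogram construction described can be sketched with scipy's hierarchical clustering, where the Ward method implements the minimum-variance criterion; the toy frequency vectors and population labels below are made-up stand-ins for the study's data.

```python
# Sketch of a minimum-variance (Ward) dendrogram for comparing population
# frequency profiles. The frequency vectors are made-up stand-ins.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(8)
populations = ["CCVP", "Venezuela_Mestizo", "Guadalajara", "Spain", "Amerindian"]
freqs = rng.dirichlet(np.ones(15), size=len(populations))   # toy allele-group freqs

Z = linkage(freqs, method="ward")         # Ward = minimum variance criterion
dn = dendrogram(Z, labels=populations, no_plot=True)
print(dn["ivl"])                           # leaf order of the dendrogram
```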
NASA Technical Reports Server (NTRS)
Flamm, Jeffrey D.; Deere, Karen A.; Mason, Mary L.; Berrier, Bobby L.; Johnson, Stuart K.
2007-01-01
An axisymmetric version of the Dual Throat Nozzle concept with a variable expansion ratio has been studied to determine the impacts on thrust vectoring and nozzle performance. The nozzle design, applicable to a supersonic aircraft, was guided using the unsteady Reynolds-averaged Navier-Stokes computational fluid dynamics code, PAB3D. The axisymmetric Dual Throat Nozzle concept was tested statically in the Jet Exit Test Facility at the NASA Langley Research Center. The nozzle geometric design variables included circumferential span of injection, cavity length, cavity convergence angle, and nozzle expansion ratio for conditions corresponding to take-off and landing, mid-climb, and cruise. Internal nozzle performance and thrust vectoring performance were determined for nozzle pressure ratios up to 10 with secondary injection rates up to 10 percent of the primary flow rate. The 60 degree span of injection generally performed better than the 90 degree span of injection using an equivalent injection area and number of holes, in agreement with computational results. For injection rates less than 7 percent, the thrust vector angle for the 60 degree span of injection was 1.5 to 2 degrees higher than for the 90 degree span of injection. Decreasing cavity length improved thrust ratio and discharge coefficient, but decreased thrust vector angle and thrust vectoring efficiency. Increasing cavity convergence angle from 20 to 30 degrees increased thrust vector angle by 1 degree over the range of injection rates tested, but adversely affected system thrust ratio and discharge coefficient. The Dual Throat Nozzle concept generated the best thrust vectoring performance with an expansion ratio of 1.0 (a cavity in between two equal minimum areas). The variable expansion ratio geometry did not provide the expected improvements in discharge coefficient and system thrust ratio throughout the flight envelope of a typical supersonic aircraft. At mid-climb and cruise conditions, the variable geometry design compromised the thrust vector angle achieved, but some thrust vector control would be available, potentially for aircraft trim. The fixed-area, expansion ratio 1.0 Dual Throat Nozzle provided the best overall compromise for thrust vectoring and nozzle internal performance over the range of nozzle pressure ratios tested, compared to the variable geometry Dual Throat Nozzle.
Lotfi, Yones; Mehrkian, Saiedeh; Moossavi, Abdollah; Zadeh, Soghrat Faghih; Sadjedi, Hamed
2016-03-01
This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9-11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information.
Hierarchical clustering using correlation metric and spatial continuity constraint
Stork, Christopher L.; Brewer, Luke N.
2012-10-02
Large data sets are analyzed by hierarchical clustering using correlation as a similarity measure. This provides results that are superior to those obtained using a Euclidean distance similarity measure. A spatial continuity constraint may be applied in hierarchical clustering analysis of images.
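A minimal sketch of the basic idea, hierarchical clustering with a correlation-based dissimilarity instead of Euclidean distance, using SciPy; the spatial continuity constraint mentioned in the abstract is not included, and the data here are synthetic.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Example data: one spectrum (feature vector) per pixel or sample.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 50))

# 'correlation' uses 1 - Pearson r as the pairwise dissimilarity,
# in place of the usual Euclidean distance.
Z = linkage(data, method='average', metric='correlation')
labels = fcluster(Z, t=5, criterion='maxclust')   # cut the dendrogram into 5 clusters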
ERIC Educational Resources Information Center
Dalton, LeRoy C., Ed.; Snyder, Henry D., Ed.
The ten chapters in this booklet cover topics not ordinarily discussed in the classroom: Fibonacci sequences, projective geometry, groups, infinity and transfinite numbers, Pascal's Triangle, topology, experiments with natural numbers, non-Euclidean geometries, Boolean algebras, and the imaginary and the infinite in geometry. Each chapter is…
Performance characterization tests of three 0.44-N (0.1 lbf) hydrazine catalytic thrusters
NASA Technical Reports Server (NTRS)
Moynihan, P. I.; Bjorklund, R. A.
1973-01-01
The 0.44-N (0.1-lbf) class of hydrazine catalytic thruster has been evaluated to assess its capability for spacecraft limit-cycle attitude control with thruster pulse durations on the order of 10 milliseconds. Dynamic-environment and limit-cycle simulation tests were performed on three commercially available thruster/valve assemblies, purchased from three different manufacturers. The results indicate that this class of thruster can sustain a launch environment and, when properly temperature-conditioned, can perform limit-cycle operations over the anticipated life span of a multi-year mission. The minimum operating temperature for very short pulse durations was determined for each thruster. Pulsing life tests were then conducted on each thruster under a thermally controlled condition which maintained the catalyst bed at both a nominal 93 C (200 F) and 205 C (400 F). These were the temperatures believed to be slightly below and very near the minimum recommended operating temperature, respectively. The ensuing life tests ranged from 100,000 to 250,000 pulses at these temperatures, as would be required for spacecraft limit-cycle attitude control applications.
NASA Technical Reports Server (NTRS)
Shindo, S.; Joppa, R. G.
1980-01-01
As a means to achieve a minimum interference correction wind tunnel, a partially actively controlled test section was experimentally examined. A jet-flapped wing with 0.91 m (36 in) span and R = 4.05 was used as a model to create moderately high lift coefficients. The partially controlled test section was simulated using an insert, a rectangular box 0.96 x 1.44 m (3.14 x 4.71 ft) open on both ends in the direction of the tunnel air flow, placed in the University of Washington Aeronautical Laboratories (UWAL) 2.44 x 3.66 m (8 x 12 ft) wind tunnel. A tail located three chords behind the wing was used to measure the downwash at the tail region. The experimental data indicate that, within the range of momentum coefficients examined, it appears unnecessary to actively control all four test section walls in order to achieve a nearly interference-free flow field in a small wind tunnel. The remaining wall interference can be satisfactorily corrected by the vortex lattice method.
SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry
NASA Astrophysics Data System (ADS)
Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.
2007-03-01
A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N-6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. This approach is also advantageous for generating initial structures for global optimizations via genetic algorithms and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.
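The following NumPy sketch illustrates the core idea of generating candidate geometries within a chosen point group rather than in the full configuration space, here for the simple case of Cs symmetry (atoms placed in mirror-related pairs). It is an illustration under stated assumptions, not the SASS code itself.

import numpy as np

def random_cs_structure(n_pairs, box=3.0, rng=None):
    """Generate a random candidate geometry with Cs (mirror) symmetry.

    Half the atoms are placed at random; the other half are their
    reflections through the z = 0 mirror plane.
    """
    rng = rng or np.random.default_rng()
    upper = rng.uniform(-box, box, size=(n_pairs, 3))
    upper[:, 2] = np.abs(upper[:, 2])            # keep one atom of each pair above the plane
    mirrored = upper * np.array([1.0, 1.0, -1.0])
    return np.vstack([upper, mirrored])

coords = random_cs_structure(4)   # 8 atoms related pairwise by the mirror plane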
Radiation measurements on the Mir Orbital Station.
Badhwar, G D; Atwell, W; Reitz, G; Beaujean, R; Heinrich, W
2002-10-01
Radiation measurements made onboard the Mir Orbital Station have spanned nearly a decade and covered two solar cycles, including one of the largest solar particle events, one of the largest magnetic storms, and a mean solar radio flux level reaching 250 x 10^4 Jansky, the highest observed in the last 40 years. The cosmonaut absorbed dose rates varied from about 450 microGy day^-1 during solar minimum to approximately half this value during the last solar maximum. There is a factor of about two in dose rate within a given module, and a similar variation from module to module. The average radiation quality factor during solar minimum, using the ICRP-26 definition, was about 2.4. The drift of the South Atlantic Anomaly was measured to be 6.0 +/- 0.5 degrees W and 1.6 +/- 0.5 degrees N. These measurements are of direct applicability to the International Space Station. This paper represents a comprehensive review of Mir Space Station radiation data available from a variety of sources. Copyright © 2002 Elsevier Science Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Franco, Patrick; Ogier, Jean-Marc; Loonis, Pierre; Mullot, Rémy
Recently we developed a model for shape description and matching. Based on minimum spanning tree construction and specific stages such as the mixture, it appears to have many desirable properties. Recognition invariance with respect to shifted, rotated, and noisy shapes was checked through medium-scale tests on the GREC symbol reference database. Although extracting the topology of a shape by mapping the shortest path connecting all the pixels is powerful, the construction of the graph incurs a high computational cost. In this article we discuss ways to reduce the computation time. An alternative solution based on image compression concepts is proposed and evaluated. The model no longer operates in the image space but in a compact space, namely the discrete cosine space. The use of the block discrete cosine transform is discussed and justified. Experimental results obtained on the GREC2003 database show that the proposed method offers good discrimination power and real robustness to noise, with an acceptable computation time.
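The graph-construction step can be illustrated with SciPy: build the Euclidean minimum spanning tree over the foreground pixels of a binary shape and use its sorted edge lengths as a crude signature. This sketch uses a dense pairwise distance matrix, which is exactly the expensive step the article seeks to avoid; the DCT-domain variant and the mixture stage are not reproduced.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def shape_mst(binary_image):
    """Euclidean minimum spanning tree over the foreground pixels of a shape."""
    ys, xs = np.nonzero(binary_image)
    pts = np.column_stack([xs, ys]).astype(float)
    dists = squareform(pdist(pts))               # dense pairwise Euclidean distances
    mst = minimum_spanning_tree(dists)           # sparse matrix holding the tree edges
    return pts, mst

# Example: the sorted edge lengths of the tree serve as a simple shape signature.
img = np.zeros((32, 32), dtype=bool)
img[8:24, 14:18] = True
pts, mst = shape_mst(img)
signature = np.sort(mst.data)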
Multivariate pattern analysis for MEG: A comparison of dissimilarity measures.
Guggenmos, Matthias; Sterzer, Philipp; Cichy, Radoslaw Martin
2018-06-01
Multivariate pattern analysis (MVPA) methods such as decoding and representational similarity analysis (RSA) are growing rapidly in popularity for the analysis of magnetoencephalography (MEG) data. However, little is known about the relative performance and characteristics of the specific dissimilarity measures used to describe differences between evoked activation patterns. Here we used a multisession MEG data set to qualitatively characterize a range of dissimilarity measures and to quantitatively compare them with respect to decoding accuracy (for decoding) and between-session reliability of representational dissimilarity matrices (for RSA). We tested dissimilarity measures from a range of classifiers (Linear Discriminant Analysis - LDA, Support Vector Machine - SVM, Weighted Robust Distance - WeiRD, Gaussian Naïve Bayes - GNB) and distances (Euclidean distance, Pearson correlation). In addition, we evaluated three key processing choices: 1) preprocessing (noise normalisation, removal of the pattern mean), 2) weighting decoding accuracies by decision values, and 3) computing distances in three different partitioning schemes (non-cross-validated, cross-validated, within-class-corrected). Four main conclusions emerged from our results. First, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures. Second, LDA, SVM and WeiRD yielded high peak decoding accuracies and nearly identical time courses. Third, while using decoding accuracies for RSA was markedly less reliable than continuous distances, this disadvantage was ameliorated by decision-value-weighting of decoding accuracies. Fourth, the cross-validated Euclidean distance provided unbiased distance estimates and highly replicable representational dissimilarity matrices. Overall, we strongly advise the use of multivariate noise normalisation as a general preprocessing step, recommend LDA, SVM and WeiRD as classifiers for decoding and highlight the cross-validated Euclidean distance as a reliable and unbiased default choice for RSA. Copyright © 2018 Elsevier Inc. All rights reserved.
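For concreteness, a cross-validated squared Euclidean distance between two conditions can be computed from two independent data partitions, which is what makes the estimate unbiased (it fluctuates around zero when the conditions do not differ). The sketch below is a simplified two-fold version and omits the multivariate noise normalization recommended in the paper.

import numpy as np

def cv_euclidean_sq(xa1, xb1, xa2, xb2):
    """Cross-validated squared Euclidean distance between conditions a and b.

    xa1, xb1 : mean patterns (channels,) for each condition in partition 1
    xa2, xb2 : mean patterns for the same conditions in partition 2
    Using independent partitions makes the expected value zero when the
    two conditions do not differ, i.e. the estimate is unbiased.
    """
    return np.dot(xa1 - xb1, xa2 - xb2)

# Usage: average trials within each half of a session, then compare.
rng = np.random.default_rng(1)
a1, a2 = rng.normal(size=306), rng.normal(size=306)
b1, b2 = rng.normal(size=306), rng.normal(size=306)
d = cv_euclidean_sq(a1, b1, a2, b2)   # fluctuates around 0 for identical conditions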
Bullock, Joshua Matthew Allen; Schwab, Jannik; Thalassinos, Konstantinos; Topf, Maya
2016-01-01
Crosslinking mass spectrometry (XL-MS) is becoming an increasingly popular technique for modeling protein monomers and complexes. The distance restraints garnered from these experiments can be used alone or as part of an integrative modeling approach, incorporating data from many sources. However, modeling practices are varied and the difference in their usefulness is not clear. Here, we develop a new scoring procedure for models based on crosslink data—Matched and Nonaccessible Crosslink score (MNXL). We compare its performance with that of other commonly-used scoring functions (Number of Violations and Sum of Violation Distances) on a benchmark of 14 protein domains, each with 300 corresponding models (at various levels of quality) and associated, previously published, experimental crosslinks (XLdb). The distances between crosslinked lysines are calculated either as Euclidean distances or Solvent Accessible Surface Distances (SASD) using a newly-developed method (Jwalk). MNXL takes into account whether a crosslink is nonaccessible, i.e. an experimentally observed crosslink has no corresponding SASD in a model due to buried lysines. This metric alone is shown to have a significant impact on modeling performance and is a concept that is not considered at present if only Euclidean distances are used. Additionally, a comparison between modeling with SASD or Euclidean distance shows that SASD is superior, even when factoring out the effect of the nonaccessible crosslinks. Our benchmarking also shows that MNXL outperforms the other tested scoring functions in terms of precision and correlation to Cα-RMSD from the crystal structure. We finally test the MNXL at different levels of crosslink recovery (i.e. the percentage of crosslinks experimentally observed out of all theoretical ones) and set a target recovery of ∼20% after which the performance plateaus. PMID:27150526
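The simplest building block of such scoring can be sketched as follows: count experimentally observed crosslinks whose straight-line (Euclidean) Calpha-Calpha distance in a model exceeds the crosslinker's nominal reach. The 30 Å cutoff and the data layout are assumptions for illustration; SASD computation (Jwalk) and the MNXL score itself are more involved and are not reproduced here.

import numpy as np

def violated_crosslinks(ca_coords, crosslinks, max_dist=30.0):
    """Count crosslinks whose Euclidean Calpha-Calpha distance exceeds max_dist.

    ca_coords  : dict residue_number -> np.array([x, y, z])
    crosslinks : list of (res_i, res_j) pairs observed experimentally
    max_dist   : assumed maximum span of the crosslinker, in Angstroms
    """
    n_viol = 0
    for i, j in crosslinks:
        d = np.linalg.norm(ca_coords[i] - ca_coords[j])
        if d > max_dist:
            n_viol += 1
    return n_viol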
MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER
NASA Technical Reports Server (NTRS)
Barton, R. S.
1994-01-01
The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential, therefore filter competence is not compromised for simplicity of computation. MEDOF is written in C-language for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is a routine found in "Numerical Recipes in C: The Art of Scientific Programming," which is available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. 
MEDOF was developed in 1992-1993.
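The "minimum Euclidean distance" idea in the program's name can be illustrated independently of the full metric search: each ideal complex filter value is mapped to the realizable SLM value closest to it in the complex plane. The sketch below shows only that mapping step, under assumed inputs; it is not MEDOF's search over the complex constant k.

import numpy as np

def map_to_slm(ideal_filter, slm_values):
    """Map ideal complex filter values to the nearest realizable SLM values.

    ideal_filter : complex array of desired filter transmittances
    slm_values   : 1-D complex array of values the SLM can actually produce
    Returns the drive indices and the realized complex filter.
    """
    diffs = np.abs(ideal_filter[..., None] - slm_values)   # Euclidean distance in the complex plane
    idx = np.argmin(diffs, axis=-1)
    return idx, slm_values[idx]

# Example: a binary phase modulator that can realize only +1 and -1.
slm = np.array([1.0 + 0j, -1.0 + 0j])
ideal = np.exp(1j * np.linspace(0, 2 * np.pi, 8, endpoint=False))
drive, realized = map_to_slm(ideal, slm)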
Gravitational Instantons and Minimal Surfaces
NASA Astrophysics Data System (ADS)
Nutku, Y.
1996-12-01
We show that for every minimal surface in E^3 there is a gravitational instanton, an exact solution of the Einstein field equations with Euclidean signature and anti-self-dual curvature. The explicit metric establishing this correspondence is presented, and a new class of exact solutions is obtained.
Euclid and Descartes: A Partnership.
ERIC Educational Resources Information Center
Wasdovich, Dorothy Hoy
1991-01-01
Presented is a method of reorganizing a high school geometry course to integrate coordinate geometry together with Euclidean geometry at an earlier stage in the course, thus enabling students to prove subsequent theorems from either perspective. Several examples contrasting different proofs from both perspectives are provided. (MDH)
Micromaths: Removing Euclid from the Shackles.
ERIC Educational Resources Information Center
Oldknow, Adrian
2000-01-01
Attempts to lay the groundwork for a study of curves produced as loci using dynamic geometry. Provides some sketches of ways Cabri may be used to enhance the teaching of geometry with particular reference to synthetic plane Euclidean geometry, locus, and the conics. (Contains 26 references.) (ASK)
Spatial versus Tree Representations of Proximity Data.
ERIC Educational Resources Information Center
Pruzansky, Sandra; And Others
1982-01-01
Two-dimensional euclidean planes and additive trees are two of the most common representations of proximity data for multidimensional scaling. Guidelines for comparing these representations and discovering properties that could help identify which representation is more appropriate for a given data set are presented. (Author/JKS)
Evaluation of procedures for prediction of unconventional gas in the presence of geologic trends
Attanasi, E.D.; Coburn, T.C.
2009-01-01
This study extends the application of local spatial nonparametric prediction models for estimating recoverable gas volumes in continuous-type gas plays to regimes where there is a single geologic trend. A transformation originally proposed by Tomczak is presented that offsets the distortions caused by the trend. This article reports on numerical experiments that compare the predictive and classification performance of the local nonparametric prediction models based on the transformation with models based on Euclidean distance. The transformation offers improvement in average root mean square error when the trend is not severely misspecified. Because of the local nature of the models, even those based on Euclidean distance in the presence of trends are reasonably robust. Tests based on other model performance metrics, such as prediction error associated with the high-grade tracts and the ability of the models to identify sites with the largest gas volumes, also demonstrate the robustness of both local modeling approaches. © International Association for Mathematical Geology 2009.
Izard, T; Aevarsson, A; Allen, M D; Westphal, A H; Perham, R N; de Kok, A; Hol, W G
1999-02-16
The pyruvate dehydrogenase multienzyme complex (Mr of 5-10 million) is assembled around a structural core formed of multiple copies of dihydrolipoyl acetyltransferase (E2p), which exhibits the shape of either a cube or a dodecahedron, depending on the source. The crystal structures of the 60-meric dihydrolipoyl acyltransferase cores of Bacillus stearothermophilus and Enterococcus faecalis pyruvate dehydrogenase complexes were determined and revealed a remarkably hollow dodecahedron with an outer diameter of approximately 237 Å, 12 large openings of approximately 52 Å diameter across the fivefold axes, and an inner cavity with a diameter of approximately 118 Å. Comparison of cubic and dodecahedral E2p assemblies shows that combining the principles of quasi-equivalence formulated by Caspar and Klug [Caspar, D. L. & Klug, A. (1962) Cold Spring Harbor Symp. Quant. Biol. 27, 1-4] with strict Euclidean geometric considerations results in predictions of the major features of the E2p dodecahedron matching the observed features almost exactly.
A d-dimensional stress tensor for Mink_{d+2} gravity
NASA Astrophysics Data System (ADS)
Kapec, Daniel; Mitra, Prahar
2018-05-01
We consider the tree-level scattering of massless particles in (d+2)-dimensional asymptotically flat spacetimes. The S-matrix elements are recast as correlation functions of local operators living on a space-like cut ℳ_d of the null momentum cone. The Lorentz group SO(d+1,1) is nonlinearly realized as the Euclidean conformal group on ℳ_d. Operators of non-trivial spin arise from massless particles transforming in non-trivial representations of the little group SO(d), and distinguished operators arise from the soft insertions of gauge bosons and gravitons. The leading soft-photon operator is the shadow transform of a conserved spin-one primary operator J_a, and the subleading soft-graviton operator is the shadow transform of a conserved spin-two symmetric traceless primary operator T_ab. The universal form of the soft limits ensures that J_a and T_ab obey the Ward identities expected of a conserved current and energy-momentum tensor in a Euclidean CFT_d, respectively.
A new convexity measure for polygons.
Zunic, Jovisa; Rosin, Paul L
2004-07-01
Convexity estimators are commonly used in the analysis of shape. In this paper, we define and evaluate a new convexity measure for planar regions bounded by polygons. The new convexity measure can be understood as a "boundary-based" measure, and accordingly it is more sensitive to measured boundary defects than the so-called "area-based" convexity measures. When compared with the convexity measure defined as the ratio between the Euclidean perimeter of the convex hull of the measured shape and the Euclidean perimeter of the measured shape, the new convexity measure also shows some advantages, particularly for shapes with holes. The new convexity measure has the following desirable properties: 1) the estimated convexity is always a number from (0, 1], 2) the estimated convexity is 1 if and only if the measured shape is convex, 3) there are shapes whose estimated convexity is arbitrarily close to 0, 4) the new convexity measure is invariant under similarity transformations, and 5) there is a simple and fast procedure for computing the new convexity measure.
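For reference, the classical perimeter-based measure that the new measure is compared against, the Euclidean perimeter of the convex hull divided by the Euclidean perimeter of the shape, can be computed in a few lines with SciPy; the paper's new boundary-based measure is not reproduced here.

import numpy as np
from scipy.spatial import ConvexHull

def perimeter(points):
    """Perimeter of a closed polygon given as an ordered array of vertices."""
    return np.sum(np.linalg.norm(np.roll(points, -1, axis=0) - points, axis=1))

def hull_perimeter_convexity(polygon):
    """Classical convexity estimate: hull perimeter / polygon perimeter (<= 1)."""
    hull = ConvexHull(polygon)
    hull_pts = polygon[hull.vertices]          # hull vertices in counter-clockwise order
    return perimeter(hull_pts) / perimeter(polygon)

# A non-convex L-shaped polygon: the measure drops below 1.
L_shape = np.array([[0, 0], [2, 0], [2, 1], [1, 1], [1, 2], [0, 2]], dtype=float)
c = hull_perimeter_convexity(L_shape)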
Buckling transition and boundary layer in non-Euclidean plates.
Efrati, Efi; Sharon, Eran; Kupferman, Raz
2009-07-01
Non-Euclidean plates are thin elastic bodies having no stress-free configuration, hence exhibiting residual stresses in the absence of external constraints. These bodies are endowed with a three-dimensional reference metric, which may not necessarily be immersible in physical space. Here, based on a recently developed theory for such bodies, we characterize the transition from flat to buckled equilibrium configurations at a critical value of the plate thickness. Depending on the reference metric, the buckling transition may be either continuous or discontinuous. In the infinitely thin plate limit, under the assumption that a limiting configuration exists, we show that the limit is a configuration that minimizes the bending content, among all configurations with zero stretching content (isometric immersions of the midsurface). For small but finite plate thickness, we show the formation of a boundary layer, whose size scales with the square root of the plate thickness and whose shape is determined by a balance between stretching and bending energies.
Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories
NASA Astrophysics Data System (ADS)
Burnier, Yannis; Rothkopf, Alexander
2013-11-01
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.
Entropy, extremality, euclidean variations, and the equations of motion
NASA Astrophysics Data System (ADS)
Dong, Xi; Lewkowycz, Aitor
2018-01-01
We study the Euclidean gravitational path integral computing the Rényi entropy and analyze its behavior under small variations. We argue that, in Einstein gravity, the extremality condition can be understood from the variational principle at the level of the action, without having to solve explicitly the equations of motion. This set-up is then generalized to arbitrary theories of gravity, where we show that the respective entanglement entropy functional needs to be extremized. We also extend this result to all orders in Newton's constant G_N, providing a derivation of quantum extremality. Understanding quantum extremality for mixtures of states provides a generalization of the dual of the boundary modular Hamiltonian which is given by the bulk modular Hamiltonian plus the area operator, evaluated on the so-called modular extremal surface. This gives a bulk prescription for computing the relative entropies to all orders in G_N. We also comment on how these ideas can be used to derive an integrated version of the equations of motion, linearized around arbitrary states.
Asymptotically Vanishing Cosmological Constant in the Multiverse
NASA Astrophysics Data System (ADS)
Kawai, Hikaru; Okada, Takashi
We study the problem of the cosmological constant in the context of the multiverse in Lorentzian space-time, and show that the cosmological constant will vanish in the future. This sort of argument was started by Sidney Coleman in 1989, who argued that Euclidean wormholes make the multiverse partition function a superposition of various values of the cosmological constant Λ, which has a sharp peak at Λ = 0. However, the implication of the Euclidean analysis for our Lorentzian space-time is unclear. With this motivation, we analyze the quantum state of the multiverse in Lorentzian space-time by the WKB method, and calculate the density matrix of our universe by tracing out the other universes. Our result predicts a vanishing cosmological constant. While Coleman obtained the enhancement at Λ = 0 through the action itself, in our Lorentzian analysis a similar enhancement arises from the prefactor of e^{iS} in the wave function of the universe, which appears at the next-to-leading order in the WKB approximation.
Wheat, J S; Choppin, S; Goyal, A
2014-06-01
Three-dimensional surface imaging technologies have been used in the planning and evaluation of breast reconstructive and cosmetic surgery. The aim of this study was to develop a 3D surface imaging system based on the Microsoft Kinect and assess the accuracy and repeatability with which the system could image the breast. A system comprising two Kinects, calibrated to provide a complete 3D image of the mannequin, was developed. Digital measurements of Euclidean and surface distances between landmarks showed acceptable agreement with manual measurements. The mean differences for Euclidean and surface distances were 1.9 mm and 2.2 mm, respectively. The system also demonstrated good intra- and inter-rater reliability (ICCs > 0.999). The Kinect-based 3D surface imaging system offers a low-cost, readily accessible alternative to more expensive, commercially available systems, which have had limited clinical use. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and of nonlinearity between the property and the spectra of samples in quantitative spectral analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method is first used to obtain the net analyte signal of the calibration samples and the unknown samples; the Euclidean distance between the net analyte signal of an unknown sample and those of the calibration samples is then calculated and used as a similarity index. According to this similarity index, a local calibration set is selected individually for each unknown sample. Finally, a local PLS regression model is built on each local calibration set for each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and of a conventional local regression algorithm based on spectral Euclidean distance.
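A rough sketch of the local-modeling loop with scikit-learn follows: for each unknown sample, the nearest calibration spectra by Euclidean distance form a local calibration set on which a PLS model is fit. The net analyte signal preprocessing is omitted here, so distances are computed on the raw spectra, and the neighborhood size and component count are illustrative choices.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

def local_pls_predict(X_cal, y_cal, X_new, k=30, n_components=5):
    """Predict each unknown sample with a PLS model fit on its k nearest
    calibration spectra (Euclidean distance); NAS preprocessing omitted."""
    preds = []
    for x in X_new:
        d = np.linalg.norm(X_cal - x, axis=1)
        idx = np.argsort(d)[:k]                       # local calibration set
        pls = PLSRegression(n_components=n_components)
        pls.fit(X_cal[idx], y_cal[idx])
        preds.append(pls.predict(x[None, :])[0, 0])
    return np.array(preds)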
The traveling salesman problem: a hierarchical model.
Graham, S M; Joshi, A; Pizlo, Z
2000-10-01
Our review of prior literature on spatial information processing in perception, attention, and memory indicates that these cognitive functions involve similar mechanisms based on a hierarchical architecture. The present study extends the application of hierarchical models to the area of problem solving. First, we report results of an experiment in which human subjects were tested on a Euclidean traveling salesman problem (TSP) with 6 to 30 cities. The subjects' solutions were either optimal or near-optimal in length and were produced in a time that was, on average, a linear function of the number of cities. Next, the performance of the subjects is compared with that of five representative artificial intelligence and operations research algorithms that produce approximate solutions for Euclidean problems. None of these algorithms was found to be an adequate psychological model. Finally, we present a new algorithm for solving the TSP, which is based on a hierarchical pyramid architecture. The performance of this new algorithm is quite similar to the performance of the subjects.
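As a point of reference (and not the authors' pyramid algorithm), a simple construction heuristic of the kind used as a comparison baseline for Euclidean TSP instances can be sketched in a few lines:

import numpy as np

def nearest_neighbor_tour(cities, start=0):
    """Greedy nearest-neighbor tour for a Euclidean TSP instance."""
    n = len(cities)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(cities[j] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(cities, tour):
    pts = cities[tour + [tour[0]]]   # close the tour by returning to the start city
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

rng = np.random.default_rng(0)
cities = rng.uniform(size=(20, 2))
t = nearest_neighbor_tour(cities)
length = tour_length(cities, t)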
Stochastic Spectral Descent for Discrete Graphical Models
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...
2015-12-14
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in a non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
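To make the phrase "gradient methods in a non-Euclidean space" concrete, the sketch below shows a steepest-descent step for a weight matrix measured in the spectral norm: the gradient is replaced by the outer product of its singular vectors, scaled by its nuclear norm. This is a schematic illustration only; the paper's actual updates and step sizes come from its majorization bounds.

import numpy as np

def spectral_descent_step(W, grad, lr):
    """One steepest-descent step measured in the spectral norm.

    The descent direction is U @ Vt from the SVD of the gradient, scaled by
    the nuclear norm (the dual norm); a schematic illustration only.
    """
    U, s, Vt = np.linalg.svd(grad, full_matrices=False)
    return W - lr * s.sum() * (U @ Vt)

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 10))
G = rng.normal(size=(10, 10))       # stand-in for a stochastic gradient estimate
W_next = spectral_descent_step(W, G, lr=0.01)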
Cataractogenic potential of ionizing radiations in animal models that simulate man
NASA Astrophysics Data System (ADS)
Lett, J. T.; Cox, A. B.; Lee, A. C.
Aspects of experiments on radiation-induced lenticular opacification during the life spans of two animal models, the New Zealand white rabbit and the rhesus monkey, are compared and contrasted with published results from a life span study of another animal model, the beagle dog, and the most recent data from the ongoing study of the survivors from radiation exposure at Hiroshima and Nagasaki. An important connection among the three animal studies is that all the measurements of cataract indices were made by one of the authors (A.C.L.), so variation from personal subjectivity was reduced to a minimum. The primary objective of the rabbit experiments (radiations involved: 56Fe, 40Ar and 20Ne ions and 60Co γ photons) is an evaluation of hazards to astronauts from galactic particulate radiations. An analogous evaluation of hazards from solar flares during space flight is being made with monkeys exposed to 32, 55, 138 and 400 MeV protons. Conclusions are drawn about the proper use of animal models to simulate radiation responses in man and the levels of radiation-induced lenticular opacification that pose risks to man in space.
Epistemic uncertainty propagation in energy flows between structural vibrating systems
NASA Astrophysics Data System (ADS)
Xu, Menghui; Du, Xiaoping; Qiu, Zhiping; Wang, Chong
2016-03-01
A dimension-wise method for predicting fuzzy energy flows between structural vibrating systems coupled by joints with epistemic uncertainties is established. Based on its Legendre polynomial approximation at α = 0, both the minimum and maximum point vectors of the energy flow of interest are calculated dimension by dimension within the space spanned by the interval parameters determined by the fuzzy ones at α = 0, and the resulting interval bounds are used to assemble the fuzzy energy flows of interest. Besides the proposed method, the vertex method and two existing methods are also applied. Comparisons among the results of the different methods are carried out on two numerical examples, and the accuracy of all methods is verified by Monte Carlo simulation.
Xu, Tianhua; Shevchenko, Nikita A; Lavery, Domaniç; Semrau, Daniel; Liga, Gabriele; Alvarado, Alex; Killey, Robert I; Bayvel, Polina
2017-02-20
The relationship between modulation format and the performance of multi-channel digital back-propagation (MC-DBP) in ideal Nyquist-spaced optical communication systems is investigated. It is found that the nonlinear distortions behave independently of modulation format in the case of full-field DBP, in contrast to the cases of electronic dispersion compensation and partial-bandwidth DBP. It is shown that the minimum number of steps per span required for MC-DBP depends on the chosen modulation format. For any given target information rate, there exists a possible trade-off between modulation format and back-propagated bandwidth, which could be used to reduce the computational complexity requirement of MC-DBP.
The nature of radio emission from distant galaxies
NASA Astrophysics Data System (ADS)
Richards, Eric A.
I describe an observational program aimed at understanding the radio emission from distant, rapidly evolving galaxy populations. These observations were carried out at 1.4 and 8.5 GHz with the VLA centered on the Hubble Deep Field. Further MERLIN observations of the HDF region at 1.4 GHz provided an angular resolution of 0.2'', and when combined with the VLA data produced an image with an unprecedented rms noise of 4 μJy. All radio sources detected in the VLA complete sample are resolved with a median angular size of 1-2''. The differential count of the radio sources is marginally sub-Euclidean (γ = -2.4 +/- 0.1) and fluctuation analysis suggests nearly 60 sources per arcmin^2 are present at the 1 μJy level. A correlation analysis indicates spatial clustering among the 371 radio sources on angular scales of 1-40 arcmin. Optical identifications are made primarily with bright (I = 22) disk systems composed of irregulars, peculiars, interacting/merging galaxies, and a few isolated field spirals. Available redshifts span the range 0.2-3. These clues, coupled with the steep spectral index of the 1.4 GHz selected sample, are indicative of diffuse synchrotron radiation in distant galactic disks. Thus the evolution in the microjansky radio population is driven principally by star formation. I have isolated a number of optically faint radio sources (about 25% of the overall sample) which remain unidentified to I = 26-28 in the HDF and flanking optical fields. Several of these objects have extremely red counterparts and constitute a new class of radio sources which are candidate high redshift dusty protogalaxies.
Antoine's Necklace or How to Keep a Necklace from Falling Apart.
ERIC Educational Resources Information Center
Brechner, Beverly L.; Mayer, John C.
1988-01-01
A construction in geometric topology is presented for an imaginary string of beads, but without the string, forming a necklace that cannot fall apart. Some well-known applications and generalizations of Antoine's Necklace are provided, with all examples subsets of Euclidean spaces. (MNS)
BIBLIOGRAPHIES, HIGH SCHOOL MATHEMATICS.
ERIC Educational Resources Information Center
WOODS, PAUL E.
This annotated bibliography is a compilation of a number of highly regarded book lists consisting of library books and textbooks for grades 7-12. The books in this list are currently in print and the content is representative of the following areas of mathematics: mathematical recreation, computers, arithmetic, algebra, Euclidean geometry,…
NASA Astrophysics Data System (ADS)
Moon, Parry Hiram; Spencer, Domina Eberle
2005-09-01
Preface; Nomenclature; Historical introduction; Part I. Holors: 1. Index notation; 2. Holor algebra; 3. Gamma products; Part II. Transformations: 4. Tensors; 5. Akinetors; 6. Geometric spaces; Part III. Holor Calculus: 7. The linear connection; 8. The Riemann-Christoffel tensors; Part IV. Space Structure: 9. Non-Riemannian spaces; 10. Riemannian space; 11. Euclidean space; References; Index.
Geometrical Constructions in Dynamic and Interactive Mathematics Learning Environment
ERIC Educational Resources Information Center
Kondratieva, Margo
2013-01-01
This paper concerns teaching Euclidean geometry at the university level. It is based on the authors' personal experience. It describes a sequence of learning activities that combine geometrical constructions with explorations, observations, and explanations of facts related to the geometry of triangle. Within this approach, a discussion of the…
NASA Astrophysics Data System (ADS)
Menezes, G.; Svaiter, N. F.
2006-07-01
We use the method of stochastic quantization in a topological field theory defined in a Euclidean space, assuming a Langevin equation with a memory kernel. We show that our procedure for the Abelian Chern-Simons theory converges regardless of the nature of the Chern-Simons coefficient.
NASA Astrophysics Data System (ADS)
Bloshanskiĭ, I. L.
1984-02-01
The precise geometry is determined for measurable sets in N-dimensional Euclidean space on which generalized localization almost everywhere holds for rectangularly summable multiple Fourier series. Bibliography: 14 titles.
Using Multidimensional Scaling To Assess the Dimensionality of Dichotomous Item Data.
ERIC Educational Resources Information Center
Meara, Kevin; Robin, Frederic; Sireci, Stephen G.
2000-01-01
Investigated the usefulness of multidimensional scaling (MDS) for assessing the dimensionality of dichotomous test data. Focused on two MDS proximity measures, one based on the PC statistic (T. Chen and M. Davidson, 1996) and the other on interitem Euclidean distances. Simulation results show that both MDS procedures correctly identify…
Mathematical Formulation of Multivariate Euclidean Models for Discrimination Methods.
ERIC Educational Resources Information Center
Mullen, Kenneth; Ennis, Daniel M.
1987-01-01
Multivariate models for the triangular and duo-trio methods are described, and theoretical methods are compared to a Monte Carlo simulation. Implications are discussed for a new theory of multidimensional scaling which challenges the traditional assumption that proximity measures and perceptual distances are monotonically related. (Author/GDC)
ERIC Educational Resources Information Center
Henry, Gary T.; And Others
1992-01-01
A statistical technique is presented for developing performance standards based on benchmark groups. The benchmark groups are selected using a multivariate technique that relies on a squared Euclidean distance method. For each observation unit (a school district in the example), a unique comparison group is selected. (SLD)
ERIC Educational Resources Information Center
Pipinos, Savas
2010-01-01
This article describes one classroom activity in which the author simulates Newtonian gravity and employs Euclidean geometry with the use of new technologies (NT). The prerequisites for this activity were some knowledge of the formulae for the free fall of a particle in physics and, most certainly, a good understanding of the notion of similarity…
The Equivalence of Three Statistical Packages for Performing Hierarchical Cluster Analysis
ERIC Educational Resources Information Center
Blashfield, Roger
1977-01-01
Three different software programs which contain hierarchical agglomerative cluster analysis procedures were shown to generate different solutions on the same data set using apparently the same options. The basis for the differences in the solutions was the formulae used to calculate Euclidean distance. (Author/JKS)
Probability Distributions of Minkowski Distances between Discrete Random Variables.
ERIC Educational Resources Information Center
Schroger, Erich; And Others
1993-01-01
Minkowski distances are used to indicate the similarity of two vectors in an N-dimensional space. It is shown how to compute the probability function, the expectation, and the variance for Minkowski distances, and for the special cases of city-block and Euclidean distance. Critical values for tests of significance are presented in tables. (SLD)
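For two small independent discrete random vectors, the exact distribution, mean, and variance of the distance can be obtained by enumerating all outcome pairs, as in the sketch below (the point sets and probabilities are arbitrary illustrative choices).

import numpy as np
from itertools import product

# Two independent discrete random variables on a 2-D space,
# each uniform over a small set of points.
points_x = np.array([[0, 0], [1, 0], [0, 1]])
points_y = np.array([[0, 0], [1, 1]])
p_x = np.full(len(points_x), 1 / len(points_x))
p_y = np.full(len(points_y), 1 / len(points_y))

dist_prob = {}
for (i, xi), (j, yj) in product(enumerate(points_x), enumerate(points_y)):
    d = round(float(np.linalg.norm(xi - yj)), 10)   # Euclidean distance of this outcome pair
    dist_prob[d] = dist_prob.get(d, 0.0) + p_x[i] * p_y[j]

mean = sum(d * p for d, p in dist_prob.items())
var = sum((d - mean) ** 2 * p for d, p in dist_prob.items())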
Finite Topological Spaces as a Pedagogical Tool
ERIC Educational Resources Information Center
Helmstutler, Randall D.; Higginbottom, Ryan S.
2012-01-01
We propose the use of finite topological spaces as examples in a point-set topology class especially suited to help students transition into abstract mathematics. We describe how carefully chosen examples involving finite spaces may be used to reinforce concepts, highlight pathologies, and develop students' non-Euclidean intuition. We end with a…
Fraction Reduction through Continued Fractions
ERIC Educational Resources Information Center
Carley, Holly
2011-01-01
This article presents a method of reducing fractions without factoring. The ideas presented may be useful as a project for motivated students in an undergraduate number theory course. The discussion is related to the Euclidean Algorithm and its variations may lead to projects or early examples involving efficiency of an algorithm.
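A minimal sketch of the underlying connection: the same sequence of divisions that produces the continued-fraction expansion of a fraction also yields its greatest common divisor, which reduces the fraction without factoring. Whether this matches the article's exact procedure is not claimed; the functions below are illustrative.

def gcd(a, b):
    """Greatest common divisor by the Euclidean Algorithm (repeated remainders)."""
    while b:
        a, b = b, a % b
    return a

def reduce_fraction(num, den):
    """Reduce num/den to lowest terms without factoring either number."""
    g = gcd(num, den)
    return num // g, den // g

def continued_fraction(num, den):
    """Partial quotients of num/den; the same divisions the gcd computation performs."""
    terms = []
    while den:
        q, r = divmod(num, den)
        terms.append(q)
        num, den = den, r
    return terms

reduce_fraction(1071, 462)      # -> (51, 22)
continued_fraction(1071, 462)   # -> [2, 3, 7]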
Finite Trigonometry: A Resource for Teachers.
ERIC Educational Resources Information Center
Malcom, Paul Scott
This investigation extends a 25-point geometric system for defining a 25-point trigonometry whose properties are analogous to those of the trigonometry of the Euclidean plane. These properties include definitions of trigonometric functions arising from ratios of sides of right triangles, the relations of elements of a given triangle through the…