Unconventional Hamilton-type variational principle in phase space and symplectic algorithm
NASA Astrophysics Data System (ADS)
Luo, En; Huang, Weijiang; Zhang, Hexin
2003-06-01
By a novel approach proposed by Luo, the unconventional Hamilton-type variational principle in phase space for the elastodynamics of multi-degree-of-freedom systems is established in this paper. It not only fully characterizes the initial-value problem of the dynamics but also has a natural symplectic structure. Based on this variational principle, a symplectic algorithm called the symplectic time-subdomain method is proposed. A non-difference scheme is constructed by applying Lagrange interpolation polynomials to the time subdomain. Furthermore, the presented symplectic algorithm is proved to be unconditionally stable. The results of two numerical examples of different types show that the accuracy and computational efficiency of the new method clearly exceed those of the widely used Wilson-θ and Newmark-β methods. Therefore, the new algorithm is a highly efficient one with better computational performance.
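The practical payoff of a symplectic time advance can be seen in a minimal sketch (not the paper's time-subdomain method): symplectic Euler versus explicit Euler for a unit harmonic oscillator, where only the symplectic update keeps the energy bounded.

```python
def explicit_euler(q, p, h):
    # Non-symplectic: both updates use the old state; energy grows without bound.
    return q + h * p, p - h * q

def symplectic_euler(q, p, h):
    # Symplectic: the position update uses the freshly kicked momentum.
    p_new = p - h * q
    return q + h * p_new, p_new

def energy(q, p):
    # H = p^2/2 + q^2/2 for a unit harmonic oscillator.
    return 0.5 * (p * p + q * q)

h, steps = 0.01, 10_000
(qe, pe), (qs, ps) = (1.0, 0.0), (1.0, 0.0)
for _ in range(steps):
    qe, pe = explicit_euler(qe, pe, h)
    qs, ps = symplectic_euler(qs, ps, h)
# Explicit Euler multiplies the energy by exactly (1 + h^2) every step;
# symplectic Euler keeps it within O(h) of the initial value 0.5.
```

The same qualitative contrast, bounded versus secularly growing error, underlies the comparison with the Wilson-θ and Newmark-β methods reported in the abstract.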
Variational and symplectic integrators for satellite relative orbit propagation including drag
NASA Astrophysics Data System (ADS)
Palacios, Leonel; Gurfil, Pini
2018-04-01
Orbit propagation algorithms for satellite relative motion relying on Runge-Kutta integrators are non-symplectic, which leads to incorrect global behavior and degraded accuracy. Thus, attempts have been made to apply symplectic methods to integrate satellite relative motion. However, none of these symplectic propagation schemes has so far taken into account the effect of atmospheric drag. In this paper, drag-generalized symplectic and variational algorithms for satellite relative orbit propagation are developed in different reference frames, and numerical simulations with and without the effect of atmospheric drag are presented. It is also shown that high-order versions of the newly developed variational and symplectic propagators are more accurate and significantly faster than Runge-Kutta-based integrators, even in the presence of atmospheric drag.
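A minimal drag-free sketch of the effect described above (not the authors' propagators): a kick-drift-kick leapfrog, the simplest symplectic scheme, keeps the two-body orbital energy bounded over many revolutions, which is precisely the global behavior a non-symplectic Runge-Kutta step loses.

```python
import math

def accel(r):
    # Two-body gravitational acceleration in nondimensional units (mu = 1).
    d = math.hypot(r[0], r[1])
    return (-r[0] / d**3, -r[1] / d**3)

def leapfrog(r, v, h):
    # Kick-drift-kick (velocity Verlet): second order and symplectic.
    a = accel(r)
    v = (v[0] + 0.5 * h * a[0], v[1] + 0.5 * h * a[1])
    r = (r[0] + h * v[0], r[1] + h * v[1])
    a = accel(r)
    v = (v[0] + 0.5 * h * a[0], v[1] + 0.5 * h * a[1])
    return r, v

def energy(r, v):
    return 0.5 * (v[0]**2 + v[1]**2) - 1.0 / math.hypot(r[0], r[1])

# Circular reference orbit: radius 1, tangential speed 1, period 2*pi.
r, v = (1.0, 0.0), (0.0, 1.0)
E0 = energy(r, v)   # = -0.5
h, errs = 0.05, []
for _ in range(20_000):   # roughly 160 orbits
    r, v = leapfrog(r, v, h)
    errs.append(abs(energy(r, v) - E0))
# The energy error oscillates at the O(h^2) level but does not drift.
```

Adding drag, as the paper does, requires a generalized (forced/variational) formulation, since plain leapfrog assumes a conservative force.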
Gauge properties of the guiding center variational symplectic integrator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Squire, J.; Tang, W. M.; Qin, H.
Variational symplectic algorithms have recently been developed for carrying out long-time simulation of charged particles in magnetic fields [H. Qin and X. Guan, Phys. Rev. Lett. 100, 035006 (2008); H. Qin, X. Guan, and W. Tang, Phys. Plasmas (2009); J. Li, H. Qin, Z. Pu, L. Xie, and S. Fu, Phys. Plasmas 18, 052902 (2011)]. As a direct consequence of their derivation from a discrete variational principle, these algorithms have very good long-time energy conservation, as well as exactly preserving discrete momenta. We present stability results for these algorithms, focusing on understanding how explicit variational integrators can be designed for this type of system. It is found that for explicit algorithms, an instability arises because the discrete symplectic structure does not reduce to the continuous structure in the t → 0 limit. We examine how a generalized gauge transformation can be used to put the Lagrangian in the 'antisymmetric discretization gauge,' in which the discrete symplectic structure has the correct form, thus eliminating the numerical instability. Finally, it is noted that the variational guiding center algorithms are not electromagnetically gauge invariant. By designing a model discrete Lagrangian, we show that the algorithms are approximately gauge invariant as long as A and φ are relatively smooth. A gauge invariant discrete Lagrangian is very important in a variational particle-in-cell algorithm, where it ensures current continuity and preservation of Gauss's law [J. Squire, H. Qin, and W. Tang (to be published)].
NASA Astrophysics Data System (ADS)
Zhang, Shuangxi; Jia, Yuesong; Sun, Qizhi
2015-02-01
Webb [1] proposed a method for obtaining symplectic integrators of magnetic systems by Taylor expanding the discrete Euler-Lagrange (DEL) equations, which result from the variational symplectic method by taking the variation of the discrete action [2], and approximating the results to order O(h²), where h is the time step. In that paper, Webb claimed that the integrators obtained by this method are symplectic; in particular, he treated the Boris integrator (BI) as symplectic. However, we have questions about Webb's results. Theoretically, the transformation of phase-space coordinates between two adjacent points induced by a symplectic algorithm should conserve a symplectic 2-form [2-5]. As proved in Refs. [2,3], the transformations induced by the standard symplectic integrator derived from the Hamiltonian and by the variational symplectic integrator (VSI) [2,6] derived from the Lagrangian each conserve a symplectic 2-form. But the O(h²) approximation of the VSI obtained in that paper can hardly conserve a symplectic 2-form, contrary to the claim of [1]. In the next section, we use the BI as an example to support our point and prove that the BI is not a symplectic integrator but one that conserves discrete phase-space volume.
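For reference, a standard Boris velocity update is sketched below (assuming a unit charge-to-mass ratio); in a pure magnetic field the Boris rotation preserves the speed exactly, which is consistent with its good long-term behavior even though, as argued above, it is volume-preserving rather than symplectic.

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_velocity(v, E, B, h, q_m=1.0):
    # Half electric kick, magnetic rotation, half electric kick.
    v_minus = [v[i] + 0.5 * h * q_m * E[i] for i in range(3)]
    t = [0.5 * h * q_m * B[i] for i in range(3)]
    t2 = sum(ti * ti for ti in t)
    s = [2.0 * ti / (1.0 + t2) for ti in t]
    v_prime = [v_minus[i] + cross(v_minus, t)[i] for i in range(3)]
    v_plus = [v_minus[i] + cross(v_prime, s)[i] for i in range(3)]
    return [v_plus[i] + 0.5 * h * q_m * E[i] for i in range(3)]

# Pure magnetic field: the rotation step conserves kinetic energy exactly.
v = [1.0, 0.0, 0.2]
speed0 = math.sqrt(sum(vi * vi for vi in v))
for _ in range(10_000):
    v = boris_velocity(v, E=[0.0, 0.0, 0.0], B=[0.0, 0.0, 1.0], h=0.1)
```

Exact speed conservation in a static magnetic field holds for any time step h, not just in the small-h limit.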
Degenerate variational integrators for magnetic field line flow and guiding center trajectories
NASA Astrophysics Data System (ADS)
Ellison, C. L.; Finn, J. M.; Burby, J. W.; Kraus, M.; Qin, H.; Tang, W. M.
2018-05-01
Symplectic integrators offer many benefits for numerically approximating solutions to Hamiltonian differential equations, including bounded energy error and the preservation of invariant sets. Two important Hamiltonian systems encountered in plasma physics—the flow of magnetic field lines and the guiding center motion of magnetized charged particles—resist symplectic integration by conventional means because the dynamics are most naturally formulated in non-canonical coordinates. New algorithms were recently developed using the variational integration formalism; however, those integrators were found to admit parasitic mode instabilities due to their multistep character. This work eliminates the multistep character, and therefore the parasitic mode instabilities via an adaptation of the variational integration formalism that we deem "degenerate variational integration." Both the magnetic field line and guiding center Lagrangians are degenerate in the sense that the resultant Euler-Lagrange equations are systems of first-order ordinary differential equations. We show that retaining the same degree of degeneracy when constructing discrete Lagrangians yields one-step variational integrators preserving a non-canonical symplectic structure. Numerical examples demonstrate the benefits of the new algorithms, including superior stability relative to the existing variational integrators for these systems and superior qualitative behavior relative to non-conservative algorithms.
Explicit symplectic algorithms based on generating functions for charged particle dynamics.
Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan
2016-07-01
Dynamics of a charged particle in canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method with a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians of the form H(x,p)=p_{i}f(x) or H(x,p)=x_{i}g(p). Applied to simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superior conservation properties and efficiency.
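To see why product-separable Hamiltonians admit explicit symplectic treatment, consider a sketch with the hypothetical choice f(x) = x, for which H(x,p) = p·f(x) has an exactly solvable flow; in one degree of freedom, symplecticity is equivalent to a unit Jacobian determinant of the flow map.

```python
import math

def flow_pf(x, p, t):
    # Exact flow of the product-separable H(x, p) = p * f(x) with the
    # hypothetical choice f(x) = x:
    #   dx/dt =  f(x) = x        ->  x(t) = x0 * exp(t)
    #   dp/dt = -p * f'(x) = -p  ->  p(t) = p0 * exp(-t)
    return x * math.exp(t), p * math.exp(-t)

# In one degree of freedom, a map is symplectic iff its Jacobian
# determinant equals 1; check this by finite differences.
eps = 1e-6
x0, p0, t = 1.3, 0.7, 0.2
x1, p1 = flow_pf(x0, p0, t)
dxdx = (flow_pf(x0 + eps, p0, t)[0] - x1) / eps
dpdx = (flow_pf(x0 + eps, p0, t)[1] - p1) / eps
dxdp = (flow_pf(x0, p0 + eps, t)[0] - x1) / eps
dpdp = (flow_pf(x0, p0 + eps, t)[1] - p1) / eps
jac = dxdx * dpdp - dxdp * dpdx
```

Because each product-separable piece has an exact symplectic flow, compositions of such flows (as in the paper's sum-split plus generating-function construction) remain symplectic.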
Development of Multistep and Degenerate Variational Integrators for Applications in Plasma Physics
NASA Astrophysics Data System (ADS)
Ellison, Charles Leland
Geometric integrators yield high-fidelity numerical results by retaining conservation laws in the time advance. A particularly powerful class of geometric integrators is symplectic integrators, which are widely used in orbital mechanics and accelerator physics. An important application presently lacking symplectic integrators is the guiding center motion of magnetized particles represented by non-canonical coordinates. Because guiding center trajectories are foundational to many simulations of magnetically confined plasmas, geometric guiding center algorithms have high potential for impact. The motivation is compounded by the need to simulate long-pulse fusion devices, including ITER, and by opportunities in high performance computing, including the use of petascale resources and beyond. This dissertation uses a systematic procedure for constructing geometric integrators, known as variational integration, to deliver new algorithms for guiding center trajectories and other plasma-relevant dynamical systems. These variational integrators are non-trivial because the Lagrangians of interest are degenerate: the Euler-Lagrange equations are first-order differential equations and the Legendre transform is not invertible. The first contribution of this dissertation is to show that variational integrators for degenerate Lagrangian systems are typically multistep methods. Multistep methods admit parasitic mode instabilities that can ruin the numerical results. These instabilities motivate the second major contribution: degenerate variational integrators. By replicating the degeneracy of the continuous system, degenerate variational integrators avoid parasitic mode instabilities. The new methods are therefore robust geometric integrators for degenerate Lagrangian systems. These developments in variational integration theory culminate in one-step degenerate variational integrators for non-canonical magnetic field line flow and guiding center dynamics.
The guiding center integrator assumes coordinates such that one component of the magnetic field is zero; it is shown how to construct such coordinates for nested magnetic surface configurations. Additionally, collisional drag effects are incorporated in the variational guiding center algorithm for the first time, allowing simulation of energetic particle thermalization. Advantages relative to existing canonical-symplectic and non-geometric algorithms are numerically demonstrated. All algorithms have been implemented as part of a modern, parallel, ODE-solving library, suitable for use in high-performance simulations.
Structure and structure-preserving algorithms for plasma physics
NASA Astrophysics Data System (ADS)
Morrison, P. J.
2016-10-01
Conventional simulation studies of plasma physics are based on numerically solving the underpinning differential (or integro-differential) equations. Usual algorithms in general do not preserve known geometric structure of the physical systems, such as the local energy-momentum conservation law, Casimir invariants, and the symplectic structure (Poincaré invariants). As a consequence, numerical errors may accumulate coherently with time and long-term simulation results may be unreliable. Recently, a series of geometric algorithms that preserve the geometric structures resulting from the Hamiltonian and action principle (HAP) form of theoretical models in plasma physics have been developed by several authors. The superiority of these geometric algorithms has been demonstrated with many test cases. For example, symplectic integrators for guiding-center dynamics have been constructed to preserve the noncanonical symplectic structures and bound the energy-momentum errors for all simulation time-steps; variational and symplectic algorithms have been discovered and successfully applied to the Vlasov-Maxwell system, MHD, and other magnetofluid equations as well. Hamiltonian truncations of the full Vlasov-Maxwell system have opened the field of discrete gyrokinetics and led to the GEMPIC algorithm. The vision that future numerical capabilities in plasma physics should be based on structure-preserving geometric algorithms will be presented. It will be argued that the geometric consequences of HAP form and resulting geometric algorithms suitable for plasma physics studies cannot be adapted from existing mathematical literature but, rather, need to be discovered and worked out by theoretical plasma physicists. The talk will review existing HAP structures of plasma physics for a variety of models, and how they have been adapted for numerical implementation. Supported by DOE DE-FG02-04ER-54742.
Ellison, C. L.; Burby, J. W.; Qin, H.
2015-11-01
One popular technique for the numerical time advance of charged particles interacting with electric and magnetic fields according to the Lorentz force law [1], [2], [3] and [4] is the Boris algorithm. Its popularity stems from simple implementation, rapid iteration, and excellent long-term numerical fidelity [1] and [5]. Excellent long-term behavior strongly suggests the numerical dynamics exhibit conservation laws analogous to those governing the continuous Lorentz force system [6]. Moreover, without conserved quantities to constrain the numerical dynamics, algorithms typically dissipate or accumulate important observables such as energy and momentum over long periods of simulated time [6]. Identification of the conservative properties of an algorithm is important for establishing rigorous expectations on the long-term behavior; energy-preserving, symplectic, and volume-preserving methods each have particular implications for the qualitative numerical behavior [6], [7], [8], [9], [10] and [11].
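The volume-preserving property mentioned above can be checked numerically: the Jacobian determinant of one full Boris step (position and velocity together) should equal one. The sketch below uses a hypothetical non-uniform field B = (0, 0, 1 + 0.1x) and omits the electric field for brevity.

```python
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_step(x, v, h):
    # One full Boris step (rotation then drift), with a hypothetical
    # non-uniform magnetic field B(x) = (0, 0, 1 + 0.1 * x[0]).
    B = [0.0, 0.0, 1.0 + 0.1 * x[0]]
    t = [0.5 * h * Bi for Bi in B]
    t2 = sum(ti * ti for ti in t)
    s = [2.0 * ti / (1.0 + t2) for ti in t]
    v_prime = [v[i] + cross(v, t)[i] for i in range(3)]
    v_new = [v[i] + cross(v_prime, s)[i] for i in range(3)]
    x_new = [x[i] + h * v_new[i] for i in range(3)]
    return x_new + v_new  # 6-vector (x, v)

def det6(M):
    # Determinant by Gaussian elimination with partial pivoting.
    n, d = 6, 1.0
    A = [row[:] for row in M]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(A[r][k]))
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            d = -d
        d *= A[k][k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
    return d

# Central-difference Jacobian of the 6D phase-space map.
z0 = [0.3, -0.2, 0.1, 1.0, 0.5, -0.4]  # (x, v)
h, eps = 0.1, 1e-5
J = [[0.0] * 6 for _ in range(6)]
for j in range(6):
    zp, zm = z0[:], z0[:]
    zp[j] += eps
    zm[j] -= eps
    fp = boris_step(zp[:3], zp[3:], h)
    fm = boris_step(zm[:3], zm[3:], h)
    for i in range(6):
        J[i][j] = (fp[i] - fm[i]) / (2.0 * eps)
jac_det = det6(J)
```

The determinant is one up to finite-difference error because, at fixed position, the Boris velocity update is a pure rotation, and the position drift then contributes nothing to the determinant.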
NASA Astrophysics Data System (ADS)
Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa
2018-02-01
Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. Numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term integration. Therefore, explicit symplectic algorithms are much preferable to non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. We then give explicit symplectic algorithms of orders 2 and 3, based on generating functions, for the relativistic dynamics of a charged particle. The methodology is not new; it has previously been applied to the non-relativistic dynamics of charged particles. The algorithm for relativistic dynamics, however, has much significance for practical simulations, such as the secular simulation of runaway electrons in tokamaks.
NASA Astrophysics Data System (ADS)
Lu, Wei-Tao; Zhang, Hua; Wang, Shun-Jin
2008-07-01
The symplectic algebraic dynamics algorithm (SADA) for ordinary differential equations is applied to numerically solve the circular restricted three-body problem (CR3BP) in dynamical astronomy, for both stable motion and chaotic motion. The results are compared with those of a Runge-Kutta algorithm and a symplectic algorithm, both of fourth order, which shows that SADA has higher accuracy than the others in long-term calculations of the CR3BP.
Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms
NASA Astrophysics Data System (ADS)
Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei
2016-01-01
In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation that reduces the Birkhoffian equations to Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the method, we construct several such algorithms for the linear damped oscillator and for the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).
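The general strategy can be sketched with the classical Caldirola-Kanai-type rescaling for the linear damped oscillator q'' + 2γq' + ω²q = 0 (a standard trick, not the paper's Birkhoffian construction): a time-dependent transformation makes the system Hamiltonian, after which any symplectic scheme (symplectic Euler below) applies and reproduces the damped solution.

```python
import math

# Time-dependent Hamiltonian for the damped oscillator:
#   H(q, p, t) = exp(-2*gamma*t) * p^2/2 + exp(2*gamma*t) * omega^2 * q^2/2
# with p = exp(2*gamma*t) * dq/dt.
gamma, omega = 0.1, 2.0
h, T = 1e-4, 1.0
q, p = 1.0, 0.0           # initial conditions q(0) = 1, dq/dt(0) = 0
for k in range(int(round(T / h))):
    t = k * h
    p -= h * math.exp(2 * gamma * t) * omega**2 * q   # kick
    q += h * math.exp(-2 * gamma * t) * p             # drift with new p

# Exact underdamped solution for comparison.
omega_d = math.sqrt(omega**2 - gamma**2)
q_exact = math.exp(-gamma * T) * (math.cos(omega_d * T)
          + (gamma / omega_d) * math.sin(omega_d * T))
```

The paper's contribution is the analogous step for Birkhoffian systems: find the transformation to Hamiltonian form, discretize symplectically, then map back.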
Chen, Qiang; Qin, Hong; Liu, Jian; ...
2017-08-24
An infinite dimensional canonical symplectic structure and structure-preserving geometric algorithms are developed for the photon–matter interactions described by the Schrödinger–Maxwell equations. The algorithms preserve the symplectic structure of the system and the unitary nature of the wavefunctions, and bound the energy error of the simulation for all time-steps. Here, this new numerical capability enables us to carry out first-principle based simulation study of important photon–matter interactions, such as the high harmonic generation and stabilization of ionization, with long-term accuracy and fidelity.
Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Jianyuan; Qin, Hong; Liu, Jian
2015-11-01
Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems; high-order structure-preserving algorithms follow by combination. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with an extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified on two physics problems: nonlinear Landau damping and the electron Bernstein wave.
Variational symplectic algorithm for guiding center dynamics in the inner magnetosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Jinxing; Pu Zuyin; Xie Lun
Charged particle dynamics in the magnetosphere is multiscale in both time and space; therefore, numerical accuracy over a long integration time is required. A variational symplectic integrator (VSI) [H. Qin and X. Guan, Phys. Rev. Lett. 100, 035006 (2008) and H. Qin, X. Guan, and W. M. Tang, Phys. Plasmas 16, 042510 (2009)] for the guiding-center motion of charged particles in a general magnetic field is applied to study the dynamics of charged particles in the magnetosphere. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The VSI conserves exactly a discrete Lagrangian symplectic structure and has better numerical properties over a long integration time compared with standard integrators, such as the standard and adaptive fourth-order Runge-Kutta (RK4) methods. Applying the VSI method to guiding-center dynamics in the inner magnetosphere, we can accurately calculate particle orbits for arbitrarily long simulation times with good conservation properties. When a time-independent convection and corotation electric field is considered, the VSI method gives the accurate single-particle orbit, while the RK4 method gives an incorrect orbit due to its intrinsic error accumulation over a long integration time.
Variational Algorithms for Test Particle Trajectories
NASA Astrophysics Data System (ADS)
Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.
2015-11-01
The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
Modified symplectic schemes with nearly-analytic discrete operators for acoustic wave simulations
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Yang, Dinghui; Lang, Chao; Wang, Wenshuai; Pan, Zhide
2017-04-01
Using a structure-preserving algorithm significantly increases the computational efficiency of solving wave equations. However, only a few explicit symplectic schemes are available in the literature, and the capabilities of these symplectic schemes have not been sufficiently exploited. Here, we propose a modified strategy to construct explicit symplectic schemes for the time advance. The acoustic wave equation is transformed into a Hamiltonian system. The classical symplectic partitioned Runge-Kutta (PRK) method is used for the temporal discretization. Additional spatial differential terms are added to the PRK schemes, and two modified time-advancing symplectic methods, with all symplectic coefficients positive, are constructed. The spatial differential operators are approximated by nearly-analytic discrete (NAD) operators, and we call the fully discretized scheme the modified symplectic nearly analytic discrete (MSNAD) method. Theoretical analyses show that the MSNAD methods exhibit less numerical dispersion and higher stability limits than conventional methods. Three numerical experiments are conducted to verify the advantages of the MSNAD methods, such as their numerical accuracy, computational cost, stability, and long-term calculation capability.
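A minimal sketch of the symplectic time advance for the acoustic wave equation u_tt = c²u_xx, with plain second-order central differences standing in for the NAD spatial operators: the leapfrog scheme, the simplest symplectic PRK pair, keeps the semi-discrete energy essentially constant over long runs.

```python
import math

# u_tt = c^2 u_xx on [0, 1] with periodic boundaries, semi-discretized in
# space by second-order central differences (a stand-in for NAD operators).
N, c = 128, 1.0
dx = 1.0 / N
dt = 0.5 * dx / c   # CFL-stable time step
u = [math.sin(2 * math.pi * i * dx) for i in range(N)]
v = [0.0] * N       # u_t

def laplacian(u):
    return [(u[(i + 1) % N] - 2 * u[i] + u[(i - 1) % N]) / dx**2
            for i in range(N)]

def energy(u, v):
    ke = 0.5 * sum(vi * vi for vi in v) * dx
    pe = 0.5 * c**2 * sum(((u[(i + 1) % N] - u[i]) / dx)**2
                          for i in range(N)) * dx
    return ke + pe

E0 = energy(u, v)
for _ in range(2000):
    # Kick-drift-kick leapfrog: a symplectic PRK step.
    a = laplacian(u)
    v = [v[i] + 0.5 * dt * c**2 * a[i] for i in range(N)]
    u = [u[i] + dt * v[i] for i in range(N)]
    a = laplacian(u)
    v = [v[i] + 0.5 * dt * c**2 * a[i] for i in range(N)]
```

The paper's modification adds extra spatial differential terms to such PRK stages to improve dispersion and stability while keeping all symplectic coefficients positive.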
Infinitesimal Deformations of a Formal Symplectic Groupoid
NASA Astrophysics Data System (ADS)
Karabegov, Alexander
2011-09-01
Given a formal symplectic groupoid G over a Poisson manifold (M, π₀), we define a new object, an infinitesimal deformation of G, which can be thought of as a formal symplectic groupoid over the manifold M equipped with an infinitesimal deformation π₀ + επ₁ of the Poisson bivector field π₀. To any pair of natural star products (∗, ∗̃) having the same formal symplectic groupoid G we relate an infinitesimal deformation of G. We call it the deformation groupoid of the pair (∗, ∗̃). To each star product with separation of variables ∗ on a Kähler-Poisson manifold M we relate another star product with separation of variables ∗̂ on M. We build an algorithm for calculating the principal symbols of the components of the logarithm of the formal Berezin transform of a star product with separation of variables ∗. This algorithm is based upon the deformation groupoid of the pair (∗, ∗̂).
Covariant symplectic structure of the complex Monge-Ampère equation
NASA Astrophysics Data System (ADS)
Nutku, Y.
2000-04-01
The complex Monge-Ampère equation is invariant under arbitrary holomorphic changes of the independent variables with unit Jacobian. We present its variational formulation where the action remains invariant under this infinite group. The new Lagrangian enables us to obtain the first symplectic 2-form for the complex Monge-Ampère equation in the framework of the covariant Witten-Zuckerman approach to symplectic structure. We base our considerations on a reformulation of the Witten-Zuckerman theory in terms of holomorphic differential forms. The first closed and conserved Witten-Zuckerman symplectic 2-form for the complex Monge-Ampère equation is obtained in arbitrary dimension and for all cases elliptic, hyperbolic and homogeneous. The connection of the complex Monge-Ampère equation with Ricci-flat Kähler geometry suggests the use of the Hilbert action principle as an alternative variational formulation. However, we point out that Hilbert's Lagrangian is a divergence for Kähler metrics and serves as a topological invariant rather than yielding the Euclideanized Einstein field equations. Nevertheless, since the Witten-Zuckerman theory employs only the boundary terms in the first variation of the action, Hilbert's Lagrangian can be used to obtain the second Witten-Zuckerman symplectic 2-form. This symplectic 2-form vanishes on shell, thus defining a Lagrangian submanifold. In its derivation the connection of the second symplectic 2-form with the complex Monge-Ampère equation is indirect but we show that it satisfies all the properties required of a symplectic 2-form for the complex elliptic, or hyperbolic Monge-Ampère equation when the dimension of the complex manifold is 3 or higher. The complex Monge-Ampère equation admits covariant bisymplectic structure for complex dimension 3, or higher. However, in the physically interesting case of n=2 we have only one symplectic 2-form. 
The extension of these results to the case of complex Monge-Ampère-Liouville equation is also presented.
“SLIMPLECTIC” INTEGRATORS: VARIATIONAL INTEGRATORS FOR GENERAL NONCONSERVATIVE SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsang, David; Turner, Alec; Galley, Chad R.
2015-08-10
Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. In this Letter, we develop the “slimplectic” integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to the principle of stationary nonconservative action developed in Galley et al. As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting–Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.
Higher order temporal finite element methods through mixed formalisms.
Kim, Jinkyu
2014-01-01
The extended framework of Hamilton's principle and the mixed convolved action principle provide a new, rigorous weak variational formalism for a broad range of initial boundary value problems in mathematical physics and mechanics. In this paper, their potential when adopting temporally higher-order approximations is investigated. Classical single-degree-of-freedom dynamical systems are primarily considered to validate and investigate the performance of the numerical algorithms developed from both formulations. For the undamped system, all the algorithms are symplectic and unconditionally stable with respect to the time step. For the damped system, they are shown to be accurate with good convergence characteristics.
NASA Technical Reports Server (NTRS)
Wisdom, Jack
2002-01-01
In these 18 years, the research has touched every major dynamical problem in the solar system, including: the effect of chaotic zones on the distribution of asteroids, the delivery of meteorites along chaotic pathways, the chaotic motion of Pluto, the chaotic motion of the outer planets and that of the whole solar system, the delivery of short period comets from the Kuiper belt, the tidal evolution of the Uranian and Galilean satellites, the chaotic tumbling of Hyperion and other irregular satellites, the large chaotic variations of the obliquity of Mars, the evolution of the Earth-Moon system, and the resonant core-mantle dynamics of Earth and Venus. It has introduced new analytical and numerical tools that are in widespread use. Today, nearly every long-term integration of our solar system, its subsystems, and other solar systems uses algorithms that were invented in the course of this research. This research has all been primarily supported by this sequence of PGG NASA grants. During this period, major investigations of the tidal evolution of the Earth-Moon system and of the passage of the Earth and Venus through non-linear core-mantle resonances were completed. A major innovation in symplectic algorithms was also published: the symplectic corrector. A paper was completed on non-perturbative hydrostatic equilibrium.
Complete characterization of fourth-order symplectic integrators with extended-linear coefficients.
Chin, Siu A
2006-02-01
The structure of symplectic integrators up to fourth order can be completely and analytically understood when the factorization (split) coefficients are related linearly but with a uniform nonlinear proportional factor. The analytic form of these extended-linear symplectic integrators greatly simplified proofs of their general properties and allowed easy construction of both forward and nonforward fourth-order algorithms with an arbitrary number of operators. Most fourth-order forward integrators can now be derived analytically from this extended-linear formulation without the use of symbolic algebra.
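As a concrete point of reference for fourth-order factorization (split) coefficients, the classic Forest-Ruth composition of drifts and kicks can be sketched as below. This is a standard, well-known scheme used purely for illustration; it is a non-forward method with a negative substep and is not the extended-linear family derived in the paper. The harmonic-oscillator test values are illustrative assumptions.

```python
# Forest-Ruth coefficients: a standard fourth-order composition of drifts and
# kicks for H = p^2/2 + V(q) (unit mass). The values are the classical ones,
# shown only to illustrate what factorization (split) coefficients look like.
W = 2.0 ** (1.0 / 3.0)
C1 = C4 = 1.0 / (2.0 * (2.0 - W))
C2 = C3 = (1.0 - W) / (2.0 * (2.0 - W))
D1 = D3 = 1.0 / (2.0 - W)
D2 = -W / (2.0 - W)

def forest_ruth_step(q, p, h, force):
    """Advance (q, p) by one fourth-order symplectic step."""
    for c, d in ((C1, D1), (C2, D2), (C3, D3), (C4, 0.0)):
        q += c * h * p          # drift
        p += d * h * force(q)   # kick
    return q, p

# Harmonic oscillator V = q^2/2 (force = -q): the energy error stays bounded.
q, p, h = 1.0, 0.0, 0.05
e0 = 0.5 * (p * p + q * q)
for _ in range(10000):
    q, p = forest_ruth_step(q, p, h, lambda x: -x)
energy_error = abs(0.5 * (p * p + q * q) - e0)
```

Note the negative coefficient D2: removing such backward substeps is precisely what motivates the "forward" integrators discussed in the abstract.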
HNBody: A Simulation Package for Hierarchical N-Body Systems
NASA Astrophysics Data System (ADS)
Rauch, Kevin P.
2018-04-01
HNBody (http://www.hnbody.org/) is an extensible software package for integrating the dynamics of N-body systems. Although general purpose, it incorporates several features and algorithms particularly well-suited to systems containing a hierarchy (wide dynamic range) of masses. HNBody version 1 focused heavily on symplectic integration of nearly-Keplerian systems. Here I describe the capabilities of the redesigned and expanded package version 2, which includes: symplectic integrators up to eighth order (both leapfrog and Wisdom-Holman type methods), with symplectic corrector and close encounter support; variable-order, variable-timestep Bulirsch-Stoer and Störmer integrators; post-Newtonian and multipole physics options; advanced round-off control for improved long-term stability; multi-threading and SIMD vectorization enhancements; seamless availability of extended precision arithmetic for all calculations; extremely flexible configuration and output. Tests of the physical correctness of the algorithms are presented using JPL Horizons ephemerides (https://ssd.jpl.nasa.gov/?horizons) and previously published results for reference. The features and performance of HNBody are also compared to several other freely available N-body codes, including MERCURY (Chambers), SWIFT (Levison & Duncan) and WHFAST (Rein & Tamayo).
Discontinuous Galerkin methods for Hamiltonian ODEs and PDEs
NASA Astrophysics Data System (ADS)
Tang, Wensheng; Sun, Yajuan; Cai, Wenjun
2017-02-01
In this article, we present a unified framework of discontinuous Galerkin (DG) discretizations for Hamiltonian ODEs and PDEs. We show that with appropriate numerical fluxes the numerical algorithms deduced from DG discretizations can be combined with the symplectic methods in time to derive the multi-symplectic PRK schemes. The resulting numerical discretizations are applied to the linear and nonlinear Schrödinger equations. Some conservative properties of the numerical schemes are investigated and confirmed in the numerical experiments.
EXPLICIT SYMPLECTIC-LIKE INTEGRATORS WITH MIDPOINT PERMUTATIONS FOR SPINNING COMPACT BINARIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Junjie; Wu, Xin; Huang, Guoqing
2017-01-01
We refine the recently developed fourth-order extended phase space explicit symplectic-like methods for inseparable Hamiltonians using Yoshida’s triple product combined with a midpoint permuted map. The midpoint between the original variables and their corresponding extended variables at every integration step is readjusted as the initial value of both sets of variables at the next integration step. The triple-product construction is apparently superior to the composition of two triple products in computational efficiency. Above all, the new midpoint permutations are more effective in maintaining the equality of the original variables and their corresponding extended ones at each integration step than the existing sequent permutations of momenta and coordinates. As a result, our new construction shares the benefit of implicit symplectic integrators in the conservation of the second post-Newtonian Hamiltonian of spinning compact binaries. Especially for the chaotic case, it can work well, but the existing sequent permuted algorithm cannot. When dissipative effects from the gravitational radiation reaction are included, the new symplectic-like method has a secular drift in the energy error of the dissipative system for orbits that are regular in the absence of radiation, as an implicit symplectic integrator does. In spite of this, it is superior to the same-order implicit symplectic integrator in accuracy and efficiency. The new method is particularly useful in discussing the long-term evolution of inseparable Hamiltonian problems.
Vorticity and symplecticity in multi-symplectic, Lagrangian gas dynamics
NASA Astrophysics Data System (ADS)
Webb, G. M.; Anco, S. C.
2016-02-01
The Lagrangian, multi-dimensional, ideal, compressible gas dynamic equations are written in a multi-symplectic form, in which the Lagrangian fluid labels m^i (the Lagrangian mass coordinates) and time t are the independent variables, and in which the Eulerian position of the fluid element x = x(m, t) and the entropy S = S(m, t) are the dependent variables. Constraints in the variational principle are incorporated by means of Lagrange multipliers. The constraints are: the entropy advection equation S_t = 0; the Lagrangian map equation x_t = u, where u is the fluid velocity; and the mass continuity equation, which has the form J = τ, where J = det(x_ij) is the Jacobian of the Lagrangian map, x_ij = ∂x^i/∂m^j, and τ = 1/ρ is the specific volume of the gas. The internal energy per unit volume of the gas ε = ε(ρ, S) corresponds to a non-barotropic gas. The Lagrangian is used to define multi-momenta and to develop de Donder-Weyl Hamiltonian equations. The de Donder-Weyl equations are cast in a multi-symplectic form. The pullback conservation laws and the symplecticity conservation laws are obtained. One class of symplecticity conservation laws gives rise to vorticity and potential-vorticity type conservation laws, and another class is related to derivatives of the Lagrangian energy conservation law with respect to the Lagrangian mass coordinates m^i. We show that the vorticity-symplecticity laws can be derived by a Lie dragging method, and also by using Noether’s second theorem and a fluid relabelling symmetry which is a divergence symmetry of the action. We obtain the Cartan-Poincaré form describing the equations and we discuss a set of differential forms representing the equation system.
Dissipation-preserving spectral element method for damped seismic wave equations
NASA Astrophysics Data System (ADS)
Cai, Wenjun; Zhang, Huai; Wang, Yushun
2017-12-01
This article describes the extension of the conformal symplectic method to solve the damped acoustic wave equation and the elastic wave equations in the framework of the spectral element method. The conformal symplectic method is a variation of conventional symplectic methods for treating non-conservative time evolution problems, with superior behavior in long-time stability and dissipation preservation. To reveal the intrinsic dissipative properties of the model equations, we first reformulate the original systems in their equivalent conformal multi-symplectic structures and derive the corresponding conformal symplectic conservation laws. We thereafter separate each system into a conservative Hamiltonian system and a purely dissipative ordinary differential equation system. Based on this splitting methodology, we solve the two subsystems respectively. The dissipative one is solved cheaply by its analytic solution, while for the conservative system we combine a fourth-order symplectic Nyström method in time with the spectral element method in space to cover circumstances in realistic geological structures involving complex free-surface topography. The Strang composition method is adopted thereby to concatenate the corresponding two parts of the solutions and generate the complete conformal symplectic method. A relatively larger Courant number than that of the traditional Newmark scheme is found in the numerical experiments, in conjunction with a spatial sampling of approximately 5 points per wavelength. A benchmark test for the damped acoustic wave equation validates the effectiveness of our proposed method in precisely capturing the dissipation rate. The classical Lamb problem is used to demonstrate the ability to model Rayleigh waves in elastic wave propagation.
More comprehensive numerical experiments are presented to investigate the long-time simulation, low dispersion and energy conservation properties of the conformal symplectic methods in both the attenuating homogeneous and heterogeneous media.
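The splitting strategy described above (dissipative part solved by its analytic solution, conservative part by a symplectic method, concatenated with a Strang composition) can be sketched on a toy damped oscillator q'' + 2γq' + ω²q = 0. This is a minimal illustrative stand-in, not the spectral element implementation of the paper; the parameter values are arbitrary choices.

```python
import math

# Toy conformal-symplectic scheme for q'' + 2*gamma*q' + omega^2*q = 0:
# Strang split of an exact dissipative half-step, a symplectic leapfrog
# step on H = p^2/2 + omega^2*q^2/2, and a second dissipative half-step.
def strang_step(q, p, h, omega=1.0, gamma=0.05):
    decay = math.exp(-gamma * h)   # exact half-step flow of p' = -2*gamma*p
    p *= decay
    p -= 0.5 * h * omega ** 2 * q  # kick-drift-kick leapfrog (conservative part)
    q += h * p
    p -= 0.5 * h * omega ** 2 * q
    p *= decay
    return q, p

q, p = 1.0, 0.0
for _ in range(2000):               # integrate to t = 20
    q, p = strang_step(q, p, 0.01)
energy = 0.5 * (p * p + q * q)      # tracks the ~exp(-2*gamma*t) envelope
```

The point of the construction is that the dissipation rate is captured exactly by the analytic substep, while the symplectic substep keeps the oscillatory part stable over long times.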
A Survey of Symplectic and Collocation Integration Methods for Orbit Propagation
NASA Technical Reports Server (NTRS)
Jones, Brandon A.; Anderson, Rodney L.
2012-01-01
Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, Hong; Liu, Jian; Xiao, Jianyuan
Particle-in-cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretising its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many (e.g., 10^9) degrees of freedom. The long-term accuracy and fidelity of the algorithm enables us to numerically confirm Mouhot and Villani's theory and conjecture on nonlinear Landau damping over several orders of magnitude using the PIC method, and to calculate the nonlinear evolution of the reflectivity during the mode conversion process from extraordinary waves to Bernstein waves.
Structure-preserving spectral element method in attenuating seismic wave modeling
NASA Astrophysics Data System (ADS)
Cai, Wenjun; Zhang, Huai
2016-04-01
This work describes the extension of the conformal symplectic method to solve the damped acoustic wave equation and the elastic wave equations in the framework of the spectral element method. The conformal symplectic method is a variation of conventional symplectic methods for treating non-conservative time evolution problems, with superior behavior in long-time stability and dissipation preservation. To construct the conformal symplectic method, we first reformulate the damped acoustic wave equation and the elastic wave equations in their equivalent conformal multi-symplectic structures, which naturally reveal the intrinsic properties of the original systems, especially the dissipation laws. We thereafter separate each structure into a conservative Hamiltonian system and a purely dissipative ordinary differential equation system. Based on this splitting methodology, we solve the two subsystems respectively. The dissipative one is solved cheaply by its analytic solution, while for the conservative system we combine a fourth-order symplectic Nyström method in time with the spectral element method in space to cover circumstances in realistic geological structures involving complex free-surface topography. The Strang composition method is adopted thereby to concatenate the corresponding two parts of the solutions and generate the complete numerical scheme, which is conformal symplectic and can therefore guarantee numerical stability and dissipation preservation over long-time modeling. Additionally, a relatively larger Courant number than that of the traditional Newmark scheme is found in the numerical experiments, in conjunction with a spatial sampling of approximately 5 points per wavelength. A benchmark test for the damped acoustic wave equation validates the effectiveness of our proposed method in precisely capturing the dissipation rate. The classical Lamb problem is used to demonstrate the ability of modeling Rayleigh-wave propagation.
More comprehensive numerical experiments are presented to investigate the long-time simulation, low dispersion and energy conservation properties of the conformal symplectic method in both attenuating homogeneous and heterogeneous media.
NASA Astrophysics Data System (ADS)
Barbosa, Gabriel D.; Thibes, Ronaldo
2018-06-01
We consider a second-degree algebraic curve describing a general conic constraint imposed on the motion of a massive spinless particle. The problem is trivial at the classical level, but its quantum counterpart becomes involved and interesting, with subtleties in its symplectic structure and symmetries. We start with a second-class version of the general conic constrained particle, which encompasses previous versions of circular and elliptical paths discussed in the literature. By applying the symplectic FJBW iteration program, we proceed to show how a gauge invariant version of the model can be achieved from the originally second-class system. We pursue the complete constraint analysis in phase space and perform the Faddeev-Jackiw symplectic quantization following the Barcelos-Wotzasek iteration program to unravel the essential aspects of the constraint structure. While in the standard Dirac-Bergmann approach there are four second-class constraints, in the FJBW approach they reduce to two. By using the symplectic potential obtained in the last step of the FJBW iteration process, we construct a gauge invariant model exhibiting explicitly its BRST symmetry. We obtain the quantum BRST charge and write the Green functions generator for the gauge invariant version. Our results reproduce and neatly generalize the known BRST symmetry of the rigid rotor, clearly showing that the latter constitutes a particular case of a broader class of theories.
Pre-symplectic algebroids and their applications
NASA Astrophysics Data System (ADS)
Liu, Jiefeng; Sheng, Yunhe; Bai, Chengming
2018-03-01
In this paper, we introduce the notion of a pre-symplectic algebroid and show that there is a one-to-one correspondence between pre-symplectic algebroids and symplectic Lie algebroids. This result is the geometric generalization of the relation between left-symmetric algebras and symplectic (Frobenius) Lie algebras. Although pre-symplectic algebroids are not left-symmetric algebroids, they still can be viewed as the underlying structures of symplectic Lie algebroids. Then we study exact pre-symplectic algebroids and show that they are classified by the third cohomology group of a left-symmetric algebroid. Finally, we study para-complex pre-symplectic algebroids. Associated with a para-complex pre-symplectic algebroid, there is a pseudo-Riemannian Lie algebroid. The multiplication in a para-complex pre-symplectic algebroid characterizes the restriction to the Lagrangian subalgebroids of the Levi-Civita connection in the corresponding pseudo-Riemannian Lie algebroid.
An hp symplectic pseudospectral method for nonlinear optimal control
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong
2017-01-01
An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method, on one hand, exhibits exponential convergence when the number of collocation points is increased with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence when the number of sub-intervals is increased with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of the dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Xie, Hong-Bo; Dokos, Socrates
2013-06-01
We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.
Higher order explicit symmetric integrators for inseparable forms of coordinates and momenta
NASA Astrophysics Data System (ADS)
Liu, Lei; Wu, Xin; Huang, Guoqing; Liu, Fuyao
2016-06-01
Pihajoki proposed extended phase-space second-order explicit symmetric leapfrog methods for inseparable Hamiltonian systems. On the basis of this work, we survey a critical problem: how to mix the variables in the extended phase space. Numerical tests show that sequent permutations of coordinates and momenta can make the leapfrog-like methods yield the most accurate results and the optimal long-term stabilized error behaviour. We also present a novel method to construct many fourth-order extended phase-space explicit symmetric integration schemes. Each scheme represents a symmetric product of six usual second-order leapfrogs without any permutations. This construction consists of four segments: the permuted coordinates, a triple product of the usual second-order leapfrog without permutations, the permuted momenta, and a triple product of the usual second-order leapfrog without permutations. Similarly, extended phase-space sixth-, eighth- and other higher-order explicit symmetric algorithms are available. We used several inseparable Hamiltonian examples, such as the post-Newtonian approach of non-spinning compact binaries, to show that one of the proposed fourth-order methods is more efficient than the existing methods; examples include the fourth-order explicit symplectic integrators of Chin and the fourth-order explicit and implicit mixed symplectic integrators of Zhong et al. Given a moderate choice for the related mixing and projection maps, the extended phase-space explicit symplectic-like methods are well suited for various inseparable Hamiltonian problems. Samples of these problems involve the algorithmic regularization of gravitational systems with velocity-dependent perturbations in the Solar system and post-Newtonian Hamiltonian formulations of spinning compact objects.
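A minimal sketch of the extended phase-space idea: the variables are doubled to (q, p, Q, P), each sub-Hamiltonian H(q, P) and H(Q, p) generates an exactly solvable flow, and the two copies are mixed after each step (here with a midpoint average). The toy inseparable Hamiltonian H(q, p) = (q² + 1)(p² + 1)/2 and all parameter values are illustrative assumptions, not taken from Pihajoki's or the authors' constructions.

```python
# Toy inseparable Hamiltonian H(q, p) = (q**2 + 1) * (p**2 + 1) / 2.
def dHdq(q, p):
    return q * (p * p + 1.0)

def dHdp(q, p):
    return p * (q * q + 1.0)

def extended_leapfrog_step(q, p, Q, P, h):
    """One second-order symmetric step in the doubled phase space."""
    # Half flow of H1 = H(q, P): q and P are frozen; Q and p advance exactly.
    Q += 0.5 * h * dHdp(q, P)
    p -= 0.5 * h * dHdq(q, P)
    # Full flow of H2 = H(Q, p): Q and p are frozen; q and P advance exactly.
    q += h * dHdp(Q, p)
    P -= h * dHdq(Q, p)
    # Second half flow of H1.
    Q += 0.5 * h * dHdp(q, P)
    p -= 0.5 * h * dHdq(q, P)
    # Midpoint mixing: feed the averages back as both copies.
    qm, pm = 0.5 * (q + Q), 0.5 * (p + P)
    return qm, pm, qm, pm

q, p, Q, P = 1.0, 0.0, 1.0, 0.0
for _ in range(1000):
    q, p, Q, P = extended_leapfrog_step(q, p, Q, P, 0.01)
H = 0.5 * (q * q + 1.0) * (p * p + 1.0)   # stays near its initial value 1.0
```

Each substep is explicit because freezing half of the doubled variables makes the corresponding flow exactly integrable, which is what makes the approach attractive for inseparable Hamiltonians.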
Wigner functions on non-standard symplectic vector spaces
NASA Astrophysics Data System (ADS)
Dias, Nuno Costa; Prata, João Nuno
2018-01-01
We consider the Weyl quantization on a flat non-standard symplectic vector space. We focus mainly on the properties of the Wigner functions defined therein. In particular we show that the sets of Wigner functions on distinct symplectic spaces are different but have non-empty intersections. This extends previous results to arbitrary dimension and arbitrary (constant) symplectic structure. As a by-product we introduce and prove several concepts and results on non-standard symplectic spaces which generalize those on the standard symplectic space, namely, the symplectic spectrum, Williamson's theorem, and Narcowich-Wigner spectra. We also show how Wigner functions on non-standard symplectic spaces behave under the action of an arbitrary linear coordinate transformation.
Equilibrium Solutions of the Logarithmic Hamiltonian Leapfrog for the N-body Problem
NASA Astrophysics Data System (ADS)
Minesaki, Yukitaka
2018-04-01
We prove that a second-order logarithmic Hamiltonian leapfrog for the classical general N-body problem (CGNBP) designed by Mikkola and Tanikawa and some higher-order logarithmic Hamiltonian methods based on symmetric multicompositions of the logarithmic algorithm exactly reproduce the orbits of elliptic relative equilibrium solutions in the original CGNBP. These methods are explicit symplectic methods. Before this proof, only some implicit discrete-time CGNBPs proposed by Minesaki had been analytically shown to trace the orbits of elliptic relative equilibrium solutions. The proof is therefore the first existence proof for explicit symplectic methods. Such logarithmic Hamiltonian methods with a variable time step can also precisely retain periodic orbits in the classical general three-body problem, which generic numerical methods with a constant time step cannot do.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Jianbo, E-mail: jianbocui@lsec.cc.ac.cn; Hong, Jialin, E-mail: hjl@lsec.cc.ac.cn; Liu, Zhihui, E-mail: liuzhihui@lsec.cc.ac.cn
We indicate that the nonlinear Schrödinger equation with white noise dispersion possesses stochastic symplectic and multi-symplectic structures. Based on these structures, we propose the stochastic symplectic and multi-symplectic methods, which preserve the continuous and discrete charge conservation laws, respectively. Moreover, we show that the proposed methods are convergent with temporal order one in probability. Numerical experiments are presented to verify our theoretical results.
NASA Astrophysics Data System (ADS)
Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.
2013-10-01
In this work we derive symplectic EF/TF RKN methods by symplectic EF/TF PRK methods. Also EF/TF symplectic RKN methods are constructed directly from classical symplectic RKN methods. Several numerical examples will be given in order to decide which is the most favourable implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Punjabi, Alkesh; Ali, Halima
2011-02-15
Any canonical transformation of Hamiltonian equations is symplectic, and any area-preserving transformation in 2D is a symplectomorphism. Based on these, a discrete symplectic map and its continuous symplectic analog are derived for forward magnetic field line trajectories in natural canonical coordinates. The unperturbed axisymmetric Hamiltonian for magnetic field lines is constructed from the experimental data in the DIII-D [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)]. The equilibrium Hamiltonian is a highly accurate, analytic, and realistic representation of the magnetic geometry of the DIII-D. These symplectic mathematical maps are used to calculate the magnetic footprint on the inboard collector plate in the DIII-D. Internal statistical topological noise and field errors are irreducible and ubiquitous in magnetic confinement schemes for fusion. It is important to know the stochasticity and magnetic footprint from noise and error fields. The estimates of the spectrum and mode amplitudes of the spatial topological noise and magnetic errors in the DIII-D are used as magnetic perturbation. The discrete and continuous symplectic maps are used to calculate the magnetic footprint on the inboard collector plate of the DIII-D by inverting the natural coordinates to physical coordinates. The combination of a highly accurate equilibrium generating function, natural canonical coordinates, symplecticity, and small step size together gives a very accurate calculation of the magnetic footprint. Radial variation of magnetic perturbation and the response of plasma to perturbation are not included. The inboard footprint from noise and errors is dominated by the m=3, n=1 mode. The footprint is in the form of a toroidally winding helical strip. The width of the stochastic layer scales as the 1/2 power of the amplitude. The area of the footprint scales as the first power of the amplitude.
The physical parameters such as the toroidal angle, length, and poloidal angle covered before striking, and the safety factor all have fractal structure. The average field diffusion near the X-point for lines that strike and those that do not strike differs by about three to four orders of magnitude. The magnetic footprint gives the maximal bounds on size and heat flux density on the collector plate.
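As a generic illustration of the kind of area-preserving (symplectic) map used for field-line trajectories, here is the Chirikov standard map in action-angle form. It is only a toy stand-in, not the DIII-D equilibrium map of the paper; the stochasticity parameter k and starting point are arbitrary.

```python
import math

# Chirikov standard map: psi is the action (flux-like) variable, theta the angle.
def standard_map(theta, psi, k):
    psi_new = psi + k * math.sin(theta)              # perturbation "kick"
    theta_new = (theta + psi_new) % (2.0 * math.pi)  # rotation by the new action
    return theta_new, psi_new

# The Jacobian is [[1 + k*cos(theta), 1], [k*cos(theta), 1]] with determinant
# exactly 1, so the map preserves phase-space area (is symplectic) for every k.
theta, psi = standard_map(1.0, 0.3, 0.9)
```

Above a critical k, such maps develop the stochastic layers whose widths and footprints are the subject of the scaling results quoted in the abstract.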
Minimal models of compact symplectic semitoric manifolds
NASA Astrophysics Data System (ADS)
Kane, D. M.; Palmer, J.; Pelayo, Á.
2018-02-01
A symplectic semitoric manifold is a symplectic 4-manifold endowed with a Hamiltonian (S1 × R) -action satisfying certain conditions. The goal of this paper is to construct a new symplectic invariant of symplectic semitoric manifolds, the helix, and give applications. The helix is a symplectic analogue of the fan of a nonsingular complete toric variety in algebraic geometry, that takes into account the effects of the monodromy near focus-focus singularities. We give two applications of the helix: first, we use it to give a classification of the minimal models of symplectic semitoric manifolds, where "minimal" is in the sense of not admitting any blowdowns. The second application is an extension to the compact case of a well known result of Vũ Ngọc about the constraints posed on a symplectic semitoric manifold by the existence of focus-focus singularities. The helix permits to translate a symplectic geometric problem into an algebraic problem, and the paper describes a method to solve this type of algebraic problem.
Life on the Edge of Chaos: Orbital Mechanics and Symplectic Integration
NASA Astrophysics Data System (ADS)
Newman, William I.; Hyman, James M.
1998-09-01
Symplectic mapping techniques have become very popular among celestial mechanicians and molecular dynamicists. The word "symplectic" was coined by Hermann Weyl (1939), exploiting the Greek root for a word meaning "complex," to describe a Lie group with special geometric properties. A symplectic integration method is one whose time-derivative satisfies Hamilton's equations of motion (Goldstein, 1980). When due care is paid to the standard computational triad of consistency, accuracy, and stability, a numerical method that is also symplectic offers some potential advantages. Varadarajan (1974) at UCLA was the first to formally explore, for a very restrictive class of problems, the geometric implications of symplectic splittings through the use of Lie series and group representations. Over the years, however, a "mythology" has emerged regarding the nature of symplectic mappings and what features are preserved. Some of these myths have already been shattered by the computational mathematics community. These results, together with new ones we present here for the first time, show where important pitfalls and misconceptions reside. These misconceptions include that: (a) symplectic maps preserve conserved quantities like the energy; (b) symplectic maps are equivalent to the exact computation of the trajectory of a nearby, time-independent Hamiltonian; (c) complicated splitting methods (i.e., "maps in composition") are not symplectic; (d) symplectic maps preserve the geometry associated with separatrices and homoclinic points; and (e) symplectic maps possess artificial resonances at triple and quadruple frequencies. We verify, nevertheless, that using symplectic methods together with traditional safeguards, e.g. convergence and scaling checks using reduced step sizes for integration schemes of sufficient order, can provide an important exploratory and development tool for Solar System applications.
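Misconception (a) is easy to check numerically: a symplectic leapfrog does not conserve the energy exactly, but its energy error oscillates and stays bounded, whereas non-symplectic forward Euler drifts secularly. A minimal harmonic-oscillator sketch (step sizes and step counts are arbitrary choices):

```python
# Harmonic oscillator H = (p^2 + q^2)/2, unit mass and frequency.
def max_leapfrog_energy_error(q, p, h, n):
    e0 = 0.5 * (p * p + q * q)
    worst = 0.0
    for _ in range(n):
        p -= 0.5 * h * q        # kick-drift-kick leapfrog (symplectic)
        q += h * p
        p -= 0.5 * h * q
        worst = max(worst, abs(0.5 * (p * p + q * q) - e0))
    return worst

def euler_energy_error(q, p, h, n):
    e0 = 0.5 * (p * p + q * q)
    for _ in range(n):
        q, p = q + h * p, p - h * q   # forward Euler (not symplectic)
    return abs(0.5 * (p * p + q * q) - e0)

bounded = max_leapfrog_energy_error(1.0, 0.0, 0.1, 10000)  # stays small for all time
drifting = euler_energy_error(1.0, 0.0, 0.1, 10000)        # grows without bound
```

The bounded oscillation is usually explained via a nearby "shadow" Hamiltonian, which is itself a hedged statement: as the abstract notes, that equivalence holds only in a formal, not exact, sense.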
Symplecticity in Beam Dynamics: An Introduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rees, John R
2003-06-10
A particle in a particle accelerator can often be considered a Hamiltonian system, and when that is the case, its motion obeys the constraints of the Symplectic Condition. This tutorial monograph derives the condition from the requirement that a canonical transformation must yield a new Hamiltonian system from an old one. It then explains some of the consequences of symplecticity and discusses examples of its applications, touching on symplectic matrices, phase space and Liouville's Theorem, Lagrange and Poisson brackets, Lie algebra, Lie operators and Lie transformations, symplectic maps and symplectic integrators.
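The Symplectic Condition for linear transfer maps can be checked numerically in a few lines (an illustrative sketch, not code from the monograph): a matrix M is symplectic exactly when M^T J M = J, which holds for the drift and thin-lens maps of beam optics but fails for any dissipative map.

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # symplectic form in (q, p)

def is_symplectic(M, tol=1e-12):
    # Symplectic Condition: M^T J M = J
    return np.allclose(M.T @ J @ M, J, atol=tol)

# hypothetical linear transfer maps: drift of length L, thin lens of strength k
L, k = 2.0, 0.5
drift = np.array([[1.0, L], [0.0, 1.0]])
lens  = np.array([[1.0, 0.0], [-k, 1.0]])

print(is_symplectic(drift), is_symplectic(lens), is_symplectic(lens @ drift))  # True True True

# a damped (dissipative) map shrinks phase space area and violates the condition
damped = 0.9 * drift
print(is_symplectic(damped))  # False
```

In one degree of freedom the condition reduces to det M = 1, i.e. phase space area preservation (Liouville's Theorem for linear maps).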
Poly-symplectic Groupoids and Poly-Poisson Structures
NASA Astrophysics Data System (ADS)
Martinez, Nicolas
2015-05-01
We introduce poly-symplectic groupoids, which are natural extensions of symplectic groupoids to the context of poly-symplectic geometry, and define poly-Poisson structures as their infinitesimal counterparts. We present equivalent descriptions of poly-Poisson structures, including one related with AV-Dirac structures. We also discuss symmetries and reduction in the setting of poly-symplectic groupoids and poly-Poisson structures, and use our viewpoint to revisit results and develop new aspects of the theory initiated in Iglesias et al. (Lett Math Phys 103:1103-1133, 2013).
Formal Symplectic Groupoid of a Deformation Quantization
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
2005-08-01
We give a self-contained algebraic description of a formal symplectic groupoid over a Poisson manifold M. To each natural star product on M we then associate a canonical formal symplectic groupoid over M. Finally, we construct a unique formal symplectic groupoid ‘with separation of variables’ over an arbitrary Kähler-Poisson manifold.
Multi-symplectic integrators: numerical schemes for Hamiltonian PDEs that conserve symplecticity
NASA Astrophysics Data System (ADS)
Bridges, Thomas J.; Reich, Sebastian
2001-06-01
The symplectic numerical integration of finite-dimensional Hamiltonian systems is a well established subject and has led to a deeper understanding of existing methods as well as to the development of new very efficient and accurate schemes, e.g., for rigid body, constrained, and molecular dynamics. The numerical integration of infinite-dimensional Hamiltonian systems or Hamiltonian PDEs is much less explored. In this Letter, we suggest a new theoretical framework for generalizing symplectic numerical integrators for ODEs to Hamiltonian PDEs in R2: time plus one space dimension. The central idea is that symplecticity for Hamiltonian PDEs is directional: the symplectic structure of the PDE is decomposed into distinct components representing space and time independently. In this setting PDE integrators can be constructed by concatenating uni-directional ODE symplectic integrators. This suggests a natural definition of multi-symplectic integrator as a discretization that conserves a discrete version of the conservation of symplecticity for Hamiltonian PDEs. We show that this approach leads to a general framework for geometric numerical schemes for Hamiltonian PDEs, which have remarkable energy and momentum conservation properties. Generalizations, including development of higher-order methods, application to the Euler equations in fluid mechanics, application to perturbed systems, and extension to more than one space dimension are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forest, E.; Bengtsson, J.; Reusch, M.F.
1991-04-01
The full power of Yoshida's technique is exploited to produce an arbitrary order implicit symplectic integrator and multi-map explicit integrator. This implicit integrator uses a characteristic function involving the force term alone. Also we point out the usefulness of the plain Ruth algorithm in computing Taylor series map using the techniques first introduced by Berz in his 'COSY-INFINITY' code.
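Yoshida's composition technique mentioned above can be sketched as follows (a hypothetical illustration, not the authors' implementation): a fourth-order symplectic integrator is obtained by composing three second-order Verlet substeps with weights w1, w0, w1, where w0 is negative.

```python
def verlet(q, p, dt, force):
    # second-order Stormer-Verlet building block
    p = p + 0.5 * dt * force(q)
    q = q + dt * p
    p = p + 0.5 * dt * force(q)
    return q, p

# Yoshida's fourth-order weights: w1 = 1/(2 - 2^(1/3)), w0 = 1 - 2*w1 < 0
w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
w0 = 1.0 - 2.0 * w1

def yoshida4(q, p, dt, force):
    # composition of three Verlet substeps cancels the third-order error
    for w in (w1, w0, w1):
        q, p = verlet(q, p, w * dt, force)
    return q, p

force = lambda q: -q                    # harmonic oscillator test problem
H = lambda q, p: 0.5 * (p * p + q * q)
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = yoshida4(q, p, 0.1, force)
print(abs(H(q, p) - 0.5))               # bounded energy error, O(dt^4)
```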
Alternative bi-Hamiltonian structures for WDVV equations of associativity
NASA Astrophysics Data System (ADS)
Kalayci, J.; Nutku, Y.
1998-01-01
The WDVV equations of associativity in two-dimensional topological field theory are completely integrable third-order Monge-Ampère equations which admit bi-Hamiltonian structure. The time variable plays a distinguished role in the discussion of Hamiltonian structure, whereas in the theory of WDVV equations none of the independent variables merits such a distinction. WDVV equations admit very different alternative Hamiltonian structures under different possible choices of the time variable, but all these various Hamiltonian formulations can be brought together in the framework of the covariant theory of symplectic structure. They can be identified as different components of the covariant Witten-Zuckerman symplectic 2-form current density, given a variational formulation of the WDVV equation that leads to the Hamiltonian operator through the Dirac bracket.
A modified symplectic PRK scheme for seismic wave modeling
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Yang, Dinghui; Ma, Jian
2017-02-01
A new scheme for the temporal discretization of the seismic wave equation is constructed based on symplectic geometric theory and a modified strategy. The ordinary differential equation in terms of time, which is obtained after spatial discretization via the spectral-element method, is transformed into a Hamiltonian system. A symplectic partitioned Runge-Kutta (PRK) scheme is used to solve the Hamiltonian system. A term related to the multiplication of the spatial discretization operator with the seismic wave velocity vector is added into the symplectic PRK scheme to create a modified symplectic PRK scheme. The symplectic coefficients of the new scheme are determined via Taylor series expansion. The positive coefficients of the scheme indicate that its long-term computational capability is more powerful than that of conventional symplectic schemes. An exhaustive theoretical analysis reveals that the new scheme is highly stable and has low numerical dispersion. The results of three numerical experiments demonstrate the high efficiency of this method for seismic wave modeling.
Highly accurate symplectic element based on two variational principles
NASA Astrophysics Data System (ADS)
Qing, Guanghui; Tian, Jia
2018-02-01
Owing to the stability requirements on numerical results, the mathematical theory of classical mixed methods is relatively complex. Generalized mixed methods, by contrast, are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle with the minimum energy principle. To ensure the accuracy of the in-plane stress results, a simultaneous equation approach was also suggested. Numerical experiments show that the accuracy of the stress results of NCSE8 is nearly the same as that of displacement methods, and the results agree well with the exact solutions when the mesh is relatively fine. NCSE8 offers a clear concept, easy implementation in a finite element program, high accuracy, and wide applicability to compressible and nearly incompressible linear elastic problems. NCSE8 may prove even more advantageous for fracture problems owing to its improved stress accuracy.
Mirror symmetry in emergent gravity
NASA Astrophysics Data System (ADS)
Yang, Hyun Seok
2017-09-01
Given a six-dimensional symplectic manifold (M, B), a nondegenerate, co-closed four-form C introduces a dual symplectic structure B̃ = *C, independent of B, via the Hodge duality *. We show that the doubling of symplectic structures due to the Hodge duality results in two independent classes of noncommutative U(1) gauge fields by considering the Seiberg-Witten map for each symplectic structure. As a result, emergent gravity suggests a beautiful picture that the variety of six-dimensional manifolds emergent from noncommutative U(1) gauge fields is doubled. In particular, the doubling for the variety of emergent Calabi-Yau manifolds allows us to arrange a pair of Calabi-Yau manifolds such that they are mirror to each other. Therefore, we argue that the mirror symmetry of Calabi-Yau manifolds is the Hodge theory for the deformation of symplectic and dual symplectic structures.
Symmetries of the Space of Linear Symplectic Connections
NASA Astrophysics Data System (ADS)
Fox, Daniel J. F.
2017-01-01
There is constructed a family of Lie algebras that act in a Hamiltonian way on the symplectic affine space of linear symplectic connections on a symplectic manifold. The associated equivariant moment map is a formal sum of the Cahen-Gutt moment map, the Ricci tensor, and a translational term. The critical points of a functional constructed from it interpolate between the equations for preferred symplectic connections and the equations for critical symplectic connections. The commutative algebra of formal sums of symmetric tensors on a symplectic manifold carries a pair of compatible Poisson structures, one induced from the canonical Poisson bracket on the space of functions on the cotangent bundle polynomial in the fibers, and the other induced from the algebraic fiberwise Schouten bracket on the symmetric algebra of each fiber of the cotangent bundle. These structures are shown to be compatible, and the required Lie algebras are constructed as central extensions of their linear combinations restricted to formal sums of symmetric tensors whose first order term is a multiple of the differential of its zeroth order term.
Constant symplectic 2-groupoids
NASA Astrophysics Data System (ADS)
Mehta, Rajan Amit; Tang, Xiang
2018-05-01
We propose a definition of symplectic 2-groupoid which includes integrations of Courant algebroids that have been recently constructed. We study in detail the simple but illustrative case of constant symplectic 2-groupoids. We show that the constant symplectic 2-groupoids are, up to equivalence, in one-to-one correspondence with a simple class of Courant algebroids that we call constant Courant algebroids. Furthermore, we find a correspondence between certain Dirac structures and Lagrangian sub-2-groupoids.
Explicit methods in extended phase space for inseparable Hamiltonian problems
NASA Astrophysics Data System (ADS)
Pihajoki, Pauli
2015-03-01
We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to a general purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
Symplectic discretization for spectral element solution of Maxwell's equations
NASA Astrophysics Data System (ADS)
Zhao, Yanmin; Dai, Guidong; Tang, Yifa; Liu, Qinghuo
2009-08-01
Applying the spectral element method (SEM) based on the Gauss-Lobatto-Legendre (GLL) polynomial to discretize Maxwell's equations, we obtain a Poisson system or a Poisson system with at most a perturbation. For the system, we prove that any symplectic partitioned Runge-Kutta (PRK) method preserves the Poisson structure and its implied symplectic structure. Numerical examples show the high accuracy of SEM and the benefit of conserving energy due to the use of symplectic methods.
SMD-based numerical stochastic perturbation theory
NASA Astrophysics Data System (ADS)
Dalla Brida, Mattia; Lüscher, Martin
2017-05-01
The viability of a variant of numerical stochastic perturbation theory, where the Langevin equation is replaced by the SMD algorithm, is examined. In particular, the convergence of the process to a unique stationary state is rigorously established and the use of higher-order symplectic integration schemes is shown to be highly profitable in this context. For illustration, the gradient-flow coupling in finite volume with Schrödinger functional boundary conditions is computed to two-loop (i.e. NNL) order in the SU(3) gauge theory. The scaling behaviour of the algorithm turns out to be rather favourable in this case, which allows the computations to be driven close to the continuum limit.
Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes
NASA Astrophysics Data System (ADS)
Harrington, James William
Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.
Symplectic exponential Runge-Kutta methods for solving nonlinear Hamiltonian systems
NASA Astrophysics Data System (ADS)
Mei, Lijie; Wu, Xinyuan
2017-06-01
Symplecticity is an important structure-preserving property for exponential Runge-Kutta (ERK) methods once the underlying problem is a Hamiltonian system, even though ERK methods already offer higher accuracy and better efficiency than classical Runge-Kutta (RK) methods in dealing with stiff problems y′(t) = My + f(y). On account of this observation, the main theme of this paper is to derive and analyze symplecticity conditions for ERK methods. Using the fundamental analysis of geometric integrators, we first establish one class of sufficient conditions for symplectic ERK methods. These conditions reduce to the conventional ones when M → 0, which means that they extend the conventional symplecticity conditions in the literature. Furthermore, we also present a new class of structure-preserving ERK methods possessing the remarkable property of symplecticity. Meanwhile, the revised stiff order conditions are proposed and investigated in detail. Since symplectic ERK methods are implicit and iterative solutions are required in practice, we also investigate the convergence of the corresponding fixed-point iterative procedure. Finally, numerical experiments, including a nonlinear Schrödinger equation, a sine-Gordon equation, a nonlinear Klein-Gordon equation, and the well-known Fermi-Pasta-Ulam problem, are implemented in comparison with the corresponding symplectic RK methods, and the numerical results coincide with the theory and conclusions of this paper.
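The conventional symplecticity conditions for classical RK methods, which the paper's ERK conditions generalize in the limit M → 0, can be checked numerically (an illustrative sketch, not the paper's own conditions): a method with coefficients (A, b) is symplectic when b_i a_ij + b_j a_ji − b_i b_j = 0 for all i, j.

```python
import numpy as np

def is_symplectic_rk(A, b, tol=1e-12):
    # classical condition: M_ij = b_i a_ij + b_j a_ji - b_i b_j = 0 for all i, j
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    BA = b[:, None] * A                    # entries b_i * a_ij
    M = BA + BA.T - np.outer(b, b)
    return bool(np.all(np.abs(M) < tol))

# implicit midpoint (1-stage Gauss method): symplectic
print(is_symplectic_rk([[0.5]], [1.0]))             # True

# 2-stage Gauss-Legendre method: symplectic
s3 = np.sqrt(3.0) / 6.0
A2 = [[0.25, 0.25 - s3], [0.25 + s3, 0.25]]
print(is_symplectic_rk(A2, [0.5, 0.5]))             # True

# classical explicit RK4: not symplectic
A4 = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
print(is_symplectic_rk(A4, [1/6, 1/3, 1/3, 1/6]))   # False
```

Note that the symplectic Gauss methods above are implicit, which is why the paper's discussion of fixed-point iteration matters in practice.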
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
Fedosov Deformation Quantization as a BRST Theory
NASA Astrophysics Data System (ADS)
Grigoriev, M. A.; Lyakhovich, S. L.
The relationship is established between the Fedosov deformation quantization of a general symplectic manifold and the BFV-BRST quantization of constrained dynamical systems. The original symplectic manifold M is presented as a second class constrained surface in the fibre bundle T*ρM which is a certain modification of a usual cotangent bundle equipped with a natural symplectic structure. The second class system is converted into the first class one by continuation of the constraints into the extended manifold, being a direct sum of T*ρM and the tangent bundle TM. This extended manifold is equipped with a nontrivial Poisson bracket which naturally involves two basic ingredients of Fedosov geometry: the symplectic structure and the symplectic connection. The constructed first class constrained theory, being equivalent to the original symplectic manifold, is quantized through the BFV-BRST procedure. The existence theorem is proven for the quantum BRST charge and the quantum BRST invariant observables. The adjoint action of the quantum BRST charge is identified with the Abelian Fedosov connection while any observable, being proven to be a unique BRST invariant continuation for the values defined in the original symplectic manifold, is identified with the Fedosov flat section of the Weyl bundle. The Fedosov fibrewise star multiplication is thus recognized as a conventional product of the quantum BRST invariant observables.
Poisson traces, D-modules, and symplectic resolutions
NASA Astrophysics Data System (ADS)
Etingof, Pavel; Schedler, Travis
2018-03-01
We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.
Normal forms for Poisson maps and symplectic groupoids around Poisson transversals
NASA Astrophysics Data System (ADS)
Frejlich, Pedro; Mărcuț, Ioan
2018-03-01
Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.
Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Colella, Phillip
2007-11-01
We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher order, conservative and directionally unsplit Godunov’s method for hydrodynamics; a symmetric, time centered modified symplectic scheme for collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of various implemented schemes are also presented.
Symplectic multiparticle tracking model for self-consistent space-charge simulation
Qiang, Ji
2017-01-23
Symplectic tracking is important in accelerator beam dynamics simulation. So far, to the best of our knowledge, there is no self-consistent symplectic space-charge tracking model available in the accelerator community. In this paper, we present a two-dimensional and a three-dimensional symplectic multiparticle spectral model for space-charge tracking simulation. This model includes both the effect from external fields and the effect of self-consistent space-charge fields using a split-operator method. Such a model preserves the phase space structure and shows much less numerical emittance growth than the particle-in-cell model in the illustrative examples.
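The split-operator idea — alternating a symplectic map for the external fields with a gradient "kick" from the self-field — can be sketched in a toy 1D model (entirely hypothetical; the paper's model is a 2D/3D self-consistent spectral solver). Because the self-field force below is the gradient of an interaction potential, each sub-map is symplectic and so is their composition.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
q = rng.normal(size=n)    # particle positions (1D toy beam)
p = rng.normal(size=n)    # momenta

def sc_force(q, eps=0.5):
    # softened pairwise "space-charge" force; it is the (negative) gradient of
    # an interaction potential, so the kick it generates is a symplectic map
    dq = q[:, None] - q[None, :]
    f = dq / (dq**2 + eps**2) ** 1.5
    np.fill_diagonal(f, 0.0)
    return f.sum(axis=1) / len(q)

def energy(q, p, k=1.0, eps=0.5):
    # kinetic + external focusing + pairwise interaction energy
    dq = q[:, None] - q[None, :]
    v = 1.0 / np.sqrt(dq**2 + eps**2)
    np.fill_diagonal(v, 0.0)
    return 0.5 * p @ p + 0.5 * k * q @ q + 0.5 * v.sum() / len(q)

def split_step(q, p, dt, k=1.0):
    # half kick (external focusing + self-field), full drift, half kick
    p = p + 0.5 * dt * (-k * q + sc_force(q))
    q = q + dt * p
    p = p + 0.5 * dt * (-k * q + sc_force(q))
    return q, p

e0 = energy(q, p)
for _ in range(200):
    q, p = split_step(q, p, 0.01)
print(abs(energy(q, p) - e0) / abs(e0))   # small relative energy drift
```

The phase-space-preserving character of such a split map is what suppresses the spurious numerical emittance growth mentioned in the abstract.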
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuttis, Hans-Georg; Wang, Xiaoxing
Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them both to classical ordinary differential equations (ODEs) and to quantum systems makes it possible to judge their effectiveness and gives new insights into many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (the minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.
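The order behavior of Suzuki-Trotter-type decompositions can be checked directly on non-commuting matrices (an illustrative sketch, not the authors' setup): the first-order Lie splitting e^{hA} e^{hB} has O(h²) local error against e^{h(A+B)}, while the second-order Strang splitting has O(h³) local error, so halving h cuts the one-step errors by factors of about 4 and 8 respectively.

```python
import numpy as np

def expm(M, terms=30):
    # matrix exponential via truncated Taylor series (fine for small ||M||)
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# two non-commuting generators (stand-ins for kinetic and potential parts)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

def error(split, h):
    # one-step error of a splitting against the exact propagator
    return np.linalg.norm(split(h) - expm(h * (A + B)))

lie    = lambda h: expm(h * A) @ expm(h * B)                        # 1st-order Lie splitting
strang = lambda h: expm(0.5 * h * A) @ expm(h * B) @ expm(0.5 * h * A)  # 2nd-order Strang splitting

for h in (0.1, 0.05):
    print(error(lie, h), error(strang, h))
```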
A new multi-symplectic scheme for the generalized Kadomtsev-Petviashvili equation
NASA Astrophysics Data System (ADS)
Li, Haochen; Sun, Jianqiang
2012-09-01
We propose a new scheme for the generalized Kadomtsev-Petviashvili (KP) equation. The multi-symplectic conservation property of the new scheme is proved. Backward error analysis shows that the new multi-symplectic scheme has second-order accuracy in space and time. Numerical applications to the KPI and KPII equations are presented in detail.
DYNAMIC STABILITY OF THE SOLAR SYSTEM: STATISTICALLY INCONCLUSIVE RESULTS FROM ENSEMBLE INTEGRATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeebe, Richard E., E-mail: zeebe@soest.hawaii.edu
Due to the chaotic nature of the solar system, the question of its long-term stability can only be answered in a statistical sense, for instance, based on numerical ensemble integrations of nearby orbits. Destabilization of the inner planets, leading to close encounters and/or collisions, can be initiated through a large increase in Mercury's eccentricity, with a currently assumed likelihood of ∼1%. However, little is known at present about the robustness of this number. Here I report ensemble integrations of the full equations of motion of the eight planets and Pluto over 5 Gyr, including contributions from general relativity. The results show that different numerical algorithms lead to statistically different results for the evolution of Mercury's eccentricity (e_M). For instance, starting at present initial conditions (e_M ≃ 0.21), Mercury's maximum eccentricity achieved over 5 Gyr is, on average, significantly higher in symplectic ensemble integrations using heliocentric rather than Jacobi coordinates and stricter error control. In contrast, starting at a possible future configuration (e_M ≃ 0.53), Mercury's maximum eccentricity achieved over the subsequent 500 Myr is, on average, significantly lower using heliocentric rather than Jacobi coordinates. For example, the probability for e_M to increase beyond 0.53 over 500 Myr is >90% (Jacobi) versus only 40%-55% (heliocentric). This poses a dilemma because the physical evolution of the real system—and its probabilistic behavior—cannot depend on the coordinate system or the numerical algorithm chosen to describe it. Some tests of the numerical algorithms suggest that symplectic integrators using heliocentric coordinates underestimate the odds for destabilization of Mercury's orbit at high initial e_M.
Noncommutative mapping from the symplectic formalism
NASA Astrophysics Data System (ADS)
De Andrade, M. A.; Neves, C.
2018-01-01
Bopp's shifts will be generalized through a symplectic formalism. A special procedure, like "diagonalization," which drives the completely deformed symplectic matrix to the standard symplectic form was found as suggested by Faddeev-Jackiw. Consequently, the correspondent transformation matrix guides the mapping from commutative to noncommutative (NC) phase-space coordinates. Bopp's shifts may be directly generalized from this mapping. In this context, all the NC and scale parameters, introduced into the brackets, will be lifted to the Hamiltonian. Well-known results, obtained using ⋆-product, will be reproduced without considering that the NC parameters are small (≪1). Besides, it will be shown that different choices for NC algebra among the symplectic variables generate distinct dynamical systems, in which they may not even connect with each other, and that some of them can preserve, break, or restore the symmetry of the system. Further, we will also discuss the charge and mass rescaling in a simple model.
Fermi Blobs and the Symplectic Camel: A Geometric Picture of Quantum States
NASA Astrophysics Data System (ADS)
de Gosson, Maurice A.
We have explained in previous work the correspondence between the standard squeezed coherent states of quantum mechanics and quantum blobs, which are the smallest phase-space units compatible with the uncertainty principle of quantum mechanics and having the symplectic group as a group of symmetries. In this work, we discuss the relation between quantum blobs and a certain level set (which we call the "Fermi blob") introduced by Enrico Fermi in 1930. Fermi blobs allow us to extend our previous results not only to the excited states of the generalized harmonic oscillator in n dimensions, but also to arbitrary quadratic Hamiltonians. As is the case for quantum blobs, we can evaluate Fermi blobs using a topological notion related to the uncertainty principle: the symplectic capacity of a phase-space set. The definition of this notion is made possible by Gromov's symplectic non-squeezing theorem, nicknamed the "principle of the symplectic camel".
SYMPLECTIC INVARIANTS AND FLOWERS' CLASSIFICATION OF SHELL MODEL STATES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmers, K.
1961-01-01
Flowers has given a classification of shell-model states in j-j coupling for a fixed number of nucleons in a shell with respect to a symplectic group. The relation between these classifications for the various nucleon numbers is studied and is found to be governed by another symplectic group, the transformations of which in general change the nucleon number.
Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method
NASA Astrophysics Data System (ADS)
Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang
2017-06-01
Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation can become unstable when forward modeling of seismic waves uses large time steps over long times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method suppresses the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of wavefield propagation for large time steps.
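The core idea, symplectic time stepping of the wave equation written as a Hamiltonian system, can be sketched in a few lines. The toy below substitutes a plain second-order finite difference for the paper's Fourier finite-difference spatial operator; the grid, pulse, and CFL number are illustrative choices, not values from the paper.

```python
import numpy as np

# Stormer-Verlet (symplectic) time stepping for the 1-D acoustic wave
# equation u_tt = c^2 u_xx, written as the separable Hamiltonian system
#   u' = p,   p' = c^2 u_xx.
# Fixed (zero) ends; plain centered differences in space for brevity.

def laplacian(u, dx):
    """Second-order centered difference; endpoints held fixed."""
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return lap

def verlet_wave(u, p, c, dx, dt, nsteps):
    for _ in range(nsteps):
        p += 0.5 * dt * c**2 * laplacian(u, dx)   # half kick
        u += dt * p                               # drift
        p += 0.5 * dt * c**2 * laplacian(u, dx)   # half kick
    return u, p

def energy(u, p, c, dx):
    """Discrete energy: kinetic plus stretching term."""
    return 0.5 * np.sum(p**2) * dx + 0.5 * c**2 * np.sum(np.diff(u)**2) / dx

nx, c = 201, 1.0
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.9 * dx / c                                 # CFL-limited time step
u0 = np.exp(-200.0 * (x - 0.5) ** 2)              # Gaussian pulse
u, p = verlet_wave(u0.copy(), np.zeros(nx), c, dx, dt, 400)
print(energy(u, p, c, dx))                        # stays near its initial value
```

The point of the structure-preserving scheme is visible in the last line: the discrete energy oscillates within a narrow band instead of drifting, even after many reflections off the boundaries.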
Variational tricomplex of a local gauge system, Lagrange structure and weak Poisson bracket
NASA Astrophysics Data System (ADS)
Sharapov, A. A.
2015-09-01
We introduce the concept of a variational tricomplex, which is applicable both to variational and nonvariational gauge systems. Assigning this tricomplex with an appropriate symplectic structure and a Cauchy foliation, we establish a general correspondence between the Lagrangian and Hamiltonian pictures of one and the same (not necessarily variational) dynamics. In practical terms, this correspondence allows one to construct the generating functional of a weak Poisson structure starting from that of a Lagrange structure. As a byproduct, a covariant procedure is proposed for deriving the classical BRST charge of the BFV formalism by a given BV master action. The general approach is illustrated by the examples of Maxwell’s electrodynamics and chiral bosons in two dimensions.
NASA Astrophysics Data System (ADS)
Monovasilis, Theodore; Kalogiratou, Zacharoula; Simos, T. E.
2014-10-01
In this work we derive exponentially fitted symplectic Runge-Kutta-Nyström (RKN) methods from symplectic exponentially fitted partitioned Runge-Kutta (PRK) methods (for the approximate solution of general problems of this category see [18]-[40] and references therein). We construct RKN methods from PRK methods with up to five stages and fourth algebraic order.
Fedosov’s formal symplectic groupoids and contravariant connections
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
2006-10-01
Using Fedosov's approach we give a geometric construction of a formal symplectic groupoid over any Poisson manifold endowed with a torsion-free Poisson contravariant connection. In the case of Kähler-Poisson manifolds this construction provides, in particular, the formal symplectic groupoids with separation of variables. We show that the dual of a semisimple Lie algebra does not admit torsion-free Poisson contravariant connections.
On the n-symplectic structure of faithful irreducible representations
NASA Astrophysics Data System (ADS)
Norris, L. K.
2017-04-01
Each faithful irreducible representation of an N-dimensional vector space V1 on an n-dimensional vector space V2 is shown to define a unique irreducible n-symplectic structure on the product manifold V1×V2 . The basic details of the associated Poisson algebra are developed for the special case N = n2, and 2n-dimensional symplectic submanifolds are shown to exist.
A Symplectic Instanton Homology via Traceless Character Varieties
NASA Astrophysics Data System (ADS)
Horton, Henry T.
Since its inception, Floer homology has been an important tool in low-dimensional topology. Floer theoretic invariants of 3-manifolds tend to be either gauge theoretic or symplecto-geometric in nature, and there is a general philosophy that each gauge theoretic Floer homology should have a corresponding symplectic Floer homology and vice versa. In this thesis, we construct a Lagrangian Floer invariant for any closed, oriented 3-manifold Y (called the symplectic instanton homology of Y and denoted SI(Y)) which is conjecturally equivalent to a Floer homology defined using a certain variant of Yang-Mills gauge theory. The crucial ingredient for defining SI(Y) is the use of traceless character varieties in the symplectic setting, which allow us to avoid the debilitating technical hurdles present when one attempts to define a symplectic version of instanton Floer homologies. Floer theories are also expected to roughly satisfy the axioms of a topological quantum field theory (TQFT), and furthermore Dehn surgeries on knots should induce exact triangles of Floer homologies. Following a strategy used by Ozsvath and Szabo in the context of Heegaard Floer homology, we prove that our theory is functorial with respect to connected 4-dimensional cobordisms, so that cobordisms induce homomorphisms between symplectic instanton homologies. By studying the effect of Dehn surgeries on traceless character varieties, we establish a surgery exact triangle using work of Seidel that relates the geometry of Lefschetz fibrations with exact triangles in Lagrangian Floer theory. We further prove that Dehn surgeries on a link L in a 3-manifold Y induce a spectral sequence of symplectic instanton homologies: the E2-page is isomorphic to a direct sum of symplectic instanton homologies of all possible combinations of 0- and 1-surgeries on the components of L, and the spectral sequence converges to SI(Y).
For the branched double cover Sigma(L) of a link L in S^3, we show there is a link surgery spectral sequence whose E2-page is isomorphic to the reduced Khovanov homology of L and which converges to the symplectic instanton homology of Sigma(L).
On the Inverse Mapping of the Formal Symplectic Groupoid of a Deformation Quantization
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
2004-10-01
To each natural star product on a Poisson manifold $M$ we associate an antisymplectic involutive automorphism of the formal neighborhood of the zero section of the cotangent bundle of $M$. If $M$ is symplectic, this mapping is shown to be the inverse mapping of the formal symplectic groupoid of the star product. The construction of the inverse mapping involves modular automorphisms of the star product.
NASA Astrophysics Data System (ADS)
Stuchi, Teresa; Cardozo Dias, P.
2013-05-01
In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. How he drew the orbit may indicate how and when he developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove geometrically that Hooke’s method is a second-order symplectic area-preserving algorithm, and that the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done by exploring geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing.
NASA Astrophysics Data System (ADS)
Cardozo Dias, Penha Maria; Stuchi, T. J.
2013-11-01
In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. The drawing of the orbit may indicate how and when Newton developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove that Hooke’s method is a second-order symplectic area-preserving algorithm, and the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done, exploring the geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing.
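The impulse construction attributed to Hooke can be sketched as a kick-drift-kick step under a constant-magnitude central force. This is an illustrative reconstruction in modern leapfrog form, not the authors' code; the force strength, step size, and initial conditions are arbitrary.

```python
import numpy as np

# Hooke's impulse construction for a constant-magnitude central force,
# written as a kick-drift-kick (leapfrog) step.  The potential for a
# force of constant magnitude F directed toward the center is V(r) = F|r|.

def accel(r, F=1.0):
    return -F * r / np.linalg.norm(r)        # constant magnitude, toward center

def leapfrog_orbit(r, v, dt, nsteps, F=1.0):
    for _ in range(nsteps):
        v = v + 0.5 * dt * accel(r, F)       # kick (impulse)
        r = r + dt * v                       # drift (straight segment)
        v = v + 0.5 * dt * accel(r, F)       # kick
    return r, v

def energy(r, v, F=1.0):
    return 0.5 * np.dot(v, v) + F * np.linalg.norm(r)

r0 = np.array([1.0, 0.0])
v0 = np.array([0.0, 0.8])
r, v = leapfrog_orbit(r0, v0, dt=0.01, nsteps=20000)
print(energy(r, v) - energy(r0, v0))         # bounded: no secular drift
```

Because each impulse is directed along the radius, the scheme conserves angular momentum exactly, and being symplectic its energy error stays bounded over many orbits, which is the hallmark of the area-preserving construction discussed above.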
Symplectic partitioned Runge-Kutta scheme for Maxwell's equations
NASA Astrophysics Data System (ADS)
Huang, Zhi-Xiang; Wu, Xian-Liang
Using the symplectic partitioned Runge-Kutta (PRK) method, we construct, for the first time, a new scheme for approximating the solution to the infinite-dimensional nonseparable Hamiltonian system of Maxwell's equations. The scheme is obtained by discretizing Maxwell's equations in the time direction based on the symplectic PRK method, and then evaluating the equations in the spatial direction with a suitable finite-difference approximation. Several numerical examples are presented to verify the efficiency of the scheme.
Symplectic geometry spectrum regression for prediction of noisy time series
NASA Astrophysics Data System (ADS)
Xie, Hong-Bo; Dokos, Socrates; Sivakumar, Bellie; Mengersen, Kerrie
2016-05-01
We present the symplectic geometry spectrum regression (SGSR) technique as well as a regularized method based on SGSR for prediction of nonlinear time series. The main tool of analysis is symplectic geometry spectrum analysis, which decomposes a time series into the sum of a small number of independent and interpretable components. The key to successful regularization is to damp the higher-order symplectic geometry spectrum components. The effectiveness of SGSR and its superiority over local approximation using ordinary least squares are demonstrated through prediction of two noisy synthetic chaotic time series (Lorenz and Rössler series), and then tested on three real-world data sets (Mississippi River flow data and electromyographic and mechanomyographic signals recorded from the human body).
A Fifth-order Symplectic Trigonometrically Fitted Partitioned Runge-Kutta Method
NASA Astrophysics Data System (ADS)
Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.
2007-09-01
Trigonometrically fitted symplectic Partitioned Runge-Kutta (EFSPRK) methods for the numerical integration of Hamiltonian systems with oscillatory solutions are derived. These methods integrate exactly differential systems whose solutions can be expressed as linear combinations of the functions sin(wx) and cos(wx), w ∈ R. We modify a fifth-order symplectic PRK method with six stages so as to derive an exponentially fitted SPRK method. The methods are tested on the numerical integration of the two-body problem.
Yangians and Yang-Baxter R-operators for ortho-symplectic superalgebras
NASA Astrophysics Data System (ADS)
Fuksa, J.; Isaev, A. P.; Karakhanyan, D.; Kirschner, R.
2017-04-01
Yang-Baxter relations symmetric with respect to the ortho-symplectic superalgebras are studied. We start with the formulation of graded algebras and the linear superspace carrying the vector (fundamental) representation of the ortho-symplectic supergroup. On this basis we study the analogy of the Yang-Baxter operators considered earlier for the cases of orthogonal and symplectic symmetries: the vector (fundamental) R-matrix, the L-operator defining the Yangian algebra and its first and second order evaluations. We investigate the condition for L (u) in the case of the truncated expansion in inverse powers of u and give examples of Lie algebra representations obeying these conditions. We construct the R-operator intertwining two superspinor representations and study the fusion of L-operators involving the tensor product of such representations.
Characterization and solvability of quasipolynomial symplectic mappings
NASA Astrophysics Data System (ADS)
Hernández-Bermejo, Benito; Brenig, Léon
2004-02-01
Quasipolynomial (or QP) mappings constitute a wide generalization of the well-known Lotka-Volterra mappings, which are important in fields such as population dynamics, physics, chemistry and economics. In addition, QP mappings are a natural discrete-time analogue of the continuous QP systems, which have been extensively used in different pure and applied domains. After presenting the basic definitions and properties of QP mappings in a previous paper [1], the purpose of this work is to focus on their characterization by considering the existence of symplectic QP mappings. In what follows, such QP symplectic maps are completely characterized. Moreover, the QP formalism can be used to demonstrate that all QP symplectic mappings have an analytical solution, which is constructed explicitly and in general form. Examples are given.
A symplectic integration method for elastic filaments
NASA Astrophysics Data System (ADS)
Ladd, Tony; Misra, Gaurav
2009-03-01
Elastic rods are a ubiquitous coarse-grained model of semi-flexible biopolymers such as DNA, actin, and microtubules. The Worm-Like Chain (WLC) is the standard numerical model for semi-flexible polymers, but it is only a linearized approximation to the dynamics of an elastic rod, valid for small deflections; typically the torsional motion is neglected as well. In the standard finite-difference and finite-element formulations of an elastic rod, the continuum equations of motion are discretized in space and time, but it is then difficult to ensure that the Hamiltonian structure of the exact equations is preserved. Here we discretize the Hamiltonian itself, expressed as a line integral over the contour of the filament. This discrete representation of the continuum filament can then be integrated by one of the explicit symplectic integrators frequently used in molecular dynamics. The model systematically approximates the continuum partial differential equations, but has the same level of computational complexity as molecular dynamics and is constraint free. Numerical tests show that the algorithm is much more stable than a finite-difference formulation and can be used for high aspect ratio filaments, such as actin. We present numerical results for the deterministic and stochastic motion of single filaments.
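A minimal sketch of this idea, assuming a bead-spring discretization with stretching energy only (the approach described above also retains bending and torsion), integrated with velocity Verlet, a standard explicit symplectic scheme. All parameter values are illustrative.

```python
import numpy as np

# Discretize the filament Hamiltonian as a bead-spring chain with
# stretching energy H_stretch = (k/2) * sum_j (|x_{j+1} - x_j| - a)^2,
# then integrate with velocity Verlet (explicit, symplectic).

def forces(x, k=100.0, a=1.0):
    """Forces = -dH_stretch/dx for the bead-spring chain."""
    f = np.zeros_like(x)
    bond = x[1:] - x[:-1]
    L = np.linalg.norm(bond, axis=1, keepdims=True)
    t = k * (L - a) * bond / L        # tension along each bond
    f[:-1] += t                       # pulls bead j toward bead j+1
    f[1:] -= t
    return f

def verlet(x, v, dt, nsteps, m=1.0):
    f = forces(x)
    for _ in range(nsteps):
        v += 0.5 * dt * f / m         # half kick
        x += dt * v                   # drift
        f = forces(x)
        v += 0.5 * dt * f / m         # half kick
    return x, v

def energy(x, v, k=100.0, a=1.0, m=1.0):
    L = np.linalg.norm(x[1:] - x[:-1], axis=1)
    return 0.5 * m * np.sum(v**2) + 0.5 * k * np.sum((L - a)**2)

n = 10
x = np.zeros((n, 3))
x[:, 0] = np.arange(n) * 1.1          # chain uniformly stretched by 10%
v = np.zeros((n, 3))
E0 = energy(x, v)
x, v = verlet(x, v, dt=1e-3, nsteps=5000)
print(abs(energy(x, v) - E0))         # bounded oscillation, no drift
```

Discretizing the Hamiltonian first and only then choosing the integrator is what guarantees the symplectic structure survives, in contrast to discretizing the continuum PDE directly.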
Symplectic semiclassical wave packet dynamics II: non-Gaussian states
NASA Astrophysics Data System (ADS)
Ohsawa, Tomoki
2018-05-01
We generalize our earlier work on the symplectic/Hamiltonian formulation of the dynamics of the Gaussian wave packet to non-Gaussian semiclassical wave packets. We find the symplectic forms and asymptotic expansions of the Hamiltonians associated with these semiclassical wave packets, and obtain Hamiltonian systems governing their dynamics. Numerical experiments demonstrate that the dynamics give a very good approximation to the short-time dynamics of the expectation values computed by a method based on Egorov’s theorem or the initial value representation.
On the symplectic structure of harmonic superspace
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kachkachi, M.; Saidi, E.H.
In this paper, the symplectic properties of harmonic superspace are studied. It is shown that Diff(S^2) is isomorphic to Diff_0(S^3)/Ab(Diff_0(S^3)), where Diff_0(S^3) is the group of the diffeomorphisms of S^3 preserving the Cartan charge operator D^0 and Ab(Diff_0(S^3)) is its Abelian subgroup generated by the Cartan vectors L_0 = w^0 D^0. The authors show also that the eigenvalue equation D^0 λ(z) = 0 defines a symplectic structure in harmonic superspace, and they calculate the corresponding algebra. The general symplectic invariant coupling of the Maxwell prepotential is constructed in both flat and curved harmonic superspace. Other features are discussed.
Symplectic test particle encounters: a comparison of methods
NASA Astrophysics Data System (ADS)
Wisdom, Jack
2017-01-01
A new symplectic method for handling encounters of test particles with massive bodies is presented. The new method is compared with several popular methods (RMVS3, SYMBA, and MERCURY). The new method compares favourably.
Symplectic modeling of beam loading in electromagnetic cavities
Abell, Dan T.; Cook, Nathan M.; Webb, Stephen D.
2017-05-22
Simulating beam loading in radio frequency accelerating structures is critical for understanding higher-order mode effects on beam dynamics, such as beam break-up instability in energy recovery linacs. Full wave simulations of beam loading in radio frequency structures are computationally expensive, while reduced models can ignore essential physics and can be difficult to generalize. Here, we present a self-consistent algorithm derived from the least-action principle which can model an arbitrary number of cavity eigenmodes and a generic beam distribution. It has been implemented in our new Open Library for Investigating Vacuum Electronics (OLIVE).
NASA Astrophysics Data System (ADS)
Rodríguez-Tzompantzi, Omar
2018-05-01
The Faddeev-Jackiw symplectic formalism for constrained systems is applied to analyze the dynamical content of a model describing two massive relativistic particles with interaction, which can also be interpreted as a bigravity model in one dimension. We systematically investigate the nature of the physical constraints, for which we also determine the zero-mode structure of the corresponding symplectic matrix. After identifying the whole set of constraints, we find the transformation laws of all the dynamical variables corresponding to gauge symmetries, encoded in the remaining zero modes. In addition, we use an appropriate gauge-fixing procedure, the conformal gauge, to compute the quantization brackets (Faddeev-Jackiw brackets) and to obtain the number of physical degrees of freedom. Finally, we argue that this symplectic approach can be helpful for assessing physical constraints and understanding the gauge structure of theories of interacting spin-2 fields.
SimTrack: A compact C++ code for particle orbit and spin tracking in accelerators
Luo, Yun
2015-08-29
SimTrack is a compact C++ code for 6-d symplectic element-by-element particle tracking in accelerators, originally designed for head-on beam–beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides 6-d symplectic orbit tracking with 4th-order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam–beam interaction. Since its inception in 2009, SimTrack has been used intensively for dynamic aperture calculations with beam–beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this article, I present the code architecture, physics models, and some selected examples of its applications to RHIC and a future electron-ion collider design, eRHIC.
SimTrack: A compact C++ library for particle orbit and spin tracking in accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Yun
2015-06-24
SimTrack is a compact C++ library for 6-d symplectic element-by-element particle tracking in accelerators, originally designed for head-on beam-beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides 6-d symplectic orbit tracking with 4th-order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam-beam interaction. Since its inception in 2009, SimTrack has been used intensively for dynamic aperture calculations with beam-beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this article, I present the code architecture, physics models, and some selected examples of its applications to RHIC and a future electron-ion collider design, eRHIC.
Canonical and symplectic analysis for three dimensional gravity without dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escalante, Alberto, E-mail: aescalan@ifuap.buap.mx; Osmart Ochoa-Gutiérrez, H.
2017-03-15
In this paper a detailed Hamiltonian analysis of the three-dimensional gravity without dynamics proposed by V. Husain is performed. We report the complete structure of the constraints, and the Dirac brackets are explicitly computed. In addition, the Faddeev–Jackiw symplectic approach is developed; we report the complete set of Faddeev–Jackiw constraints and the generalized brackets, then we show that the Dirac and the generalized Faddeev–Jackiw brackets coincide with each other. Finally, the similarities and advantages of the Faddeev–Jackiw formalism relative to Dirac’s are briefly discussed. - Highlights: • We report the symplectic analysis for three-dimensional gravity without dynamics. • We report the Faddeev–Jackiw constraints. • A pure Dirac’s analysis is performed. • The complete structure of Dirac’s constraints is reported. • We show that the symplectic and Dirac’s brackets coincide with each other.
Unity of quark and lepton interactions with symplectic gauge symmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajpoot, S.
1982-07-01
Properties of symplectic groups are reviewed and the gauge structure of Sp(2n) derived. The electroweak unification of leptons within Sp(8) gauge symmetry and grand unification of quarks and leptons within Sp(10) gauge symmetry are discussed.
NASA Astrophysics Data System (ADS)
Rodríguez-Tzompantzi, Omar; Escalante, Alberto
2018-05-01
By applying the Faddeev-Jackiw symplectic approach, we systematically show that both the local gauge symmetry and the constraint structure of topologically massive gravity with a cosmological constant Λ can be identified, elegantly encoded in the zero modes of the symplectic matrix. Thereafter, via a suitable partial gauge-fixing procedure, the time gauge, we calculate the quantization bracket structure (generalized Faddeev-Jackiw brackets) for the dynamical variables and confirm that the number of physical degrees of freedom is one. This approach provides an alternative way to explore the dynamical content of massive gravity models.
NASA Astrophysics Data System (ADS)
Wang, Dongling; Xiao, Aiguo; Li, Xueyang
2013-02-01
Based on W-transformation, some parametric symplectic partitioned Runge-Kutta (PRK) methods depending on a real parameter α are developed. For α=0, the corresponding methods become the usual PRK methods, including Radau IA-IA¯ and Lobatto IIIA-IIIB methods as examples. For any α≠0, the corresponding methods are symplectic and there exists a value α∗ such that energy is preserved in the numerical solution at each step. The existence of the parameter and the order of the numerical methods are discussed. Some numerical examples are presented to illustrate these results.
Rotation number of integrable symplectic mappings of the plane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zolkin, Timofey; Nagaitsev, Sergei; Danilov, Viatcheslav
2017-04-11
Symplectic mappings are discrete-time analogs of Hamiltonian systems. They appear in many areas of physics, including, for example, accelerators, plasma, and fluids. Integrable mappings, a subclass of symplectic mappings, are equivalent to a Twist map, with a rotation number, constant along the phase trajectory. In this letter, we propose a succinct expression to determine the rotation number and present two examples. Similar to the period of the bounded motion in Hamiltonian systems, the rotation number is the most fundamental property of integrable maps and it provides a way to analyze the phase-space dynamics.
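For comparison with the closed-form expression proposed above, the rotation number of an integrable area-preserving map can also be estimated by brute force, averaging the angular advance around the fixed point over many iterations. The sketch below uses the integrable McMillan map as an example; the parameter value and initial point are arbitrary choices, and the iteration count simply controls the averaging accuracy.

```python
import numpy as np

# Brute-force rotation number of the integrable McMillan map
#   x' = y,   y' = -x + 2*mu*y / (1 + y^2),
# estimated as the mean (unwrapped) angular advance per iteration
# around the elliptic fixed point at the origin.

def mcmillan(x, y, mu):
    return y, -x + 2.0 * mu * y / (1.0 + y * y)

def rotation_number(x, y, mu, n=100000):
    total = 0.0
    for _ in range(n):
        xn, yn = mcmillan(x, y, mu)
        dphi = np.arctan2(yn, xn) - np.arctan2(y, x)
        # unwrap the jump at +/- pi so increments accumulate smoothly
        dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi
        total += dphi
        x, y = xn, yn
    return abs(total) / (2.0 * np.pi * n)   # turns per iteration

nu = rotation_number(0.1, 0.0, mu=0.3)
print(nu)
```

For a small-amplitude orbit the result should approach the linearized value arccos(mu)/(2*pi), with an amplitude-dependent detuning along the invariant curve, which is exactly the Twist-map picture described in the abstract.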
Long-Term Obliquity Variations of a Moonless Earth
NASA Astrophysics Data System (ADS)
Barnes, Jason W.; Lissauer, J. J.; Chambers, J. E.
2012-05-01
Earth's present-day obliquity varies by ±1.2 degrees over 100,000-year timescales. Without the Moon's gravity increasing the rotation-axis precession rate, prior theory predicted that a moonless Earth's obliquity would be allowed to vary between 0 and 85 degrees -- more so even than present-day Mars (0-60 degrees). We use a modified version of the symplectic orbital integrator `mercury' to numerically investigate the obliquity evolution of hypothetical moonless Earths. Contrary to the large theoretically allowed range, we find that moonless Earths more typically experience obliquity variations of just ±10 degrees over Gyr timescales. Some initial conditions for the moonless Earth's rotation rate and obliquity yield slightly greater variations, but the majority have smaller variations. In particular, retrograde rotators are quite stable and should constitute 50% of the population if initial terrestrial planet rotation is isotropic. Our results have important implications for the prospects of long-term habitability of moonless planets in extrasolar systems.
NASA Astrophysics Data System (ADS)
Punjabi, Alkesh; Ali, Halima
2008-12-01
A new approach to integration of magnetic field lines in divertor tokamaks is proposed. In this approach, an analytic equilibrium generating function (EGF) is constructed in natural canonical coordinates (ψ,θ) from experimental data from a Grad-Shafranov equilibrium solver for a tokamak. ψ is the toroidal magnetic flux and θ is the poloidal angle. Natural canonical coordinates (ψ,θ,φ) can be transformed to physical position (R,Z,φ) using a canonical transformation. (R,Z,φ) are cylindrical coordinates. Another canonical transformation is used to construct a symplectic map for integration of magnetic field lines. Trajectories of field lines calculated from this symplectic map in natural canonical coordinates can be transformed to trajectories in real physical space. Unlike in magnetic coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)], the symplectic map in natural canonical coordinates can integrate trajectories across the separatrix surface, and at the same time, give trajectories in physical space. Unlike symplectic maps in physical coordinates (x,y) or (R,Z), the continuous analog of a symplectic map in natural canonical coordinates does not distort trajectories in toroidal planes intervening the discrete map. This approach is applied to the DIII-D tokamak [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)]. The EGF for the DIII-D gives quite an accurate representation of equilibrium magnetic surfaces close to the separatrix surface. This new approach is applied to demonstrate the sensitivity of stochastic broadening using a set of perturbations that generically approximate the size of the field errors and statistical topological noise expected in a poloidally diverted tokamak. Plans for future application of this approach are discussed.
Obstructions for twist star products
NASA Astrophysics Data System (ADS)
Bieliavsky, Pierre; Esposito, Chiara; Waldmann, Stefan; Weber, Thomas
2018-05-01
In this short note, we point out that not every star product is induced by a Drinfel'd twist by showing that not every Poisson structure is induced by a classical r-matrix. Examples include the higher genus symplectic Pretzel surfaces and the symplectic sphere S^2.
Symplectic multi-particle tracking on GPUs
NASA Astrophysics Data System (ADS)
Liu, Zhicong; Qiang, Ji
2018-05-01
A symplectic multi-particle tracking model is implemented on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model preserves phase-space structure and reduces non-physical effects in long-term simulations, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is well suited for parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both a single GPU and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at the Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation saves more than a factor of two in total computing time compared with the CPU implementation.
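The reason such tracking maps so well onto GPUs is that every particle is advanced independently by the same symplectic map, so the particle loop is embarrassingly parallel. A toy illustration in vectorized NumPy (a GPU kernel performs the same per-thread operation); the thin-lens FODO-cell parameters below are invented for illustration and are not from the paper.

```python
import numpy as np

# Symplectic thin-lens tracking, vectorized over all particles at once.
# Each map (drift, thin-lens quadrupole kick) is exactly symplectic in
# the (x, px) phase-space pair, so phase-space structure is preserved.

def drift(x, px, L):
    return x + L * px, px

def quad_kick(x, px, kL):
    return x, px - kL * x                  # thin-lens quadrupole kick

def track(x, px, nturns, L=1.0, kL=0.5):
    for _ in range(nturns):                # one FODO-like cell per turn
        x, px = quad_kick(x, px, +kL)      # focusing
        x, px = drift(x, px, L)
        x, px = quad_kick(x, px, -kL)      # defocusing
        x, px = drift(x, px, L)
    return x, px

rng = np.random.default_rng(0)
x = 1e-3 * rng.standard_normal(100000)     # 1e5 particles advanced together
px = 1e-4 * rng.standard_normal(100000)
x, px = track(x, px, nturns=1000)
print(np.max(np.abs(x)))                   # motion stays bounded
```

These cell parameters give a stable linear lattice (|trace of the one-cell matrix| < 2), so amplitudes stay bounded over many turns; the same elementwise update is what a CUDA kernel executes with one thread per particle.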
Two modified symplectic partitioned Runge-Kutta methods for solving the elastic wave equation
NASA Astrophysics Data System (ADS)
Su, Bo; Tuo, Xianguo; Xu, Ling
2017-08-01
Based on a modified strategy, two modified symplectic partitioned Runge-Kutta (PRK) methods are proposed for the temporal discretization of the elastic wave equation. The two symplectic schemes are similar in form but different in nature. After the spatial discretization of the elastic wave equation, the ordinary Hamiltonian formulation for the elastic wave equation is presented. The PRK scheme is then applied for time integration. An additional term associated with spatial discretization is inserted into the different stages of the PRK scheme. Theoretical analyses are conducted to evaluate the numerical dispersion and stability of the two novel PRK methods. A finite difference method is used to approximate the spatial derivatives, since the two schemes are independent of the spatial discretization technique used. The numerical solutions computed by the two new schemes are compared with those computed by a conventional symplectic PRK. The numerical results verify the new methods and are superior to those generated by conventional methods in seismic wave modeling.
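A minimal baseline for this class of methods is Störmer-Verlet (itself a symplectic PRK) applied to the semi-discretized wave equation u_tt = c²u_xx with periodic boundaries. The sketch below is a generic illustration under those assumptions, not the paper's modified PRK schemes; the bounded discrete energy it exhibits is the property the abstract's stability analysis targets:

```python
import numpy as np

def wave_verlet(u, p, c, dx, dt, steps):
    """Stoermer-Verlet time stepping (a symplectic PRK) for the
    semi-discretized wave equation u_tt = c^2 u_xx, periodic boundaries.
    Stable for the CFL condition c * dt / dx <= 1."""
    def lap(w):
        return (np.roll(w, -1) - 2.0 * w + np.roll(w, 1)) / dx ** 2
    for _ in range(steps):
        p = p + 0.5 * dt * c ** 2 * lap(u)   # half kick
        u = u + dt * p                       # drift
        p = p + 0.5 * dt * c ** 2 * lap(u)   # half kick
    return u, p

def energy(u, p, c, dx):
    """Discrete energy: kinetic plus gradient (potential) contribution."""
    grad = (np.roll(u, -1) - u) / dx
    return 0.5 * dx * np.sum(p ** 2 + c ** 2 * grad ** 2)
```

Over long runs the discrete energy oscillates within an O(dt²) band instead of drifting, which is the hallmark of symplectic time integration.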
Optimal control of underactuated mechanical systems: A geometric approach
NASA Astrophysics Data System (ADS)
Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela
2010-08-01
In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.
Variational discretization of the nonequilibrium thermodynamics of simple systems
NASA Astrophysics Data System (ADS)
Gay-Balmaz, François; Yoshimura, Hiroaki
2018-04-01
In this paper, we develop variational integrators for the nonequilibrium thermodynamics of simple closed systems. These integrators are obtained by a discretization of the Lagrangian variational formulation of nonequilibrium thermodynamics developed in Gay-Balmaz and Yoshimura (2017a J. Geom. Phys. 111 169–93; 2017b J. Geom. Phys. 111 194–212), and thus extend the variational integrators of Lagrangian mechanics to include irreversible processes. In the continuous setting, we derive the structure-preserving property of the flow of such systems. This property is an extension of the symplectic property of the flow of the Euler–Lagrange equations. In the discrete setting, we show that the discrete flow solution of our numerical scheme verifies a discrete version of this property. We also present the regularity conditions which ensure the existence of the discrete flow. We finally illustrate our discrete variational schemes with the implementation of an example of a simple closed system.
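For the purely conservative part, the variational-integrator construction can be sketched in a few lines: discretizing the action for L = q̇²/2 − V(q) with a rectangle rule and imposing the discrete Euler-Lagrange equations recovers the Störmer-Verlet recursion. This is a standard textbook construction shown only to fix ideas; the paper's schemes additionally discretize the irreversible thermodynamic forcing, which this sketch omits:

```python
def variational_step(q_prev, q_curr, h, dV):
    """Discrete Euler-Lagrange update for the discrete Lagrangian
    L_d(q0, q1) = h * (((q1 - q0) / h) ** 2 / 2 - V(q0)).
    Setting D2 L_d(q0, q1) + D1 L_d(q1, q2) = 0 and solving for q2
    yields the Stoermer-Verlet recursion below."""
    return 2.0 * q_curr - q_prev - h * h * dV(q_curr)
```

The structure-preserving property comes for free: the update is derived from a discrete action principle, so the resulting map is symplectic by construction rather than by accident of the difference scheme.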
Symplectic Quantization of a Reducible Theory
NASA Astrophysics Data System (ADS)
Barcelos-Neto, J.; Silva, M. B. D.
We use the symplectic formalism to quantize the Abelian antisymmetric tensor gauge field. It is a reducible theory in the sense that not all of its constraints are independent. A procedure analogous to the ghost-of-ghosts construction of the BFV method has to be used, but in terms of Lagrange multipliers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bravetti, Alessandro, E-mail: alessandro.bravetti@iimas.unam.mx; Cruz, Hans, E-mail: hans@ciencias.unam.mx; Tapias, Diego, E-mail: diego.tapias@nucleares.unam.mx
In this work we introduce contact Hamiltonian mechanics, an extension of symplectic Hamiltonian mechanics, and show that it is a natural candidate for a geometric description of non-dissipative and dissipative systems. For this purpose we review in detail the major features of standard symplectic Hamiltonian dynamics and show that all of them can be generalized to the contact case.
Symplectic potentials and resolved Ricci-flat ACG metrics
NASA Astrophysics Data System (ADS)
Balasubramanian, Aswin K.; Govindarajan, Suresh; Gowdigere, Chethan N.
2007-12-01
We pursue the symplectic description of toric Kähler manifolds. There exists a general local classification of metrics on toric Kähler manifolds equipped with Hamiltonian 2-forms due to Apostolov, Calderbank and Gauduchon (ACG). We derive the symplectic potential for these metrics. Using a method due to Abreu, we relate the symplectic potential to the canonical potential written by Guillemin. This enables us to recover the moment polytope associated with the metrics, and we thus obtain global information about the metric. We illustrate these general considerations by focusing on six-dimensional Ricci-flat metrics and obtain Ricci-flat metrics associated with real cones over L^{p,q,r} and Y^{p,q} manifolds. The metrics associated with cones over Y^{p,q} manifolds turn out to be partially resolved, with two blow-up parameters taking special (non-zero) values. For a fixed Y^{p,q} manifold, we find explicit metrics for several inequivalent blow-ups parametrized by a natural number k in the range 0 < k < p. We also show that all known examples of resolved metrics, such as the resolved conifold and the resolution of C^3/Z_3, fit the ACG classification.
Symplectic homoclinic tangles of the ideal separatrix of the DIII-D from type I ELMs
NASA Astrophysics Data System (ADS)
Punjabi, Alkesh; Ali, Halima
2012-10-01
The ideal separatrix of divertor tokamaks is a degenerate manifold where the stable and unstable manifolds coincide. Non-axisymmetric magnetic perturbations remove the degeneracy and split the separatrix manifold. This creates an extremely complex topological structure, called homoclinic tangles. The unstable manifold intersects the stable manifold and creates alternating inner and outer lobes at successive homoclinic points. The Hamiltonian system must preserve the symplectic topological invariance, and this controls the size and radial extent of the lobes. Very recently, lobes near the X-point have been experimentally observed in MAST [A. Kirk et al., PRL 108, 255003 (2012)]. We have used the DIII-D map [A. Punjabi, NF 49, 115020 (2009)] to calculate symplectic homoclinic tangles of the ideal separatrix of the DIII-D from type I ELMs represented by the peeling-ballooning modes (m,n)=(30,10)+(40,10). The DIII-D map is symplectic, accurate, and in natural canonical coordinates which are invertible to physical coordinates [A. Punjabi and H. Ali, POP 15, 122502 (2008)]. To our knowledge, we are the first to symplectically calculate these tangles in physical space. Homoclinic tangles of the separatrix can cause radial displacement of mobile passing electrons and create sheared radial electric fields and currents, resulting in radial flows, drifts, differential spinning, reduction in turbulence, and other effects. This work is supported by the grants DE-FG02-01ER54624 and DE-FG02-04ER54793.
NASA Astrophysics Data System (ADS)
Hellmich, S.; Mottola, S.; Hahn, G.; Kührt, E.; Hlawitschka, M.
2014-07-01
Simulations of dynamical processes in planetary systems represent an important tool for studying the orbital evolution of the systems [1--3]. Using modern numerical integration methods, it is possible to model systems containing many thousands of objects over timescales of several hundred million years. However, in general, supercomputers are needed to get reasonable simulation results in acceptable execution times [3]. To exploit the ever-growing computation power of Graphics Processing Units (GPUs) in modern desktop computers, we implemented cuSwift, a library of numerical integration methods for studying long-term dynamical processes in planetary systems. cuSwift can be seen as a re-implementation of the famous SWIFT integrator package written by Hal Levison and Martin Duncan. cuSwift is written in C/CUDA and contains different integration methods for various purposes. So far, we have implemented three algorithms: a 15th-order Radau integrator [4], the Wisdom-Holman Mapping (WHM) integrator [5], and the Regularized Mixed Variable Symplectic (RMVS) Method [6]. These algorithms treat only the planets as mutually gravitationally interacting bodies whereas asteroids and comets (or other minor bodies of interest) are treated as massless test particles which are gravitationally influenced by the massive bodies but do not affect each other or the massive bodies. The main focus of this work is on the symplectic methods (WHM and RMVS) which use a larger time step and thus are capable of integrating many particles over a large time span. As an additional feature, we implemented the non-gravitational Yarkovsky effect as described by M. Brož [7]. With cuSwift, we show that the use of modern GPUs makes it possible to speed up these methods by more than one order of magnitude compared to the single-core CPU implementation, thereby enabling modest workstation computers to perform long-term dynamical simulations. 
We use these methods to study the influence of the Yarkovsky effect on resonant asteroids. We present first results and compare them with integrations done with the original algorithms implemented in SWIFT in order to assess the numerical precision of cuSwift and to demonstrate the speed-up we achieved using the GPU.
Quasi-hamiltonian quotients as disjoint unions of symplectic manifolds
NASA Astrophysics Data System (ADS)
Schaffhauser, Florent
2007-08-01
The main result of this paper is Theorem 2.12, which says that the quotient μ^{-1}({1})/U associated to a quasi-hamiltonian space (M, ω, μ: M → U) has a symplectic structure even when 1 is not a regular value of the momentum map μ. Namely, it is a disjoint union of symplectic manifolds of possibly different dimensions, which generalizes the result of Alekseev, Malkin and Meinrenken in [AMM98]. We illustrate this theorem with the example of representation spaces of surface groups. As an intermediary step, we give a new class of examples of quasi-hamiltonian spaces: the isotropy submanifold M_K whose points are the points of M with isotropy group K ⊂ U. The notion of quasi-hamiltonian space was introduced by Alekseev, Malkin and Meinrenken in their paper [AMM98]. The main motivation for it was the existence, under some regularity assumptions, of a symplectic structure on the associated quasi-hamiltonian quotient. Throughout their paper, the analogy with usual hamiltonian spaces is often used as a guiding principle, replacing Lie-algebra-valued momentum maps with Lie-group-valued momentum maps. In the hamiltonian setting, when the usual regularity assumptions on the group action or the momentum map are dropped, Lerman and Sjamaar showed in [LS91] that the quotient associated to a hamiltonian space carries a stratified symplectic structure. In particular, this quotient space is a disjoint union of symplectic manifolds. In this paper, we prove an analogous result for quasi-hamiltonian quotients. More precisely, we show that for any quasi-hamiltonian space (M, ω, μ: M → U), the associated quotient M//U := μ^{-1}({1})/U is a disjoint union of symplectic manifolds (Theorem 2.12): \[ \mu^{-1}(\{1\})/U = \bigsqcup_{j \in J} \big(\mu^{-1}(\{1\}) \cap M_{K_j}\big)/L_{K_j}. \] Here K_j denotes a closed subgroup of U and M
A Variational Method in Out-of-Equilibrium Physical Systems
Pinheiro, Mario J.
2013-01-01
We propose a new variational principle for out-of-equilibrium dynamic systems, fundamentally based on the method of Lagrange multipliers applied to the total entropy of an ensemble of particles. However, we use the fundamental equation of thermodynamics in terms of differential forms, considering U and S as 0-forms. We obtain a set of two first-order differential equations that reveal the same formal symplectic structure shared by classical mechanics, fluid mechanics and thermodynamics. From this approach, a topological torsion current emerges of the form ε_{ijk}A_jω_k, where A_j and ω_k denote the components of the vector potential (gravitational and/or electromagnetic) and ω denotes the angular velocity of the accelerated frame. We derive a special form of the Umov-Poynting theorem for rotating gravito-electromagnetic systems. The variational method is then applied to clarify the working mechanism of particular devices. PMID:24316718
Second-order evaluations of orthogonal and symplectic Yangians
NASA Astrophysics Data System (ADS)
Karakhanyan, D. R.; Kirschner, R.
2017-08-01
Orthogonal or symplectic Yangians are defined by the Yang-Baxter RLL relation involving the fundamental R-matrix with the corresponding so(n) or sp(2m) symmetry. We investigate the second-order solution conditions, where the expansion of L(u) in u^{-1} is truncated at the second power, and we derive the relations for the two nontrivial terms in L(u).
Special Bohr-Sommerfeld Lagrangian submanifolds
NASA Astrophysics Data System (ADS)
Tyurin, N. A.
2016-12-01
We introduce a new notion in symplectic geometry, that of speciality for Lagrangian submanifolds satisfying the Bohr-Sommerfeld condition. We show that it enables one to construct finite-dimensional moduli spaces of special Bohr-Sommerfeld Lagrangian submanifolds with respect to any ample line bundle on an algebraic variety with a Hodge metric regarded as the symplectic form. This construction can be used to study mirror symmetry.
Dirichlet to Neumann operator for Abelian Yang-Mills gauge fields
NASA Astrophysics Data System (ADS)
Díaz-Marín, Homero G.
We consider the Dirichlet to Neumann operator for Abelian Yang-Mills boundary conditions. The aim is constructing a complex structure for the symplectic space of boundary conditions of Euler-Lagrange solutions modulo gauge for space-time manifolds with smooth boundary. Thus we prepare a suitable scenario for geometric quantization within the reduced symplectic space of boundary conditions of Abelian gauge fields.
NASA Astrophysics Data System (ADS)
Cai, Jiaxiang; Liang, Hua; Zhang, Chun
2018-06-01
Based on the multi-symplectic Hamiltonian formulation of the generalized Rosenau-type equation, a multi-symplectic scheme and an energy-preserving scheme are proposed. To improve the accuracy of the solution, we apply the composition technique to the obtained schemes to develop high-order schemes which are also multi-symplectic and energy-preserving, respectively. The discrete fast Fourier transform significantly improves the computational efficiency of the schemes. Numerical results verify that all the proposed schemes perform satisfactorily in providing accurate solutions and preserving the discrete mass and energy invariants. Numerical results also show that, although each basic time step is divided into several composition steps, the computational efficiency of the composition schemes is much higher than that of the non-composition schemes.
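The composition technique mentioned above can be illustrated with the classical triple-jump: composing any symmetric second-order step S(h) as S(γ₁h) S(γ₂h) S(γ₁h), with γ₁ = 1/(2 − 2^{1/3}) and γ₂ = 1 − 2γ₁, yields a fourth-order method that inherits the symplectic (or energy-preserving) character of S. The sketch below applies it to a Verlet step for a harmonic oscillator, an illustrative stand-in for the Rosenau-type schemes of the abstract:

```python
def verlet(q, p, h, dV):
    """One symmetric second-order (Stoermer-Verlet) step for H = p^2/2 + V(q)."""
    p -= 0.5 * h * dV(q)
    q += h * p
    p -= 0.5 * h * dV(q)
    return q, p

def yoshida4(q, p, h, dV):
    """Fourth-order step built by the triple-jump composition of verlet().
    The middle substep runs backwards (g2 < 0), which is what cancels the
    second-order error terms of the base scheme."""
    g1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    g2 = 1.0 - 2.0 * g1
    for g in (g1, g2, g1):
        q, p = verlet(q, p, g * h, dV)
    return q, p
```

Halving the step size should reduce the global error by roughly 2⁴ = 16, which is an easy empirical check of the composed scheme's order.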
NASA Astrophysics Data System (ADS)
Zhou, Zhenhuan; Li, Yuejie; Fan, Junhai; Rong, Dalun; Sui, Guohao; Xu, Chenghui
2018-05-01
A new Hamiltonian-based approach is presented for finding exact solutions for transverse vibrations of double-nanobeam systems embedded in an elastic medium. The continuum model is established within the frameworks of the symplectic methodology and the nonlocal Euler-Bernoulli and Timoshenko beam theories. The symplectic eigenfunctions are obtained after expressing the governing equations in Hamiltonian form. Exact frequency equations, vibration modes and displacement amplitudes are obtained by using the symplectic eigenfunctions and end conditions. Comparisons with previously published work are presented to illustrate the accuracy and reliability of the proposed method. The comprehensive results for arbitrary boundary conditions could serve as benchmark results for verifying numerically obtained solutions. In addition, a study on the difference between the nonlocal beam and the nonlocal plate is also included.
Geometric integration in Born-Oppenheimer molecular dynamics.
Odell, Anders; Delin, Anna; Johansson, Börje; Cawkwell, Marc J; Niklasson, Anders M N
2011-12-14
Geometric integration schemes for extended Lagrangian self-consistent Born-Oppenheimer molecular dynamics, including a weak dissipation to remove numerical noise, are developed and analyzed. The extended Lagrangian framework enables the geometric integration of both the nuclear and electronic degrees of freedom. This provides highly efficient simulations that are stable and energy conserving even under incomplete and approximate self-consistent field (SCF) convergence. We investigate three different geometric integration schemes: (1) regular time-reversible Verlet, (2) second-order optimal symplectic, and (3) third-order optimal symplectic. We look at energy conservation, accuracy, and stability as a function of dissipation, integration time step, and SCF convergence. We find that the inclusion of dissipation in the symplectic integration methods gives an efficient damping of numerical noise or perturbations that otherwise may accumulate from finite-precision arithmetic in a perfectly reversible dynamics. © 2011 American Institute of Physics.
Symplectic Quantization of a Vector-Tensor Gauge Theory with Topological Coupling
NASA Astrophysics Data System (ADS)
Barcelos-Neto, J.; Silva, M. B. D.
We use the symplectic formalism to quantize a gauge theory where vector and tensor fields are coupled in a topological way. This is an example of a reducible theory, and a procedure analogous to the ghost-of-ghosts construction of the BFV method is applied, but in terms of Lagrange multipliers. Our final results are in agreement with those found in the literature by using the Dirac method.
Time irreversibility from symplectic non-squeezing
NASA Astrophysics Data System (ADS)
Kalogeropoulos, Nikolaos
2018-04-01
The question of how time-reversible microscopic dynamics gives rise to macroscopic irreversible processes has been a recurrent issue in Physics since the time of Boltzmann, whose ideas shaped, and essentially resolved, this apparent contradiction. Following Boltzmann's spirit and ideas, but employing Gibbs's approach, we advance the view that macroscopic irreversibility of Hamiltonian systems of many degrees of freedom can also be seen as a result of the symplectic non-squeezing theorem.
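For reference, the theorem the abstract builds on is Gromov's symplectic non-squeezing theorem; a standard formulation, added here only for context, is:

```latex
% Gromov's non-squeezing theorem: the ball B^{2n}(r) admits a symplectic
% embedding into the cylinder Z^{2n}(R) = B^2(R) x R^{2n-2} only if r <= R,
% even though for n > 1 volume considerations alone would permit any r.
\exists\,\varphi\colon B^{2n}(r)\hookrightarrow B^{2}(R)\times\mathbb{R}^{2n-2}
\quad\text{symplectic}\quad\Longrightarrow\quad r \le R .
```

Intuitively, Hamiltonian flows cannot squeeze a phase-space ball through a narrower symplectic cylinder, a rigidity far stronger than mere volume preservation.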
NASA Astrophysics Data System (ADS)
Li, G. Q.; Zhu, Z. H.
2015-12-01
Dynamic modeling of tethered spacecraft with consideration of the elasticity of the tether is prone to numerical instability and error accumulation over long-term numerical integration. This paper addresses these challenges by proposing a globally stable numerical approach combining the nodal position finite element method (NPFEM) with implicit, symplectic, two-stage, fourth-order Gauss-Legendre Runge-Kutta time integration. The NPFEM eliminates numerical error accumulation by using the position instead of the displacement of the tether as the state variable, while the symplectic integration enforces the energy and momentum conservation of the discretized finite element model to ensure the global stability of the numerical solution. The effectiveness and robustness of the proposed approach are assessed by an elastic pendulum problem, whose dynamic response resembles that of tethered spacecraft, in comparison with commonly used time integrators such as the classical fourth-order Runge-Kutta scheme and other families of non-symplectic Runge-Kutta schemes. Numerical results show that the proposed approach is accurate and that the energy of the corresponding numerical model is conserved over long-term numerical integration. Finally, the proposed approach is applied to the dynamic modeling of the deorbiting process of tethered spacecraft over a long period.
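The time integrator named above is the standard two-stage, fourth-order Gauss-Legendre implicit Runge-Kutta method. The sketch below shows its tableau and a fixed-point stage solve for a generic autonomous system y′ = f(y); the tableau coefficients are the standard ones, while the function names and iteration count are illustrative assumptions (a production code would use a Newton solve):

```python
import math

S3 = math.sqrt(3.0) / 6.0
A = ((0.25, 0.25 - S3), (0.25 + S3, 0.25))   # standard Gauss-Legendre tableau
B = (0.5, 0.5)

def gl4_step(f, y, h, iters=20):
    """One step of the 2-stage, 4th-order Gauss-Legendre implicit RK method
    for y' = f(y), with the stage equations solved by fixed-point iteration
    (adequate for small h; Newton iteration would be used in practice)."""
    k1 = list(f(y))
    k2 = list(k1)
    for _ in range(iters):
        y1 = [yi + h * (A[0][0] * a + A[0][1] * b) for yi, a, b in zip(y, k1, k2)]
        y2 = [yi + h * (A[1][0] * a + A[1][1] * b) for yi, a, b in zip(y, k1, k2)]
        k1, k2 = f(y1), f(y2)
    return [yi + h * (B[0] * a + B[1] * b) for yi, a, b in zip(y, k1, k2)]
```

Gauss-Legendre methods are symplectic and conserve quadratic invariants exactly (up to the stage-solve tolerance), which is precisely the energy and momentum conservation the abstract relies on for global stability.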
Aubry-Mather Theory for Conformally Symplectic Systems
NASA Astrophysics Data System (ADS)
Marò, Stefano; Sorrentino, Alfonso
2017-09-01
In this article we develop an analogue of Aubry-Mather theory for a class of dissipative systems, namely conformally symplectic systems, and prove the existence of interesting invariant sets, which, in analogy to the conservative case, will be called the Aubry and the Mather sets. Besides describing their structure and their dynamical significance, we shall analyze their attracting/repelling properties, as well as their noteworthy role in driving the asymptotic dynamics of the system.
The BRST complex of homological Poisson reduction
NASA Astrophysics Data System (ADS)
Müller-Lennert, Martin
2017-02-01
BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure.
NASA Astrophysics Data System (ADS)
Caprio, Mark A.; McCoy, Anna E.; Dytrych, Tomas
2017-09-01
Rotational band structure is readily apparent as an emergent phenomenon in ab initio nuclear many-body calculations of light nuclei, despite the incompletely converged nature of most such calculations at present. Nuclear rotation in light nuclei can be analyzed in terms of approximate dynamical symmetries of the nuclear many-body problem: in particular, Elliott's SU (3) symmetry of the three-dimensional harmonic oscillator and the symplectic Sp (3 , R) symmetry of three-dimensional phase space. Calculations for rotational band members in the ab initio symplectic no-core configuration interaction (SpNCCI) framework allow us to directly examine the SU (3) and Sp (3 , R) nature of rotational states. We present results for rotational bands in p-shell nuclei. Supported by the US DOE under Award No. DE-FG02-95ER-40934 and the Czech Science Foundation under Grant No. 16-16772S.
NASA Astrophysics Data System (ADS)
McKeon, D. G. C.
2003-11-01
The simplest supersymmetric extension of the group SO(4) is discussed. The superalgebra is realized in a superspace whose Bosonic subspace is the surface of a sphere S^3 embedded in four-dimensional Euclidean space. By using Fermionic coordinates in this superspace, which are chiral symplectic Majorana spinors, it proves possible to devise superfield models involving a complex scalar, a pair of chiral symplectic Majorana spinors, and a complex auxiliary scalar. Kinetic terms involve operators that are isometry generators on S^3.
Trees, B-series and G-symplectic methods
NASA Astrophysics Data System (ADS)
Butcher, J. C.
2017-07-01
The order conditions for Runge-Kutta methods are intimately connected with the graphs known as rooted trees. The conditions can be expressed in terms of Taylor expansions written as weighted sums of elementary differentials, that is as B-series. Polish notation provides a unifying structure for representing many of the quantities appearing in this theory. Applications include the analysis of general linear methods with special reference to G-symplectic methods. A new order 6 method has recently been constructed.
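The tree-to-order-condition correspondence mentioned in the abstract can be made concrete: each rooted tree with up to three nodes contributes one algebraic condition on the tableau (A, b). The checker below is a hypothetical helper encoding the standard conditions through order 3:

```python
def check_order3(A, b):
    """Verify the rooted-tree order conditions through order 3 for a
    Runge-Kutta tableau (A, b); one condition per tree with <= 3 nodes."""
    c = [sum(row) for row in A]   # row-sum (consistency) abscissae
    s = range(len(b))
    conds = [
        (sum(b), 1.0),                                  # single-node tree
        (sum(b[i] * c[i] for i in s), 1.0 / 2.0),       # two-node tree
        (sum(b[i] * c[i] ** 2 for i in s), 1.0 / 3.0),  # bushy three-node tree
        (sum(b[i] * A[i][j] * c[j] for i in s for j in s), 1.0 / 6.0),  # tall tree
    ]
    return all(abs(lhs - rhs) < 1e-12 for lhs, rhs in conds)
```

The classical fourth-order Runge-Kutta tableau satisfies all four conditions, while forward Euler fails already at the two-node tree, matching its first-order accuracy.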
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao Yajun
A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method straightforward and effective to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.
Time-symmetric integration in astrophysics
NASA Astrophysics Data System (ADS)
Hernandez, David M.; Bertschinger, Edmund
2018-04-01
Calculating the long-term solution of ordinary differential equations, such as those of the N-body problem, is central to understanding a wide range of dynamics in astrophysics, from galaxy formation to planetary chaos. Because generally no analytic solution exists to these equations, researchers rely on numerical methods that are prone to various errors. In an effort to mitigate these errors, powerful symplectic integrators have been employed. But symplectic integrators can be severely limited because they are not compatible with adaptive stepping and thus they have difficulty in accommodating changing time and length scales. A promising alternative is time-reversible integration, which can handle adaptive time-stepping, but the errors due to time-reversible integration in astrophysics are less understood. The goal of this work is to study analytically and numerically the errors caused by time-reversible integration, with and without adaptive stepping. We derive the modified differential equations of these integrators to perform the error analysis. As an example, we consider the trapezoidal rule, a reversible non-symplectic integrator, and show that it gives a secular increase in the energy error for a pendulum problem and for a Hénon-Heiles orbit. We conclude that using reversible integration does not guarantee good energy conservation and that, when possible, use of symplectic integrators is favoured. We also show that time-symmetry and time-reversibility are distinct properties of an integrator.
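The trapezoidal rule used as the example is implicit; a minimal fixed-point implementation is sketched below (function names and iteration count are illustrative). Its defining property for this discussion, time-reversibility, can be checked directly: stepping forward by h and then by −h returns the initial state to solver accuracy:

```python
def trapezoid_step(f, y, h, iters=50):
    """One trapezoidal-rule step y1 = y0 + (h/2) * (f(y0) + f(y1)), with the
    implicit equation solved by fixed-point iteration from an explicit-Euler
    predictor. The method is symmetric (time-reversible) but not symplectic."""
    fy = f(y)
    ynew = [yi + h * fi for yi, fi in zip(y, fy)]   # Euler predictor
    for _ in range(iters):
        ynew = [yi + 0.5 * h * (a + b) for yi, a, b in zip(y, fy, f(ynew))]
    return ynew
```

Reversibility alone, as the abstract shows, does not prevent a secular energy drift; that distinction between symmetric and symplectic is the paper's point.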
NASA Astrophysics Data System (ADS)
Shukla, Pramod
2016-10-01
In the context of studying the 4D-effective potentials of type IIB nongeometric flux compactifications, this article has a twofold goal. First, we present a modular invariant symplectic rearrangement of the tree level nongeometric scalar potential arising from a flux superpotential which includes the S-dual pairs of nongeometric fluxes (Q , P ), the standard NS-NS and RR three-form fluxes (F3 , H3 ), and the geometric flux (ω ). This "symplectic formulation" is valid for arbitrary numbers of Kähler moduli, and the complex structure moduli which are implicitly encoded in a set of symplectic matrices. In the second part, we further explicitly rewrite all the symplectic ingredients in terms of saxionic and axionic components of the complex structure moduli. This leads to a compact form of the generic scalar potential being explicitly written out in terms of all the real moduli/axions. Moreover, the final form of the scalar potential needs only the knowledge of some topological data (such as Hodge numbers and the triple-intersection numbers) of the compactifying threefolds and their respective mirrors. Finally, we demonstrate how this is equivalent to saying that, for a given concrete example, various pieces of the scalar potential can be directly read off from our generic proposal, without the need of starting from the Kähler and superpotentials.
NASA Technical Reports Server (NTRS)
Garzia, M. R.; Loparo, K. A.; Martin, C. F.
1982-01-01
This paper looks at the structure of the solution of a matrix Riccati differential equation under a predefined group of transformations. The group of transformations used is an expanded form of the feedback group. It is shown that this group of transformations is a subgroup of the symplectic group. The orbits of the Riccati differential equation under the action of this group are studied and it is seen how these techniques apply to a decentralized optimal control problem.
Realization of Uq(sp(2n)) within the Differential Algebra on Quantum Symplectic Space
NASA Astrophysics Data System (ADS)
Zhang, Jiao; Hu, Naihong
2017-10-01
We realize the Hopf algebra U_q(sp_{2n}) as an algebra of quantum differential operators on the quantum symplectic space X(f_s; R) and prove that X(f_s; R) is a U_q(sp_{2n})-module algebra whose irreducible summands are just its homogeneous subspaces. We give a coherence realization for all the positive root vectors under the actions of Lusztig's braid automorphisms of U_q(sp_{2n}).
Fast and reliable symplectic integration for planetary system N-body problems
NASA Astrophysics Data System (ADS)
Hernandez, David M.
2016-06-01
We apply one of the exactly symplectic integrators, which we call HB15, of Hernandez & Bertschinger, along with the Kepler problem solver of Wisdom & Hernandez, to solve planetary system N-body problems. We compare the method to Wisdom-Holman (WH) methods in the MERCURY software package, the MERCURY switching integrator, and others and find HB15 to be the most efficient method or tied for the most efficient method in many cases. Unlike WH, HB15 solved N-body problems exhibiting close encounters with small, acceptable error, although frequent encounters slowed the code. Switching maps like MERCURY change between two methods and are not exactly symplectic. We carry out careful tests on their properties and suggest that they must be used with caution. We then use different integrators to solve a three-body problem consisting of a binary planet orbiting a star. For all tested tolerances and time steps, MERCURY unbinds the binary after 0 to 25 years. However, in the solutions of HB15, a time-symmetric HERMITE code, and a symplectic Yoshida method, the binary remains bound for >1000 years. The methods' solutions are qualitatively different, despite small errors in the first integrals in most cases. Several checks suggest that the qualitative binary behaviour of HB15's solution is correct. The Bulirsch-Stoer and Radau methods in the MERCURY package also unbind the binary before a time of 50 years, suggesting that this dynamical error is due to a MERCURY bug.
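For orientation, the simplest exactly symplectic integrator for this kind of problem is the drift-kick-drift leapfrog applied to the Kepler problem, sketched below with unit GM (an illustrative baseline, not HB15 or WH). WH-type and HB15-type methods improve on it by splitting off the Keplerian motion and solving that part exactly; the sketch only demonstrates the bounded energy error characteristic of symplectic maps:

```python
def kepler_leapfrog(x, v, h, steps, mu=1.0):
    """Drift-kick-drift leapfrog for the planar Kepler problem, GM = mu.
    Symplectic, so the energy error stays bounded instead of drifting."""
    for _ in range(steps):
        x = [xi + 0.5 * h * vi for xi, vi in zip(x, v)]   # half drift
        r3 = sum(xi * xi for xi in x) ** 1.5
        v = [vi - h * mu * xi / r3 for vi, xi in zip(v, x)]  # kick
        x = [xi + 0.5 * h * vi for xi, vi in zip(x, v)]   # half drift
    return x, v

def kepler_energy(x, v, mu=1.0):
    return 0.5 * sum(vi * vi for vi in v) - mu / sum(xi * xi for xi in x) ** 0.5
```

Over thousands of orbits the energy oscillates within an O(h²) band; it is this absence of secular drift that the abstract's qualitative-correctness checks exploit.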
On the Cohomology of Almost Complex Manifolds
NASA Astrophysics Data System (ADS)
Fino, Anna; Tomassini, Adriano
2010-07-01
We review some properties of two special types of almost complex structures, introduced by T.-J. Li and W. Zhang in [11], in relation to the existence of compatible symplectic structures and to the Hard Lefschetz condition. The two types of almost complex structures are defined respectively in terms of differential forms and currents. The paper is based on the results obtained in [9]. We give a new example of an 8-dimensional compact solvmanifold endowed with a C∞ pure and full almost complex structure calibrated by a symplectic form satisfying the Hard Lefschetz condition.
Faddeev-Jackiw quantization of topological invariants: Euler and Pontryagin classes
NASA Astrophysics Data System (ADS)
Escalante, Alberto; Medel-Portugal, C.
2018-04-01
The symplectic analysis for the four-dimensional Pontryagin and Euler invariants is performed within the Faddeev-Jackiw context. The Faddeev-Jackiw constraints and the generalized Faddeev-Jackiw brackets are reported; we show that although the Pontryagin and Euler classes give rise to the same equations of motion, their respective symplectic structures are different from each other. In addition, a quantum state that solves the Faddeev-Jackiw constraints is found, and we show that the quantum states for these invariants differ from each other. Finally, we present some remarks and conclusions.
A classification of open Gaussian dynamics
NASA Astrophysics Data System (ADS)
Grimmer, Daniel; Brown, Eric; Kempf, Achim; Mann, Robert B.; Martín-Martínez, Eduardo
2018-06-01
We introduce a classification scheme for the generators of bosonic open Gaussian dynamics, providing an instructive diagrammatic description of each type of dynamics. Using this classification, we discuss the consequences of imposing complete positivity on Gaussian dynamics. In particular, we show that non-symplectic operations must be active to allow for complete positivity. In addition, non-symplectic operations can, in fact, conserve the volume of phase space only if the restriction of complete positivity is lifted. We then discuss the implications for the relationship between information and energy flows in open quantum mechanics.
Importance sampling with imperfect cloning for the computation of generalized Lyapunov exponents
NASA Astrophysics Data System (ADS)
Anteneodo, Celia; Camargo, Sabrina; Vallejos, Raúl O.
2017-12-01
We revisit the numerical calculation of generalized Lyapunov exponents, L(q), in deterministic dynamical systems. The standard method consists of adding noise to the dynamics in order to use importance sampling algorithms; L(q) is then obtained by taking the limit of vanishing noise amplitude after the calculation. We focus on a particular method that involves periodic cloning and pruning of a set of trajectories. However, instead of considering a noisy dynamics, we implement an imperfect (noisy) cloning. This alternative method is compared with the standard one and, when possible, with analytical results. As a workbench we use the asymmetric tent map, the standard map, and a system of coupled symplectic maps. The general conclusion of this study is that the imperfect-cloning method performs as well as the standard one, with the advantage of preserving the deterministic dynamics.
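The cloning-and-pruning idea can be sketched for the asymmetric tent map, where the invariant measure is uniform on [0, 1] and L(q) has a closed form. This is a minimal illustration, not the authors' code; the map parameter a, the ensemble size M, and the number of steps T are arbitrary choices:

```python
import numpy as np

def tent(x, a):
    """Asymmetric tent map on [0, 1]."""
    return np.where(x < a, x / a, (1.0 - x) / (1.0 - a))

def tent_slope(x, a):
    """|f'(x)| for the asymmetric tent map."""
    return np.where(x < a, 1.0 / a, 1.0 / (1.0 - a))

def gle_cloning(q, a=0.3, M=20000, T=50, seed=0):
    """Estimate the generalized Lyapunov exponent L(q) by periodic
    cloning (high-weight trajectories) and pruning (low-weight ones)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, M)
    log_growth = 0.0
    for _ in range(T):
        w = tent_slope(x, a) ** q                    # importance weights
        c = w.mean()                                 # ensemble growth factor
        log_growth += np.log(c)
        idx = rng.choice(M, size=M, p=w / (M * c))   # clone strong, prune weak
        x = tent(x[idx], a)
    return log_growth / T

def gle_exact(q, a=0.3):
    """Closed form: the invariant measure is uniform, so
    exp(L(q)) = a^(1-q) + (1-a)^(1-q)."""
    return np.log(a ** (1.0 - q) + (1.0 - a) ** (1.0 - q))

est, exact = gle_cloning(2.0), gle_exact(2.0)
```

For large q the direct (unweighted) average is dominated by rare trajectories; the resampling step keeps the ensemble concentrated where the weights are, which is the point of the importance sampling.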
NASA Astrophysics Data System (ADS)
Vela Vela, Luis; Sanchez, Raul; Geiger, Joachim
2018-03-01
A method is presented to obtain initial conditions for Smoothed Particle Hydrodynamic (SPH) scenarios where arbitrarily complex density distributions and low particle noise are needed. Our method, named ALARIC, tampers with the evolution of the internal variables to obtain a fast and efficient profile evolution towards the desired goal. The result has very low levels of particle noise and constitutes a perfect candidate to study the equilibrium and stability properties of SPH/SPMHD systems. The method uses the isothermal SPH equations to calculate hydrodynamical forces under the presence of an external fictitious potential and evolves them in time with a second-order symplectic integrator. The proposed method generates tailored initial conditions that perform better in many cases than those based on purely crystalline lattices, since it prevents the appearance of anisotropies.
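A second-order symplectic integrator of the kind mentioned above is typically a kick-drift-kick (velocity Verlet) scheme; a generic sketch, with a harmonic potential standing in for the external fictitious potential (illustrative only, not the ALARIC implementation):

```python
def leapfrog_step(x, v, accel, dt):
    """One kick-drift-kick (velocity Verlet) step: second order, symplectic."""
    v = v + 0.5 * dt * accel(x)   # half kick
    x = x + dt * v                # full drift
    v = v + 0.5 * dt * accel(x)   # half kick
    return x, v

# demo: U(x) = x^2/2, so accel = -x; the energy error stays bounded
accel = lambda x: -x
x, v = 1.0, 0.0
e0 = 0.5 * v * v + 0.5 * x * x
for _ in range(10000):
    x, v = leapfrog_step(x, v, accel, dt=0.05)
e1 = 0.5 * v * v + 0.5 * x * x
```

Unlike non-symplectic schemes of the same order, the energy error here oscillates instead of drifting, which is why such steps are favored for long relaxation runs.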
NASA Astrophysics Data System (ADS)
Bruhwiler, D. L.; Cary, J. R.; Shasharina, S.
1998-04-01
The MAPA accelerator modeling code symplectically advances the full nonlinear map, tangent map and tangent map derivative through all accelerator elements. The tangent map and its derivative are nonlinear generalizations of Brown's first- and second-order matrices (K. Brown, SLAC-75, Rev. 4 (1982), pp. 107-118), and they are valid even near the edges of the dynamic aperture, which may be beyond the radius of convergence for a truncated Taylor series. In order to avoid truncation of the map and its derivatives, the Hamiltonian is split into pieces for which the map can be obtained analytically. Yoshida's method (H. Yoshida, Phys. Lett. A 150 (1990), pp. 262-268) is then used to obtain a symplectic approximation to the map, while the tangent map and its derivative are appropriately composed at each step to obtain them with equal accuracy. We discuss our splitting of the quadrupole and combined-function dipole Hamiltonians and show that typically only a few steps are required for a high-energy accelerator.
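Yoshida's composition builds a fourth-order symplectic map from three applications of any second-order symmetric map S: S(c1 dt) S(c0 dt) S(c1 dt), with c1 = 1/(2 - 2^(1/3)) and c0 = -2^(1/3)/(2 - 2^(1/3)). A minimal sketch for a separable Hamiltonian H = p^2/2 + V(x) (illustrative only, not the MAPA code or its element splittings):

```python
import math

def kdk(x, p, dt, force):
    """Second-order symmetric (kick-drift-kick) base map for H = p^2/2 + V(x)."""
    p = p + 0.5 * dt * force(x)
    x = x + dt * p
    p = p + 0.5 * dt * force(x)
    return x, p

def yoshida4(x, p, dt, force):
    """Fourth-order symplectic step: compose S(c1*dt) S(c0*dt) S(c1*dt).
    Note the middle substep runs backwards in time (c0 < 0)."""
    cbrt2 = 2.0 ** (1.0 / 3.0)
    c1 = 1.0 / (2.0 - cbrt2)
    c0 = -cbrt2 / (2.0 - cbrt2)
    for c in (c1, c0, c1):
        x, p = kdk(x, p, c * dt, force)
    return x, p

# demo: harmonic oscillator, exact solution x(t) = cos(t), p(t) = -sin(t)
x, p, dt = 1.0, 0.0, 0.01
for _ in range(628):
    x, p = yoshida4(x, p, dt, force=lambda x: -x)
```

Because each substep is itself symplectic, the composition is symplectic by construction, while the symmetric arrangement cancels the third-order error term.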
Structure of the low-lying positive parity states in the proton-neutron symplectic model
NASA Astrophysics Data System (ADS)
Ganev, H. G.
2018-05-01
The proton-neutron symplectic model with Sp(12, R) dynamical symmetry is applied for the simultaneous description of the microscopic structure of the low-lying states of the ground-state, γ and β bands in ¹⁶⁶Er. For this purpose, the model Hamiltonian is diagonalized in the space of stretched states by exploiting the SU_p(3) ⊗ SU_n(3) symmetry-adapted basis. The theoretical predictions are compared with experiment and some other microscopic collective models, like the one-component Sp(6, R) symplectic and pseudo-SU(3) models. A good description of the energy levels of the three bands under consideration, as well as of the enhanced intraband B(E2) transition strengths between the states of the ground-state and γ bands, is obtained without the use of effective charges. The results show the presence of a good SU(3) dynamical symmetry. It is also shown that, in contrast to the Sp(6, R) case, the lowest excited bands, e.g., the β and γ bands, naturally appear together with the ground-state band within a single Sp(12, R) irreducible representation.
Boundaries, mirror symmetry, and symplectic duality in 3d N = 4 gauge theory
Bullimore, Mathew; Dimofte, Tudor; Gaiotto, Davide; ...
2016-10-20
We introduce several families of N = (2, 2) UV boundary conditions in 3d N = 4 gauge theories and study their IR images in sigma-models to the Higgs and Coulomb branches. In the presence of Omega deformations, a UV boundary condition defines a pair of modules for quantized algebras of chiral Higgs- and Coulomb-branch operators, respectively, whose structure we derive. In the case of abelian theories, we use the formalism of hyperplane arrangements to make our constructions very explicit, and construct a half-BPS interface that implements the action of 3d mirror symmetry on gauge theories and boundary conditions. Finally, by studying two-dimensional compactifications of 3d N = 4 gauge theories and their boundary conditions, we propose a physical origin for symplectic duality - an equivalence of categories of modules associated to families of Higgs and Coulomb branches that has recently appeared in the mathematics literature, and generalizes classic results on Koszul duality in geometric representation theory. We make several predictions about the structure of symplectic duality, and identify Koszul duality as a special case of wall crossing.
Explicit symplectic orbit and spin tracking method for electric storage ring
Hwang, Kilean; Lee, S. Y.
2016-08-18
We develop a symplectic charged particle tracking method for phase space coordinates and polarization in all-electric storage rings. Near the magic energy, the spin precession tune is proportional to the fractional momentum deviation δ_m from the magic energy, and the amplitude of the radial and longitudinal spin precession is proportional to η/δ_m, where η is the electric dipole moment for an initially vertically polarized beam. As a result, the method can be used to extract the electric dipole moment of a charged particle by employing narrow band frequency analysis of polarization around the magic energy.
Symplectic no-core shell-model approach to intermediate-mass nuclei
NASA Astrophysics Data System (ADS)
Tobin, G. K.; Ferriss, M. C.; Launey, K. D.; Dytrych, T.; Draayer, J. P.; Dreyfuss, A. C.; Bahri, C.
2014-03-01
We present a microscopic description of nuclei in the intermediate-mass region, including the proximity to the proton drip line, based on a no-core shell model with a schematic many-nucleon long-range interaction with no parameter adjustments. The outcome confirms the essential role played by the symplectic symmetry to inform the interaction and the winnowing of shell-model spaces. We show that it is imperative that model spaces be expanded well beyond the current limits up through 15 major shells to accommodate particle excitations, which appear critical to highly deformed spatial structures and the convergence of associated observables.
Dynamical Chaos in the Wisdom-Holman Integrator: Origins and Solutions
NASA Technical Reports Server (NTRS)
Rauch, Kevin P.; Holman, Matthew
1999-01-01
We examine the nonlinear stability of the Wisdom-Holman (WH) symplectic mapping applied to the integration of perturbed, highly eccentric (e ≈ 0.9) two-body orbits. We find that the method is unstable and introduces artificial chaos into the computed trajectories for this class of problems, unless the step size chosen is small enough that periapse is always resolved, in which case the method is generically stable. This 'radial orbit instability' persists even for weakly perturbed systems. Using the Stark problem as a fiducial test case, we investigate the dynamical origin of this instability and argue that the numerical chaos results from the overlap of step-size resonances; interestingly, for the Stark problem many of these resonances appear to be absolutely stable. We similarly examine the robustness of several alternative integration methods: a time-regularized version of the WH mapping suggested by Mikkola; the potential-splitting (PS) method of Duncan, Levison & Lee; and two original methods incorporating approximations based on Stark motion instead of Keplerian motion. The two-fixed-point problem and a related, more general problem are used to conduct a comparative test of the various methods for several types of motion. Among the algorithms tested, the time-transformed WH mapping is clearly the most efficient and stable method of integrating eccentric, nearly Keplerian orbits in the absence of close encounters. For test particles subject to both high eccentricities and very close encounters, we find an enhanced version of the PS method - incorporating time regularization, force-center switching, and an improved kernel function - to be both economical and highly versatile. We conclude that Stark-based methods are of marginal utility in N-body type integrations. Additional implications for the symplectic integration of N-body systems are discussed.
Ring polymer dynamics in curved spaces
NASA Astrophysics Data System (ADS)
Wolf, S.; Curotto, E.
2012-07-01
We formulate an extension of the ring polymer dynamics approach to curved spaces using stereographic projection coordinates. We test the theory by simulating the particle in a ring, T^1, mapped by a stereographic projection using three potentials. Two of these are quadratic, and one is a nonconfining sinusoidal model. We propose a new class of algorithms for the integration of the ring polymer Hamilton equations in curved spaces. These are designed to improve the energy conservation of symplectic integrators based on the split operator approach. For manifolds, the position-position autocorrelation function can be formulated in numerous ways. We find that the position-position autocorrelation function computed from configurations in the Euclidean space R^2 that contains T^1 as a submanifold has the best statistical properties. The agreement with exact results obtained with vector space methods is excellent for all three potentials, for all values of time in the interval simulated, and for a relatively broad range of temperatures.
Mathematics of Quantization and Quantum Fields
NASA Astrophysics Data System (ADS)
Dereziński, Jan; Gérard, Christian
2013-03-01
Preface; 1. Vector spaces; 2. Operators in Hilbert spaces; 3. Tensor algebras; 4. Analysis in L2(Rd); 5. Measures; 6. Algebras; 7. Anti-symmetric calculus; 8. Canonical commutation relations; 9. CCR on Fock spaces; 10. Symplectic invariance of CCR in finite dimensions; 11. Symplectic invariance of the CCR on Fock spaces; 12. Canonical anti-commutation relations; 13. CAR on Fock spaces; 14. Orthogonal invariance of CAR algebras; 15. Clifford relations; 16. Orthogonal invariance of the CAR on Fock spaces; 17. Quasi-free states; 18. Dynamics of quantum fields; 19. Quantum fields on space-time; 20. Diagrammatics; 21. Euclidean approach for bosons; 22. Interacting bosonic fields; Subject index; Symbols index.
Invariant measures on multimode quantum Gaussian states
NASA Astrophysics Data System (ADS)
Lupo, C.; Mancini, S.; De Pasquale, A.; Facchi, P.; Florio, G.; Pascazio, S.
2012-12-01
We derive the invariant measure on the manifold of multimode quantum Gaussian states, induced by the Haar measure on the group of Gaussian unitary transformations. To this end, by introducing a bipartition of the system in two disjoint subsystems, we use a parameterization highlighting the role of nonlocal degrees of freedom—the symplectic eigenvalues—which characterize quantum entanglement across the given bipartition. A finite measure is then obtained by imposing a physically motivated energy constraint. By averaging over the local degrees of freedom we finally derive the invariant distribution of the symplectic eigenvalues in some cases of particular interest for applications in quantum optics and quantum information.
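The symplectic eigenvalues referred to above are, by Williamson's theorem, computable as the moduli of the ordinary eigenvalues of iΩσ, where σ is the covariance matrix and Ω the symplectic form. A small sketch using (x1, p1, x2, p2, ...) mode ordering (an illustrative convention, not taken from the paper):

```python
import numpy as np

def symplectic_eigenvalues(sigma):
    """Symplectic (Williamson) eigenvalues of a 2n x 2n covariance matrix:
    the moduli of the eigenvalues of i*Omega*sigma, which come in +/- pairs."""
    n = sigma.shape[0] // 2
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    omega = np.kron(np.eye(n), J)              # (x1, p1, x2, p2, ...) ordering
    ev = np.abs(np.linalg.eigvals(1j * omega @ sigma))
    return np.sort(ev)[::2]                    # drop the duplicate of each pair

# two-mode thermal state: every symplectic eigenvalue equals the occupation
nu_thermal = symplectic_eigenvalues(2.5 * np.eye(4))
# pure single-mode squeezed state: symplectic eigenvalue 1
nu_squeezed = symplectic_eigenvalues(np.diag([np.exp(1.4), np.exp(-1.4)]))
```

The squeezed-state example shows why these are the "nonlocal" invariants: squeezing changes the ordinary eigenvalues of σ but leaves the symplectic eigenvalue at its vacuum value.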
Two Virasoro symmetries in stringy warped AdS 3
Compere, Geoffrey; Guica, Monica; Rodriguez, Maria J.
2014-12-02
We study three-dimensional consistent truncations of type IIB supergravity which admit warped AdS 3 solutions. These theories contain subsectors that have no bulk dynamics. We show that the symplectic form for these theories, when restricted to the non-dynamical subsectors, equals the symplectic form for pure Einstein gravity in AdS 3. Consequently, for each consistent choice of boundary conditions in AdS 3, we can define a consistent phase space in warped AdS 3 with identical conserved charges. This way, we easily obtain a Virasoro × Virasoro asymptotic symmetry algebra in warped AdS 3; two different types of Virasoro × Kač-Moody symmetries are also consistent alternatives. Next, we study the phase space of these theories when propagating modes are included. We show that, as long as one can define a conserved symplectic form without introducing instabilities, the Virasoro × Virasoro asymptotic symmetries can be extended to the entire (linearised) phase space. In conclusion, this implies that, at least at semi-classical level, consistent theories of gravity in warped AdS 3 are described by a two-dimensional conformal field theory, as long as stability is not an issue.
The canonical Lagrangian approach to three-space general relativity
NASA Astrophysics Data System (ADS)
Shyam, Vasudev; Venkatesh, Madhavan
2013-07-01
We study the action for the three-space formalism of general relativity, better known as the Barbour-Foster-Ó Murchadha action, which is a square-root Baierlein-Sharp-Wheeler action. In particular, we explore the (pre)symplectic structure by pulling it back via a Legendre map to the tangent bundle of the configuration space of this action. With it we attain the canonical Lagrangian vector field which generates the gauge transformations (3-diffeomorphisms) and the true physical evolution of the system. This vector field encapsulates all the dynamics of the system. We also discuss briefly the observables and perennials for this theory. We then present a symplectic reduction of the constrained phase space.
Symmetry and conservation laws in semiclassical wave packet dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohsawa, Tomoki, E-mail: tomoki@utdallas.edu
2015-03-15
We formulate symmetries in semiclassical Gaussian wave packet dynamics and find the corresponding conserved quantities, particularly the semiclassical angular momentum, via Noether's theorem. We consider two slightly different formulations of Gaussian wave packet dynamics; one is based on earlier works of Heller and Hagedorn and the other based on the symplectic-geometric approach by Lubich and others. In either case, we reveal the symplectic and Hamiltonian nature of the dynamics and formulate natural symmetry group actions in the setting to derive the corresponding conserved quantities (momentum maps). The semiclassical angular momentum inherits the essential properties of the classical angular momentum as well as naturally corresponds to the quantum picture.
Localization in a quantum spin Hall system.
Onoda, Masaru; Avishai, Yshai; Nagaosa, Naoto
2007-02-16
The localization problem of electronic states in a two-dimensional quantum spin Hall system (that is, a symplectic ensemble with topological term) is studied by the transfer matrix method. The phase diagram in the plane of energy and disorder strength is exposed, and demonstrates "levitation" and "pair annihilation" of the domains of extended states analogous to that of the integer quantum Hall system. The critical exponent ν for the divergence of the localization length is estimated as ν ≅ 1.6, which is distinct from both exponents pertaining to the conventional symplectic and the unitary quantum Hall systems. Our analysis strongly suggests a different universality class related to the topology of the pertinent system.
Symplectic analysis of vertical random vibration for coupled vehicle track systems
NASA Astrophysics Data System (ADS)
Lu, F.; Kennedy, D.; Williams, F. W.; Lin, J. H.
2008-10-01
A computational model for random vibration analysis of vehicle-track systems is proposed and solutions use the pseudo excitation method (PEM) and the symplectic method. The vehicle is modelled as a mass, spring and damping system with 10 degrees of freedom (dofs) which consist of vertical and pitching motion for the vehicle body and its two bogies and vertical motion for the four wheelsets. The track is treated as an infinite Bernoulli-Euler beam connected to sleepers and hence to ballast and is regarded as a periodic structure. Linear springs couple the vehicle and the track. Hence, the coupled vehicle-track system has only 26 dofs. A fixed excitation model is used, i.e. the vehicle does not move along the track but instead the track irregularity profile moves backwards at the vehicle velocity. This irregularity is assumed to be a stationary random process. Random vibration theory is used to obtain the response power spectral densities (PSDs), by using PEM to transform this random multiexcitation problem into a deterministic harmonic excitation one and then applying symplectic solution methodology. Numerical results for an example include verification of the proposed method by comparison with finite element method (FEM) results; comparison between the present model and the traditional rigid track model; and discussion of the influences of track damping and vehicle velocity.
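The PEM step can be illustrated on a single-degree-of-freedom oscillator: replace the random input of PSD S_xx(ω) by the deterministic harmonic pseudo excitation sqrt(S_xx) e^(iωt), solve the harmonic problem, and read off the response PSD as the squared modulus of the pseudo response. A toy SDOF sketch, not the 26-dof vehicle-track model; m, c, k are arbitrary:

```python
import numpy as np

def pem_response_psd(omega, S_xx, m=1.0, c=0.1, k=1.0):
    """Pseudo excitation method for m*y'' + c*y' + k*y = x(t):
    feed the deterministic pseudo excitation sqrt(S_xx)*exp(i*w*t),
    solve the harmonic problem, and square the pseudo response."""
    x_tilde = np.sqrt(S_xx)                          # pseudo-excitation amplitude
    H = 1.0 / (k - m * omega**2 + 1j * c * omega)    # harmonic transfer function
    y_tilde = H * x_tilde                            # deterministic response
    return np.abs(y_tilde) ** 2                      # response PSD = |H|^2 * S_xx

omega = np.linspace(0.0, 3.0, 301)
S_yy = pem_response_psd(omega, S_xx=1.0)             # white-noise input
```

For multi-dof systems the same trick applies vector-wise: the random multiexcitation problem becomes a set of deterministic harmonic solves, one per frequency, and cross-PSDs are products of pseudo responses.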
Efficient adaptive pseudo-symplectic numerical integration techniques for Landau-Lifshitz dynamics
NASA Astrophysics Data System (ADS)
d'Aquino, M.; Capuano, F.; Coppola, G.; Serpico, C.; Mayergoyz, I. D.
2018-05-01
Numerical time integration schemes for Landau-Lifshitz magnetization dynamics are considered. Such dynamics preserves the magnetization amplitude and, in the absence of dissipation, also implies the conservation of the free energy. This property is generally lost when time discretization is performed for the numerical solution. In this work, explicit numerical schemes based on Runge-Kutta methods are introduced. The schemes are termed pseudo-symplectic in that they are accurate to order p, but preserve magnetization amplitude and free energy to order q > p. An effective strategy for adaptive time-stepping control is discussed for schemes of this class. Numerical tests against analytical solutions for the simulation of fast precessional dynamics are performed in order to point out the effectiveness of the proposed methods.
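The conservation property at stake can be seen on the undamped precession equation dm/dt = -m × h: a plain explicit Euler step lets |m| drift, while a geometric step that rotates m about h preserves it exactly. (The rotation step below is a standard illustration of amplitude preservation, not one of the pseudo-symplectic Runge-Kutta schemes proposed in the paper.)

```python
import numpy as np

def euler_step(m, h, dt):
    """Explicit Euler for the precession equation dm/dt = -m x h = h x m.
    The magnetization amplitude |m| grows by O(dt^2) each step."""
    return m + dt * np.cross(h, m)

def rotation_step(m, h, dt):
    """Exact precession: rotate m about h by angle |h|*dt (Rodrigues' formula).
    Preserves |m| to machine precision."""
    k = h / np.linalg.norm(h)
    th = np.linalg.norm(h) * dt
    return (m * np.cos(th) + np.cross(k, m) * np.sin(th)
            + k * np.dot(k, m) * (1.0 - np.cos(th)))

h = np.array([0.0, 0.0, 1.0])        # constant effective field
m_euler = np.array([1.0, 0.0, 0.0])
m_rot = m_euler.copy()
for _ in range(1000):
    m_euler = euler_step(m_euler, h, 0.01)
    m_rot = rotation_step(m_rot, h, 0.01)
```

Pseudo-symplectic Runge-Kutta schemes sit between these extremes: they remain explicit like Euler, but push the amplitude and energy error to a higher order q than the accuracy order p.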
Betti numbers of holomorphic symplectic quotients via arithmetic Fourier transform.
Hausel, Tamás
2006-04-18
A Fourier transform technique is introduced for counting the number of solutions of holomorphic moment map equations over a finite field. This technique in turn gives information on Betti numbers of holomorphic symplectic quotients. As a consequence, simple unified proofs are obtained for formulas of Poincaré polynomials of toric hyperkähler varieties (recovering results of Bielawski-Dancer and Hausel-Sturmfels), Poincaré polynomials of Hilbert schemes of points and twisted Atiyah-Drinfeld-Hitchin-Manin (ADHM) spaces of instantons on C2 (recovering results of Nakajima-Yoshioka), and Poincaré polynomials of all Nakajima quiver varieties. As an application, a proof of a conjecture of Kac on the number of absolutely indecomposable representations of a quiver is announced.
NASA Astrophysics Data System (ADS)
Goto, Shin-itiro; Umeno, Ken
2018-03-01
Maps on a parameter space for expressing distribution functions are exactly derived from the Perron-Frobenius equations for a generalized Boole transform family. Here the generalized Boole transform family is a one-parameter family of maps defined on a subset of the real line, whose invariant probability distribution function is the Cauchy distribution with certain parameters. With this reduction, some relations between the statistical picture and the orbital one are shown. From the viewpoint of information geometry, the parameter space can be identified with a statistical manifold, and it is then shown that the derived maps can be characterized. Also, with a symplectic structure induced from the statistical structure, symplectic and information geometric aspects of the derived maps are discussed.
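For the rescaled Boole transform T(x) = (x - 1/x)/2, a known representative of this family, invariance of the standard Cauchy density can be checked directly from the Perron-Frobenius equation Σ_{T(y)=x} p(y)/|T'(y)| = p(x), using the two preimages y = x ± sqrt(x^2 + 1). This is a sketch for that particular member only, with the family parameter fixed:

```python
import numpy as np

def boole(x):
    """Rescaled Boole transform; the standard Cauchy density is invariant."""
    return 0.5 * (x - 1.0 / x)

def cauchy(x):
    """Standard Cauchy probability density."""
    return 1.0 / (np.pi * (1.0 + x**2))

def pf_cauchy(x):
    """Perron-Frobenius operator applied to the Cauchy density at x:
    sum p(y)/|T'(y)| over the two preimages y = x +/- sqrt(x^2 + 1)."""
    s = np.sqrt(x**2 + 1.0)
    total = 0.0
    for y in (x + s, x - s):
        total += cauchy(y) / (0.5 * (1.0 + 1.0 / y**2))  # T'(y) > 0 on each branch
    return total

xs = np.linspace(-3.0, 3.0, 13)
lhs = np.array([pf_cauchy(x) for x in xs])   # should reproduce cauchy(xs)
```

At x = 0, for instance, the preimages are ±1 with T'(±1) = 1, and 2·p(1) = 1/π = p(0), consistent with invariance.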
JANUS: a bit-wise reversible integrator for N-body dynamics
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2018-01-01
Hamiltonian systems such as the gravitational N-body problem have time-reversal symmetry. However, all numerical N-body integration schemes, including symplectic ones, respect this property only approximately. In this paper, we present the new N-body integrator JANUS, for which we achieve exact time-reversal symmetry by combining integer and floating point arithmetic. JANUS is explicit, formally symplectic and satisfies Liouville's theorem exactly. Its order is even and can be adjusted between two and ten. We discuss the implementation of JANUS and present tests of its accuracy and speed by performing and analysing long-term integrations of the Solar system. We show that JANUS is fast and accurate enough to tackle a broad class of dynamical problems. We also discuss the practical and philosophical implications of running exactly time-reversible simulations.
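The key mechanism, exact bit-wise reversibility from integer arithmetic, can be sketched with a fixed-point Stoermer (leapfrog) recurrence in which the force is rounded to the integer grid; swapping the pair of states and running the same map retraces the trajectory exactly. This is a toy harmonic-oscillator sketch, not the JANUS implementation; SCALE, DT and the step count are arbitrary:

```python
SCALE = 10**12          # fixed-point scale: positions live on an integer grid
DT = 0.01

def accel_int(xi):
    """Harmonic force -x, evaluated in floats, then rounded back to the grid.
    The rounding makes each step a deterministic, exact map on the integers."""
    return round(-(xi / SCALE) * DT * DT * SCALE)

def stormer(x_prev, x_curr, n):
    """Stoermer/leapfrog recurrence x_{k+1} = 2 x_k - x_{k-1} + a(x_k) dt^2,
    carried out entirely in (arbitrary-precision) integer arithmetic."""
    for _ in range(n):
        x_prev, x_curr = x_curr, 2 * x_curr - x_prev + accel_int(x_curr)
    return x_prev, x_curr

x0, x1 = SCALE, SCALE - 123456789        # two integer initial positions
a, b = stormer(x0, x1, 1000)             # forward 1000 steps
c, d = stormer(b, a, 1000)               # swap the pair: exact time reversal
```

Because the update x_{k+1} = 2x_k - x_{k-1} + A(x_k) involves only integers, solving it for x_{k-1} gives exactly the same map with the pair swapped; no floating-point round-off can break the symmetry.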
Symplectic maps and chromatic optics in particle accelerators
Cai, Yunhai
2015-07-06
Here, we have applied the nonlinear map method to comprehensively characterize the chromatic optics in particle accelerators. Our approach is built on the foundation of symplectic transfer maps of magnetic elements. The chromatic lattice parameters can be transported from one element to another by the maps. We also introduce a Jacobian operator that provides an intrinsic linkage between the maps and the matrix with parameter dependence. The link allows us to directly apply the formulation of the linear optics to compute the chromatic lattice parameters. As an illustration, we analyze an alternating-gradient cell with nonlinear sextupoles, octupoles, and decapoles and derive analytically their settings for the local chromatic compensation. Finally, the cell becomes nearly perfect up to third order in the momentum deviation.
Dirac structures in nonequilibrium thermodynamics
NASA Astrophysics Data System (ADS)
Gay-Balmaz, François; Yoshimura, Hiroaki
2018-01-01
Dirac structures are geometric objects that generalize both Poisson structures and presymplectic structures on manifolds. They naturally appear in the formulation of constrained mechanical systems. In this paper, we show that the evolution equations for nonequilibrium thermodynamics admit an intrinsic formulation in terms of Dirac structures, in both the Lagrangian and the Hamiltonian settings. In the absence of irreversible processes, these Dirac structures reduce to canonical Dirac structures associated with canonical symplectic forms on phase spaces. Our geometric formulation of nonequilibrium thermodynamics thus consistently extends the geometric formulation of mechanics, to which it reduces in the absence of irreversible processes. The Dirac structures are associated with the variational formulation of nonequilibrium thermodynamics developed in the work of Gay-Balmaz and Yoshimura, J. Geom. Phys. 111, 169-193 (2017a), and are induced from a nonlinear nonholonomic constraint given by the expression of the entropy production of the system.
Extended Riemannian geometry II: local heterotic double field theory
NASA Astrophysics Data System (ADS)
Deser, Andreas; Heller, Marc Andre; Sämann, Christian
2018-04-01
We continue our exploration of local Double Field Theory (DFT) in terms of symplectic graded manifolds carrying compatible derivations and study the case of heterotic DFT. We start by developing in detail the differential graded manifold that captures heterotic Generalized Geometry, which leads to new observations on the generalized metric and its twists. We then give a symplectic pre-NQ-manifold that captures the symmetries and the geometry of local heterotic DFT. We derive a weakened form of the section condition, which arises algebraically from consistency of the symmetry Lie 2-algebra and its action on extended tensors. We also give appropriate notions of twists, which are required for global formulations, and of the torsion and Riemann tensors. Finally, we show how the observed α'-corrections are interpreted naturally in our framework.
Relational symplectic groupoid quantization for constant poisson structures
NASA Astrophysics Data System (ADS)
Cattaneo, Alberto S.; Moshayedi, Nima; Wernli, Konstantin
2017-09-01
As a detailed application of the BV-BFV formalism for the quantization of field theories on manifolds with boundary, this note describes a quantization of the relational symplectic groupoid for a constant Poisson structure. The presence of mixed boundary conditions and the globalization of results are also addressed. In particular, the paper includes an extension to space-times with boundary of some formal geometry considerations in the BV-BFV formalism, and specifically introduces into the BV-BFV framework a "differential" version of the classical and quantum master equations. The quantization constructed in this paper induces Kontsevich's deformation quantization on the underlying Poisson manifold, i.e., the Moyal product, which is known in full detail. This allows focusing on the BV-BFV technology and testing it. For the inexperienced reader, this is also a practical and reasonably simple way to learn it.
Supersymmetric symplectic quantum mechanics
NASA Astrophysics Data System (ADS)
de Menezes, Miralvo B.; Fernandes, M. C. B.; Martins, Maria das Graças R.; Santana, A. E.; Vianna, J. D. M.
2018-02-01
Symplectic Quantum Mechanics (SQM) considers a non-commutative algebra of functions on a phase space Γ and an associated Hilbert space HΓ to construct a unitary representation for the Galilei group. From this unitary representation the Schrödinger equation is rewritten in phase space variables and the Wigner function can be derived without the use of the Liouville-von Neumann equation. In this article we extend the methods of supersymmetric quantum mechanics (SUSYQM) to SQM. With the purpose of applications in quantum systems, the factorization method of the quantum mechanical formalism is then set within supersymmetric SQM. A hierarchy of simpler Hamiltonians is generated, leading to new computational tools for solving the eigenvalue problem in SQM. We illustrate the results by computing the states and spectra of the problem of a charged particle in a homogeneous magnetic field as well as the corresponding Wigner function.
Quantum gravity from noncommutative spacetime
NASA Astrophysics Data System (ADS)
Lee, Jungjai; Yang, Hyun Seok
2014-12-01
We review a novel and authentic way to quantize gravity. This novel approach is based on the fact that Einstein gravity can be formulated in terms of a symplectic geometry rather than a Riemannian geometry in the context of emergent gravity. An essential step for emergent gravity is to realize the equivalence principle, the most important property in the theory of gravity (general relativity), from U(1) gauge theory on a symplectic or Poisson manifold. Through the realization of the equivalence principle, which is an intrinsic property in symplectic geometry known as the Darboux theorem or the Moser lemma, one can understand how diffeomorphism symmetry arises from noncommutative U(1) gauge theory; thus, gravity can emerge from the noncommutative electromagnetism, which is also an interacting theory. As a consequence, a background-independent quantum gravity in which the prior existence of any spacetime structure is not a priori assumed but is defined by using the fundamental ingredients in quantum gravity theory can be formulated. This scheme for quantum gravity can be used to resolve many notorious problems in theoretical physics, such as the cosmological constant problem, to understand the nature of dark energy, and to explain why gravity is so weak compared to other forces. In particular, it leads to a remarkable picture of what matter is. A matter field, such as leptons and quarks, simply arises as a stable localized geometry, which is a topological object in the defining algebra (noncommutative ★-algebra) of quantum gravity.
Su, Hongling; Li, Shengtai
2016-02-03
In this study, we propose two new energy/dissipation-preserving Birkhoffian multi-symplectic methods (Birkhoffian and Birkhoffian box) for Maxwell's equations with dissipation terms. After investigating the non-autonomous and autonomous Birkhoffian formalism for Maxwell's equations with dissipation terms, we first apply a novel generating functional theory to the non-autonomous Birkhoffian formalism to propose our Birkhoffian scheme, and then implement a central box method to the autonomous Birkhoffian formalism to derive the Birkhoffian box scheme. We have obtained four formal local conservation laws and three formal energy global conservation laws. We have also proved that both of our derived schemes preserve the discrete version of the global/local conservation laws. Furthermore, the stability, dissipation and dispersion relations are also investigated for the schemes. Theoretical analysis shows that the schemes are unconditionally stable, dissipation-preserving for Maxwell's equations in a perfectly matched layer (PML) medium and have second order accuracy in both time and space. Numerical experiments for problems with exact theoretical results are given to demonstrate that the Birkhoffian multi-symplectic schemes are much more accurate in preserving energy than both the exponential finite-difference time-domain (FDTD) method and traditional Hamiltonian scheme. Finally, we also solve the electromagnetic pulse (EMP) propagation problem and the numerical results show that the Birkhoffian scheme recovers the magnitude of the current source and reaction history very well even after long time propagation.
K-decompositions and 3d gauge theories
NASA Astrophysics Data System (ADS)
Dimofte, Tudor; Gabella, Maxime; Goncharov, Alexander B.
2016-11-01
This paper combines several new constructions in mathematics and physics. Mathematically, we study framed flat PGL(K, ℂ)-connections on a large class of 3-manifolds M with boundary. We introduce a moduli space ℒ_K(M) of framed flat connections on the boundary ∂M that extend to M. Our goal is to understand an open part of ℒ_K(M) as a Lagrangian subvariety in the symplectic moduli space X_K^{un}(∂M) of framed flat connections on the boundary, and more so as a "K2-Lagrangian," meaning that the K2-avatar of the symplectic form restricts to zero. We construct an open part of ℒ_K(M) from elementary data associated with the hypersimplicial K-decomposition of an ideal triangulation of M, in a way that generalizes (and combines) both Thurston's gluing equations in 3d hyperbolic geometry and the cluster coordinates for framed flat PGL(K, ℂ)-connections on surfaces. By using a canonical map from the complex of configurations of decorated flags to the Bloch complex, we prove that any generic component of ℒ_K(M) is K2-isotropic as long as ∂M satisfies certain topological constraints (theorem 4.2). In some cases this easily implies that ℒ_K(M) is K2-Lagrangian. For general M, we extend a classic result of Neumann and Zagier on symplectic properties of PGL(2) gluing equations to reduce the K2-Lagrangian property to a combinatorial statement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Hongling; Li, Shengtai
In this study, we propose two new energy/dissipation-preserving Birkhoffian multi-symplectic methods (Birkhoffian and Birkhoffian box) for Maxwell's equations with dissipation terms. After investigating the non-autonomous and autonomous Birkhoffian formalism for Maxwell's equations with dissipation terms, we first apply a novel generating functional theory to the non-autonomous Birkhoffian formalism to propose our Birkhoffian scheme, and then implement a central box method to the autonomous Birkhoffian formalism to derive the Birkhoffian box scheme. We have obtained four formal local conservation laws and three formal energy global conservation laws. We have also proved that both of our derived schemes preserve the discrete version of the global/local conservation laws. Furthermore, the stability, dissipation and dispersion relations are also investigated for the schemes. Theoretical analysis shows that the schemes are unconditionally stable, dissipation-preserving for Maxwell's equations in a perfectly matched layer (PML) medium and have second order accuracy in both time and space. Numerical experiments for problems with exact theoretical results are given to demonstrate that the Birkhoffian multi-symplectic schemes are much more accurate in preserving energy than both the exponential finite-difference time-domain (FDTD) method and traditional Hamiltonian scheme. Finally, we also solve the electromagnetic pulse (EMP) propagation problem and the numerical results show that the Birkhoffian scheme recovers the magnitude of the current source and reaction history very well even after long time propagation.
NASA Astrophysics Data System (ADS)
Dwivedi, Vatsal
This thesis presents some work on two quite disparate kinds of dynamical systems described by Hamiltonian dynamics. The first part describes a computation of gauge anomalies and their macroscopic effects in a semiclassical picture. The geometric (symplectic) formulation of classical mechanics is used to describe the dynamics of Weyl fermions in even spacetime dimensions, the only quantum input to the symplectic form being the Berry curvature that encodes the spin-momentum locking. The (semi-)classical equations of motion are used in a kinetic theory setup to compute the gauge and singlet currents, whose conservation laws reproduce the nonabelian gauge and singlet anomalies. Anomalous contributions to the hydrodynamic currents for a gas of Weyl fermions at a finite temperature and chemical potential are also calculated, and are in agreement with similar results in literature which were obtained using thermodynamic and/or quantum field theoretical arguments. The second part describes a generalized transfer matrix formalism for noninteracting tight-binding models. The formalism is used to study the bulk and edge spectra, both of which are encoded in the spectrum of the transfer matrices, for some of the common tight-binding models for noninteracting electronic topological phases of matter. The topological invariants associated with the boundary states are interpreted as winding numbers for windings around noncontractible loops on a Riemann sheet constructed using the algebraic structure of the transfer matrices, as well as with a Maslov index on a symplectic group manifold, which is the space of transfer matrices.
Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling
NASA Astrophysics Data System (ADS)
Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.
2018-02-01
A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
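A single timestep of the kind compared here can be sketched in a few lines. The following is a generic symplectic-Euler update for a one-dimensional Langevin-type dispersion model; the relaxation time `tau` and diffusion coefficient `sigma` are illustrative stand-ins, not the parameters of any operational model:

```python
import numpy as np

def symplectic_euler_step(x, u, tau, sigma, dt, rng):
    """One symplectic-Euler step for the Langevin-type SDE
        du = -(u / tau) dt + sigma dW,    dx = u dt.
    The velocity is updated first, and the *updated* velocity is used
    for the position update -- the ordering that distinguishes
    symplectic Euler from plain Euler-Maruyama."""
    u = u - (u / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal(u.shape)
    x = x + u * dt
    return x, u

# Propagate an ensemble of model particles.
rng = np.random.default_rng(42)
x = np.zeros(10_000)
u = np.zeros(10_000)
for _ in range(500):
    x, u = symplectic_euler_step(x, u, tau=1.0, sigma=0.5, dt=0.01, rng=rng)
```

For this Ornstein-Uhlenbeck velocity process, the ensemble velocity variance equilibrates near sigma**2 * tau / 2, which gives a quick sanity check on the scheme.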
Thengumpallil, Sheeba; Germond, Jean-François; Bourhis, Jean; Bochud, François; Moeckli, Raphaël
2016-06-01
To investigate the impact of Toshiba phase- and amplitude-sorting algorithms on the margin strategies for free-breathing lung radiotherapy treatments in the presence of breathing variations. 4D CT of a sphere inside a dynamic thorax phantom was acquired. The 4D CT was reconstructed according to the phase- and amplitude-sorting algorithms. The phantom was moved by reproducing amplitude, frequency, and a mix of amplitude and frequency variations. Artefact analysis was performed for Mid-Ventilation and ITV-based strategies on the images reconstructed by phase- and amplitude-sorting algorithms. The target volume deviation was assessed by comparing the target volume acquired during irregular motion to the volume acquired during regular motion. The amplitude-sorting algorithm shows reduced artefacts only for amplitude variations, while the phase-sorting algorithm shows reduced artefacts only for frequency variations. For mixed amplitude and frequency variations, both algorithms perform similarly. Most of the artefacts are blurring and incomplete structures. We found larger artefacts and volume differences for the Mid-Ventilation strategy than for the ITV strategy, resulting in a higher relative difference of the surface distortion value, which ranges from a minimum of 4.1% to a maximum of 14.6%. The amplitude-sorting algorithm is superior to the phase-sorting algorithm in reducing motion artefacts for amplitude variations, while the phase-sorting algorithm is superior for frequency variations. A proper choice of 4D CT sorting algorithm is important in order to reduce motion artefacts, especially if the Mid-Ventilation strategy is used. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Iterative Nonlocal Total Variation Regularization Method for Image Restoration
Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen
2013-01-01
In this paper, a Bregman-iteration-based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithms outperform some other regularization methods. PMID:23776560
NASA Astrophysics Data System (ADS)
Goodrich, Cyrena Anne; Harlow, George E.; Van Orman, James A.; Sutton, Stephen R.; Jercinovic, Michael J.; Mikouchi, Takashi
2014-06-01
Ureilites are ultramafic achondrites thought to be residues of partial melting on a carbon-rich asteroid. They show a trend of FeO-variation (olivine Fo from ∼74 to 95) that suggests variation in oxidation state. Whether this variation was established during high-temperature igneous processing on the ureilite parent body (UPB), or preserved from nebular precursors, is a subject of debate. The behavior of chromium in ureilites offers a way to assess redox conditions during their formation and address this issue, independent of Fo. We conducted a petrographic and mineral compositional study of occurrences of chromite (Cr-rich spinel) in ureilites, aimed at determining the origin of the chromite in each occurrence and using primary occurrences to constrain models of ureilite petrogenesis. Chromite was studied in LEW 88774 (Fo 74.2), NWA 766 (Fo 76.7), NWA 3109 (Fo 76.3), HaH 064 (Fo 77.5), LAP 03587 (Fo 74.9), CMS 04048 (Fo 76.4), LAP 02382 (Fo 78.6) and EET 96328 (Fo 85.2). Chromite occurs in LEW 88774 (∼5 vol.%), NWA 766 (<1 vol.%), NWA 3109 (<1 vol.%) and HaH 064 (<1 vol.%) as subhedral to anhedral grains comparable in size (∼30 μm to 1 mm) and/or textural setting to the major silicates (olivine and pyroxene[s]) in each rock, indicating that it is a primary phase. The most FeO-rich chromites in these samples (rare grain cores or chadocrysts in silicates) are the most primitive compositions preserved (fe# = 0.55-0.6; Cr# varying from 0.65 to 0.72 among samples). They record olivine-chromite equilibration temperatures of ∼1040-1050 °C, reflecting subsolidus Fe/Mg reequilibration during slow cooling from ∼1200 to 1300 °C. All other chromite in these samples is reduced. Three types of zones are observed. 
(1) Inclusion-free interior zones showing reduction of FeO (fe# ∼0.4 → 0.28); (2) Outer zones showing further reduction of FeO (fe# ∼0.28 → 0.15) and containing abundant laths of eskolaite-corundum (Cr2O3-Al2O3); (3) Outermost zones showing extreme reduction of both FeO (fe# <0.15) and Cr2O3 (Cr# as low as 0.2). The grains are surrounded by rims of Si-Al-rich glass, graphite, Fe, Cr-carbides ([Fe,Cr]3C and [Fe,Cr]7C3), Cr-rich sulfides (daubréelite and brezinaite) and Cr-rich symplectic bands on adjacent silicates. Chromite is inferred to have been reduced by graphite, forming eskolaite-corundum and carbides as byproducts, during impact excavation. This event involved initial elevation of T (to 1300-1400 °C), followed by rapid decompression and drop in T (to <700 °C) at 1-20 °C/h. The kinetics of reduction of chromite is consistent with this scenario. The reduction was facilitated by silicate melt surrounding the chromites, which was partly generated by shock-melting of pyroxenes. Symplectic bands, consisting of fine-scale intergrowths of Ca-pyroxene, chromite and glass, formed by reaction between the Cr-enriched melt and adjacent silicates. Early chromite also occurs in a melt inclusion in olivine in HaH 064 and in a metallic spherule in olivine in LAP 02382. LAP 03587 and CMS 04048 contain ⩽μm-sized chromite + pyroxene symplectic exsolutions in olivine, indicating high Cr valence in the primary olivine. EET 96328 contains a round grain of chromite that could be a late-crystallizing phase. Tiny chromite grains in melt inclusions in EET 96328 formed in late, closed-system reactions. For 7 of the 8 ureilites we conclude that the relatively oxidizing conditions evidenced by the presence of primary or early chromite pertain to the period of high-T igneous processing. The observation that such conditions are recorded almost exclusively in low-Fo samples supports the interpretation that the ureilite FeO-variation was established during igneous processing on the UPB.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koenig, Robert
We propose a generalization of the quantum entropy power inequality involving conditional entropies. For the special case of Gaussian states, we give a proof based on perturbation theory for symplectic spectra. We discuss some implications for entanglement-assisted classical communication over additive bosonic noise channels.
Diff-invariant kinetic terms in arbitrary dimensions
NASA Astrophysics Data System (ADS)
Barbero G., J. Fernando; Villaseñor, Eduardo J.
2002-06-01
We study the physical content of quadratic diff-invariant Lagrangians in arbitrary dimensions by using covariant symplectic techniques. This paper extends previous results in dimension four. We discuss the difference between the even and odd dimensional cases.
A Hamiltonian approach to Thermodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldiotti, M.C., E-mail: baldiotti@uel.br; Fresneda, R., E-mail: rodrigo.fresneda@ufabc.edu.br; Molina, C., E-mail: cmolina@usp.br
In the present work we develop a strictly Hamiltonian approach to Thermodynamics. A thermodynamic description based on symplectic geometry is introduced, where all thermodynamic processes can be described within the framework of Analytic Mechanics. Our proposal is constructed on top of a usual symplectic manifold, where phase space is even dimensional and one has well-defined Poisson brackets. The main idea is the introduction of an extended phase space where thermodynamic equations of state are realized as constraints. We are then able to apply the canonical transformation toolkit to thermodynamic problems. Throughout this development, Dirac's theory of constrained systems is extensively used. To illustrate the formalism, we consider paradigmatic examples, namely, the ideal, van der Waals and Clausius gases. - Highlights: • A strictly Hamiltonian approach to Thermodynamics is proposed. • Dirac's theory of constrained systems is extensively used. • Thermodynamic equations of state are realized as constraints. • Thermodynamic potentials are related by canonical transformations.
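For the ideal-gas example, the construction can be sketched schematically (the notation and sign conventions here are illustrative, not necessarily the paper's): work on an extended phase space with conjugate pairs (V, P) and (S, T), and impose the equation of state as a Dirac constraint,

```latex
\omega = dP \wedge dV + dT \wedge dS , \qquad
\phi(V, P, T) \equiv P V - N k_B T \approx 0 ,
```

so that physical thermodynamic processes are confined to the constraint surface φ ≈ 0 and can then be manipulated with canonical transformations.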
Meta-Symplectic Geometry of 3rd Order Monge-Ampère Equations and their Characteristics
NASA Astrophysics Data System (ADS)
Manno, Gianni; Moreno, Giovanni
2016-03-01
This paper is a natural companion of [Alekseevsky D.V., Alonso Blanco R., Manno G., Pugliese F., Ann. Inst. Fourier (Grenoble) 62 (2012), 497-524, arXiv:1003.5177], generalising its perspectives and results to the context of third-order (2D) Monge-Ampère equations, by using the so-called "meta-symplectic structure" associated with the 8D prolongation M^{(1)} of a 5D contact manifold M. We write down a geometric definition of a third-order Monge-Ampère equation in terms of a (class of) differential two-form on M^{(1)}. In particular, the equations corresponding to decomposable forms admit a simple description in terms of certain three-dimensional distributions, which are made from the characteristics of the original equations. We conclude the paper with a study of the intermediate integrals of these special Monge-Ampère equations, herewith called of Goursat type.
Natural differential operations on manifolds: an algebraic approach
NASA Astrophysics Data System (ADS)
Katsylo, P. I.; Timashev, D. A.
2008-10-01
Natural algebraic differential operations on geometric quantities on smooth manifolds are considered. A method for the investigation and classification of such operations is described, the method of IT-reduction. With it the investigation of natural operations reduces to the analysis of rational maps between k-jet spaces, which are equivariant with respect to certain algebraic groups. On the basis of the method of IT-reduction a finite generation theorem is proved: for tensor bundles 𝒱, 𝒲 → M all the natural differential operations D: Γ(𝒱) → Γ(𝒲) of degree at most d can be algebraically constructed from some finite set of such operations. Conceptual proofs of known results on the classification of natural linear operations on arbitrary and symplectic manifolds are presented. A non-existence theorem is proved for natural deformation quantizations on Poisson manifolds and symplectic manifolds. Bibliography: 21 titles.
Four-dimensional gravity as an almost-Poisson system
NASA Astrophysics Data System (ADS)
Ita, Eyo Eyo
2015-04-01
In this paper, we examine the phase space structure of a noncanonical formulation of four-dimensional gravity referred to as the Instanton representation of Plebanski gravity (IRPG). The typical Hamiltonian (symplectic) approach leads to an obstruction to the definition of a symplectic structure on the full phase space of the IRPG. We circumvent this obstruction, using the Lagrange equations of motion, to find the appropriate generalization of the Poisson bracket. It is shown that the IRPG does not support a Poisson bracket except on the vector constraint surface. Yet there exists a fundamental bilinear operation on its phase space which produces the correct equations of motion and induces the correct transformation properties of the basic fields. This bilinear operation is known as the almost-Poisson bracket, which fails to satisfy the Jacobi identity and in this case also the condition of antisymmetry. We place these results into the overall context of nonsymplectic systems.
Multipole Vortex Blobs (MVB): Symplectic Geometry and Dynamics.
Holm, Darryl D; Jacobs, Henry O
2017-01-01
Vortex blob methods are typically characterized by a regularization length scale, below which the dynamics are trivial for isolated blobs. In this article, we observe that the dynamics need not be trivial if one is willing to consider distributional derivatives of Dirac delta functionals as valid vorticity distributions. More specifically, a new singular vortex theory is presented for regularized Euler fluid equations of ideal incompressible flow in the plane. We determine the conditions under which such regularized Euler fluid equations may admit vorticity singularities which are stronger than delta functions, e.g., derivatives of delta functions. We also describe the symplectic geometry associated with these augmented vortex structures, and we characterize the dynamics as Hamiltonian. Applications to the design of numerical methods similar to vortex blob methods are also discussed. Such findings illuminate the rich dynamics which occur below the regularization length scale and enlighten our perspective on the potential for regularized fluid models to capture multiscale phenomena.
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that less time is required and fewer steps are needed and so enables fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
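The split-integration idea can be illustrated independently of the parallel implementation: propagate a fast harmonic mode analytically and apply the slow force as symplectic half-kicks. Here `omega` and `slow_force` are illustrative stand-ins for the high-frequency vibrational part and the remaining interactions; this is the idea behind SISM-type integrators, not the production SISM itself:

```python
import numpy as np

def sism_style_step(q, p, omega, slow_force, dt):
    """One step of a split scheme (unit mass): an exact phase-space
    rotation for the fast harmonic part, sandwiched between symplectic
    half-kicks of the slow force."""
    p = p + 0.5 * dt * slow_force(q)                  # slow half-kick
    c, s = np.cos(omega * dt), np.sin(omega * dt)     # analytic fast flow
    q, p = c * q + (s / omega) * p, -omega * s * q + c * p
    p = p + 0.5 * dt * slow_force(q)                  # slow half-kick
    return q, p

# With no slow force, the harmonic energy is conserved to round-off,
# even at a timestep that would destabilize a plain numeric scheme.
omega, dt = 10.0, 0.05
q, p = 1.0, 0.0
E0 = 0.5 * p ** 2 + 0.5 * omega ** 2 * q ** 2
for _ in range(1000):
    q, p = sism_style_step(q, p, omega, lambda q: 0.0, dt)
E1 = 0.5 * p ** 2 + 0.5 * omega ** 2 * q ** 2
```

Because the stiff part is solved exactly, the timestep is limited only by the slow forces, which is what permits the longer simulation steps described above.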
A braided monoidal category for free super-bosons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Runkel, Ingo, E-mail: ingo.runkel@uni-hamburg.de
The chiral conformal field theory of free super-bosons is generated by weight one currents whose mode algebra is the affinisation of an abelian Lie super-algebra h with non-degenerate super-symmetric pairing. The mode algebras of a single free boson and of a single pair of symplectic fermions arise for even|odd dimension 1|0 and 0|2 of h, respectively. In this paper, the representations of the untwisted mode algebra of free super-bosons are equipped with a tensor product, a braiding, and an associator. In the symplectic fermion case, i.e., if h is purely odd, the braided monoidal structure is extended to representations of the Z/2Z-twisted mode algebra. The tensor product is obtained by computing spaces of vertex operators. The braiding and associator are determined by explicit calculations from three- and four-point conformal blocks.
Surfing on the edge: chaos versus near-integrability in the system of Jovian planets
NASA Astrophysics Data System (ADS)
Hayes, Wayne B.
2008-05-01
We demonstrate that the system of Sun and Jovian planets, integrated for 200 Myr as an isolated five-body system using many sets of initial conditions all within the uncertainty bounds of their currently known positions, can display both chaos and near-integrability. The conclusion is consistent across four different integrators, including several comparisons against integrations utilizing quadruple precision. We demonstrate that the Wisdom-Holman symplectic map using simple symplectic correctors as implemented in MERCURY 6.2 gives a reliable characterization of the existence of chaos for a particular initial condition only with time-steps less than about 10 d, corresponding to about 400 steps per orbit. We also integrate the canonical DE405 initial condition out to 5 Gyr, and show that it has a Lyapunov time of 200-400 Myr, opening the remote possibility of accurate prediction of the Jovian planetary positions for 5 Gyr.
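The reliability of such long integrations rests on the bounded energy error of symplectic maps, which is easy to reproduce in miniature: a kick-drift-kick leapfrog on a Kepler orbit (units with GM = 1 and unit mass). This is a minimal stand-in for the Wisdom-Holman machinery, not the MERCURY implementation:

```python
import numpy as np

def kepler_accel(q, mu=1.0):
    # Two-body acceleration toward the origin.
    r = np.linalg.norm(q)
    return -mu * q / r ** 3

def leapfrog(q, p, dt, n):
    """Kick-drift-kick leapfrog: second-order and symplectic (unit mass)."""
    for _ in range(n):
        p = p + 0.5 * dt * kepler_accel(q)
        q = q + dt * p
        p = p + 0.5 * dt * kepler_accel(q)
    return q, p

q0, p0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # circular orbit
E0 = 0.5 * p0 @ p0 - 1.0 / np.linalg.norm(q0)          # = -0.5
q1, p1 = leapfrog(q0, p0, dt=0.01, n=20_000)           # ~30 orbits
E1 = 0.5 * p1 @ p1 - 1.0 / np.linalg.norm(q1)
```

The energy error oscillates at O(dt²) but does not drift secularly, which is the property that non-symplectic Runge-Kutta schemes lack.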
NASA Astrophysics Data System (ADS)
Ali, Halima; Punjabi, Alkesh; Boozer, Allen
2004-09-01
In our method of maps [Punjabi et al., Phys. Rev. Lett. 69, 3322 (1992), and Punjabi et al., J. Plasma Phys. 52, 91 (1994)], symplectic maps are used to calculate the trajectories of magnetic field lines in divertor tokamaks. Effects of the magnetic perturbations are calculated using the low MN map [Ali et al., Phys. Plasmas 11, 1908 (2004)] and the dipole map [Punjabi et al., Phys. Plasmas 10, 3992 (2003)]. The dipole map is used to calculate the effects of externally located current carrying coils on the trajectories of the field lines, the stochastic layer, the magnetic footprint, and the heat load distribution on the collector plates in divertor tokamaks [Punjabi et al., Phys. Plasmas 10, 3992 (2003)]. Symplectic maps are general, efficient, and preserve and respect the Hamiltonian nature of the dynamics. In this brief communication, a rigorous mathematical derivation of the dipole map is given.
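The derivation of the dipole map itself is the subject of the paper, but the character of such area-preserving field-line maps can be illustrated with the classic Chirikov standard map, a generic symplectic twist map (not the authors' construction):

```python
import numpy as np

def standard_map(theta, p, K, n=1):
    """Chirikov standard map: an area-preserving (symplectic) twist map.
    Larger K produces larger stochastic layers, loosely analogous to
    stronger magnetic perturbations in the field-line picture."""
    for _ in range(n):
        p = p + K * np.sin(theta)
        theta = (theta + p) % (2.0 * np.pi)
    return theta, p

# Numerical check of area preservation: det(Jacobian) of one step
# should equal 1 (central finite differences, no angle wrap here).
h, K = 1e-6, 0.9
t_p, p_p = standard_map(1.0 + h, 0.3, K)
t_m, p_m = standard_map(1.0 - h, 0.3, K)
t_q, p_q = standard_map(1.0, 0.3 + h, K)
t_r, p_r = standard_map(1.0, 0.3 - h, K)
det = ((t_p - t_m) * (p_q - p_r) - (t_q - t_r) * (p_p - p_m)) / (4 * h * h)
```

The unit Jacobian determinant is exactly the "preserve and respect the Hamiltonian nature" property referred to above: phase-space area is conserved by construction, for any K.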
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chuchu, E-mail: chenchuchu@lsec.cc.ac.cn; Hong, Jialin, E-mail: hjl@lsec.cc.ac.cn; Zhang, Liying, E-mail: lyzhang@lsec.cc.ac.cn
Stochastic Maxwell equations with additive noise are a system of stochastic Hamiltonian partial differential equations intrinsically, possessing the stochastic multi-symplectic conservation law. It is shown that the averaged energy increases linearly with respect to the evolution of time and the flow of stochastic Maxwell equations with additive noise preserves the divergence in the sense of expectation. Moreover, we propose three novel stochastic multi-symplectic methods to discretize stochastic Maxwell equations in order to investigate the preservation of these properties numerically. We make theoretical discussions and comparisons on all of the three methods to observe that all of them preserve the corresponding discrete version of the averaged divergence. Meanwhile, we obtain the corresponding dissipative property of the discrete averaged energy satisfied by each method. Especially, the evolution rates of the averaged energies for all of the three methods are derived which are in accordance with the continuous case. Numerical experiments are performed to verify our theoretical results.
K-decompositions and 3d gauge theories
Dimofte, Tudor; Gabella, Maxime; Goncharov, Alexander B.
2016-11-24
This paper combines several new constructions in mathematics and physics. Mathematically, we study framed flat PGL(K, ℂ)-connections on a large class of 3-manifolds M with boundary. We introduce a moduli space ℒ_K(M) of framed flat connections on the boundary ∂M that extend to M. Our goal is to understand an open part of ℒ_K(M) as a Lagrangian subvariety in the symplectic moduli space X_K^{un}(∂M) of framed flat connections on the boundary — and more so, as a "K2-Lagrangian," meaning that the K2-avatar of the symplectic form restricts to zero. We construct an open part of ℒ_K(M) from elementary data associated with the hypersimplicial K-decomposition of an ideal triangulation of M, in a way that generalizes (and combines) both Thurston's gluing equations in 3d hyperbolic geometry and the cluster coordinates for framed flat PGL(K, ℂ)-connections on surfaces. By using a canonical map from the complex of configurations of decorated flags to the Bloch complex, we prove that any generic component of ℒ_K(M) is K2-isotropic as long as ∂M satisfies certain topological constraints (theorem 4.2). In some cases this easily implies that ℒ_K(M) is K2-Lagrangian. For general M, we extend a classic result of Neumann and Zagier on symplectic properties of PGL(2) gluing equations to reduce the K2-Lagrangian property to a combinatorial statement. Physically, we translate the K-decomposition of an ideal triangulation of M and its symplectic properties to produce an explicit construction of 3d N = 2 superconformal field theories T_K[M] resulting (conjecturally) from the compactification of K M5-branes on M. This extends known constructions for K = 2. Just as for K = 2, the theories T_K[M] are described as IR fixed points of abelian Chern-Simons-matter theories. 
Changes of triangulation (2-3 moves) lead to abelian mirror symmetries that are all generated by the elementary duality between N_f = 1 SQED and the XYZ model. In the large K limit, we find evidence that the degrees of freedom of T_K[M] grow cubically in K.
Perturbative Quantum Gauge Theories on Manifolds with Boundary
NASA Astrophysics Data System (ADS)
Cattaneo, Alberto S.; Mnev, Pavel; Reshetikhin, Nicolai
2018-01-01
This paper introduces a general perturbative quantization scheme for gauge theories on manifolds with boundary, compatible with cutting and gluing, in the cohomological symplectic (BV-BFV) formalism. Explicit examples, like abelian BF theory and its perturbations, including nontopological ones, are presented.
Painlevé equations, elliptic integrals and elementary functions
NASA Astrophysics Data System (ADS)
Żołądek, Henryk; Filipuk, Galina
2015-02-01
The six Painlevé equations can be written in the Hamiltonian form, with time dependent Hamilton functions. We present a rather new approach to this result, leading to rational Hamilton functions. By a natural extension of the phase space one gets corresponding autonomous Hamiltonian systems with two degrees of freedom. We realize the Bäcklund transformations of the Painlevé equations as symplectic birational transformations in ℂ^4 and we interpret the cases with classical solutions as the cases of partial integrability of the extended Hamiltonian systems. We prove that the extended Hamiltonian systems do not have any additional algebraic first integral besides the known special cases of the third and fifth Painlevé equations. We also show that the original Painlevé equations admit the first integrals expressed in terms of the elementary functions only in the special cases mentioned above. In the proofs we use equations in variations with respect to a parameter and Liouville's theory of elementary functions.
A framework for estimating potential fluid flow from digital imagery
NASA Astrophysics Data System (ADS)
Luttman, Aaron; Bollt, Erik M.; Basnayake, Ranil; Kramer, Sean; Tufillaro, Nicholas B.
2013-09-01
Given image data of a fluid flow, the flow field, ⟨u,v⟩, governing the evolution of the system can be estimated using a variational approach to optical flow. Assuming that the flow field governing the advection is the symplectic gradient of a stream function or the gradient of a potential function—both falling under the category of a potential flow—it is natural to re-frame the optical flow problem to reconstruct the stream or potential function directly rather than the components of the flow individually. There are several advantages to this framework. Minimizing a functional based on the stream or potential function rather than based on the components of the flow will ensure that the computed flow is a potential flow. Next, this approach allows a more natural method for imposing scientific priors on the computed flow, via regularization of the optical flow functional. Also, this paradigm shift gives a framework—rather than an algorithm—and can be applied to nearly any existing variational optical flow technique. In this work, we develop the mathematical formulation of the potential optical flow framework and demonstrate the technique on synthetic flows that represent important dynamics for mass transport in fluid flows, as well as a flow generated by a satellite data-verified ocean model of temperature transport.
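The stream-function case is concrete enough to state directly: the flow is the symplectic gradient of ψ, so incompressibility is built in. A minimal sketch (uniform grid, `psi` indexed as `psi[y, x]`; both assumptions are ours, not the paper's):

```python
import numpy as np

def flow_from_stream(psi, dx=1.0, dy=1.0):
    """Velocity as the symplectic gradient of a stream function:
        u = d(psi)/dy,   v = -d(psi)/dx,
    so that du/dx + dv/dy = 0 (incompressible flow) by construction."""
    u = np.gradient(psi, dy, axis=0)
    v = -np.gradient(psi, dx, axis=1)
    return u, v

# Example: a cellular, vortex-like stream function on a grid.
y, x = np.mgrid[0:64, 0:64] * 0.1
psi = np.sin(x) * np.cos(y)
u, v = flow_from_stream(psi, dx=0.1, dy=0.1)

# Discrete divergence; with central differences the mixed partial
# derivatives cancel exactly at interior points.
div = np.gradient(u, 0.1, axis=1) + np.gradient(v, 0.1, axis=0)
```

Reconstructing ψ (a single scalar field) instead of ⟨u, v⟩ is exactly what guarantees the computed flow stays in this divergence-free class.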
A reformulation of mechanics and electrodynamics.
Pinheiro, Mario J
2017-07-01
Classical mechanics, as commonly taught in engineering and science, is confined to the conventional Newtonian theory. But classical mechanics has not really changed in substance since Newton's formulation, describing simultaneous rotation and translation of objects with somewhat complicated drawbacks, risking misinterpretation of forces in non-inertial frames. In this work we introduce a new variational principle for out-of-equilibrium, rotating systems, obtaining a set of two first-order differential equations that introduces a thermodynamic-mechanistic time into Newton's dynamical equation and reveals the same formal symplectic structure shared by classical mechanics, fluid mechanics and thermodynamics. The result is a more consistent formulation of dynamics and electrodynamics, explaining natural phenomena as the outcome of a balance between energy and entropy, embedding translational and rotational motion into a single equation, showing the centrifugal and Coriolis forces as derivatives from the transport of angular momentum, and offering a natural method to handle variational problems, as shown with the brachistochrone problem. As a consequence, a new force term appears, the topological torsion current, important for spacecraft dynamics. We describe a set of solved problems showing the potential of a competing technique, with significant interest for electrodynamics as well. We expect this new approach to have impact on a large class of scientific and technological problems.
Accelerating molecular property calculations with nonorthonormal Krylov space methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.
Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously, which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and is particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
2016-05-03
The Frölicher-type inequalities of foliations
NASA Astrophysics Data System (ADS)
Raźny, Paweł
2017-04-01
The purpose of this article is to adapt the Frölicher-type inequality, stated and proven for complex and symplectic manifolds in Angella and Tomassini (2015), to the case of transversely holomorphic and symplectic foliations. These inequalities provide a criterion for checking whether a foliation transversely satisfies the ∂∂̄-lemma and the ddΛ-lemma (i.e. whether the basic forms of a given foliation satisfy them). These lemmas are linked to such properties as the formality of the basic de Rham complex of a foliation and the transverse hard Lefschetz property. In particular they provide an obstruction to the existence of a transverse Kähler structure for a given foliation. In the second section we provide some information concerning the d′d″-lemma for a given double complex (K^{•,•}, d′, d″) and state the main results from Angella and Tomassini (2015). We also recall some basic facts and definitions concerning foliations. In the third section we treat the case of transversely holomorphic foliations. We also give a brief review of some properties of the basic Bott-Chern and Aeppli cohomology theories. In Section 4 we prove the symplectic version of the Frölicher-type inequality. The final three sections of this paper are devoted to applications of our main theorems. In them we verify the aforementioned lemmas for some simple examples, give the orbifold versions of the Frölicher-type inequalities and show that transversely Kähler foliations satisfy both the ∂∂̄-lemma and the ddΛ-lemma (in other words, our main theorems provide an obstruction to the existence of a transversely Kähler structure).
On the Restricted Toda and c-KdV Flows of Neumann Type
NASA Astrophysics Data System (ADS)
Zhou, RuGuang; Qiao, ZhiJun
2000-09-01
It is proven that on a symplectic submanifold the restricted c-KdV flow is just the interpolating Hamiltonian flow of invariants for the restricted Toda flow, which is an integrable symplectic map of Neumann type. They share a common Lax matrix, dynamical r-matrix and system of involutive conserved integrals. Furthermore, the procedure of separation of variables is considered for the restricted c-KdV flow of Neumann type. The project was supported by the Chinese National Basic Research Project "Nonlinear Science" and the Doctoral Programme Foundation of the Institution of Higher Education of China. The first author also thanks the National Natural Science Foundation of China (19801031) and the "Qinglan Project" of Jiangsu Province of China; the second author also thanks the Alexander von Humboldt Fellowships, Deutschland, the Special Grant of Excellent Ph.D. Thesis of China, the Science & Technology Foundation (Youth Talent Foundation) and the Science Research Foundation of the Education Committee of Liaoning Province of China.
Quantization of wave equations and hermitian structures in partial differential varieties
Paneitz, S. M.; Segal, I. E.
1980-01-01
Sufficiently close to 0, the solution variety of a nonlinear relativistic wave equation—e.g., of the form □ϕ + m2ϕ + gϕp = 0—admits a canonical Lorentz-invariant hermitian structure, uniquely determined by the consideration that the action of the differential scattering transformation in each tangent space be unitary. Similar results apply to linear time-dependent equations or to equations in a curved asymptotically flat space-time. A close relation of the Riemannian structure to the determination of vacuum expectation values is developed and illustrated by an explicit determination of a perturbative 2-point function for the case of interaction arising from curvature. The theory underlying these developments is in part a generalization of that of M. G. Krein and collaborators concerning stability of differential equations in Hilbert space and in part a precise relation between the unitarization of given symplectic linear actions and their full probabilistic quantization. The unique causal structure in the infinite symplectic group is instrumental in these developments. PMID:16592923
Quantization of Poisson Manifolds from the Integrability of the Modular Function
NASA Astrophysics Data System (ADS)
Bonechi, F.; Ciccoli, N.; Qiu, J.; Tarlini, M.
2014-10-01
We discuss a framework for quantizing a Poisson manifold via the quantization of its symplectic groupoid, combining the tools of geometric quantization with the results of Renault's theory of groupoid C*-algebras. This setting allows very singular polarizations. In particular, we consider the case when the modular function is multiplicatively integrable, i.e., when the space of leaves of the polarization inherits a groupoid structure. If suitable regularity conditions are satisfied, then one can define the quantum algebra as the convolution algebra of the subgroupoid of leaves satisfying the Bohr-Sommerfeld conditions. We apply this procedure to the case of a family of Poisson structures on ℂP^n, seen as Poisson homogeneous spaces of the standard Poisson-Lie group SU(n+1). We show that a bihamiltonian system on ℂP^n defines a multiplicative integrable model on the symplectic groupoid; we compute the Bohr-Sommerfeld groupoid and show that it satisfies the needed properties for applying Renault's theory. We recover and extend Sheu's description of quantum homogeneous spaces as groupoid C*-algebras.
Preserving Symplecticity in the Numerical Integration of Linear Beam Optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, Christopher K.
2017-07-01
Presented are mathematical tools and methods for the development of numerical integration techniques that preserve the symplectic condition inherent to mechanics. The intended audience is beam physicists with backgrounds in numerical modeling and simulation, with particular attention to beam optics applications. The paper focuses on Lie methods that are inherently symplectic regardless of the integration accuracy order. Section 2 provides the mathematical tools used in the sequel and necessary for the reader to extend the covered techniques. Section 3 places those tools in the context of charged-particle beam optics; in particular, linear beam optics is presented in terms of a Lie algebraic matrix representation. Section 4 presents numerical stepping techniques with particular emphasis on a third-order leapfrog method. Section 5 discusses the modeling of field imperfections with particular attention to the fringe fields of quadrupole focusing magnets. The direct computation of a third order transfer matrix for a fringe field is shown.
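The symplectic condition the report is concerned with is easy to state and check for linear beam optics: a transfer matrix M must satisfy M^T J M = J, where J is the unit symplectic matrix. A minimal sketch in one transverse phase-space plane (the thin-lens matrices are textbook results; the helper names are ours):

```python
import numpy as np

J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # unit symplectic form in (x, x') phase space

def is_symplectic(M, tol=1e-12):
    """Check the symplectic condition M^T J M = J."""
    return np.allclose(M.T @ J @ M, J, atol=tol)

def drift(L):
    """Transfer matrix of a field-free drift of length L."""
    return np.array([[1.0, L],
                     [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole with focal length f (f < 0: defocusing)."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# A FODO-like cell: compositions of symplectic maps stay symplectic.
cell = thin_quad(0.5) @ drift(1.0) @ thin_quad(-0.5) @ drift(1.0)
```

A numerical stepping scheme built from such exact group elements (the Lie-method viewpoint of the paper) inherits symplecticity automatically, whatever its accuracy order.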
Localization in quantum field theory
NASA Astrophysics Data System (ADS)
Balachandran, A. P.
In non-relativistic quantum mechanics, Born’s principle of localization is as follows: For a single particle, if a wave function ψK vanishes outside a spatial region K, it is said to be localized in K. In particular, if a spatial region K‧ is disjoint from K, a wave function ψK‧ localized in K‧ is orthogonal to ψK. Such a principle of localization does not exist compatibly with relativity and causality in quantum field theory (QFT) (Newton and Wigner) or interacting point particles (Currie, Jordan and Sudarshan). It is replaced by symplectic localization of observables as shown by Brunetti, Guido and Longo, Schroer and others. This localization gives a simple derivation of the spin-statistics theorem and the Unruh effect, and shows how to construct quantum fields for anyons and for massless particles with “continuous” spin. This review outlines the basic principles underlying symplectic localization and shows or mentions its deep implications. In particular, it has the potential to affect relativistic quantum information theory and black hole physics.
Total variation-based neutron computed tomography
NASA Astrophysics Data System (ADS)
Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick
2018-05-01
We perform neutron computed tomography reconstruction via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We demonstrate the effectiveness of the algorithm in the severely angularly undersampled case using synthetic test problems as well as data obtained from a high-flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles is used.
Variational optimization algorithms for uniform matrix product states
NASA Astrophysics Data System (ADS)
Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.
2018-01-01
We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.
Dynamics of Three Vortices on a Sphere
NASA Astrophysics Data System (ADS)
Borisov, Alexey V.; Mamaev, Ivan S.; Kilin, Alexander A.
2018-01-01
This paper is concerned with the dynamics of vortices on a sphere. It is shown that, as a result of reduction, the problem reduces to investigating a system with a nonlinear Poisson bracket. The topology of a symplectic leaf in the case of three vortices is studied.
Semiclassical geometry of integrable systems
NASA Astrophysics Data System (ADS)
Reshetikhin, Nicolai
2018-04-01
The main result of this paper is a formula for the scalar product of semiclassical eigenvectors of two integrable systems on the same symplectic manifold. An important application of this formula is the Ponzano–Regge type of asymptotic of Racah–Wigner coefficients. Dedicated to the memory of P P Kulish.
A class of parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, but at significantly higher efficiency.
An introduction to Lie group integrators – basics, new developments and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Celledoni, Elena, E-mail: elenac@math.ntnu.no; Marthinsen, Håkon, E-mail: hakonm@math.ntnu.no; Owren, Brynjulf, E-mail: bryn@math.ntnu.no
2014-01-15
We give a short and elementary introduction to Lie group methods. A selection of applications of Lie group integrators are discussed. Finally, a family of symplectic integrators on cotangent bundles of Lie groups is presented and the notion of discrete gradient methods is generalised to Lie groups.
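To give a flavor of what a Lie group method buys, here is a sketch of the Lie-Euler method on SO(3): the update is a group multiplication by an exponential, so the numerical solution stays exactly on the group (orthogonal), which a naive Euler step does not. This is our own hedged illustration, not an example from the paper.

```python
import numpy as np

def hat(w):
    """Map R^3 to so(3): hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(W):
    """Rodrigues' formula: exact exponential of a skew 3x3 matrix."""
    w = np.array([W[2, 1], W[0, 2], W[1, 0]])
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.eye(3) + W
    return (np.eye(3) + np.sin(t) / t * W
            + (1.0 - np.cos(t)) / t ** 2 * (W @ W))

def lie_euler(Q0, omega, h, steps):
    """Lie-Euler for Q' = Q hat(omega): Q <- Q expm(h hat(omega)).
    The iterate remains on SO(3) by construction."""
    Q = Q0.copy()
    R = expm_so3(h * hat(omega))
    for _ in range(steps):
        Q = Q @ R
    return Q

def explicit_euler(Q0, omega, h, steps):
    """Naive Euler Q <- Q + h*Q*hat(omega): drifts off SO(3)."""
    Q = Q0.copy()
    W = hat(omega)
    for _ in range(steps):
        Q = Q + h * (Q @ W)
    return Q
```

Comparing Q^T Q against the identity after many steps shows the group-preservation property that motivates these integrators.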
Closed, analytic, boson realizations for Sp(4)
NASA Astrophysics Data System (ADS)
Klein, Abraham; Zhang, Qing-Ying
1986-08-01
The problem of determining a boson realization for an arbitrary irrep of the unitary symplectic algebra Sp(2d) [or of the corresponding discrete unitary irreps of the unbounded algebra Sp(2d,R)] has been solved completely in recent papers by Deenen and Quesne [J. Deenen and C. Quesne, J. Math. Phys. 23, 878, 2004 (1982); 25, 1638 (1984); 26, 2705 (1985)] and by Moshinsky and co-workers [O. Castaños, E. Chacón, M. Moshinsky, and C. Quesne, J. Math. Phys. 26, 2107 (1985); M. Moshinsky, "Boson realization of symplectic algebras," to be published]. This solution is not known in closed analytic form except for d=1 and for special classes of irreps for d>1. A different method of obtaining a boson realization that solves the full problem for Sp(4) is described. The method utilizes the chain Sp(2d)⊇SU(2)×SU(2)×···×SU(2) (d times), which, for d≥4, does not provide a complete set of quantum numbers. Though a simple solution of the missing label problem can be given, this solution does not help in the construction of a mapping algorithm for general d.
Solar System Chaos and Orbital Solutions for Paleoclimate Studies: Limits and New Results
NASA Astrophysics Data System (ADS)
Zeebe, R. E.
2017-12-01
I report results from accurate numerical integrations of Solar System orbits over the past 100 Myr. The simulations used different integrator algorithms, step sizes, and initial conditions (NASA, INPOP), and included effects from general relativity, different models of the Moon, the Sun's quadrupole moment, and up to ten asteroids. In one simulation, I probed the potential effect of a hypothetical Planet 9 on the dynamics of the system. The most expensive integration required 4 months wall-clock time (Bulirsch-Stoer algorithm) and showed a maximum relative energy error < 2.5 × 10^{-13} over the past 100 Myr. The difference in Earth's eccentricity (DeE) was used to track the difference between two solutions, which were considered to diverge at time tau when DeE irreversibly crossed 10% of Earth's mean eccentricity (~0.028 × 0.1). My results indicate that finding a unique orbital solution is limited by initial conditions from current ephemerides to ~54 Myr. Bizarrely, the 4-month Bulirsch-Stoer integration and a different integration scheme that required only 5 hours wall-clock time (symplectic, 12-day time step, Moon as a simple quadrupole perturbation) agree to ~63 Myr. Solutions including 3 and 10 asteroids diverge at tau ~48 Myr. The effect of a hypothetical Planet 9 on DeE becomes discernible at ~66 Myr. Using tau as a criterion, the current state-of-the-art solutions all differ from previously published results beyond 50 Myr. The current study provides new orbital solutions for application in geological studies. I will also comment on the prospect of constraining astronomical solutions by geologic data.
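The reason symplectic schemes matter for such long integrations is their bounded (rather than secularly growing) energy error. A self-contained toy comparison on the Kepler problem, with leapfrog standing in for the symplectic integrator; this is our illustration, not the author's setup:

```python
import numpy as np

def accel(q):
    """Keplerian acceleration a = -q/|q|^3 (units with GM = 1)."""
    return -q / np.linalg.norm(q) ** 3

def energy(q, p):
    return 0.5 * p @ p - 1.0 / np.linalg.norm(q)

def leapfrog(q, p, h, steps):
    """Velocity-Verlet (2nd order, symplectic): the energy error
    oscillates but stays bounded over long integrations."""
    E0, worst = energy(q, p), 0.0
    for _ in range(steps):
        p = p + 0.5 * h * accel(q)
        q = q + h * p
        p = p + 0.5 * h * accel(q)
        worst = max(worst, abs(energy(q, p) - E0))
    return q, p, worst

def euler(q, p, h, steps):
    """Explicit Euler (non-symplectic): energy drifts secularly."""
    E0, worst = energy(q, p), 0.0
    for _ in range(steps):
        q, p = q + h * p, p + h * accel(q)   # both updates use old values
        worst = max(worst, abs(energy(q, p) - E0))
    return q, p, worst
```

On a circular orbit the symplectic scheme's worst-case energy error stays orders of magnitude below the non-symplectic one at the same step size, which is why the 12-day symplectic run in the abstract can compete with a far more expensive Bulirsch-Stoer integration.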
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong
2012-03-01
Optical proximity correction (OPC) and phase shifting mask (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms have been developed to solve for the inverse lithography problem, which are only designed for the nominal imaging parameters without giving sufficient attention to the process variations due to the aberrations, defocus and dose variation. However, the effects of process variations existing in the practical optical lithography systems become more pronounced as the critical dimension (CD) continuously shrinks. On the other hand, the lithography systems with larger NA (NA>0.6) are now extensively used, rendering the scalar imaging models inadequate to describe the vector nature of the electromagnetic field in the current optical lithography systems. In order to tackle the above problems, this paper focuses on developing robust gradient-based OPC and PSM optimization algorithms to the process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. In order to improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) are exploited during the optimization procedure.
A generalized Condat's algorithm of 1D total variation regularization
NASA Astrophysics Data System (ADS)
Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly
2017-09-01
A common way of solving the denoising problem is to utilize total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal. There also exists a direct linear-time algorithm for 1D TV denoising referred to as the taut string algorithm. Condat's algorithm is based on a dual problem to the 1D TV regularization. In this paper, we propose a variant of Condat's algorithm based on the direct 1D TV regularization problem. Using Condat's algorithm with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for restoration of degraded signals.
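For readers who want to experiment, here is a compact 1D TV denoiser. It solves the same problem but by projected gradient on the dual (in the spirit of Chambolle's method), not by Condat's direct algorithm, so it is an illustrative stand-in only:

```python
import numpy as np

def tv_denoise_1d(y, lam, iters=3000):
    """Solve min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|
    by projected gradient on the dual variable z (|z_i| <= lam),
    where x = y - D^T z and D is the forward-difference operator.
    Step 0.25 <= 1/||D D^T|| guarantees convergence."""
    y = np.asarray(y, dtype=float)
    z = np.zeros(y.size - 1)
    for _ in range(iters):
        x = y + np.diff(z, prepend=0.0, append=0.0)    # x = y - D^T z
        z = np.clip(z + 0.25 * np.diff(x), -lam, lam)  # ascent + projection
    return y + np.diff(z, prepend=0.0, append=0.0)
```

The clipped dual variable z is exactly the quantity whose running bounds give the taut-string picture mentioned in the abstract: the primal solution is the "string" pulled taut between the tube walls at distance lam.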
Space-variant restoration of images degraded by camera motion blur.
Sorel, Michal; Flusser, Jan
2008-02-01
We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
NASA Astrophysics Data System (ADS)
Siegel, J.; Siegel, Edward Carl-Ludwig
2011-03-01
Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser[Intro. Theory Computation(1997) algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!
The theory of variational hybrid quantum-classical algorithms
NASA Astrophysics Data System (ADS)
McClean, Jarrod R.; Romero, Jonathan; Babbush, Ryan; Aspuru-Guzik, Alán
2016-02-01
Many quantum algorithms have daunting resource requirements when compared to what is available today. To address this discrepancy, a quantum-classical hybrid optimization scheme known as ‘the quantum variational eigensolver’ was developed (Peruzzo et al 2014 Nat. Commun. 5 4213) with the philosophy that even minimal quantum resources could be made useful when used in conjunction with classical routines. In this work we extend the general theory of this algorithm and suggest algorithmic improvements for practical implementations. Specifically, we develop a variational adiabatic ansatz and explore unitary coupled cluster where we establish a connection from second order unitary coupled cluster to universal gate sets through a relaxation of exponential operator splitting. We introduce the concept of quantum variational error suppression that allows some errors to be suppressed naturally in this algorithm on a pre-threshold quantum device. Additionally, we analyze truncation and correlated sampling in Hamiltonian averaging as ways to reduce the cost of this procedure. Finally, we show how the use of modern derivative free optimization techniques can offer dramatic computational savings of up to three orders of magnitude over previously used optimization techniques.
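The hybrid loop described above can be illustrated with a one-qubit toy problem: a "quantum" step evaluates the energy of a parameterized state (here simulated exactly with numpy), and a classical derivative-free step, a simple grid search in this sketch, picks the parameters. The Hamiltonian and ansatz below are our own minimal choices, not from the paper:

```python
import numpy as np

# Toy one-qubit Hamiltonian H = Z + 0.5 X (chosen for illustration)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = Z + 0.5 * X

def ansatz(theta):
    """Parameterized trial state Ry(theta)|0> = (cos t/2, sin t/2)."""
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def energy(theta):
    """'Quantum' subroutine: expectation value <psi|H|psi>."""
    psi = ansatz(theta)
    return float(psi @ H @ psi)

# Classical outer loop: a crude derivative-free optimizer (grid search).
thetas = np.linspace(0.0, 2.0 * np.pi, 400)
e_min = min(energy(t) for t in thetas)
```

By the variational principle every energy(theta) is an upper bound on the true ground-state energy, which is what makes the classical minimization meaningful; the error-suppression and sampling analyses of the paper concern exactly this expectation-value step on real hardware.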
NASA Astrophysics Data System (ADS)
Young, Frederic; Siegel, Edward
Cook-Levin theorem theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!
Computer methods for sampling from the gamma distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, M.E.; Tadikamalla, P.R.
1978-01-01
Considerable attention has recently been directed at developing ever faster algorithms for generating gamma random variates on digital computers. This paper surveys the current state of the art including the leading algorithms of Ahrens and Dieter, Atkinson, Cheng, Fishman, Marsaglia, Tadikamalla, and Wallace. General random variate generation techniques are explained with reference to these gamma algorithms. Computer simulation experiments on IBM and CDC computers are reported.
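As a concrete (and deliberately modern) illustration of the rejection techniques surveyed here, the following implements the Marsaglia-Tsang squeeze method for shape >= 1. It is a later member of the same family as the Ahrens-Dieter and Cheng generators, not one of the 1978 algorithms themselves:

```python
import math
import random

def gamma_variate(shape, rng=random):
    """Gamma(shape, 1) sample via the Marsaglia-Tsang squeeze
    method (requires shape >= 1): propose d*v with v = (1 + c*x)^3
    for standard normal x, using a fast squeeze test before the
    exact logarithmic acceptance test."""
    if shape < 1.0:
        raise ValueError("this sketch requires shape >= 1")
    d = shape - 1.0 / 3.0
    c = 1.0 / math.sqrt(9.0 * d)
    while True:
        x = rng.gauss(0.0, 1.0)
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue                      # proposal outside the support
        u = rng.random()
        if u < 1.0 - 0.0331 * x ** 4:     # cheap squeeze: accept early
            return d * v
        if math.log(u) < 0.5 * x * x + d * (1.0 - v + math.log(v)):
            return d * v                  # exact acceptance test
```

The squeeze test accepts the vast majority of proposals without evaluating a logarithm, which is the same cost-saving idea behind the fast variants compared in the survey.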
Toroidal regularization of the guiding center Lagrangian
Burby, J. W.; Ellison, C. L.
2017-11-22
In the Lagrangian theory of guiding center motion, an effective magnetic field B* = B + (m/e)v∥∇ × b appears prominently in the equations of motion. Because the parallel component of this field can vanish, there is a range of parallel velocities where the Lagrangian guiding center equations of motion are either ill-defined or very badly behaved. Moreover, the velocity dependence of B* greatly complicates the identification of canonical variables and therefore the formulation of symplectic integrators for guiding center dynamics. Here, this letter introduces a simple coordinate transformation that alleviates both these problems simultaneously. In the new coordinates, the Liouville volume element is equal to the toroidal contravariant component of the magnetic field. Consequently, the large-velocity singularity is completely eliminated. Moreover, passing from the new coordinate system to canonical coordinates is extremely simple, even if the magnetic field is devoid of flux surfaces. We demonstrate the utility of this approach in regularizing the guiding center Lagrangian by presenting a new and stable one-step variational integrator for guiding centers moving in arbitrary time-dependent electromagnetic fields.
NASA Astrophysics Data System (ADS)
Lewis, Debra
2013-05-01
Relative equilibria of Lagrangian and Hamiltonian systems with symmetry are critical points of appropriate scalar functions parametrized by the Lie algebra (or its dual) of the symmetry group. Setting aside the structures - symplectic, Poisson, or variational - generating dynamical systems from such functions highlights the common features of their construction and analysis, and supports the construction of analogous functions in non-Hamiltonian settings. If the symmetry group is nonabelian, the functions are invariant only with respect to the isotropy subgroup of the given parameter value. Replacing the parametrized family of functions with a single function on the product manifold and extending the action using the (co)adjoint action on the algebra or its dual yields a fully invariant function. An invariant map can be used to reverse the usual perspective: rather than selecting a parametrized family of functions and finding their critical points, conditions under which functions will be critical on specific orbits, typically distinguished by isotropy class, can be derived. This strategy is illustrated using several well-known mechanical systems - the Lagrange top, the double spherical pendulum, the free rigid body, and the Riemann ellipsoids - and generalizations of these systems.
Local phase space and edge modes for diffeomorphism-invariant theories
NASA Astrophysics Data System (ADS)
Speranza, Antony J.
2018-02-01
We discuss an approach to characterizing local degrees of freedom of a subregion in diffeomorphism-invariant theories using the extended phase space of Donnelly and Freidel [36]. Such a characterization is important for defining local observables and entanglement entropy in gravitational theories. Traditional phase space constructions for subregions are not invariant with respect to diffeomorphisms that act at the boundary. The extended phase space remedies this problem by introducing edge mode fields at the boundary whose transformations under diffeomorphisms render the extended symplectic structure fully gauge invariant. In this work, we present a general construction for the edge mode symplectic structure. We show that the new fields satisfy a surface symmetry algebra generated by the Noether charges associated with the edge mode fields. For surface-preserving symmetries, the algebra is universal for all diffeomorphism-invariant theories, comprised of diffeomorphisms of the boundary, SL(2, ℝ) transformations of the normal plane, and, in some cases, normal shearing transformations. We also show that if boundary conditions are chosen such that surface translations are symmetries, the algebra acquires a central extension.
EFT for vortices with dilaton-dependent localized flux
NASA Astrophysics Data System (ADS)
Burgess, C. P.; Diener, Ross; Williams, M.
2015-11-01
We study how codimension-two objects like vortices back-react gravitationally with their environment in theories (such as 4D or higher-dimensional supergravity) where the bulk is described by a dilaton-Maxwell-Einstein system. We do so both in the full theory, for which the vortex is an explicit classical `fat brane' solution, and in the effective theory of `point branes' appropriate when the vortices are much smaller than the scales of interest for their back-reaction (such as the transverse Kaluza-Klein scale). We extend the standard Nambu-Goto description to include the physics of flux-localization wherein the ambient flux of the external Maxwell field becomes partially localized to the vortex, generalizing the results of a companion paper [4] from N=2 supergravity as the end-point of a hierarchical limit in which the Planck mass first and then the supersymmetry breaking scale are sent to infinity. We define, in the parent supergravity model, a new symplectic frame in which, in the rigid limit, manifest symplectic invariance is preserved and the electric and magnetic Fayet-Iliopoulos terms are fully originated from the dyonic components of the embedding tensor. The supergravity origins of several features of the resulting rigid supersymmetric theory are then elucidated, such as the presence of a traceless SU(2) Lie algebra term in the Ward identity and the existence of a central charge in the supersymmetry algebra which manifests itself as a harmless gauge transformation on the gauge vectors of the rigid theory; we show that this effect can be interpreted as a kind of "superspace non-locality" which does not affect the rigid theory on space-time. To set the stage of our analysis we take the opportunity in this paper to provide and prove the relevant identities of the most general dyonic gauging of Special-Kaehler and Quaternionic-Kaehler isometries in a generic N=2 model, which include the supersymmetry Ward identity, in a fully symplectic-covariant formalism.
Research on compressive sensing reconstruction algorithm based on total variation model
NASA Astrophysics Data System (ADS)
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical basis for carrying out compressive sampling of image signals. In imaging procedures that use compressed sensing theory, it not only reduces the storage space but also greatly reduces the demand for detector resolution. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressive sensing, and it to a large extent determines the accuracy of the reconstructed image. The reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and recovers edge information well. To verify the performance and stability of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes, and we compare and analyze typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages: it can recover the target image quickly and accurately even at low measurement rates.
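The augmented-Lagrangian/alternating-direction step described above can be sketched for the simplest case, 1-D TV denoising. This is an illustrative toy under assumed names and parameters (`tv_denoise_admm`, step sizes, iteration counts), not the paper's 2-D compressive-sensing implementation:

```python
import numpy as np

def tv_denoise_admm(y, lam=0.3, rho=1.0, n_iter=200):
    """1-D total-variation denoising via ADMM (alternating direction method):
    minimize 0.5||x - y||^2 + lam*||Dx||_1 with D the forward-difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n forward differences
    A = np.eye(n) + rho * D.T @ D           # fixed system matrix for the x-update
    z = np.zeros(n - 1)                     # auxiliary variable for Dx
    u = np.zeros(n - 1)                     # scaled dual variable
    x = y.copy()
    for _ in range(n_iter):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))          # quadratic subproblem
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)  # soft-threshold
        u = u + Dx - z                                           # dual ascent
    return x
```

The `z`-update is the L1 proximal (soft-thresholding) step, and `u` accumulates the constraint violation `Dx - z`; this split is what makes the nonsmooth TV term tractable.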
Fast magnetic resonance imaging based on high degree total variation
NASA Astrophysics Data System (ADS)
Wang, Sujie; Lu, Liangliang; Zheng, Junbao; Jiang, Mingfeng
2018-04-01
In order to eliminate the artifacts and "staircase effect" of total variation in compressive sensing MRI, a high-degree total variation model is proposed for dynamic MRI reconstruction. The high-degree total variation regularization term is used as a constraint to reconstruct the magnetic resonance image, and an iterative weighted MM algorithm is proposed to solve the convex optimization problem of the reconstruction model. In addition, one set of cardiac magnetic resonance data is used to verify the proposed algorithm. The results show that the high-degree total variation method yields better reconstructions than both total variation and total generalized variation, obtaining higher reconstruction SNR and better structural similarity.
Multiscale 3D Shape Analysis using Spherical Wavelets
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2013-01-01
Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data. PMID:16685992
Multiscale 3D shape analysis using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen R
2005-01-01
Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data.
Compressed sensing with gradient total variation for low-dose CBCT reconstruction
NASA Astrophysics Data System (ADS)
Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung
2015-06-01
This paper describes the improvement of convergence speed obtained with gradient total variation (GTV) in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimum number of noisy projections. To achieve this we combine the GTV with a TV-norm regularization term to promote sparsity in the X-ray attenuation characteristics of the human body. The GTV is derived from the TV and is computationally more efficient, converging faster to a desired solution. The numerical algorithm is simple and converges relatively quickly. We apply a gradient projection algorithm that iteratively seeks a solution in the direction of the projected gradient while enforcing non-negativity of the found solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm showed convergence in ≤18 iterations, whereas the original TV algorithm needs at least 34 iterations, when reconstructing the chest phantom images from 50% fewer projections than the FDK algorithm. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter and motion artifacts in CBCT reconstruction.
Mining and Querying Multimedia Data
2011-09-29
able to capture more subtle spatial variations such as repetitiveness. Local feature descriptors such as SIFT [74] and SURF [12] have also been widely...empirically set to s = 90%, r = 50%, K = 20, where small variations lead to little perturbation of the output. The pseudo-code of the algorithm is...by constructing a three-layer graph based on clustering outputs, and executing a slight variation of random walk with restart algorithm. It provided
Efficient Mean Field Variational Algorithm for Data Assimilation (Invited)
NASA Astrophysics Data System (ADS)
Vrettas, M. D.; Cornford, D.; Opper, M.
2013-12-01
Data assimilation algorithms combine available observations of physical systems with the assumed model dynamics in a systematic manner, to produce better estimates of initial conditions for prediction. Broadly they can be categorized into three main approaches: (a) sequential algorithms, (b) sampling methods and (c) variational algorithms, which transform the density estimation problem into an optimization problem. However, given finite computational resources, only a handful of ensemble Kalman filters and 4DVar algorithms have been applied operationally to very high dimensional geophysical applications, such as weather forecasting. In this paper we present a recent extension to our variational Bayesian algorithm which seeks the 'optimal' posterior distribution over the continuous time states, within a family of non-stationary Gaussian processes. Our initial work on variational Bayesian approaches to data assimilation, unlike the well-known 4DVar method which seeks only the most probable solution, computes the best time-varying Gaussian process approximation to the posterior smoothing distribution for dynamical systems that can be represented by stochastic differential equations. This approach was based on minimising the Kullback-Leibler divergence, over paths, between the true posterior and our Gaussian process approximation. Whilst the observations were informative enough to keep the posterior smoothing density close to Gaussian, the algorithm proved very effective on low dimensional systems (e.g. O(10)D). However, for higher dimensional systems, the high computational demands make the algorithm prohibitively expensive.
To overcome the difficulties presented in the original framework and make our approach more efficient in higher dimensional systems we have been developing a new mean field version of the algorithm which treats the state variables at any given time as being independent in the posterior approximation, while still accounting for their relationships in the mean solution arising from the original system dynamics. Here we present this new mean field approach, illustrating its performance on a range of benchmark data assimilation problems whose dimensionality varies from O(10) to O(10^3)D. We emphasise that the variational Bayesian approach we adopt, unlike other variational approaches, provides a natural bound on the marginal likelihood of the observations given the model parameters which also allows for inference of (hyper-) parameters such as observational errors, parameters in the dynamical model and model error representation. We also stress that since our approach is intrinsically parallel it can be implemented very efficiently to address very long data assimilation time windows. Moreover, like most traditional variational approaches our Bayesian variational method has the benefit of being posed as an optimisation problem therefore its complexity can be tuned to the available computational resources. We finish with a sketch of possible future directions.
Evaluating acoustic speaker normalization algorithms: evidence from longitudinal child data.
Kohn, Mary Elizabeth; Farrington, Charlie
2012-03-01
Speaker vowel formant normalization, a technique that controls for variation introduced by physical differences between speakers, is necessary in variationist studies to compare speakers of different ages, genders, and physiological makeup in order to understand non-physiological variation patterns within populations. Many algorithms have been established to reduce variation introduced into vocalic data from physiological sources. The lack of real-time studies tracking the effectiveness of these normalization algorithms from childhood through adolescence inhibits exploration of child participation in vowel shifts. This analysis compares normalization techniques applied to data collected from ten African American children across five time points. Linear regressions compare the reduction in variation attributable to age and gender for each speaker for the vowels BEET, BAT, BOT, BUT, and BOAR. A normalization technique is successful if it maintains variation attributable to a reference sociolinguistic variable, while reducing variation attributable to age. Results indicate that normalization techniques which rely on both a measure of central tendency and range of the vowel space perform best at reducing variation attributable to age, although some variation attributable to age persists after normalization for some sections of the vowel space. © 2012 Acoustical Society of America
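One widely used technique of the kind compared in such normalization studies is Lobanov (z-score) normalization, which rescales each speaker's formant values by that speaker's own mean and standard deviation. A minimal sketch (the function name and data layout are illustrative assumptions, not the paper's procedure):

```python
import numpy as np

def lobanov_normalize(formants_by_speaker):
    """Lobanov (z-score) vowel-formant normalization: center and scale each
    speaker's formant values by that speaker's own mean and standard deviation,
    removing physiological scale differences while preserving the relative
    structure of the vowel space."""
    normalized = {}
    for speaker, values in formants_by_speaker.items():
        v = np.asarray(values, dtype=float)
        normalized[speaker] = (v - v.mean()) / v.std()
    return normalized
```

Because only per-speaker location and scale are removed, two speakers whose vowel spaces differ only by physiological scaling map onto identical normalized values, which is exactly the property a successful normalizer should have.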
Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong
2018-04-12
Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise-constant assumption of the TV model, the reconstructed images often suffer from over-smoothness at the image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV'. To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted by using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over the existing algorithms in terms of noise reduction, contrast-to-noise ratio and edge preservation.
Categorizing Variations of Student-Implemented Sorting Algorithms
ERIC Educational Resources Information Center
Taherkhani, Ahmad; Korhonen, Ari; Malmi, Lauri
2012-01-01
In this study, we examined freshmen students' sorting algorithm implementations in data structures and algorithms' course in two phases: at the beginning of the course before the students received any instruction on sorting algorithms, and after taking a lecture on sorting algorithms. The analysis revealed that many students have insufficient…
Theory and praxis of map analysis in CHEF part 1: Linear normal form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michelotti, Leo; /Fermilab
2008-10-01
This memo begins a series which, put together, could comprise the 'CHEF Documentation Project' if there were such a thing. The first--and perhaps only--three will telegraphically describe theory, algorithms, implementation and usage of the normal form map analysis procedures encoded in CHEF's collection of libraries. [1] This one will begin the sequence by explaining the linear manipulations that connect the Jacobian matrix of a symplectic mapping to its normal form. It is a 'Reader's Digest' version of material I wrote in Intermediate Classical Dynamics (ICD) [2] and randomly scattered across technical memos, seminar viewgraphs, and lecture notes for the past quarter century. Much of its content is old, well known, and in some places borders on the trivial. Nevertheless, completeness requires their inclusion. The primary objective is the 'fundamental theorem' on normalization written on page 8. I plan to describe the nonlinear procedures in a subsequent memo and devote a third to laying out algorithms and lines of code, connecting them with equations written in the first two. Originally this was to be done in one short paper, but I jettisoned that approach after its first section exceeded a dozen pages. The organization of this document is as follows. A brief description of notation is followed by a section containing a general treatment of the linear problem. After the 'fundamental theorem' is proved, two further subsections discuss the generation of equilibrium distributions and the issue of 'phase'. The final major section reviews parameterizations--that is, lattice functions--in two and four dimensions with a passing glance at the six-dimensional version. Appearances to the contrary, for the most part I have tried to restrict consideration to matters needed to understand the code in CHEF's libraries.
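The linear machinery described above rests on the defining property of a symplectic mapping: its Jacobian M satisfies M^T J M = J, where J is the standard symplectic form. A minimal numerical check of that condition (an illustrative sketch, not CHEF library code):

```python
import numpy as np

def is_symplectic(M, tol=1e-10):
    """Check the defining condition M^T J M = J of a linear symplectic map,
    where J is the standard 2n x 2n symplectic form."""
    n = M.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    return np.allclose(M.T @ J @ M, J, atol=tol)
```

For example, the one-turn map of a linear harmonic oscillator (a rotation in phase space) passes the check, while any map with dissipation (determinant below one) fails it.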
Hawking radiation and classical tunneling: A ray phase space approach
NASA Astrophysics Data System (ADS)
Tracy, E. R.; Zhigunov, D.
2016-01-01
Acoustic waves in fluids undergoing the transition from sub- to supersonic flow satisfy governing equations similar to those for light waves in the immediate vicinity of a black hole event horizon. This acoustic analogy has been used by Unruh and others as a conceptual model for "Hawking radiation." Here, we use variational methods, originally introduced by Brizard for the study of linearized MHD, and ray phase space methods, to analyze linearized acoustics in the presence of background flows. The variational formulation endows the evolution equations with natural Hermitian and symplectic structures that prove useful for later analysis. We derive a 2 × 2 normal form governing the wave evolution in the vicinity of the "event horizon." This shows that the acoustic model can be reduced locally (in ray phase space) to a standard (scalar) tunneling process weakly coupled to a unidirectional non-dispersive wave (the "incoming wave"). Given the normal form, the Hawking "thermal spectrum" can be derived by invoking standard tunneling theory, but only by ignoring the coupling to the incoming wave. Deriving the normal form requires a novel extension of the modular ray-based theory used previously to study tunneling and mode conversion in plasmas. We also discuss how ray phase space methods can be used to change representation, which brings the problem into a form where the wave functions are less singular than in the usual formulation, a fact that might prove useful in numerical studies.
Relativistic collisions as Yang-Baxter maps
NASA Astrophysics Data System (ADS)
Kouloukas, Theodoros E.
2017-10-01
We prove that one-dimensional elastic relativistic collisions satisfy the set-theoretical Yang-Baxter equation. The corresponding collision maps are symplectic and admit a Lax representation. Furthermore, they can be considered as reductions of a higher dimensional integrable Yang-Baxter map on an invariant manifold. In this framework, we study the integrability of transfer maps that represent particular periodic sequences of collisions.
Hamiltonian methods: BRST, BFV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, J. Antonio
2006-09-25
The range of applicability of Hamiltonian methods to gauge theories is very diverse and cover areas of research from phenomenology to mathematical physics. We review some of the areas developed in Mexico in the last decades. They cover the study of symplectic methods, BRST-BFV and BV approaches, Klauder projector program, and non perturbative technics used in the study of bound states in relativistic field theories.
Hamiltonian methods: BRST, BFV
NASA Astrophysics Data System (ADS)
García, J. Antonio
2006-09-01
The range of applicability of Hamiltonian methods to gauge theories is very diverse and cover areas of research from phenomenology to mathematical physics. We review some of the areas developed in México in the last decades. They cover the study of symplectic methods, BRST-BFV and BV approaches, Klauder projector program, and non perturbative technics used in the study of bound states in relativistic field theories.
Integral representations on supermanifolds: super Hodge duals, PCOs and Liouville forms
NASA Astrophysics Data System (ADS)
Castellani, Leonardo; Catenacci, Roberto; Grassi, Pietro Antonio
2017-01-01
We present a few types of integral transforms and integral representations that are very useful for extending to supergeometry many familiar concepts of differential geometry. Among them we discuss the construction of the super Hodge dual, the integral representation of picture changing operators of string theories and the construction of the super-Liouville form of a symplectic supermanifold.
Exploring variation-aware contig graphs for (comparative) metagenomics using MaryGold
Nijkamp, Jurgen F.; Pop, Mihai; Reinders, Marcel J. T.; de Ridder, Dick
2013-01-01
Motivation: Although many tools are available to study variation and its impact in single genomes, there is a lack of algorithms for finding such variation in metagenomes. This hampers the interpretation of metagenomics sequencing datasets, which are increasingly acquired in research on the (human) microbiome, in environmental studies and in the study of processes in the production of foods and beverages. Existing algorithms often depend on the use of reference genomes, which pose a problem when a metagenome of a priori unknown strain composition is studied. In this article, we develop a method to perform reference-free detection and visual exploration of genomic variation, both within a single metagenome and between metagenomes. Results: We present the MaryGold algorithm and its implementation, which efficiently detects bubble structures in contig graphs using graph decomposition. These bubbles represent variable genomic regions in closely related strains in metagenomic samples. The variation found is presented in a condensed Circos-based visualization, which allows for easy exploration and interpretation of the found variation. We validated the algorithm on two simulated datasets containing three and seven Escherichia coli genomes, respectively, and showed that finding allelic variation in these genomes improves assemblies. Additionally, we applied MaryGold to publicly available real metagenomic datasets, enabling us to find within-sample genomic variation in the metagenomes of a kimchi fermentation process, the microbiome of a premature infant and in microbial communities living on acid mine drainage. Moreover, we used MaryGold for between-sample variation detection and exploration by comparing sequencing data sampled at different time points for both of these datasets. Availability: MaryGold has been written in C++ and Python and can be downloaded from http://bioinformatics.tudelft.nl/software Contact: d.deridder@tudelft.nl PMID:24058058
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, S; Farr, J; Merchant, T
Purpose: To study the effect of total-variation based noise reduction algorithms to improve the image registration of low-dose CBCT for patient positioning in radiation therapy. Methods: In low-dose CBCT, the reconstructed image is degraded by excessive quantum noise. In this study, we developed a total-variation based noise reduction algorithm and studied the effect of the algorithm on noise reduction and image registration accuracy. To study the effect of noise reduction, we have calculated the peak signal-to-noise ratio (PSNR). To study the improvement of image registration, we performed image registration between volumetric CT and MV-CBCT images of different head-and-neck patients and calculated the mutual information (MI) and Pearson correlation coefficient (PCC) as a similarity metric. The PSNR, MI and PCC were calculated for both the noisy and noise-reduced CBCT images. Results: The algorithms were shown to be effective in reducing the noise level and improving the MI and PCC for the low-dose CBCT images tested. For the different head-and-neck patients, a maximum improvement of PSNR of 10 dB with respect to the noisy image was calculated. The improvement of MI and PCC was 9% and 2% respectively. Conclusion: Total-variation based noise reduction algorithm was studied to improve the image registration between CT and low-dose CBCT. The algorithm had shown promising results in reducing the noise from low-dose CBCT images and improving the similarity metric in terms of MI and PCC.
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
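The iteratively reweighted least-squares (IRLS) idea described above can be sketched for a toy 1-D problem: an L1-type penalty on model differences is approximated by a sequence of weighted L2 problems. This sketch omits the randomized generalized SVD and the alternating-direction weighting; `irls_tv` and the problem sizes are illustrative assumptions:

```python
import numpy as np

def irls_tv(G, d, alpha=0.01, n_iter=30, eps=1e-6):
    """Iteratively reweighted least squares for a TV-type (L1 on first
    differences) regularized linear inverse problem:
    minimize ||G m - d||^2 + alpha * ||W m||_1, with W the difference operator."""
    n = G.shape[1]
    W = np.diff(np.eye(n), axis=0)          # first-difference operator
    m = np.linalg.lstsq(G, d, rcond=None)[0]  # unregularized starting model
    for _ in range(n_iter):
        r = np.sqrt((W @ m) ** 2 + eps)     # smoothed |W m| (avoids division by zero)
        R = np.diag(1.0 / r)                # IRLS weights: small differences penalized hard
        A = G.T @ G + alpha * W.T @ R @ W
        m = np.linalg.solve(A, G.T @ d)
    return m
```

The reweighting is what preserves sharp discontinuities: at a genuine jump the weight 1/|Wm| is small, so the edge is penalized far less than in a conventional minimum-structure (L2) inversion.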
An Analysis of Periodic Components in BL Lac Object S5 0716 +714 with MUSIC Method
NASA Astrophysics Data System (ADS)
Tang, J.
2012-01-01
Multiple signal classification (MUSIC) algorithms are introduced for estimating the period of variation of BL Lac objects. The principle of the MUSIC spectral analysis method and a theoretical analysis of its frequency resolution using analog signals are included. From the literature, we have collected effective observation data of the BL Lac object S5 0716+714 in the V, R, and I bands from 1994 to 2008. The light variation periods of S5 0716+714 are obtained by means of the MUSIC spectral analysis method and the periodogram spectral analysis method. There exist two major periods: (3.33±0.08) years and (1.24±0.01) years for all bands. The period estimate from the MUSIC spectral analysis method is compared with that from the periodogram spectral analysis method. MUSIC is a super-resolution algorithm that works with small data lengths, and it could be used to detect the period of variation of weak signals.
Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin
2015-04-01
Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold standard measure of swallowing duration. Data was collected from 23 subjects that were actively suffering from swallowing difficulties. Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. Copyright © 2015 Elsevier Ltd. All rights reserved.
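The density-based segmentation idea being compared can be illustrated with a minimal 1-D DBSCAN over the time stamps of high-vibration samples: dense runs of time stamps become clusters (candidate swallow segments) and isolated samples become noise. This is a toy sketch, not the authors' implementation:

```python
import numpy as np

def dbscan_1d(points, eps, min_pts):
    """Minimal DBSCAN for 1-D data (e.g. time stamps of high-energy samples).
    Returns a cluster label per point, with -1 marking noise."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        neighbors = list(np.where(np.abs(points - points[i]) <= eps)[0])
        if len(neighbors) < min_pts:
            continue                          # noise (may be claimed by a later cluster)
        labels[i] = cluster
        j = 0
        while j < len(neighbors):             # expand the cluster from core points
            q = neighbors[j]
            if not visited[q]:
                visited[q] = True
                q_nb = np.where(np.abs(points - points[q]) <= eps)[0]
                if len(q_nb) >= min_pts:
                    neighbors.extend(q_nb)
            if labels[q] == -1:
                labels[q] = cluster
            j += 1
        cluster += 1
    return labels
```

Unlike a fixed threshold on signal energy, the density criterion adapts to gaps between activity bursts, which is one reason for the more consistent per-patient behavior reported above.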
Dudik, Joshua M.; Kurosu, Atsuko; Coyle, James L
2015-01-01
Background Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. Methods In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold standard measure of swallowing duration. Data was collected from 23 subjects that were actively suffering from swallowing difficulties. Results Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. Conclusions In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. PMID:25658505
A nudging-based data assimilation method: the Back and Forth Nudging (BFN) algorithm
NASA Astrophysics Data System (ADS)
Auroux, D.; Blum, J.
2008-03-01
This paper deals with a new data assimilation algorithm, called Back and Forth Nudging (BFN). The standard nudging technique consists in adding to the equations of the model a relaxation term that forces the model towards the observations. The BFN algorithm consists in repeatedly performing forward and backward integrations of the model with relaxation (or nudging) terms, using opposite signs in the direct and inverse integrations, so as to make the backward evolution numerically stable. This algorithm has first been tested on the standard Lorenz model with discrete observations (perfect or noisy) and compared with the variational assimilation method. The same type of study has then been performed on the viscous Burgers equation, comparing again with the variational method and focusing on the time evolution of the reconstruction error, i.e. the difference between the reference trajectory and the identified one over a time period composed of an assimilation period followed by a prediction period. The possible use of the BFN algorithm as an initialization for the variational method has also been investigated. Finally, the algorithm has been tested on a layered quasi-geostrophic model with sea-surface height observations. The behaviours of the two algorithms have been compared in the presence of perfect or noisy observations, and also for imperfect models. This has allowed us to reach a conclusion concerning the relative performances of the two algorithms.
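The back-and-forth iteration described above can be sketched on a scalar linear model dx/dt = a·x: the forward pass is nudged toward the observations, and the backward pass flips the sign of the nudging term so the reverse-time integration stays stable and refines the initial condition. A toy Euler-scheme sketch (function name, gains, and model are illustrative assumptions, not the paper's Lorenz or quasi-geostrophic setups):

```python
import numpy as np

def bfn(x0_guess, obs, dt, K=2.0, sweeps=5, a=-0.5):
    """Back-and-Forth Nudging on dx/dt = a*x with relaxation term K*(obs - x).
    obs[k] is the observation at time k*dt; returns the estimated initial state."""
    n = len(obs)
    x = x0_guess
    for _ in range(sweeps):
        # forward sweep, t = 0 .. T: model plus nudging toward observations
        for k in range(n - 1):
            x = x + dt * (a * x + K * (obs[k] - x))
        # backward sweep, t = T .. 0: opposite-sign nudging keeps the
        # reverse-time integration numerically stable
        for k in range(n - 2, -1, -1):
            x = x - dt * (a * x - K * (obs[k + 1] - x))
        # x now approximates the initial state x(0); repeat to refine
    return x
```

Each complete sweep pulls the trajectory toward the data in both time directions, so a poor first guess of the initial state is corrected without solving any adjoint equations, which is the practical appeal of BFN relative to variational assimilation.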
A hybrid-domain approach for modeling climate data time series
NASA Astrophysics Data System (ADS)
Wen, Qiuzi H.; Wang, Xiaolan L.; Wong, Augustine
2011-09-01
In order to model climate data time series that often contain periodic variations, trends, and sudden changes in mean (mean shifts, mostly artificial), this study proposes a hybrid-domain (HD) algorithm, which incorporates a time domain test and a newly developed frequency domain test through an iterative procedure that is analogous to the well-known backfitting algorithm. A two-phase competition procedure is developed to address the confounding issue between modeling periodic variations and mean shifts. A variety of distinctive features of climate data time series, including trends, periodic variations, mean shifts, and a dependent noise structure, can be modeled in tandem using the HD algorithm. This is particularly important for homogenization of climate data from a low-density observing network in which reference series are not available to help preserve climatic trends and long-term periodic variations, preventing them from being mistaken as artificial shifts. The HD algorithm is also powerful in estimating trend and periodicity in a homogeneous data time series (i.e., in the absence of any mean shift). The performance of the HD algorithm (in terms of false alarm rate and hit rate in detecting shifts/cycles, and estimation accuracy) is assessed via a simulation study. Its power is further illustrated through its application to a few climate data time series.
NASA Astrophysics Data System (ADS)
Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen
2016-11-01
To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method. First, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). We then develop iterative solutions for AO image restoration that address the joint deconvolution problem. Image restoration experiments are performed to verify the restoration effect of the proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for a real AO image) of our algorithm increase by 36.92% and 27.44%, respectively, the computation time decreases by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.
Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm
Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong
2016-01-01
In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis. PMID:27959895
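The idea of penalizing uneven class sizes in the assignment step can be sketched as follows. The penalty form (`lam * count`) is purely illustrative and is not the paper's adaptive constraint term; with `lam = 0` the sketch reduces to plain K-means.

```python
# Size-aware K-means sketch (1-D): the assignment cost adds a penalty that
# grows with the current class size, discouraging very uneven classes.
# The penalty form lam * count is illustrative, not the paper's objective.
import random

def constrained_kmeans(data, k, lam, n_iter=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(data, k)       # initialize centers from the data
    labels = [0] * len(data)
    for _ in range(n_iter):
        counts = [0] * k
        for i, x in enumerate(data):
            # nearest center, biased away from already-crowded classes
            best = min(range(k),
                       key=lambda c: (x - centers[c]) ** 2 + lam * counts[c])
            labels[i] = best
            counts[best] += 1
        for c in range(k):              # usual centroid update
            members = [x for i, x in enumerate(data) if labels[i] == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]   # two well-separated groups
labels, centers = constrained_kmeans(data, k=2, lam=0.0)
```

With `lam = 0` the two natural groups are recovered; raising `lam` trades cluster purity for more balanced class sizes.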
Experimental and analytical study of secondary path variations in active engine mounts
NASA Astrophysics Data System (ADS)
Hausberg, Fabian; Scheiblegger, Christian; Pfeffer, Peter; Plöchl, Manfred; Hecker, Simon; Rupp, Markus
2015-03-01
Active engine mounts (AEMs) provide an effective solution to further improve the acoustic and vibrational comfort of passenger cars. Typically, adaptive feedforward control algorithms, e.g., the filtered-x-least-mean-squares (FxLMS) algorithm, are applied to cancel disturbing engine vibrations. These algorithms require an accurate estimate of the AEM active dynamic characteristics, also known as the secondary path, in order to guarantee control performance and stability. This paper focuses on the experimental and theoretical study of secondary path variations in AEMs. The impact of three major influences, namely nonlinearity, change of preload and component temperature, on the AEM active dynamic characteristics is experimentally analyzed. The obtained test results are theoretically investigated with a linear AEM model which incorporates an appropriate description for elastomeric components. A special experimental set-up extends the model validation of the active dynamic characteristics to higher frequencies up to 400 Hz. The theoretical and experimental results show that significant secondary path variations are only observed in the frequency range of the AEM actuator's resonance frequency. These variations mainly result from the change of the component temperature. As the stability of the algorithm is primarily affected by the actuator's resonance frequency, the findings of this paper facilitate the design of AEMs with simpler adaptive feedforward algorithms. From a practical point of view it may further be concluded that algorithmic countermeasures against instability are only necessary in the frequency range of the AEM actuator's resonance frequency.
An historical survey of computational methods in optimal control.
NASA Technical Reports Server (NTRS)
Polak, E.
1973-01-01
Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. Much more recent additions to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.
Colour based fire detection method with temporal intensity variation filtration
NASA Astrophysics Data System (ADS)
Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.
2015-02-01
Development of video and computing technologies and of computer vision makes automatic fire detection from video information possible. Within this project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour-based fire detection algorithm. Colour information alone, however, is not enough to detect fire reliably: the scene may contain many objects whose colour is similar to that of fire. The temporal intensity variation of pixels is therefore used to separate such objects from the fire, with the variations averaged over a series of several frames. The algorithm works robustly and was implemented as a computer program using the OpenCV library.
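The two-stage idea, a colour rule first and then a temporal-variation filter, can be sketched as follows. The specific colour rule and all thresholds are illustrative assumptions, not the article's values.

```python
# Sketch of the two-stage idea: a colour rule flags fire-like pixels, then the
# temporal intensity variation (averaged over a series of frames) rejects
# static fire-coloured objects. Thresholds are made-up examples.

def is_fire_coloured(r, g, b, r_min=180):
    # common RGB heuristic for flames: red dominant, green above blue
    return r > r_min and r > g > b

def temporal_variation(intensities):
    """Mean absolute frame-to-frame intensity change for one pixel."""
    diffs = [abs(b - a) for a, b in zip(intensities, intensities[1:])]
    return sum(diffs) / len(diffs)

def is_fire_pixel(rgb, intensity_series, var_min=5.0):
    r, g, b = rgb
    return is_fire_coloured(r, g, b) and temporal_variation(intensity_series) > var_min

# a flickering flame pixel vs. a static fire-coloured object (e.g. a red car)
flame   = ((230, 140, 40), [200, 220, 180, 240, 190])
red_car = ((230, 60, 30),  [210, 211, 210, 209, 210])
```

Both pixels pass the colour rule, but only the flame shows the large frame-to-frame intensity variation that the second stage requires.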
Expert system constant false alarm rate processor
NASA Astrophysics Data System (ADS)
Baldygo, William J., Jr.; Wicks, Michael C.
1993-10-01
The requirements for high detection probability and low false alarm probability in modern wide area surveillance radars are rarely met due to spatial variations in clutter characteristics. Many filtering and CFAR detection algorithms have been developed to effectively deal with these variations; however, any single algorithm is likely to exhibit excessive false alarms and intolerably low detection probabilities in a dynamically changing environment. A great deal of research has led to advances in the state of the art in Artificial Intelligence (AI) and numerous areas have been identified for application to radar signal processing. The approach suggested here, discussed in a patent application submitted by the authors, is to intelligently select the filtering and CFAR detection algorithms being executed at any given time, based upon the observed characteristics of the interference environment. This approach requires sensing the environment, employing the most suitable algorithms, and applying an appropriate multiple algorithm fusion scheme or consensus algorithm to produce a global detection decision.
NASA Astrophysics Data System (ADS)
Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr
2017-12-01
There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.
NASA Technical Reports Server (NTRS)
Miller, Richard H.
1992-01-01
A study to demonstrate how the dynamics of galaxies may be investigated through the creation of galaxies within a computer model is presented. The numerical technique for simulating galaxies is shown to be both highly efficient and highly robust. Consideration is given to the anatomy of a galaxy, the gravitational N-body problem, numerical approaches to the N-body problem, use of the Poisson equation, and the symplectic integrator.
Noisy image magnification with total variation regularization and order-changed dictionary learning
NASA Astrophysics Data System (ADS)
Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi
2015-12-01
Noisy low resolution (LR) images are common in real applications, but many existing image magnification algorithms cannot obtain good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm combines the advantages of regularization-based and learning-based methods: the first step is based on total variation (TV) regularization and the second step on sparse representation. In the first step, we add a constraint to the TV regularization model to magnify the LR image while suppressing its noise. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries that are dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not severe. The proposed algorithm also provides better visual quality on natural LR images.
NASA Astrophysics Data System (ADS)
Parks, Helen Frances
This dissertation presents two projects related to the structured integration of large-scale mechanical systems. Structured integration uses the considerable differential geometric structure inherent in mechanical motion to inform the design of numerical integration schemes. This process improves the qualitative properties of simulations and becomes especially valuable as a measure of accuracy over long time simulations in which traditional Gronwall accuracy estimates lose their meaning. Often, structured integration schemes replicate continuous symmetries and their associated conservation laws at the discrete level. Such is the case for variational integrators, which discretely replicate the process of deriving equations of motion from variational principles. This results in the conservation of momenta associated to symmetries in the discrete system and conservation of a symplectic form when applicable. In the case of Lagrange-Dirac systems, variational integrators preserve a discrete analogue of the Dirac structure preserved in the continuous flow. In the first project of this thesis, we extend Dirac variational integrators to accommodate interconnected systems. We hope this work will find use in the fields of control, where a controlled system can be thought of as a "plant" system joined to its controller, and in the treatment of very large systems, where modular modeling may prove easier than monolithically modeling the entire system. The second project of the thesis considers a different approach to large systems. Given a detailed model of the full system, can we reduce it to a more computationally efficient model without losing essential geometric structures in the system? Asked without the reference to structure, this is the essential question of the field of model reduction. The answer there has been a resounding yes, with Proper Orthogonal Decomposition (POD) with snapshots rising as one of the most successful methods.
Our project builds on previous work to extend POD to structured settings. In particular, we consider systems evolving on Lie groups and make use of canonical coordinates in the reduction process. We see considerable improvement in the accuracy of the reduced model over the usual structure-agnostic POD approach.
Variational algorithms for nonlinear smoothing applications
NASA Technical Reports Server (NTRS)
Bach, R. E., Jr.
1977-01-01
A variational approach is presented for solving a nonlinear, fixed-interval smoothing problem with application to offline processing of noisy data for trajectory reconstruction and parameter estimation. The nonlinear problem is solved as a sequence of linear two-point boundary value problems. Second-order convergence properties are demonstrated. Algorithms for both continuous and discrete versions of the problem are given, and example solutions are provided.
NASA Astrophysics Data System (ADS)
Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming
2017-07-01
Over the years, X-ray computed tomography (CT) has been successfully used in clinical diagnosis. However, when the body of the patient being examined contains metal objects, the reconstructed image is polluted by severe metal artifacts, which affect the doctor's diagnosis. In this work, we propose a dynamic re-weighted total variation (DRWTV) technique combined with the statistical iterative reconstruction (SIR) method to reduce the artifacts. The DRWTV method is based on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects tissue details better than RWTV. Besides suppressing artifacts and noise, the DRWTV also accelerates the SIR convergence speed. The performance of the algorithm is tested on both a simulated phantom dataset and a clinical dataset: a teeth phantom with two metal implants and a skull with three metal implants, respectively. The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms, SIR and SIR constrained by RWTV regularization (SIR-RWTV). The results show that the proposed algorithm has the best performance in reducing metal artifacts and protecting tissue details.
Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters.
Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng
2016-09-07
Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, Correlation Filter methods perform favorably in robustness, accuracy and speed. However, they also have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) that significantly improves performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also solves the estimation of scale variation in many scenarios. We theoretically analyze the behavior of correlation filters under motion and utilize the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much computational cost. Our algorithm preserves the properties of KCF in addition to its ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs favorably compared with the top-ranked trackers.
NASA Astrophysics Data System (ADS)
Gong, Changfei; Zeng, Dong; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua
2016-03-01
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for diagnosis and risk stratification of coronary artery disease by assessing the myocardial perfusion hemodynamic maps (MPHM). However, the repeated scanning of the same region potentially delivers a relatively large radiation dose to patients. In this work, we present a robust MPCT deconvolution algorithm with adaptive-weighted tensor total variation regularization, termed 'MPD-AwTTV', to estimate the residue function accurately in the low-dose context. More specifically, the AwTTV regularization takes into account the anisotropic edge property of the MPCT images, which mitigates the drawbacks of conventional total variation (TV) regularization. An effective iterative algorithm was then adopted to minimize the associated objective function. Experimental results on a modified XCAT phantom demonstrated that the MPD-AwTTV algorithm outperforms other existing deconvolution algorithms in terms of noise-induced artifact suppression, edge detail preservation and accurate MPHM estimation.
Total variation optimization for imaging through turbid media with transmission matrix
NASA Astrophysics Data System (ADS)
Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei; Liu, Jietao; Zhang, Jianqi
2016-12-01
With the transmission matrix (TM) of the whole optical system measured, the image of an object behind a turbid medium can be recovered from its speckle field by means of an image reconstruction algorithm. Instead of the Tikhonov regularization algorithm (TRA), total variation minimization by augmented Lagrangian and alternating direction algorithms (TVAL3) is introduced to recover object images. As a total variation (TV)-based approach, TVAL3 suppresses noise more effectively and preserves more edges than TRA, thus providing better image quality. Different levels of detector noise and TM-measurement noise are successively added to analyze the anti-noise performance of the two algorithms. Simulation results show that TVAL3 recovers more details and suppresses more noise than TRA under different noise levels. Furthermore, whether for detector noise or TM-measurement noise, the images reconstructed by TVAL3 at SNR = 15 dB are far superior to those by TRA at SNR = 50 dB.
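The edge-preserving behavior of TV-based reconstruction can be sketched in 1-D with plain gradient descent on a smoothed TV objective. This is not TVAL3's augmented-Lagrangian scheme, only the same objective idea; the signal, weights, and smoothing parameter are illustrative.

```python
# 1-D total variation denoising sketch via gradient descent on a smoothed TV
# term: minimize 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps).
# Illustrative of the TV objective only, not TVAL3's solver.
import math

def tv_denoise(y, lam, eps=1e-3, step=0.1, n_iter=500):
    x = list(y)
    for _ in range(n_iter):
        g = [xi - yi for xi, yi in zip(x, y)]  # gradient of the data term
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            s = d / math.sqrt(d * d + eps)     # gradient of smoothed |d|
            g[i] -= lam * s
            g[i + 1] += lam * s
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# noisy step edge: TV denoising flattens the noise but keeps the jump
noisy = [0.1, -0.1, 0.05, 0.0, 1.1, 0.95, 1.0, 1.05]
clean = tv_denoise(noisy, lam=0.05)
```

Unlike a Tikhonov (squared-gradient) penalty, the TV penalty charges the same for one large jump as for many small ones, so the step edge survives while the small fluctuations are smoothed away.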
A system of nonlinear set valued variational inclusions.
Tang, Yong-Kun; Chang, Shih-Sen; Salahuddin, Salahuddin
2014-01-01
In this paper, we study existence theorems and techniques for finding the solutions of a system of nonlinear set valued variational inclusions in Hilbert spaces. To overcome the difficulties due to the presence of a proper convex lower semicontinuous function ϕ and a mapping g in the considered problems, we use the resolvent operator technique to suggest an iterative algorithm for computing approximate solutions of the system of nonlinear set valued variational inclusions. The convergence of the iterative sequences generated by the algorithm is also proved. MSC: 49J40; 47H06.
A novel iterative scheme and its application to differential equations.
Khan, Yasir; Naeem, F; Šmarda, Zdeněk
2014-01-01
The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition method and the modified Adomian decomposition method in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier in the former and repeated calculations in each iteration of the latter. Several examples are given to verify the reliability and efficiency of the method.
Yip, Eugene; Yun, Jihyun; Gabos, Zsolt; Baker, Sarah; Yee, Don; Wachowicz, Keith; Rathee, Satyapal; Fallone, B Gino
2018-01-01
Real-time tracking of lung tumors using magnetic resonance imaging (MRI) has been proposed as a potential strategy to mitigate the ill-effects of breathing motion in radiation therapy. Several autocontouring methods have been evaluated against a "gold standard" of a single human expert user. However, contours drawn by experts have inherent intra- and interobserver variations. In this study, we aim to evaluate our user-trained autocontouring algorithm with manually drawn contours from multiple expert users, and to contextualize the accuracy of these autocontours within intra- and interobserver variations. Six non-small cell lung cancer patients were recruited, with institutional ethics approval. Patients were imaged with a clinical 3 T Philips MR scanner using a dynamic 2D balanced SSFP sequence under free breathing. Three radiation oncology experts, each in two separate sessions, contoured 130 dynamic images for each patient. For autocontouring, the first 30 images were used for algorithm training, and the remaining 100 images were autocontoured and evaluated. Autocontours were compared against manual contours in terms of Dice's coefficient (DC) and Hausdorff distances (dH). Intra- and interobserver variations of the manual contours were also evaluated. When compared with the manual contours of the expert user who trained it, the algorithm generates autocontours whose evaluation metrics (same session: DC = 0.90(0.03), dH = 3.8(1.6) mm; different session: DC = 0.88(0.04), dH = 4.3(1.5) mm) are similar to or better than intraobserver variations (DC = 0.88(0.04), dH = 4.3(1.7) mm) between two sessions. The algorithm's autocontours are also compared to the manual contours from different expert users, with evaluation metrics (DC = 0.87(0.04), dH = 4.8(1.7) mm) similar to interobserver variations (DC = 0.87(0.04), dH = 4.7(1.6) mm).
Our autocontouring algorithm delineates tumor contours in dynamic lung MRI that are comparable to those of multiple human experts, but at a much faster speed (<20 ms per contour versus several seconds per contour). At the same time, the agreement between autocontours and manual contours is comparable to the intra- and interobserver variations. This algorithm may be a key component of the real time tumor tracking workflow for our hybrid Linac-MR device in the future. © 2017 American Association of Physicists in Medicine.
Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds
NASA Astrophysics Data System (ADS)
Martínez-Torres, David; Miranda, Eva
2018-01-01
We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds.
NASA Astrophysics Data System (ADS)
Batalin, Igor; Marnelius, Robert
1998-02-01
A general field-antifield BV formalism for antisymplectic first class constraints is proposed. It is as general as the corresponding symplectic BFV-BRST formulation and it is demonstrated to be consistent with a previously proposed formalism for antisymplectic second class constraints through a generalized conversion to corresponding first class constraints. Thereby the basic concept of gauge symmetry is extended to apply to quite a new class of gauge theories that may potentially exist.
Identification of single-input-single-output quantum linear systems
NASA Astrophysics Data System (ADS)
Levitt, Matthew; Guţă, Mădălin
2017-03-01
The purpose of this paper is to investigate system identification for single-input-single-output general (active or passive) quantum linear systems. For a given input we address the following questions: (1) Which parameters can be identified by measuring the output? (2) How can we construct a system realization from sufficient input-output data? We show that for time-dependent inputs, the systems which cannot be distinguished are related by symplectic transformations acting on the space of system modes. This complements a previous result of Guţă and Yamamoto [IEEE Trans. Autom. Control 61, 921 (2016), 10.1109/TAC.2015.2448491] for passive linear systems. In the regime of stationary quantum noise input, the output is completely determined by the power spectrum. We define the notion of global minimality for a given power spectrum, and characterize globally minimal systems as those with a fully mixed stationary state. We show that in the case of systems with a cascade realization, the power spectrum completely fixes the transfer function, so the system can be identified up to a symplectic transformation. We give a method for constructing a globally minimal subsystem directly from the power spectrum. Restricting to passive systems, the analysis simplifies so that identifiability may be completely understood from the eigenvalues of a particular system matrix.
Algorithm for Identifying Erroneous Rain-Gauge Readings
NASA Technical Reports Server (NTRS)
Rickman, Doug
2005-01-01
An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
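A minimal sketch of nonparametric spatial outlier flagging in the spirit of the abstract: compare each gauge to a robust summary of the other gauges and flag extreme readings. The MAD-based robust z-score, the threshold, and the neighbour set are illustrative assumptions, not the algorithm's actual criteria.

```python
# Nonparametric outlier flagging sketch: a reading is suspect if it lies far
# from the median of the *other* gauges, measured in units of the median
# absolute deviation (MAD). Threshold z_max = 3.5 is a made-up example.
import statistics

def flag_outliers(readings, z_max=3.5):
    """Return indices of readings that are statistical outliers."""
    flagged = []
    for i, r in enumerate(readings):
        others = readings[:i] + readings[i + 1:]
        med = statistics.median(others)
        mad = statistics.median(abs(o - med) for o in others)
        if mad == 0:
            mad = 1e-9  # guard against identical neighbour readings
        z = 0.6745 * abs(r - med) / mad  # 0.6745 makes MAD consistent w/ sigma
        if z > z_max:
            flagged.append(i)
    return flagged

# five nearby gauges during one rain event; gauge 3 reports an implausible value
event = [12.0, 14.5, 13.2, 250.0, 12.8]
suspects = flag_outliers(event)
```

Because the median and MAD are nonparametric and insensitive to the outlier itself, the implausible gauge is flagged without any arbitrary fixed limits.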
A total variation diminishing finite difference algorithm for sonic boom propagation models
NASA Technical Reports Server (NTRS)
Sparrow, Victor W.
1993-01-01
It is difficult to accurately model the rise phases of sonic boom waveforms with traditional finite difference algorithms because of finite difference phase dispersion. This paper introduces the concept of a total variation diminishing (TVD) finite difference method as a tool for accurately modeling the rise phases of sonic booms. A standard second order finite difference algorithm and its TVD modified counterpart are both applied to the one-way propagation of a square pulse. The TVD method clearly outperforms the non-TVD method, showing great potential as a new computational tool in the analysis of sonic boom propagation.
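The TVD property itself can be demonstrated with the simplest scheme that possesses it, first-order upwind advection of a square pulse. The paper's method is a TVD-modified second-order scheme; this sketch only illustrates what "total variation diminishing" means: no new extrema and no growth of total variation.

```python
# Minimal TVD demonstration: first-order upwind advection of a square pulse.
# Illustrates the TVD property only; not the paper's modified scheme.

def total_variation(u):
    return sum(abs(b - a) for a, b in zip(u, u[1:]))

def upwind_step(u, c):
    """One upwind step, CFL number 0 < c <= 1, periodic domain.

    Each new value is a convex combination (1-c)*u[i] + c*u[i-1], which is
    exactly why the scheme cannot create new extrema or oscillations.
    """
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]

u = [0.0] * 5 + [1.0] * 10 + [0.0] * 25   # square pulse on 40 cells
tv0 = total_variation(u)                  # TV = 2.0 (one up-jump, one down-jump)
for _ in range(30):
    u = upwind_step(u, c=0.5)
```

A non-TVD second-order scheme (e.g. Lax-Wendroff) would ring at the jumps and increase the total variation; the upwind solution merely smears the edges while its total variation never grows.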
A Fast parallel tridiagonal algorithm for a class of CFD applications
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Sun, Xian-He
1996-01-01
The parallel diagonal dominant (PDD) algorithm is an efficient tridiagonal solver. This paper presents for study a variation of the PDD algorithm, the reduced PDD algorithm. The new algorithm maintains the minimum communication provided by the PDD algorithm, but has a reduced operation count. The PDD algorithm also has a smaller operation count than the conventional sequential algorithm for many applications. Accuracy analysis is provided for the reduced PDD algorithm for symmetric Toeplitz tridiagonal (STT) systems. Implementation results on Langley's Intel Paragon and IBM SP2 show that both the PDD and reduced PDD algorithms are efficient and scalable.
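For context, the inherently serial baseline that PDD-type solvers parallelize can be sketched with the classic Thomas algorithm, which solves a tridiagonal system in O(n) but cannot be split across processors; the PDD partitioning itself is not shown. The example system is a symmetric Toeplitz tridiagonal matrix like the STT class analyzed in the paper, with made-up values.

```python
# Thomas algorithm: the O(n) sequential tridiagonal solver that PDD-type
# methods aim to parallelize. Sketch for context only.

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal.

    a[0] and c[-1] are unused. Returns the solution vector x.
    """
    n = len(b)
    cp = [0.0] * n                      # modified super-diagonal
    dp = [0.0] * n                      # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):               # forward elimination (serial!)
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):      # back substitution (also serial)
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# symmetric Toeplitz tridiagonal system [1, 4, 1] with constant RHS:
n = 5
a = [1.0] * n
b = [4.0] * n
c = [1.0] * n
d = [6.0] * n
x = thomas(a, b, c, d)
```

The two sweeps each carry a loop dependence, which is precisely the bottleneck the PDD algorithm removes by partitioning the system and exchanging only a minimal amount of interface data.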
Variational Trajectory Optimization Tool Set: Technical description and user's manual
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.
1993-01-01
The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.
Optimally stopped variational quantum algorithms
NASA Astrophysics Data System (ADS)
Vinci, Walter; Shabani, Alireza
2018-04-01
Quantum processors promise a paradigm shift in high-performance computing, which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for quadratic unconstrained binary optimization problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of VQA and even improve its scaling properties.
Hou, Ying-Yu; He, Yan-Bo; Wang, Jian-Lin; Tian, Guo-Liang
2009-10-01
Based on the time-series 10-day composite NOAA Pathfinder AVHRR Land (PAL) dataset (8 km x 8 km), and by using the land surface energy balance equation and the "VI-Ts" (vegetation index-land surface temperature) method, a new algorithm for land surface evapotranspiration (ET) was constructed. This new algorithm did not need support from meteorological observation data, and all of its parameters and variables were directly inverted or derived from remote sensing data. A widely accepted remote sensing ET model, the SEBS model, was chosen to validate the new algorithm. The validation test showed that the ET and its seasonal variation trend estimated by the SEBS model and by the new algorithm agreed well, suggesting that the ET estimated from the new algorithm was reliable and able to reflect the actual land surface ET. The new remote sensing ET algorithm is practical and operational, and offers a new approach to studying the spatiotemporal variation of ET at continental and global scales based on long-term time-series satellite remote sensing images.
Tracking and recognition face in videos with incremental local sparse representation model
NASA Astrophysics Data System (ADS)
Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang
2013-10-01
This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed that employs a local sparse appearance model and covariance pooling. In the subsequent face recognition stage, a novel template update strategy that incorporates incremental subspace learning allows our recognition algorithm to adapt the template to appearance changes and to reduce the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition on real-world noisy videos from the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.
Zeng, Dong; Gao, Yuanyuan; Huang, Jing; Bian, Zhaoying; Zhang, Hua; Lu, Lijun; Ma, Jianhua
2016-10-01
Multienergy computed tomography (MECT) allows identifying and differentiating different materials through simultaneous capture of multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in each energy window compared with the whole energy window, the MECT images reconstructed by the analytical approach often suffer from a poor signal-to-noise ratio and strong streak artifacts. To address this particular challenge, this work presents a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization, henceforth referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing higher-order derivatives of the desired MECT images. Thus it can provide more robust measures of image variation, which can eliminate the patchy artifacts often observed with total variation (TV) regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Extensive experiments with a digital XCAT phantom and a meat specimen clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than the existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of both quantitative and visual quality evaluations. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zeng, Dong; Bian, Zhaoying; Gong, Changfei; Huang, Jing; He, Ji; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua
2016-03-01
Multienergy computed tomography (MECT) has the potential to simultaneously offer multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in the specific energy windows compared with the whole energy window, the MECT images reconstructed by the analytical approach often suffer from a poor signal-to-noise ratio (SNR) and strong streak artifacts. To eliminate this drawback, in this work we present a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization to improve MECT image quality from low-milliampere-seconds (low-mAs) data acquisitions. Henceforth the present scheme is referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing the eigenvalues of the structure tensor at every point in the MECT images. Thus it can provide more robust measures of image variation, which can eliminate the patchy artifacts often observed with total variation regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Experiments with a digital XCAT phantom clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than the existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of noise-induced artifact suppression, resolution preservation, and material decomposition assessment.
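The structure-tensor eigenvalues that the STV penalty acts on can be illustrated on a toy image. The sketch below computes per-pixel eigenvalues of a locally averaged 2x2 gradient tensor; the gradient operator and the 3x3 averaging window are illustrative choices, not the paper's:

```python
import numpy as np

def smooth3(a):
    """3x3 box average (circular boundary; fine away from image edges)."""
    return sum(np.roll(np.roll(a, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

def structure_tensor_eigvals(img):
    """Eigenvalues of the locally averaged structure tensor J = <g g^T>.
    Large lam1 with lam2 ~ 0 indicates a 1-D structure (edge); lam2 > 0
    indicates variation in two directions (corner or texture)."""
    gy, gx = np.gradient(img)
    Jxx, Jxy, Jyy = smooth3(gx * gx), smooth3(gx * gy), smooth3(gy * gy)
    tr = Jxx + Jyy
    disc = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    return 0.5 * (tr + disc), 0.5 * (tr - disc)

# Bright square: lam2 ~ 0 on a straight edge, lam2 > 0 at a corner
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
lam1, lam2 = structure_tensor_eigvals(img)
```

Penalizing such eigenvalues (rather than the raw gradient magnitude, as in TV) is what lets STV distinguish one-dimensional edges from genuinely two-dimensional variation.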
NASA Astrophysics Data System (ADS)
Penenko, Alexey; Penenko, Vladimir; Nuterman, Roman; Baklanov, Alexander; Mahura, Alexander
2015-11-01
Atmospheric chemistry dynamics is studied with a convection-diffusion-reaction model. The numerical data assimilation algorithm presented is based on additive-averaged splitting schemes. It carries out "fine-grained" variational data assimilation on the separate splitting stages with respect to spatial dimensions and processes, i.e., the same measurement data is assimilated into different parts of the split model. This design admits an efficient implementation thanks to direct data assimilation algorithms for the transport process along coordinate lines. Results of numerical experiments with the chemical data assimilation algorithm, assimilating in situ concentration measurements in a real-data scenario, are presented. To construct the scenario, meteorological data were taken from EnviroHIRLAM model output, initial conditions from MOZART model output, and measurements from the Airbase database.
Adaptive rehabilitation gaming system: on-line individualization of stroke rehabilitation.
Nirme, Jens; Duff, Armin; Verschure, Paul F M J
2011-01-01
The effects of stroke differ considerably in degree and symptoms across patients. It has been shown that specific, individualized and varied therapy favors recovery. The Rehabilitation Gaming System (RGS) is a Virtual Reality (VR) based rehabilitation system designed following these principles. We have developed two algorithms to control the level of task difficulty that a user of the RGS is exposed to, while providing controlled variation in the therapy. In this paper, we compare the two algorithms by running numerical simulations and a study with healthy subjects. We show that both algorithms allow for individualization of the challenge level of the task. Further, the results reveal that the algorithm that iteratively learns a user model for each subject also allows for high variation in the task.
Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters
Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng
2016-01-01
Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter (CF) methods perform favorably in robustness, accuracy and speed. However, they also have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also solves the estimation of scale variation in many scenarios. We theoretically analyze the problem CFs face with motion and utilize the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF in addition to the ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs competitively against the top-ranked trackers. PMID:27618046
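The core operation that correlation filter trackers such as KCF accelerate is locating the peak of a cross-correlation response computed in the Fourier domain. A minimal sketch with a hypothetical synthetic frame (the blob position and noise level are invented for illustration):

```python
import numpy as np

def correlation_response(frame, template):
    """FFT-based cross-correlation of a template with a frame; the
    response peak marks the most likely target location."""
    F = np.fft.fft2(frame)
    T = np.fft.fft2(template, s=frame.shape)   # zero-pad to frame size
    # conj(T) * F in the Fourier domain == cross-correlation in space
    return np.real(np.fft.ifft2(np.conj(T) * F))

# Hypothetical frame: a bright 3x3 blob hidden in noise at row 10, col 20
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 0.1, (32, 32))
frame[10:13, 20:23] += 1.0
resp = correlation_response(frame, np.ones((3, 3)))
peak = np.unravel_index(int(np.argmax(resp)), resp.shape)
```

KCF replaces the plain template with a learned kernelized filter, but the FFT-based response evaluation above is what makes the whole family fast enough for real time.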
Probabilistic representation of gene regulatory networks.
Mao, Linyong; Resat, Haluk
2004-09-22
Recent experiments have established unambiguously that biological systems can have significant cell-to-cell variations in gene expression levels even in isogenic populations. Computational approaches to studying gene expression in cellular systems should capture such biological variations for a more realistic representation. In this paper, we present a new, fully probabilistic approach to the modeling of gene regulatory networks that allows for fluctuations in the gene expression levels. The new algorithm uses a very simple representation for the genes, and accounts simultaneously for the repression or induction of the genes and for the biological variations among isogenic populations. Because of its simplicity, the introduced algorithm is a very promising approach for modeling large-scale gene regulatory networks. We have tested the new algorithm on a recently bioengineered synthetic gene network library. The good agreement between the computed and the experimental results for this library of networks, and additional tests, demonstrate that the new algorithm is robust and very successful in explaining the experimental data. The simulation software is available upon request. Supplementary material will be made available on the OUP server.
Land, P E; Haigh, J D
1997-12-20
In algorithms for the atmospheric correction of visible and near-IR satellite observations of the Earth's surface, it is generally assumed that the spectral variation of aerosol optical depth is characterized by an Ångström power law or similar dependence. In an iterative fitting algorithm for atmospheric correction of ocean color imagery over case 2 waters, this assumption leads to an inability to retrieve the aerosol type, and to spectral effects actually caused by the water contents being attributed to aerosol spectral variation. An improvement to this algorithm is described in which the spectral variation of optical depth is calculated as a function of aerosol type and relative humidity, and an attempt is made to retrieve the relative humidity in addition to the aerosol type. The aerosol is treated as a mixture of aerosol components (e.g., soot) rather than of aerosol types (e.g., urban). We demonstrate the improvement over the previous method by using simulated case 1 and case 2 sea-viewing wide field-of-view sensor data, although the retrieval of relative humidity was not successful.
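The Ångström power law that the improved algorithm moves away from is a one-line relation. A sketch with illustrative values (the reference optical depth and exponent below are invented, not retrievals from the paper):

```python
def angstrom_tau(wavelength_um, tau_ref, alpha, ref_um=0.55):
    """Angstrom power law for aerosol optical depth:
    tau(lambda) = tau_ref * (lambda / lambda_ref) ** (-alpha).
    This is the spectral-variation assumption that the improved
    algorithm replaces with aerosol-component mixtures and
    relative-humidity dependence."""
    return tau_ref * (wavelength_um / ref_um) ** (-alpha)

# Illustrative values only: tau = 0.2 at 550 nm, Angstrom exponent 1.3
tau_870 = angstrom_tau(0.87, tau_ref=0.2, alpha=1.3)
```

With a positive exponent the optical depth decreases toward longer wavelengths, which is exactly the spectral slope that case 2 water contents can mimic and thereby confound the retrieval.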
New vertices and canonical quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Sergei
2010-07-15
We present two results on the recently proposed new spin foam models. First, we show how a (slightly modified) restriction on representations in the Engle-Pereira-Rovelli-Livine model leads to the appearance of the Ashtekar-Barbero connection, thus bringing this model even closer to loop quantum gravity. Second, we however argue that the quantization procedure used to derive the new models is inconsistent since it relies on the symplectic structure of the unconstrained BF theory.
AKSZ construction from reduction data
NASA Astrophysics Data System (ADS)
Bonechi, Francesco; Cabrera, Alejandro; Zabzine, Maxim
2012-07-01
We discuss a general procedure to encode the reduction of the target space geometry into AKSZ sigma models. This is done by considering the AKSZ construction with target the BFV model for constrained graded symplectic manifolds. We investigate the relation between this sigma model and the one with the reduced structure. We also discuss several examples in dimension two and three when the symmetries come from Lie group actions and systematically recover models already proposed in the literature.
Nonlinear Symplectic Attitude Estimation for Small Satellites
2006-08-01
The approach demonstrates orders-of-magnitude improvement in the estimation of the state and of the constants of motion when compared with extended and iterative Kalman methods. Attitude estimators for small satellites have largely fallen into the conventional category, including the ubiquitous Extended Kalman Filter (EKF).
On the Hamilton approach of the dissipative systems
NASA Astrophysics Data System (ADS)
Zimin, B. A.; Zorin, I. S.; Sventitskaya, V. E.
2018-05-01
In this paper we consider the problem of constructing equations describing the states of dissipative dynamical systems (media with absorption or damping). The approaches of Lagrange and Hamilton are discussed. A non-symplectic extension of the Poisson brackets is formulated. The application of the Hamiltonian formalism here makes it possible to obtain explicit equations for the dynamics of a nonlinear elastic system with damping and a one-dimensional continuous medium with internal friction.
Despeckling Polsar Images Based on Relative Total Variation Model
NASA Astrophysics Data System (ADS)
Jiang, C.; He, X. F.; Yang, L. J.; Jiang, J.; Wang, D. Y.; Yuan, Y.
2018-04-01
The relative total variation (RTV) algorithm, which can effectively separate structural information from texture in an image, is employed to extract the main structures of the image. However, applying RTV directly to polarimetric SAR (PolSAR) image filtering will not preserve polarimetric information. A new RTV approach based on the complex Wishart distribution is proposed that takes the polarimetric properties of PolSAR data into account. The proposed polarimetric RTV (PolRTV) algorithm can be used for PolSAR image filtering. The L-band Airborne SAR (AIRSAR) San Francisco data is used to demonstrate the effectiveness of the proposed algorithm in speckle suppression, structural information preservation, and polarimetric property preservation.
Mukherjee, Kaushik; Gupta, Sanjay
2017-03-01
Several mechanobiology algorithms have been employed to simulate bone ingrowth around porous coated implants. However, there is a scarcity of quantitative comparison between the efficacies of commonly used mechanoregulatory algorithms. The objectives of this study are: (1) to predict peri-acetabular bone ingrowth using cell-phenotype specific algorithm and to compare these predictions with those obtained using phenomenological algorithm and (2) to investigate the influences of cellular parameters on bone ingrowth. The variation in host bone material property and interfacial micromotion of the implanted pelvis were mapped onto the microscale model of implant-bone interface. An overall variation of 17-88 % in peri-acetabular bone ingrowth was observed. Despite differences in predicted tissue differentiation patterns during the initial period, both the algorithms predicted similar spatial distribution of neo-tissue layer, after attainment of equilibrium. Results indicated that phenomenological algorithm, being computationally faster than the cell-phenotype specific algorithm, might be used to predict peri-prosthetic bone ingrowth. The cell-phenotype specific algorithm, however, was found to be useful in numerically investigating the influence of alterations in cellular activities on bone ingrowth, owing to biologically related factors. Amongst the host of cellular activities, matrix production rate of bone tissue was found to have predominant influence on peri-acetabular bone ingrowth.
Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization
Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.
2014-01-01
High radiation dose in CT scans increases a lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with total variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term preferentially performs smoothing only on the non-edge parts of the image in order to better preserve the edges, which is realized by introducing a penalty weight into the original total variation norm. During the reconstruction process, pixels at edges are gradually identified and given a small penalty weight. Our iterative algorithm is implemented on a GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in the low dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information about low contrast structures and therefore maintain acceptable spatial resolution. PMID:21860076
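The effect of a per-pixel penalty weight on TV smoothing can be shown in a simplified image-domain setting. The sketch below is a stand-in, not the paper's reconstruction: the x-ray projection fidelity term is replaced with a plain denoising term, and the edge weights are given by hand rather than identified iteratively:

```python
import numpy as np

def weighted_tv_denoise(img, weights, lam=0.2, tau=0.1, iters=200):
    """Gradient descent on 0.5*||u - img||^2 + lam * sum(w * |grad u|),
    with a smoothed gradient magnitude. weights < 1 at edge pixels
    means those pixels are smoothed less (the edge-preserving idea)."""
    eps = 1e-3
    u = img.copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = weights * ux / mag, weights * uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - tau * ((u - img) - lam * div)
    return u

# Noisy step edge; a small penalty weight on the edge columns preserves it
rng = np.random.default_rng(1)
clean = np.zeros((16, 16))
clean[:, 8:] = 1.0
noisy = clean + rng.normal(0.0, 0.2, clean.shape)
w = np.ones_like(clean)
w[:, 7:9] = 0.1                      # hand-picked "detected edge" weights
den = weighted_tv_denoise(noisy, w)
```

In the actual method the weights are learned during reconstruction as edge pixels are identified, and the fidelity term couples to the projection data through the system matrix.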
Sinha, Rituparna; Samaddar, Sandip; De, Rajat K.
2015-01-01
Copy number variation (CNV) is a form of structural alteration in the mammalian DNA sequence, which are associated with many complex neurological diseases as well as cancer. The development of next generation sequencing (NGS) technology provides us a new dimension towards detection of genomic locations with copy number variations. Here we develop an algorithm for detecting CNVs, which is based on depth of coverage data generated by NGS technology. In this work, we have used a novel way to represent the read count data as a two dimensional geometrical point. A key aspect of detecting the regions with CNVs, is to devise a proper segmentation algorithm that will distinguish the genomic locations having a significant difference in read count data. We have designed a new segmentation approach in this context, using convex hull algorithm on the geometrical representation of read count data. To our knowledge, most algorithms have used a single distribution model of read count data, but here in our approach, we have considered the read count data to follow two different distribution models independently, which adds to the robustness of detection of CNVs. In addition, our algorithm calls CNVs based on the multiple sample analysis approach resulting in a low false discovery rate with high precision. PMID:26291322
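The segmentation step rests on taking the convex hull of the two-dimensional read-count points. A minimal pure-Python hull (Andrew's monotone chain) is sketched below; the input points are toy coordinates, not real read-count data:

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull; points = list of (x, y)
    tuples. Returns hull vertices in counter-clockwise order, starting
    from the lexicographically smallest point; collinear boundary
    points are dropped."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])
    lower, upper = [], []
    for p in pts:                     # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):           # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (1, 0)])
```

In the CNV setting, interior points (read counts consistent with the bulk) fall inside the hull, while boundary geometry helps flag segments whose counts deviate.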
Bhattacharya, Anindya; De, Rajat K
2010-08-01
Distance-based clustering algorithms can group genes that show similar expression values under multiple experimental conditions, but they are unable to identify groups of genes that have a similar pattern of variation in their expression values. Previously we developed an algorithm called the divisive correlation clustering algorithm (DCCA) to tackle this situation, based on the concept of correlation clustering. But this algorithm may also fail in certain cases. To overcome these situations, we propose a new clustering algorithm, called the average correlation clustering algorithm (ACCA), which is able to produce better clustering solutions than several existing methods. ACCA is able to find groups of genes having more common transcription factors and similar patterns of variation in their expression values. Moreover, ACCA is more efficient than DCCA with respect to execution time. Like DCCA, ACCA uses the correlation clustering concept introduced by Bansal et al. ACCA uses the correlation matrix in such a way that all genes in a cluster have the highest average correlation values with the genes in that cluster. We have applied ACCA and some well-known conventional methods, including DCCA, to two artificial and nine gene expression datasets, and compared the performance of the algorithms. The clustering results of ACCA are found to be more significantly relevant to the biological annotations than those of the other methods. Analysis of the results shows the superiority of ACCA over the others in determining groups of genes having more common transcription factors and similar patterns of variation in their expression profiles. Availability of the software: the software has been developed using the C and Visual Basic languages, and can be executed on Microsoft Windows platforms. The software may be downloaded as a zip file from http://www.isical.ac.in/~rajat; it then needs to be installed. Two Word files (included in the zip file) should be consulted before installation and execution of the software. Copyright 2010 Elsevier Inc. All rights reserved.
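The key idea, clustering by correlation of expression *patterns* rather than by distance between expression *values*, can be sketched loosely. The toy below is not the published ACCA procedure; it simply picks mutually least-correlated seed genes and assigns every gene to the seed it correlates with most strongly, which already separates shared patterns of variation that distance-based methods miss:

```python
import numpy as np

def seed_based_correlation_clusters(data, k=2):
    """Loose sketch in the spirit of correlation clustering (not the
    exact ACCA algorithm): pick k mutually least-correlated seed genes,
    then label every gene by the seed it correlates with most."""
    corr = np.corrcoef(data)              # gene-by-gene Pearson correlation
    seeds = [0]
    while len(seeds) < k:
        # next seed: gene with the lowest mean correlation to current seeds
        seeds.append(int(np.argmin(corr[:, seeds].mean(axis=1))))
    return np.argmax(corr[:, seeds], axis=1)

# Two groups with opposite patterns of variation across 10 conditions;
# offsets make the expression *values* differ while patterns match
t = np.linspace(0.0, 1.0, 10)
rising = np.stack([t + 0.05 * i for i in range(4)])
falling = np.stack([1.0 - t + 0.05 * i for i in range(4)])
labels = seed_based_correlation_clusters(np.vstack([rising, falling]), k=2)
```

A Euclidean-distance method could split the rising genes by their offsets; correlation groups them by shared trend, which is the property ACCA optimizes via the average correlation within each cluster.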
NASA Astrophysics Data System (ADS)
Emaminejad, Nastaran; Wahi-Anwar, Muhammad; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael
2018-02-01
Translation of radiomics into clinical practice requires confidence in its interpretations, which may be obtained by understanding and overcoming the limitations of current radiomic approaches. Currently there is a lack of standardization in radiomic feature extraction. In this study we examined a few factors that are potential sources of inconsistency in characterizing lung nodules: (1) different choices of parameters and algorithms in feature calculation, (2) two CT image dose levels, and (3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of variation of these factors on entropy texture features of lung nodules. Nineteen lung nodules from our lung cancer screening program were identified by a CAD tool, which also provided contours. The radiomics features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features, in addition to 2 intensity-based features. A robustness index was calculated across different image acquisition parameters to illustrate the reproducibility of features. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising of images slightly improved the robustness of some entropy features at WFBP. Iterative reconstruction improved robustness in fewer cases and caused more variation in entropy feature values and their robustness. Across different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. The results indicate the need for harmonization of feature calculations and identification of optimal parameters and algorithms in a radiomics study.
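A GLCM-based entropy feature can be sketched in a few lines; the gray-level count and pixel offset below are illustrative settings, and such choices are exactly the parameters whose variation the study quantifies:

```python
import numpy as np

def glcm_entropy(img, levels=8):
    """GLCM entropy for a single horizontal (0, 1) offset.
    img is assumed normalized to [0, 1); levels and the offset are
    implementation choices that differ between radiomics packages."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantize
    pairs = levels * q[:, :-1] + q[:, 1:]     # encode co-occurring pairs
    counts = np.bincount(pairs.ravel(), minlength=levels * levels)
    p = counts / counts.sum()                 # co-occurrence probabilities
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
uniform_patch = np.full((32, 32), 0.5)        # no texture: entropy 0
noisy_patch = rng.random((32, 32))            # rich texture: high entropy
```

Changing `levels` or the offset changes the entropy value for the same nodule, which is one concrete mechanism behind the up-to-75% variation reported.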
Ni, Yepeng; Liu, Jianbo; Liu, Shan; Bai, Yaxin
2016-01-01
With the rapid development of smartphones and wireless networks, indoor location-based services have become more and more prevalent. Due to the sophisticated propagation of radio signals, the Received Signal Strength Indicator (RSSI) shows a significant variation during pedestrian walking, which introduces critical errors in deterministic indoor positioning. To solve this problem, we present a novel method to improve the indoor pedestrian positioning accuracy by embedding a fuzzy pattern recognition algorithm into a Hidden Markov Model. The fuzzy pattern recognition algorithm follows the rule that the RSSI fading has a positive correlation to the distance between the measuring point and the AP location even during a dynamic positioning measurement. Through this algorithm, we use the RSSI variation trend to replace the specific RSSI value to achieve a fuzzy positioning. The transition probability of the Hidden Markov Model is trained by the fuzzy pattern recognition algorithm with pedestrian trajectories. Using the Viterbi algorithm with the trained model, we can obtain a set of hidden location states. In our experiments, we demonstrate that, compared with the deterministic pattern matching algorithm, our method can greatly improve the positioning accuracy and shows robust environmental adaptability. PMID:27618053
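The decoding step described, recovering hidden location states with the Viterbi algorithm under a trained transition model, can be sketched generically. The corridor layout, transition matrix and emission probabilities below are hypothetical stand-ins for the trained model and the fuzzy RSSI matching scores:

```python
import numpy as np

def viterbi(log_trans, log_emit, log_init):
    """Most likely hidden state sequence (log-domain Viterbi decoding).
    log_trans[i, j] = log P(next state j | state i); in the paper this
    matrix is trained from pedestrian trajectories.
    log_emit[t, j] = log P(observation at time t | state j)."""
    T, S = log_emit.shape
    delta = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans        # all prev -> next pairs
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):                  # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical 3-location corridor: the walker stays or steps to a
# neighbor; RSSI evidence points to locations 0, 1, 2 in turn
trans = np.array([[0.6, 0.4, 0.0],
                  [0.2, 0.6, 0.2],
                  [0.0, 0.4, 0.6]])
emit = np.array([[0.8, 0.1, 0.1],
                 [0.2, 0.6, 0.2],
                 [0.1, 0.1, 0.8]])
init = np.full(3, 1.0 / 3.0)
path = viterbi(np.log(trans + 1e-12), np.log(emit + 1e-12), np.log(init))
```

The transition model constrains the decoded trajectory to physically plausible pedestrian movement, which is what suppresses the jumpy estimates that raw deterministic RSSI matching produces.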
Simple satellite orbit propagator
NASA Astrophysics Data System (ADS)
Gurfil, P.
2008-06-01
An increasing number of space missions require on-board autonomous orbit determination. The purpose of this paper is to develop a simple orbit propagator (SOP) for such missions. Since most satellites are limited by the available processing power, it is important to develop an orbit propagator that uses limited computational and memory resources. In this work, we show how to choose state variables for propagation using the simplest numerical integration scheme available, the explicit Euler integrator. The new state variables are derived by the following rationale: apply a variation of parameters not on the gravity-affected orbit, but rather on the gravity-free orbit, and treat the gravity as a generalized force. This ultimately leads to a state vector comprising the inertial velocity and a modified position vector, wherein the product of velocity and time is subtracted from the inertial position. It is shown that the explicit Euler integrator, applied to the new state variables, becomes a symplectic integrator, preserving the Hamiltonian and the angular momentum (or a component thereof in the case of oblateness perturbations). The main application of the proposed propagator is estimation of mean orbital elements. It is shown that the SOP is capable of estimating the mean elements with an accuracy comparable to a high-order integrator that consumes an order of magnitude more computational time than the SOP.
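The qualitative behavior claimed, a first-order symplectic update whose energy error stays bounded and whose angular momentum is conserved, can be demonstrated with the standard semi-implicit Euler scheme on a Kepler orbit. This sketch uses ordinary position/velocity variables rather than the paper's modified state vector, so it illustrates the general property, not the SOP itself:

```python
import numpy as np

def symplectic_euler_step(r, v, dt, mu=1.0):
    """Semi-implicit (symplectic) Euler for Keplerian motion: kick the
    velocity with gravity first, then drift the position with the NEW
    velocity. Ordinary explicit Euler would use the old velocity and
    exhibit secular energy drift."""
    a = -mu * r / np.linalg.norm(r) ** 3
    v = v + dt * a
    r = r + dt * v
    return r, v

def energy(r, v, mu=1.0):
    return 0.5 * v @ v - mu / np.linalg.norm(r)

# Circular orbit in normalized units (mu = 1); propagate ~20 periods
r, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
E0, dt, max_err = energy(r, v), 0.01, 0.0
for _ in range(int(20 * 2 * np.pi / dt)):
    r, v = symplectic_euler_step(r, v, dt)
    max_err = max(max_err, abs(energy(r, v) - E0))
Lz = r[0] * v[1] - r[1] * v[0]    # angular momentum: conserved exactly,
                                  # since the kick is along r (central force)
```

The bounded energy oscillation and exact angular-momentum conservation are the two invariance properties the abstract attributes to the SOP's modified-variable formulation.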
GENERIC Integrators: Structure Preserving Time Integration for Thermodynamic Systems
NASA Astrophysics Data System (ADS)
Öttinger, Hans Christian
2018-04-01
Thermodynamically admissible evolution equations for non-equilibrium systems are known to possess a distinct mathematical structure. Within the GENERIC (general equation for the non-equilibrium reversible-irreversible coupling) framework of non-equilibrium thermodynamics, which is based on continuous time evolution, we investigate the possibility of preserving all the structural elements in time-discretized equations. Our approach, which follows Moser's [1] construction of symplectic integrators for Hamiltonian systems, is illustrated for the damped harmonic oscillator. Alternative approaches are sketched.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadets, Boris; Karolinsky, Eugene; Pop, Iulia
2016-05-15
In this paper we continue to study Belavin-Drinfeld cohomology introduced in Kadets et al., Commun. Math. Phys. 344(1), 1-24 (2016) and related to the classification of quantum groups whose quasi-classical limit is a given simple complex Lie algebra 𝔤. Here we compute Belavin-Drinfeld cohomology for all non-skewsymmetric r-matrices on the Belavin-Drinfeld list for simple Lie algebras of type B, C, and D.
Sixth-Order Lie Group Integrators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forest, E.
1990-03-01
In this paper we present the coefficients of several 6th-order symplectic integrators of the type developed by R. Ruth. To obtain these results we fully exploit the connection with Lie groups. These integrators, like all the explicit integrators of Ruth, may be used in any equation where some sort of Lie bracket is preserved. In fact, if the Lie operator governing the equation of motion is separable into two solvable parts, the Ruth integrators can be used.
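The structure of such Ruth-type integrators, an alternation of "drift" and "kick" substeps with carefully chosen coefficients, is easiest to see in the widely quoted 4th-order member of the family; the paper's 6th-order coefficients play the same role in a longer composition. A sketch on the harmonic oscillator:

```python
import math

# Standard 4th-order Ruth/Forest-Ruth coefficients for H = p^2/2 + V(q)
THETA = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
C = (THETA / 2.0, (1.0 - THETA) / 2.0, (1.0 - THETA) / 2.0, THETA / 2.0)
D = (THETA, 1.0 - 2.0 * THETA, THETA, 0.0)

def ruth4_step(q, p, dt, force):
    """One 4th-order symplectic step: alternate drifts (q-updates with
    the current momentum) and kicks (p-updates with the new position)."""
    for c, d in zip(C, D):
        q += c * dt * p
        p += d * dt * force(q)
    return q, p

# Harmonic oscillator V(q) = q^2/2: energy error stays bounded at O(dt^4)
q, p, dt = 1.0, 0.0, 0.1
E0 = 0.5 * (p * p + q * q)
max_err = 0.0
for _ in range(int(100 * 2.0 * math.pi / dt)):   # ~100 periods
    q, p = ruth4_step(q, p, dt, lambda x: -x)
    max_err = max(max_err, abs(0.5 * (p * p + q * q) - E0))
```

Each substep is the exact flow of one solvable part of the Lie operator, which is why the composition applies whenever the governing operator splits into two solvable pieces.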
Higher-order hybrid implicit/explicit FDTD time-stepping
NASA Astrophysics Data System (ADS)
Tierens, W.
2016-12-01
Both partially implicit FDTD methods, and symplectic FDTD methods of high temporal accuracy (3rd or 4th order), are well documented in the literature. In this paper we combine them: we construct a conservative FDTD method which is fourth order accurate in time and is partially implicit. We show that the stability condition for this method depends exclusively on the explicit part, which makes it suitable for use in e.g. modelling wave propagation in plasmas.
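For context, the baseline explicit scheme that both the partially implicit and the high-order symplectic variants build on is the standard 1D Yee leapfrog update. A normalized-units toy follows; the grid size, soft source and Courant number are arbitrary illustrative choices, and this is the plain 2nd-order explicit method, not the paper's hybrid 4th-order scheme:

```python
import numpy as np

def yee_1d(nx=200, nt=400, courant=0.5):
    """Plain explicit 1D Yee FDTD (2nd-order leapfrog) with PEC ends,
    in normalized units (c = dx = 1, dt = courant * dx). E and H live
    on staggered grids and are updated in alternation."""
    Ez = np.zeros(nx)          # E at integer grid points
    Hy = np.zeros(nx - 1)      # H staggered between them
    for n in range(nt):
        Hy += courant * np.diff(Ez)           # curl E -> update H
        Ez[1:-1] += courant * np.diff(Hy)     # curl H -> update E
        Ez[nx // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)  # soft source
    return Ez

Ez = yee_1d()
```

The explicit update is stable only under the Courant condition on `dt`; making part of the update implicit relaxes that restriction in the stiff directions, which is the motivation for the hybrid schemes the paper combines with 4th-order-accurate time stepping.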
NASA Astrophysics Data System (ADS)
Monnier, J.; Couderc, F.; Dartus, D.; Larnier, K.; Madec, R.; Vila, J.-P.
2016-11-01
The 2D shallow water equations adequately model some geophysical flows with wet-dry fronts (e.g. flood plain or tidal flows); nevertheless, deriving accurate, robust and conservative numerical schemes for dynamic wet-dry fronts over complex topographies remains a challenge. Furthermore, for these flows, data are generally complex, multi-scale and uncertain. Robust variational inverse algorithms, providing sensitivity maps and data assimilation processes, may contribute to a breakthrough in modelling shallow wet-dry front dynamics. The present study aims at deriving an accurate, positive and stable finite volume scheme in the presence of dynamic wet-dry fronts, together with corresponding inverse computational algorithms (variational approach). The schemes and algorithms are assessed on classical and original benchmarks plus a real flood plain test case (Lèze river, France). Original sensitivity maps with respect to the (friction, topography) pair are computed and discussed. The identification of inflow discharges (time series) and friction coefficients (spatially distributed parameters) demonstrates the algorithms' efficiency.
Progress on automated data analysis algorithms for ultrasonic inspection of composites
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Forsyth, David S.; Welter, John T.
2015-03-01
Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.
Robust control algorithms for Mars aerobraking
NASA Technical Reports Server (NTRS)
Shipley, Buford W., Jr.; Ward, Donald T.
1992-01-01
Four atmospheric guidance concepts have been adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. The first two offer improvements to the Analytic Predictor Corrector (APC) to increase its robustness to density variations. The second two are variations of a new Liapunov tracking exit phase algorithm, developed to guide the vehicle along a reference trajectory. These four new controllers are tested using a six-degree-of-freedom computer simulation to evaluate their robustness. MARSGRAM is used to develop realistic atmospheres for the study. When square wave density pulses perturb the atmosphere, all four controllers are successful. The algorithms are tested against atmospheres where the inbound and outbound density functions are different. Square wave density pulses are again used, but only for the outbound leg of the trajectory. Additionally, sine waves are used to perturb the density function. The new algorithms are found to be more robust than any previously tested, and a Liapunov controller is selected as the most robust of all the control algorithms examined.
Multichannel blind iterative image restoration.
Sroubek, Filip; Flusser, Jan
2003-01-01
Blind image deconvolution is required in many applications of microscopy imaging, remote sensing, and astronomical imaging. Unfortunately, in a single-channel framework, serious conceptual and numerical problems are often encountered. Very recently, an eigenvector-based method (EVAM) was proposed for a multichannel framework which perfectly determines convolution masks in a noise-free environment if a channel disparity condition, called co-primeness, is satisfied. We propose a novel iterative algorithm based on recent anisotropic denoising techniques of total variation and a Mumford-Shah functional with the EVAM restoration condition included. A linearization scheme of half-quadratic regularization together with a cell-centered finite difference discretization scheme is used in the algorithm and provides a unified approach to the solution of the total variation and Mumford-Shah formulations. The algorithm performs well even on very noisy images and does not require an exact estimation of mask orders. We demonstrate the capabilities of the algorithm on synthetic data. Finally, the algorithm is applied to defocused images taken with a digital camera and to data from astronomical ground-based observations of the Sun.
An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects.
Kim, Jinkwon; Min, Se Dong; Lee, Myoungho
2011-06-27
Numerous studies have been conducted regarding heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to acquire robust performance, as biosignals have a large amount of variation among individuals. Various methods have been proposed to reduce the differences coming from personal characteristics, but these expand the differences caused by arrhythmia. In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation using dedicated wavelets matched to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. A principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as a classifier in the proposed algorithm. A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed by physicians.
Surface charges for gravity and electromagnetism in the first order formalism
NASA Astrophysics Data System (ADS)
Frodden, Ernesto; Hidalgo, Diego
2018-02-01
A new derivation of surface charges for 3 + 1 gravity coupled to electromagnetism is obtained. Gravity theory is written in the tetrad-connection variables. The general derivation starts from the Lagrangian, and uses the covariant symplectic formalism in the language of forms. For gauge theories, surface charges disentangle physical from gauge symmetries through the use of Noether identities and the exactness symmetry condition. The surface charges are quasilocal, explicitly coordinate independent, gauge invariant and background independent. For a black hole family solution, the surface charge conservation implies the first law of black hole mechanics. As a check, we show the first law for an electrically charged, rotating black hole with an asymptotically constant curvature (the Kerr–Newman (anti-)de Sitter family). The charges, including the would-be mass term appearing in the first law, are quasilocal. No reference to the asymptotic structure of the spacetime nor the boundary conditions is required, and therefore topological terms do not play a rôle. Finally, surface charge formulae for Lovelock gravity coupled to electromagnetism are exhibited, generalizing the one derived in a recent work by Barnich et al, Proc. Workshop ‘About Various Kinds of Interactions’ in honour of Philippe Spindel (4–5 June 2015, Mons, Belgium), C15-06-04 (2016), arXiv:1611.01777 [gr-qc]. The two different symplectic methods to define surface charges are compared and shown equivalent.
NASA Astrophysics Data System (ADS)
Chang, Chueh-Hsin; Yu, Ching-Hao; Sheu, Tony Wen-Hann
2016-10-01
In this article, we numerically revisit the long-time solution behavior of the Camassa-Holm equation u_t - u_{xxt} + 2u_x + 3uu_x = 2u_x u_{xx} + uu_{xxx}. The finite difference solution of this integrable equation is sought subject to the newly derived initial condition with a Delta-function potential. Our underlying strategy for deriving a numerically phase-accurate finite difference scheme in the time domain is to reduce the numerical dispersion error through minimization of the derived discrepancy between the numerical and exact modified wavenumbers. Additionally, to conserve the Hamiltonians of this completely integrable equation, a symplecticity-preserving time-stepping scheme is developed. Based on the solutions computed from the temporally symplecticity-preserving and spatially wavenumber-preserving schemes, the long-time asymptotic CH solution can be accurately depicted in distinct regions of the space-time domain, each featuring its own quantitatively very different solution behavior. We also aim to numerically confirm that in the two transition zones the long-time asymptotics can indeed be described in terms of the theoretically derived Painlevé transcendents. Another aim of this study is to numerically exhibit a close connection between the presently predicted finite-difference solution and the solution of the Painlevé ordinary differential equation of type II in the two transition zones.
On extremals of the entropy production by ‘Langevin-Kramers’ dynamics
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo
2014-05-01
We refer as ‘Langevin-Kramers’ dynamics to a class of stochastic differential systems exhibiting a degenerate ‘metriplectic’ structure. This means that the drift field can be decomposed into a symplectic and a gradient-like component with respect to a pseudo-metric tensor associated with random fluctuations affecting increments of only a sub-set of the degrees of freedom. Systems in this class are often encountered in applications as elementary models of Hamiltonian dynamics in a heat bath eventually relaxing to a Boltzmann steady state. Entropy production control in Langevin-Kramers models differs from the now well-understood case of Langevin-Smoluchowski dynamics for two reasons. First, the definition of entropy production stemming from fluctuation theorems specifies a cost functional which does not act coercively on all degrees of freedom of control protocols. Second, the presence of a symplectic structure imposes a non-local constraint on the class of admissible controls. Using Pontryagin control theory and restricting the attention to additive noise, we show that smooth protocols attaining extremal values of the entropy production appear generically in continuous parametric families as a consequence of a trade-off between smoothness of the admissible protocols and non-coercivity of the cost functional. Uniqueness is, however, always recovered in the over-damped limit as extremal equations reduce at leading order to the Monge-Ampère-Kantorovich optimal mass-transport equations.
Experimental scheme and restoration algorithm of block compression sensing
NASA Astrophysics Data System (ADS)
Zhang, Linxia; Zhou, Qun; Ke, Jun
2018-01-01
Compressed sensing (CS) can use the sparseness of a target to obtain its image with much less data than that required by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, the orthogonal matching pursuit algorithm (OMP) and the total variation minimization algorithm (TV), are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
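OMP, one of the two reconstruction algorithms named above, greedily selects the measurement-matrix column most correlated with the residual and re-fits by least squares on the selected support. A minimal numpy sketch, not the authors' implementation:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y ~= A @ x."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit on the support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

On a trivial identity sensing matrix, omp(np.eye(4), np.array([0., 3., 0., 1.]), 2) returns the 2-sparse input exactly; in a real block-CS pipeline A would be the per-block measurement matrix.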
NASA Astrophysics Data System (ADS)
Liu, Shixing; Liu, Chang; Hua, Wei; Guo, Yongxin
2016-11-01
By using the discrete variational method, we study the numerical method of the general nonholonomic system in the generalized Birkhoffian framework, and construct a numerical method of generalized Birkhoffian equations called a self-adjoint-preserving algorithm. Numerical results show that it is reasonable to study the nonholonomic system by the structure-preserving algorithm in the generalized Birkhoffian framework. Project supported by the National Natural Science Foundation of China (Grant Nos. 11472124, 11572145, 11202090, and 11301350), the Doctor Research Start-up Fund of Liaoning Province, China (Grant No. 20141050), the China Postdoctoral Science Foundation (Grant No. 2014M560203), and the General Science and Technology Research Plans of Liaoning Educational Bureau, China (Grant No. L2013005).
Observation and Control of Hamiltonian Chaos in Wave-particle Interaction
NASA Astrophysics Data System (ADS)
Doveil, F.; Elskens, Y.; Ruzzon, A.
2010-11-01
Wave-particle interactions are central in plasma physics. The paradigm beam-plasma system can be advantageously replaced by a traveling wave tube (TWT) to allow their study in a much less noisy environment. This led to detailed analysis of the self-consistent interaction between unstable waves and an either cold or warm electron beam. More recently a test cold beam has been used to observe its interaction with externally excited wave(s). This allowed observing the main features of Hamiltonian chaos and testing a new method to efficiently channel chaotic transport in phase space. To simulate accurately and efficiently the particle dynamics in the TWT and other 1D particle-wave systems, a new symplectic, symmetric, second-order numerical algorithm is developed, using particle position as the independent variable, with a fixed spatial step. This contribution reviews: presentation of the TWT and its connection to plasma physics, resonant interaction of a charged particle in electrostatic waves, observation of particle trapping and transition to chaos, tests of control of chaos, and a description of the simulation algorithm. The velocity distribution function of the electron beam is recorded with a trochoidal energy analyzer at the output of the TWT. An arbitrary waveform generator is used to launch a prescribed spectrum of waves along the 4 m long helix of the TWT. The nonlinear synchronization of particles by a single wave, responsible for Landau damping, is observed. We explore the resonant velocity domain associated with a single wave as well as the transition to large scale chaos when the resonant domains of two waves and their secondary resonances overlap. This transition exhibits a devil's staircase behavior when increasing the excitation level, in agreement with numerical simulation. A new strategy for control of chaos by building barriers of transport in phase space, as well as its robustness, is successfully tested.
The underlying concepts extend far beyond the field of electron devices and plasma physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brower, Richard C.
This proposal is to develop the software and algorithmic infrastructure needed for the numerical study of quantum chromodynamics (QCD), and of theories that have been proposed to describe physics beyond the Standard Model (BSM) of high energy physics, on current and future computers. This infrastructure will enable users (1) to improve the accuracy of QCD calculations to the point where they no longer limit what can be learned from high-precision experiments that seek to test the Standard Model, and (2) to determine the predictions of BSM theories in order to understand which of them are consistent with the data that will soon be available from the LHC. Work will include the extension and optimizations of community codes for the next generation of leadership class computers, the IBM Blue Gene/Q and the Cray XE/XK, and for the dedicated hardware funded for our field by the Department of Energy. Members of our collaboration at Brookhaven National Laboratory and Columbia University worked on the design of the Blue Gene/Q, and have begun to develop software for it. Under this grant we will build upon their experience to produce high-efficiency production codes for this machine. Cray XE/XK computers with many thousands of GPU accelerators will soon be available, and the dedicated commodity clusters we obtain with DOE funding include growing numbers of GPUs. We will work with our partners in NVIDIA's Emerging Technology group to scale our existing software to thousands of GPUs, and to produce highly efficient production codes for these machines. Work under this grant will also include the development of new algorithms for the effective use of heterogeneous computers, and their integration into our codes. It will include improvements of Krylov solvers and the development of new multigrid methods in collaboration with members of the FASTMath SciDAC Institute, using their HYPRE framework, as well as work on improved symplectic integrators.
Vectorial mask optimization methods for robust optical lithography
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong; Arce, Gonzalo R.
2012-10-01
Continuous shrinkage of critical dimension in an integrated circuit impels the development of resolution enhancement techniques for low k1 lithography. Recently, several pixelated optical proximity correction (OPC) and phase-shifting mask (PSM) approaches were developed under scalar imaging models to account for the process variations. However, the lithography systems with larger-NA (NA>0.6) are predominant for current technology nodes, rendering the scalar models inadequate to describe the vector nature of the electromagnetic field that propagates through the optical lithography system. In addition, OPC and PSM algorithms based on scalar models can compensate for wavefront aberrations, but are incapable of mitigating polarization aberrations in practical lithography systems, which can only be dealt with under the vector model. To this end, we focus on developing robust pixelated gradient-based OPC and PSM optimization algorithms aimed at canceling defocus, dose variation, wavefront and polarization aberrations under a vector model. First, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. A steepest descent algorithm is then used to iteratively optimize the mask patterns. Simulations show that the proposed algorithms can effectively improve the process windows of the optical lithography systems.
A mathematical model for computer image tracking.
Legters, G R; Young, T Y
1982-06-01
A mathematical model using an operator formulation for a moving object in a sequence of images is presented. Time-varying translation and rotation operators are derived to describe the motion. A variational estimation algorithm is developed to track the dynamic parameters of the operators. The occlusion problem is alleviated by using a predictive Kalman filter to keep the tracking on course during severe occlusion. The tracking algorithm (variational estimation in conjunction with Kalman filter) is implemented to track moving objects with occasional occlusion in computer-simulated binary images.
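The predict/update structure described above, with the prediction step bridging occluded frames, can be sketched with a 1D constant-velocity model. This is an illustrative toy, not the paper's variational estimator; a measurement of `None` stands in for an occluded frame, and all noise parameters are assumptions.

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over noisy position measurements zs.
    A measurement of None marks an occluded frame: predict only, no update."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance
    x = np.array([[zs[0]], [0.0]])
    P = np.eye(2)
    track = []
    for z in zs[1:]:
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        if z is not None:                  # update only when the target is visible
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (np.array([[z]]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        track.append(float(x[0, 0]))
    return track
```

During the occluded frame the state simply coasts on the predicted velocity, which is exactly the role the Kalman filter plays in the tracking scheme above.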
Generation of optimum vertical profiles for an advanced flight management system
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Waters, M. H.
1981-01-01
Algorithms for generating minimum fuel or minimum cost vertical profiles are derived and examined. The option for fixing the time of flight is included in the concepts developed. These algorithms form the basis for the design of an advanced on-board flight management system. The variations in the optimum vertical profiles (resulting from these concepts) due to variations in wind, takeoff mass, and range-to-destination are presented. Fuel savings due to optimum climb, free cruise altitude, and absorbing delays enroute are examined.
D.J. Nicolsky; V.E. Romanovsky; G.G. Panteleev
2008-01-01
A variational data assimilation algorithm is developed to reconstruct thermal properties, porosity, and parametrization of the unfrozen water content for fully saturated soils. The algorithm is tested with simulated synthetic temperatures. The simulations are performed to determine the robustness and sensitivity of algorithm to estimate soil properties from in-situ...
Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua
2016-11-21
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed 'MPD-AwTTV'. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization come from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both a digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.
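The iterative shrinkage/thresholding framework the authors build on alternates a gradient step on the data-fidelity term with a proximal (shrinkage) step on the regularizer. The sketch below uses a plain l1 penalty as an illustrative stand-in for the AwTTV regularizer, whose proximal step is more involved:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                   # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)  # shrinkage (proximal) step
    return x
```

With A = I the iteration reaches its fixed point, the soft-thresholded data, after one step, which makes the behavior easy to check by hand.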
A Variational Formulation of Macro-Particle Algorithms for Kinetic Plasma Simulations
NASA Astrophysics Data System (ADS)
Shadwick, B. A.
2013-10-01
Macro-particle-based simulation methods are in widespread use in plasma physics; their computational efficiency and intuitive nature are largely responsible for their longevity. In the main, these algorithms are formulated by approximating the continuous equations of motion. For systems governed by a variational principle (such as collisionless plasmas), approximating the equations of motion is known to introduce anomalous behavior, especially in system invariants. We present a variational formulation of particle algorithms for plasma simulation based on a reduction of the distribution function onto a finite collection of macro-particles. As in the usual Particle-In-Cell (PIC) formulation, these macro-particles have a definite momentum and are spatially extended. The primary advantage of this approach is the preservation of the link between symmetries and conservation laws. For example, nothing in the reduction introduces explicit time dependence to the system and, therefore, the continuous-time equations of motion exactly conserve energy; thus, these models are free of grid-heating. In addition, the variational formulation allows for constructing models of arbitrary spatial and temporal order. In contrast, the overall accuracy of the usual PIC algorithm is at most second order due to the nature of the force interpolation between the gridded field quantities and the (continuous) particle position. Again in contrast to the usual PIC algorithm, here the macro-particle shape is arbitrary; the spatial extent is completely decoupled from both the grid-size and the ``smoothness'' of the shape; smoother particle shapes are not necessarily larger. For simplicity, we restrict our discussion to one-dimensional, non-relativistic, un-magnetized, electrostatic plasmas. We comment on the extension to the electromagnetic case. Supported by the US DoE under contract numbers DE-FG02-08ER55000 and DE-SC0008382.
Demidov, German; Simakova, Tamara; Vnuchkova, Julia; Bragin, Anton
2016-10-22
Multiplex polymerase chain reaction (PCR) is a common enrichment technique for targeted massive parallel sequencing (MPS) protocols. MPS is widely used in biomedical research and clinical diagnostics as a fast and accurate tool for the detection of short genetic variations. However, identification of larger variations such as structural variants and copy number variations (CNVs) is still a challenge for targeted MPS. Some approaches and tools for structural variant detection have been proposed, but they have limitations and often require datasets of a certain type and size and an expected number of amplicons affected by CNVs. In this paper, we describe a novel algorithm for high-resolution germline CNV detection in PCR-enriched targeted sequencing data and present the accompanying tool. We have developed a machine learning algorithm for the detection of large duplications and deletions in targeted sequencing data generated with a PCR-based enrichment step. We have performed verification studies and established the algorithm's sensitivity and specificity. We have compared the developed tool with other available methods applicable to the described data and revealed its higher performance. We showed that our method has high specificity and sensitivity for high-resolution copy number detection in targeted sequencing data using a large cohort of samples.
Primitive ideals of C_q[SL(3)]
NASA Astrophysics Data System (ADS)
Hodges, Timothy J.; Levasseur, Thierry
1993-10-01
The primitive ideals of the Hopf algebra C_q[SL(3)] are classified. In particular it is shown that the orbits in Prim C_q[SL(3)] under the action of the representation group H ≅ C* × C* are parameterized naturally by W × W, where W is the associated Weyl group. It is shown that there is a natural one-to-one correspondence between primitive ideals of C_q[SL(3)] and symplectic leaves of the associated Poisson algebraic group SL(3, C).
Microcracks, micropores, and their petrologic interpretation for 72415 and 15418
NASA Technical Reports Server (NTRS)
Richter, D.; Simmons, G.; Siegfried, R.
1976-01-01
Lunar samples 72415 and 15418 have complex microstructures that indicate a series of fracturing and healing events. Both samples contain relatively few open microcracks but many sealed and healed microcracks. Dunite 72415 contains abundant healed cracks that formed tectonically, symplectic intergrowths spatially and probably genetically related to microcracks, and a cataclastic matrix that has been extensively sintered. Metamorphosed breccia 15418 contains many post-metamorphic healed cracks, large shock induced cracks that have been sealed with glass, and a few younger, thin, open shock induced cracks.
Instanton approach to large N Harish-Chandra-Itzykson-Zuber integrals.
Bun, J; Bouchaud, J P; Majumdar, S N; Potters, M
2014-08-15
We reconsider the large N asymptotics of Harish-Chandra-Itzykson-Zuber integrals. We provide, using Dyson's Brownian motion and the method of instantons, an alternative, transparent derivation of the Matytsin formalism for the unitary case. Our method is easily generalized to the orthogonal and symplectic ensembles. We obtain an explicit solution of Matytsin's equations in the case of Wigner matrices, as well as a general expansion method in the dilute limit, when the spectrum of eigenvalues spreads over very wide regions.
Nonlocal variational model and filter algorithm to remove multiplicative noise
NASA Astrophysics Data System (ADS)
Chen, Dai-Qiang; Zhang, Hui; Cheng, Li-Zhi
2010-07-01
The nonlocal (NL) means filter proposed by Buades, Coll, and Morel (SIAM Multiscale Model. Simul. 4(2), 490-530, 2005), which makes full use of the redundancy of information in images, has been shown to be very efficient for denoising images with added Gaussian noise. On the basis of the NL method, striving to minimize the conditional mean-square error, we design a NL means filter to remove multiplicative noise, and by combining the NL filter with a regularization method we propose a NL total variation (TV) model and present a fast iterative algorithm for it. Experiments demonstrate that our algorithm is better than the TV method; it is superior in preserving small structures and textures and obtains an improvement in peak signal-to-noise ratio.
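The NL means principle referenced above — averaging samples whose neighborhoods look alike, regardless of where they sit — can be sketched in 1D. This is the basic Gaussian-weighted patch average for additive noise, not the multiplicative-noise variant the paper derives; `patch` and `h` are illustrative parameters.

```python
import numpy as np

def nl_means_1d(signal, patch=3, h=0.5):
    """Nonlocal means for a 1D signal: replace each sample by a weighted
    average of all samples, weighting by patch similarity."""
    n = len(signal)
    pad = patch // 2
    padded = np.pad(signal, pad, mode="reflect")
    patches = np.array([padded[i:i + patch] for i in range(n)])
    denoised = np.empty(n)
    for i in range(n):
        d2 = np.mean((patches - patches[i]) ** 2, axis=1)  # patch distances
        w = np.exp(-d2 / h**2)                             # similarity weights
        denoised[i] = np.dot(w, signal) / w.sum()
    return denoised
```

On a constant signal every patch is identical, so all weights are equal and the filter returns the signal unchanged; on a noisy signal the weights down-rank dissimilar neighborhoods, which is what preserves edges and textures.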
Validation of the Thematic Mapper radiometric and geometric correction algorithms
NASA Technical Reports Server (NTRS)
Fischel, D.
1984-01-01
The radiometric and geometric correction algorithms for the Thematic Mapper are critical to subsequent successful information extraction. Earlier Landsat scanners, known as Multispectral Scanners, produce imagery which exhibits striping due to mismatched detector gains and biases. The Thematic Mapper exhibits the same phenomenon at three levels: detector-to-detector, scan-to-scan, and multiscan striping. The cause of these variations has been traced to variations in the dark current of the detectors. An alternative formulation has been tested and shown to be very satisfactory. Unfortunately, the Thematic Mapper detectors exhibit saturation effects when viewing extensive cloud areas, and these are not easily correctable. The geometric correction algorithm has been shown to be remarkably reliable; only minor and modest improvements are indicated, and these have been shown to be effective.
NASA Astrophysics Data System (ADS)
Liagkouras, K.; Metaxiotis, K.
2017-01-01
Multi-objective evolutionary algorithms (MOEAs) are currently a dynamic field of research that has attracted considerable attention. Mutation operators have been utilized by MOEAs as variation mechanisms. In particular, polynomial mutation (PLM) is one of the most popular variation mechanisms and has been utilized by many well-known MOEAs. In this paper, we revisit the PLM operator and we propose a fitness-guided version of the PLM. Experimental results obtained by non-dominated sorting genetic algorithm II and strength Pareto evolutionary algorithm 2 show that the proposed fitness-guided mutation operator outperforms the classical PLM operator, based on different performance metrics that evaluate both the proximity of the solutions to the Pareto front and their dispersion on it.
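The classical PLM operator that the paper revisits is standard enough to sketch: polynomial mutation for one real-coded gene, as commonly attributed to Deb and Goyal. The fitness-guided variant proposed in the paper is not reproduced here:

```python
import random

def polynomial_mutation(x, lo, hi, eta=20.0, rng=random):
    """Classical polynomial mutation for one real-valued gene.

    A random u in (0,1) is mapped through a polynomial distribution so
    that small perturbations are much more likely than large ones; the
    distribution index eta controls how tightly offspring cluster
    around the parent.
    """
    u = rng.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
    child = x + delta * (hi - lo)
    return min(max(child, lo), hi)   # clip to the variable bounds

random.seed(1)
print([round(polynomial_mutation(0.5, 0.0, 1.0), 4) for _ in range(3)])
```

With u = 0.5 the perturbation delta is exactly zero, so the parent is returned unchanged; larger eta values concentrate offspring nearer the parent.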
2010-05-01
Skyline Algorithms 2.2.1 Block-Nested Loops A simple way to find the skyline is to use the block-nested loops (BNL) algorithm [3], which is the algorithm...by an NDS member are discarded. After every individual has been compared with the NDS, the NDS is the dataset's skyline. In the best case for BNL...SFS) algorithm [4] is a variation on BNL that first introduces the idea of initially ordering the individuals by a monotonically increasing scoring
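The BNL window logic excerpted above can be sketched in a few lines. This is a generic illustration assuming smaller values are better in every dimension, with the non-dominated set (NDS) kept as a plain list:

```python
def dominates(a, b):
    """a dominates b if a is <= b in every dimension and strictly
    better in at least one (smaller values assumed better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def bnl_skyline(points):
    """Block-nested-loops skyline: maintain a window of non-dominated
    points and compare each incoming point against it."""
    window = []
    for p in points:
        if any(dominates(q, p) for q in window):
            continue                                  # p is dominated; discard
        window = [q for q in window if not dominates(p, q)]
        window.append(p)
    return window

pts = [(1, 7), (3, 3), (2, 8), (5, 1), (4, 4)]
print(bnl_skyline(pts))   # → [(1, 7), (3, 3), (5, 1)]
```

The sort-filter-skyline (SFS) refinement mentioned in the excerpt pre-sorts the input by a monotone score so that window members are never evicted, which cuts down comparisons.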
NOSS Altimeter Detailed Algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Mcmillan, J. D.
1982-01-01
The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. Algorithms that required additional development before they could be documented for production are only scoped. The algorithms are divided into two levels of processing: the first level converts the data to engineering units and applies corrections for instrument variations; the second level provides geophysical measurements derived from altimeter parameters for oceanographic users.
A review on quantum search algorithms
NASA Astrophysics Data System (ADS)
Giri, Pulak Ranjan; Korepin, Vladimir E.
2017-12-01
The use of superposition of states in quantum computation, known as quantum parallelism, has a significant speed advantage over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because search appears as a subroutine in many important algorithms. Grover's quantum database search finds the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for a single target element and for multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan, and its optimization by Korepin, called the GRK algorithm, are also discussed.
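Grover's amplitude dynamics can be simulated classically in a few lines, tracking state amplitudes only for a single target. This illustrates the quadratic speed-up rather than implementing an actual quantum circuit:

```python
import math

def grover_success_probability(n_items, target, iterations):
    """Classically simulate the amplitude evolution of Grover's search:
    each iteration applies the oracle (phase flip on the target) followed
    by the diffusion operator (inversion about the mean)."""
    amps = [1.0 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        amps[target] = -amps[target]                 # oracle phase flip
        mean = sum(amps) / n_items
        amps = [2.0 * mean - a for a in amps]        # inversion about mean
    return amps[target] ** 2

n = 16
k = round(math.pi / 4 * math.sqrt(n))                # near-optimal iteration count
print(k, grover_success_probability(n, 5, k))
```

For N = 16 items, about (π/4)·√N ≈ 3 iterations drive the success probability from 1/16 to above 0.95, which is the quadratic speed-up the review describes.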
HECLIB. Volume 2: HECDSS Subroutines Programmer’s Manual
1991-05-01
algorithm and hierarchical design for database accesses. This algorithm provides quick access to data sets and an efficient means of adding new data set...Description of How DSS Works DSS version 6 utilizes a modified hash algorithm based upon the pathname to store and retrieve data. This structure allows...balancing disk space and record access times. A variation in this algorithm is for "stable" files. In a stable file, a hash table is not utilized
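The pathname-hashing idea described in the excerpt can be illustrated with a toy record store. The hash function, bin count, and record layout below are assumptions made for illustration, not HECDSS's actual on-disk design:

```python
class PathnameStore:
    """Toy pathname-hashed record store, loosely in the spirit of the
    hash-plus-bin scheme the manual describes (details are assumptions)."""

    def __init__(self, n_bins=8):
        self.bins = [[] for _ in range(n_bins)]

    def _bin(self, pathname):
        # stable hash so the same pathname always lands in the same bin
        h = 0
        for ch in pathname:
            h = (h * 31 + ord(ch)) % len(self.bins)
        return h

    def put(self, pathname, record):
        bucket = self.bins[self._bin(pathname)]
        for i, (name, _) in enumerate(bucket):
            if name == pathname:          # overwrite an existing data set
                bucket[i] = (pathname, record)
                return
        bucket.append((pathname, record))

    def get(self, pathname):
        for name, record in self.bins[self._bin(pathname)]:
            if name == pathname:
                return record
        return None

store = PathnameStore()
store.put("/BASIN/GAGE1/FLOW/01JAN1990/1DAY/OBS/", [1.2, 3.4])
print(store.get("/BASIN/GAGE1/FLOW/01JAN1990/1DAY/OBS/"))
```

Hashing the full pathname gives near constant-time lookup of a data set, at the cost of scanning a small bucket when pathnames collide, which matches the disk-space/access-time balance the excerpt mentions.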
Functional validation and comparison framework for EIT lung imaging.
Grychtol, Bartłomiej; Elke, Gunnar; Meybohm, Patrick; Weiler, Norbert; Frerichs, Inéz; Adler, Andy
2014-01-01
Electrical impedance tomography (EIT) is an emerging clinical tool for monitoring ventilation distribution in mechanically ventilated patients, for which many image reconstruction algorithms have been suggested. We propose an experimental framework to assess such algorithms with respect to their ability to correctly represent well-defined physiological changes. We defined a set of clinically relevant ventilation conditions and induced them experimentally in 8 pigs by controlling three ventilator settings (tidal volume, positive end-expiratory pressure and the fraction of inspired oxygen). In this way, large and discrete shifts in global and regional lung air content were elicited. We use the framework to compare twelve 2D EIT reconstruction algorithms, including backprojection (the original and still most frequently used algorithm), GREIT (a more recent consensus algorithm for lung imaging), truncated singular value decomposition (TSVD), several variants of the one-step Gauss-Newton approach and two iterative algorithms. We consider the effects of using a 3D finite element model, assuming non-uniform background conductivity, noise modeling, reconstructing for electrode movement, total variation (TV) reconstruction, robust error norms, smoothing priors, and using difference vs. normalized difference data. Our results indicate that, while variation in appearance of images reconstructed from the same data is not negligible, clinically relevant parameters do not vary considerably among the advanced algorithms. Among the analysed algorithms, several advanced algorithms perform well, while some others are significantly worse. Given its vintage and ad-hoc formulation, backprojection works surprisingly well, supporting the validity of previous studies in lung EIT.
Automatic Whistler Detector and Analyzer system: Implementation of the analyzer algorithm
NASA Astrophysics Data System (ADS)
Lichtenberger, JáNos; Ferencz, Csaba; Hamar, Daniel; Steinbach, Peter; Rodger, Craig J.; Clilverd, Mark A.; Collier, Andrew B.
2010-12-01
The full potential of whistlers for monitoring plasmaspheric electron density variations has not yet been realized. The primary reason is the vast human effort required for the analysis of whistler traces. Recently, the first part of a complete whistler analysis procedure was successfully automated, i.e., the automatic detection of whistler traces from the raw broadband VLF signal was achieved. This study describes a new algorithm developed to determine plasmaspheric electron density measurements from whistler traces, based on a Virtual (Whistler) Trace Transformation using a 2-D fast Fourier transform. This algorithm can be automated and can thus form the final step to complete an Automatic Whistler Detector and Analyzer (AWDA) system. In this second AWDA paper, the practical implementation of the Automatic Whistler Analyzer (AWA) algorithm is discussed and a feasible solution is presented. The practical implementation of the algorithm is able to track variations of the plasmasphere in quasi-real time on a PC cluster with 100 CPU cores. The electron densities obtained by the AWA method can be used in investigations such as plasmasphere dynamics, ionosphere-plasmasphere coupling, or in space weather models.
Simulations of the Fomalhaut system within its local galactic environment
NASA Astrophysics Data System (ADS)
Kaib, Nathan A.; White, Ethan B.; Izidoro, André
2018-01-01
Fomalhaut A is among the most well-studied nearby stars and has been discovered to possess a putative planetary object as well as a remarkable eccentric dust belt. This eccentric dust belt has often been interpreted as the dynamical signature of one or more planets that elude direct detection. However, the system also contains two other stellar companions residing ∼10^5 au from Fomalhaut A. We have designed a new symplectic integration algorithm to model the evolution of Fomalhaut A's planetary dust belt in concert with the dynamical evolution of its stellar companions to determine if these companions are likely to have generated the dust belt's morphology. Using our numerical simulations, we find that close encounters between Fomalhaut A and B are expected, with an ∼25 per cent probability that the two stars have passed within at least 400 au of each other at some point. Although the outcomes of such encounter histories are extremely varied, these close encounters nearly always excite the eccentricity of Fomalhaut A's dust belt and occasionally yield morphologies very similar to the observed belt. With these results, we argue that close encounters with Fomalhaut A's stellar companions should be considered a plausible mechanism to explain its eccentric belt, especially in the absence of detected planets capable of sculpting the belt's morphology. More broadly, we can also conclude from this work that very wide binary stars may often generate asymmetries in the stellar debris discs they host.
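The paper's integrator handles the full hierarchical system; as a generic illustration of why symplectic schemes are preferred for long orbit integrations, here is a kick-drift-kick leapfrog (a second-order symplectic method) on a two-body Kepler orbit, whose energy error stays bounded rather than drifting:

```python
import math

def accel(pos, mu=1.0):
    """Keplerian acceleration -mu * r / |r|^3 in the plane."""
    r = math.hypot(pos[0], pos[1])
    return (-mu * pos[0] / r**3, -mu * pos[1] / r**3)

def leapfrog(pos, vel, dt, steps, mu=1.0):
    """Kick-drift-kick leapfrog: half velocity kick, full position
    drift, half velocity kick. Symplectic and second order."""
    ax, ay = accel(pos, mu)
    for _ in range(steps):
        vel = (vel[0] + 0.5 * dt * ax, vel[1] + 0.5 * dt * ay)
        pos = (pos[0] + dt * vel[0], pos[1] + dt * vel[1])
        ax, ay = accel(pos, mu)
        vel = (vel[0] + 0.5 * dt * ax, vel[1] + 0.5 * dt * ay)
    return pos, vel

def energy(pos, vel, mu=1.0):
    r = math.hypot(pos[0], pos[1])
    return 0.5 * (vel[0]**2 + vel[1]**2) - mu / r

p0, v0 = (1.0, 0.0), (0.0, 1.0)        # circular orbit, mu = 1
e0 = energy(p0, v0)
p, v = leapfrog(p0, v0, dt=0.01, steps=10000)
print(e0, energy(p, v))                 # energy drift stays tiny
```

Over roughly sixteen orbital periods, the energy error oscillates at the O(dt²) level instead of accumulating secularly, which is the behavior a Runge-Kutta scheme of the same order would lose.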
NASA Technical Reports Server (NTRS)
Clark, Roger N.; Swayze, Gregg A.; Gallagher, Andrea
1992-01-01
The sedimentary sections exposed in the Canyonlands and Arches National Parks region of Utah (generally referred to as 'Canyonlands') consist of sandstones, shales, limestones, and conglomerates. Reflectance spectra of weathered surfaces of rocks from these areas show two components: (1) variations in spectrally detectable mineralogy, and (2) variations in the relative ratios of the absorption bands between minerals. Both types of information can be used together to map each major lithology, and the Clark spectral-feature mapping algorithm is applied for this purpose.
Introduction of Total Variation Regularization into Filtered Backprojection Algorithm
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
In this paper we extend the state-of-the-art filtered backprojection (FBP) method by applying the concept of Total Variation regularization. We compare the performance of the new algorithm with the most common form of regularization in FBP image reconstruction, apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and true image of the radioactive tracer distribution using a standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values with respect to the standard FBP method.
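Total Variation regularization itself, independent of the FBP pipeline, can be illustrated with a minimal 1-D denoiser: minimize a data-fidelity term plus a smoothed TV penalty by gradient descent. The step size, smoothing constant, and λ are illustrative choices, not the paper's:

```python
import math, random

def tv_denoise_1d(y, lam=0.2, step=0.01, iters=500, eps=1e-4):
    """Minimize 0.5*||x - y||^2 + lam * sum|x[i+1] - x[i]| by gradient
    descent on a smoothed absolute value |t| ~ sqrt(t^2 + eps)."""
    x = list(y)
    n = len(y)
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(n)]        # data-fidelity gradient
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            s = d / math.sqrt(d * d + eps)         # smoothed sign of the jump
            g[i] -= lam * s
            g[i + 1] += lam * s
        x = [x[i] - step * g[i] for i in range(n)]
    return x

random.seed(0)
clean = [0.0] * 10 + [1.0] * 10                    # a step edge
noisy = [c + random.gauss(0, 0.1) for c in clean]
denoised = tv_denoise_1d(noisy)
```

The TV penalty flattens the noise within each segment while largely preserving the step, which is exactly the edge-preserving behavior that apodizing (low-pass) windows lack.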
A second order derivative scheme based on Bregman algorithm class
NASA Astrophysics Data System (ADS)
Campagna, Rosanna; Crisci, Serena; Cuomo, Salvatore; Galletti, Ardelio; Marcellino, Livia
2016-10-01
Algorithms based on Bregman iterative regularization are known for efficiently solving convex constrained optimization problems. In this paper, we introduce a second order derivative scheme for the class of Bregman algorithms. Its convergence and stability properties are investigated by means of numerical evidence. Moreover, we apply the proposed scheme to an isotropic Total Variation (TV) problem arising in magnetic resonance image (MRI) denoising. Experimental results confirm that our algorithm performs well in terms of denoising quality, effectiveness and robustness.
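A minimal sketch of the Bregman iterative idea on the simplest l1-regularized problem (identity forward operator; the paper's second order scheme and TV setting are not reproduced): each pass solves a soft-thresholding subproblem and "adds back" the residual, which progressively removes the shrinkage bias of a single thresholding step:

```python
def soft(v, t):
    """Soft-thresholding: the proximal map of the l1 norm."""
    return [max(abs(x) - t, 0.0) * (1 if x > 0 else -1) for x in v]

def bregman_l1(b, lam=0.4, iters=5):
    """Bregman iteration for min lam*||x||_1 + 0.5*||x - b||^2 with the
    residual added back each pass, so values shrunk by the threshold
    are recovered over the iterations."""
    n = len(b)
    r = [0.0] * n                      # accumulated residual
    history = []
    for _ in range(iters):
        x = soft([b[i] + r[i] for i in range(n)], lam)
        r = [r[i] + b[i] - x[i] for i in range(n)]
        history.append(list(x))
    return history

hist = bregman_l1([1.0, -0.3, 0.0, 2.0])
print(hist[0])   # first pass: values shrunk toward zero
print(hist[-1])  # later passes recover the signal exactly
```

The first pass behaves like plain soft-thresholding (biased toward zero); by the third pass the iterates reproduce the data exactly, which is the contrast-restoring property that motivates Bregman regularization.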
TUMOR HAPLOTYPE ASSEMBLY ALGORITHMS FOR CANCER GENOMICS
AGUIAR, DEREK; WONG, WENDY S.W.; ISTRAIL, SORIN
2014-01-01
The growing availability of inexpensive high-throughput sequence data is enabling researchers to sequence tumor populations within a single individual at high coverage. However, cancer genome sequence evolution and mutational phenomena like driver mutations and gene fusions are difficult to investigate without first reconstructing tumor haplotype sequences. Haplotype assembly of single individual tumor populations is an exceedingly difficult task complicated by tumor haplotype heterogeneity, tumor or normal cell sequence contamination, polyploidy, and complex patterns of variation. While computational and experimental haplotype phasing of diploid genomes has seen much progress in recent years, haplotype assembly in cancer genomes remains uncharted territory. In this work, we describe HapCompass-Tumor, a computational modeling and algorithmic framework for haplotype assembly of copy number variable cancer genomes containing haplotypes at different frequencies and complex variation. We extend our polyploid haplotype assembly model and present novel algorithms for (1) modeling complex variations, including copy number changes, as varying numbers of disjoint paths in an associated graph; (2) handling variable haplotype frequencies and contamination; and (3) computing tumor haplotypes using simple cycles of the compass graph, which constrain the space of haplotype assembly solutions. The model and algorithm are implemented in the software package HapCompass-Tumor which is available for download from http://www.brown.edu/Research/Istrail_Lab/. PMID:24297529
A controlled variation scheme for convection treatment in pressure-based algorithm
NASA Technical Reports Server (NTRS)
Shyy, Wei; Thakur, Siddharth; Tucker, Kevin
1993-01-01
Convection effects and source terms are two primary sources of difficulty in computing the turbulent reacting flows typically encountered in propulsion devices. The present work intends to elucidate the individual as well as the collective roles of convection and source terms in the fluid flow equations, and to devise appropriate treatments and implementations to improve our current capability of predicting such flows. A controlled variation scheme (CVS) has been under development in the context of a pressure-based algorithm; it adaptively regulates the amount of numerical diffusivity, relative to the central difference scheme, according to variations in the local flow field. Both the basic concepts and a pragmatic assessment will be presented to highlight the status of this work.
An advancing front Delaunay triangulation algorithm designed for robustness
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
A new algorithm is described for generating an unstructured mesh about an arbitrary two-dimensional configuration. Mesh points are generated automatically by the algorithm in a manner which ensures a smooth variation of elements, and the resulting triangulation constitutes the Delaunay triangulation of these points. The algorithm combines the mathematical elegance and efficiency of Delaunay triangulation algorithms with the desirable point placement features, boundary integrity, and robustness traditionally associated with advancing-front-type mesh generation strategies. The method offers increased robustness over previous algorithms in that it cannot fail regardless of the initial boundary point distribution and the prescribed cell size distribution throughout the flow-field.
A MAP blind image deconvolution algorithm with bandwidth over-constrained
NASA Astrophysics Data System (ADS)
Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong
2018-03-01
We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with an over-constrained bandwidth and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth constrained to lie below the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.
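The paper's MAP estimator with bandwidth and TV constraints is beyond a short sketch, but the multiplicative-update flavor of iterative deconvolution can be illustrated with the classical Richardson-Lucy scheme on a 1-D signal. This is a related baseline, not the authors' algorithm:

```python
def convolve(x, k):
    """'Same'-size correlation of signal x with a centred odd-length
    kernel k (equals convolution when k is symmetric, as below)."""
    n, half = len(x), len(k) // 2
    out = []
    for i in range(n):
        s = 0.0
        for j, kv in enumerate(k):
            t = i + j - half
            if 0 <= t < n:
                s += kv * x[t]
        out.append(s)
    return out

def richardson_lucy(observed, psf, iters=50):
    """Richardson-Lucy iteration: u <- u * K^T(d / K u), a multiplicative
    fixed-point update that keeps the estimate non-negative."""
    u = [sum(observed) / len(observed)] * len(observed)
    psf_T = psf[::-1]
    for _ in range(iters):
        est = convolve(u, psf)
        ratio = [d / max(e, 1e-12) for d, e in zip(observed, est)]
        corr = convolve(ratio, psf_T)
        u = [ui * ci for ui, ci in zip(u, corr)]
    return u

psf = [0.25, 0.5, 0.25]
blurred = convolve([0.0, 0.0, 1.0, 0.0, 0.0], psf)
restored = richardson_lucy(blurred, psf)
print([round(v, 3) for v in restored])
```

Starting from a flat estimate, the update re-concentrates the blurred spike at its true position; in practice such iterations amplify noise, which is exactly what the bandwidth and TV constraints in the paper are designed to suppress.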
Power allocation for SWIPT in K-user interference channels using game theory
NASA Astrophysics Data System (ADS)
Wen, Zhigang; Liu, Ying; Liu, Xiaoqing; Li, Shan; Chen, Xianya
2018-12-01
A simultaneous wireless information and power transfer system in interference channels of multi-users is considered. In this system, each transmitter sends one data stream to its targeted receiver, which causes interference to other receivers. Since all transmitter-receiver links want to maximize their own average transmission rate, a power allocation problem under the transmit power constraints and the energy-harvesting constraints is developed. To solve this problem, we propose a game theory framework. Then, we convert the game into a variational inequalities problem by establishing the connection between game theory and variational inequalities and solve the variational inequalities problem. Through theoretical analysis, the existence and uniqueness of Nash equilibrium are both guaranteed by the theory of variational inequalities. A distributed iterative alternating optimization water-filling algorithm is derived, which is proved to converge. Numerical results show that the proposed algorithm reaches fast convergence and achieves a higher sum rate than the unaided scheme.
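The inner building block of such schemes, water-filling, together with a two-user best-response loop that treats interference as noise, can be sketched as follows. Direct channel gains are normalized to 1 and the cross-interference gains are illustrative; this is generic iterative water-filling, not the paper's exact SWIPT update:

```python
def water_fill(noise, p_total, tol=1e-9):
    """Classic water-filling: find the water level by bisection so that
    power p[i] = max(level - noise[i], 0) sums to p_total."""
    lo, hi = min(noise), max(noise) + p_total
    while hi - lo > tol:
        level = 0.5 * (lo + hi)
        used = sum(max(level - n, 0.0) for n in noise)
        if used > p_total:
            hi = level
        else:
            lo = level
    return [max(lo - n, 0.0) for n in noise]

def iterative_water_filling(noise, gains, p_total, rounds=50):
    """Two-user best-response dynamic: each user in turn water-fills
    against background noise plus the other user's interference,
    converging (under mild conditions) to a Nash equilibrium."""
    n = len(noise)
    p = [[0.0] * n, [0.0] * n]
    for _ in range(rounds):
        for u in (0, 1):
            other = 1 - u
            eff = [noise[i] + gains[other][i] * p[other][i] for i in range(n)]
            p[u] = water_fill(eff, p_total)
    return p

noise = [0.1, 0.5, 1.0]
gains = [[0.2, 0.2, 0.2], [0.2, 0.2, 0.2]]   # cross-interference gains
print(iterative_water_filling(noise, gains, p_total=1.0))
```

Each user pours its power budget into its quietest effective channels first; with weak symmetric cross-gains the alternating updates settle quickly, mirroring the fast convergence the abstract reports.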
Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar
2018-01-01
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it is extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. Spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features can be used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods.
PMID:29438421
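The temporal-saliency step described above reduces, in its simplest form, to thresholded frame differencing. A single global threshold stands in here for the paper's Sauvola local adaptive thresholding:

```python
def frame_difference_saliency(prev, curr, threshold=0.2):
    """Temporal saliency by frame differencing: mark pixels whose
    intensity changed by more than a threshold between frames.
    (A global threshold is used to keep the sketch short; the paper
    uses Sauvola local adaptive thresholding instead.)"""
    return [
        [1 if abs(curr[r][c] - prev[r][c]) > threshold else 0
         for c in range(len(prev[0]))]
        for r in range(len(prev))
    ]

prev = [[0.1, 0.1, 0.1],
        [0.1, 0.1, 0.1],
        [0.1, 0.1, 0.1]]
curr = [[0.1, 0.1, 0.1],
        [0.1, 0.9, 0.8],
        [0.1, 0.1, 0.1]]   # a small bright target moved into view
print(frame_difference_saliency(prev, curr))   # → [[0, 0, 0], [0, 1, 1], [0, 0, 0]]
```

The resulting binary mask marks candidate moving regions, which the method then refines with spatial saliency features before online learning.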
The classical dynamic symmetry for the U(1) -Kepler problems
NASA Astrophysics Data System (ADS)
Bouarroudj, Sofiane; Meng, Guowu
2018-01-01
For the Jordan algebra of hermitian matrices of order n ≥ 2, we let X be its submanifold consisting of rank-one semi-positive definite elements. The composition of the cotangent bundle map π_X: T*X → X with the canonical map X → CP^(n-1) (i.e., the map that sends a given hermitian matrix to its column space) pulls back the Kähler form of the Fubini-Study metric on CP^(n-1) to a real closed differential two-form ω_K on T*X. Let ω_X be the canonical symplectic form on T*X and μ a real number. A standard fact says that ω_μ := ω_X + 2μ ω_K turns T*X into a symplectic manifold, hence a Poisson manifold with Poisson bracket {,}_μ. In this article we exhibit a Poisson realization of the simple real Lie algebra su(n, n) on the Poisson manifold (T*X, {,}_μ), i.e., a Lie algebra homomorphism from su(n, n) to (C^∞(T*X, R), {,}_μ). Consequently one obtains the Laplace-Runge-Lenz vector for the classical U(1)-Kepler problem of level n and magnetic charge μ. Since the McIntosh-Cisneros-Zwanziger-Kepler problems (MICZ-Kepler problems) are the U(1)-Kepler problems of level 2, the work presented here is a direct generalization of the work by A. Barut and G. Bornzin (1971) on the classical dynamic symmetry for the MICZ-Kepler problems.
Crossover ensembles of random matrices and skew-orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Santosh, E-mail: skumar.physics@gmail.com; Pandey, Akhilesh, E-mail: ap0700@mail.jnu.ac.in
2011-08-15
Highlights: > We study crossover ensembles of the Jacobi family of random matrices. > We consider correlations for orthogonal-unitary and symplectic-unitary crossovers. > We use the method of skew-orthogonal polynomials and quaternion determinants. > We prove universality of spectral correlations in crossover ensembles. > We discuss applications to quantum conductance and communication theory problems. - Abstract: In a recent paper (S. Kumar, A. Pandey, Phys. Rev. E 79 (2009) 026211) we considered the Jacobi family (including the Laguerre and Gaussian cases) of random matrix ensembles and reported exact solutions of crossover problems involving time-reversal symmetry breaking. In the present paper we give details of that work. We start with Dyson's Brownian motion description of random matrix ensembles and obtain universal hierarchic relations among the unfolded correlation functions. For arbitrary dimensions we derive the joint probability density (jpd) of eigenvalues for all transitions leading to unitary ensembles as equilibrium ensembles. We focus on the orthogonal-unitary and symplectic-unitary crossovers and give generic expressions for the jpd of eigenvalues, two-point kernels and n-level correlation functions. This involves a generalization of the theory of skew-orthogonal polynomials to crossover ensembles. We also consider crossovers in the circular ensembles to show the generality of our method. In the large dimensionality limit, correlations in spectra with arbitrary initial density are shown to be universal when expressed in terms of a rescaled symmetry breaking parameter. Applications of our crossover results to communication theory and quantum conductance problems are also briefly discussed.
Section sigma models coupled to symplectic duality bundles on Lorentzian four-manifolds
NASA Astrophysics Data System (ADS)
Lazaroiu, C. I.; Shahbazi, C. S.
2018-06-01
We give the global mathematical formulation of a class of generalized four-dimensional theories of gravity coupled to scalar matter and to Abelian gauge fields. In such theories, the scalar fields are described by a section of a surjective pseudo-Riemannian submersion π over space-time, whose total space carries a Lorentzian metric making the fibers into totally-geodesic connected Riemannian submanifolds. In particular, π is a fiber bundle endowed with a complete Ehresmann connection whose transport acts through isometries between the fibers. In turn, the Abelian gauge fields are "twisted" by a flat symplectic vector bundle defined over the total space of π. This vector bundle is endowed with a vertical taming which locally encodes the gauge couplings and theta angles of the theory and gives rise to the notion of twisted self-duality, of crucial importance to construct the theory. When the Ehresmann connection of π is integrable, we show that our theories are locally equivalent to ordinary Einstein-Scalar-Maxwell theories and hence provide a global non-trivial extension of the universal bosonic sector of four-dimensional supergravity. In this case, we show using a special trivializing atlas of π that global solutions of such models can be interpreted as classical "locally-geometric" U-folds. In the non-integrable case, our theories differ locally from ordinary Einstein-Scalar-Maxwell theories and may provide a geometric description of classical U-folds which are "locally non-geometric".
Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young
2015-04-01
In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the settings of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms (beamforming, noise reduction, and feedback cancellation) and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening situations and ambient noise types with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and that the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in the environmental situation. For 10 normal-hearing volunteers, the average signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of the adaptive multiband spectral subtraction (MBSS) algorithm improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Towards Structural Analysis of Audio Recordings in the Presence of Musical Variations
NASA Astrophysics Data System (ADS)
Müller, Meinard; Kurth, Frank
2006-12-01
One major goal of structural analysis of an audio recording is to automatically extract the repetitive structure or, more generally, the musical form of the underlying piece of music. Recent approaches to this problem work well for music where the repetitions largely agree with respect to instrumentation and tempo, as is typically the case for popular music. For other classes of music such as Western classical music, however, musically similar audio segments may exhibit significant variations in parameters such as dynamics, timbre, execution of note groups, modulation, articulation, and tempo progression. In this paper, we propose a robust and efficient algorithm for audio structure analysis, which can identify musically similar segments even in the presence of large variations in these parameters. To account for such variations, our main idea is to incorporate invariance at various levels simultaneously: we design a new type of statistical features to absorb microvariations, introduce an enhanced local distance measure to account for local variations, and describe a new strategy for structure extraction that can cope with the global variations. Our experimental results with classical and popular music show that our algorithm performs successfully even in the presence of significant musical variations.
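A common first step for such structure analysis is a self-similarity matrix over per-frame feature vectors; repeated sections appear as high-similarity stripes. This toy sketch uses cosine similarity on chroma-like vectors (the paper's statistical features and enhanced distance measure are not reproduced):

```python
import math

def cosine_sim(a, b):
    """Cosine similarity of two non-zero feature vectors."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def self_similarity_matrix(features):
    """Pairwise similarity of per-frame feature vectors (e.g. chroma);
    repeated sections show up as high-similarity diagonal stripes."""
    n = len(features)
    return [[cosine_sim(features[i], features[j]) for j in range(n)]
            for i in range(n)]

# toy chroma-like frames: section A, section B, then A repeated with variation
A1, B, A2 = [1.0, 0.0, 0.2], [0.1, 1.0, 0.0], [0.9, 0.1, 0.2]
S = self_similarity_matrix([A1, B, A2])
print(round(S[0][2], 3), round(S[0][1], 3))   # A vs A high, A vs B low
```

Feature design then determines how much musical variation the matrix tolerates: coarser, statistically pooled features (as in the paper) keep repeated sections similar despite tempo and articulation changes.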
An improved reversible data hiding algorithm based on modification of prediction errors
NASA Astrophysics Data System (ADS)
Jafar, Iyad F.; Hiary, Sawsan A.; Darabkh, Khalid A.
2014-04-01
Reversible data hiding algorithms are concerned with the ability to hide data and recover the original digital image upon extraction. This issue is of interest in medical and military imaging applications. One particular class of such algorithms relies on the idea of histogram shifting of prediction errors. In this paper, we propose an improvement over one popular algorithm in this class. The improvement is achieved by employing a different predictor, using more bins in the prediction-error histogram, and applying multilevel embedding. The proposed extension shows significant improvement over the original algorithm and its variations.
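The underlying histogram-shifting idea can be sketched end-to-end on a 1-D pixel row: a previous-pixel predictor, a single peak bin at zero, and no overflow handling at the intensity extremes. The paper's improved predictor, extra bins, and multilevel embedding are not reproduced:

```python
def embed(pixels, bits):
    """Reversible embedding by histogram shifting of prediction errors
    (predictor: previous stego pixel). Positive errors are shifted up
    by one to empty bin 1; a zero error then carries one payload bit.
    Simplified sketch: overflow at the intensity extremes is ignored."""
    stego = [pixels[0]]
    queue = list(bits)
    for i in range(1, len(pixels)):
        e = pixels[i] - stego[-1]
        if e > 0:
            e += 1                      # shift to make room for bit '1'
        elif e == 0 and queue:
            e = queue.pop(0)            # embed one bit in the zero bin
        stego.append(stego[-1] + e)
    assert not queue, "cover signal too small for the payload"
    return stego

def extract(stego, nbits):
    """Recover both the payload bits and the original pixels exactly."""
    bits, pixels = [], [stego[0]]
    for i in range(1, len(stego)):
        e = stego[i] - stego[i - 1]
        if e in (0, 1) and len(bits) < nbits:
            bits.append(e)
            e = 0
        elif e > 0:
            e -= 1                      # undo the shift
        pixels.append(stego[i - 1] + e)
    return pixels, bits

row = [5, 5, 6, 6, 4, 4, 7, 7, 7]
stego = embed(row, [1, 0, 1])
recovered, payload = extract(stego, 3)
print(recovered == row, payload)   # → True [1, 0, 1]
```

Because every error bin maps invertibly (negatives untouched, positives shifted, zeros carrying bits), the cover signal is restored bit-exactly, which is the defining requirement of reversible data hiding.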
Multiscale 3-D shape representation and segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2007-04-01
This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. 
We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details.
Optimising operational amplifiers by evolutionary algorithms and gm/Id method
NASA Astrophysics Data System (ADS)
Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.
2016-10-01
The evolutionary algorithm called non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step of rounding their values to multiples of the integrated-circuit fabrication grid. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final W/L size solutions support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.
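The integer-encoding idea above can be sketched in a few lines; the grid value and function names below are illustrative assumptions, not taken from the paper:

```python
# Sketch of the integer W/L encoding idea: chromosome genes are integers, and
# each decodes to a transistor dimension that is an exact multiple of the
# fabrication grid, so no post-processing rounding step is needed.
# GRID is an assumed technology grid value (e.g. a 0.18 um process).

GRID = 0.18e-6

def decode_gene(gene: int) -> float:
    """Map an integer gene to a physical W or L in metres."""
    return gene * GRID

def encode_size(size: float) -> int:
    """Map a continuous size back to the nearest integer gene."""
    return round(size / GRID)

# A candidate sizing vector for, say, a 4-transistor stage:
genes = [10, 3, 25, 7]
sizes = [decode_gene(g) for g in genes]
# Every decoded size is a multiple of GRID by construction.
assert all(abs(s / GRID - round(s / GRID)) < 1e-9 for s in sizes)
```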
Blind equalization with criterion with memory nonlinearity
NASA Astrophysics Data System (ADS)
Chen, Yuanjie; Nikias, Chrysostomos L.; Proakis, John G.
1992-06-01
Blind equalization methods usually combat the linear distortion caused by a nonideal channel via a transversal filter, without resorting to a priori known training sequences. We introduce a new criterion with memory nonlinearity (CRIMNO) for the blind equalization problem. The basic idea of this criterion is to augment the Godard [or constant modulus algorithm (CMA)] cost function with additional terms that penalize the autocorrelations of the equalizer outputs. Several variations of the CRIMNO algorithm are derived, depending on (1) whether empirical averages or single-point estimates are used to approximate the expectations, (2) whether the recent or the delayed equalizer coefficients are used, and (3) whether the weights applied to the autocorrelation terms are fixed or allowed to adapt. Simulation experiments show that the CRIMNO algorithm, and especially its adaptive-weight version, exhibits faster convergence than the Godard (or CMA) algorithm. Extensions of the CRIMNO criterion to accommodate correlated channel inputs are also presented.
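A minimal sketch of the CRIMNO cost described above (the memory length and weights here are illustrative assumptions, not values from the paper):

```python
import numpy as np

# The Godard/CMA cost is augmented with penalties on the autocorrelations of
# the equalizer output at nonzero lags, which should vanish for an i.i.d.
# transmitted sequence.

def crimno_cost(y: np.ndarray, R2: float, weights) -> float:
    """Empirical CRIMNO cost for equalizer output samples y."""
    cma_term = np.mean((np.abs(y) ** 2 - R2) ** 2)       # Godard (p=2) term
    memory_terms = 0.0
    for m, w in enumerate(weights, start=1):              # lags 1..M
        r_m = np.mean(y[m:] * np.conj(y[:-m]))            # autocorrelation estimate
        memory_terms += w * np.abs(r_m) ** 2
    return float(cma_term + memory_terms)

rng = np.random.default_rng(0)
# A white QPSK-like sequence has (near-)zero autocorrelation at nonzero lags,
# so the memory terms add almost nothing to the CMA cost.
symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=5000) / np.sqrt(2)
cost = crimno_cost(symbols, R2=1.0, weights=[0.5, 0.5])
```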
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1991-01-01
Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel mixed iterative solution technique for efficient 3-D computations of turbine engine hot-section components. The general framework of the variational formulation and the solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variants for quasi-static, transient dynamic, and buckling analyses. The global-local analysis procedure, referred to as subelement refinement, is developed in the framework of the mixed iterative solution and presented in detail. The numerically integrated isoparametric elements implemented in this framework are discussed. Methods to filter certain parts of the strain and to project element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.
Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David
2013-01-01
Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence reducing the radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image sharper by preserving edges and boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational cost of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
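The "linear time shrinkage operation" in the per-iteration cost is, in augmented-Lagrangian/ADMM-type TV solvers, the standard soft-thresholding step; a minimal sketch (not the authors' code):

```python
import numpy as np

# Soft-thresholding: the closed-form solution of
#   argmin_x  tau*|x| + 0.5*(x - v)^2
# applied elementwise, which costs linear time in the number of entries.

def shrink(v: np.ndarray, tau: float) -> np.ndarray:
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

v = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])
shrunk = shrink(v, 0.5)   # -> [-1.5, 0., 0., 0., 1.5]
```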
Mean Field Variational Bayesian Data Assimilation
NASA Astrophysics Data System (ADS)
Vrettas, M.; Cornford, D.; Opper, M.
2012-04-01
Current data assimilation schemes propose a range of approximate solutions to the classical data assimilation problem, particularly state estimation. Broadly, there are three main active research areas: ensemble Kalman filter methods, which rely on statistical linearization of the model evolution equations; particle filters, which provide a discrete point representation of the posterior filtering or smoothing distribution; and 4DVAR methods, which seek the most likely posterior smoothing solution. In this paper we present a recent extension to our variational Bayesian algorithm, which seeks the most probable posterior distribution over the states within the family of non-stationary Gaussian processes. Our original work on variational Bayesian approaches to data assimilation sought the best approximating time-varying Gaussian process to the posterior smoothing distribution for stochastic dynamical systems. This approach was based on minimising the Kullback-Leibler divergence between the true posterior over paths and our Gaussian process approximation. So long as the observation density was sufficiently high to bring the posterior smoothing density close to Gaussian, the algorithm proved very effective on lower-dimensional systems. However, for higher-dimensional systems the algorithm was computationally very demanding. We have been developing a mean field version of the algorithm which treats the state variables at a given time as independent in the posterior approximation, but still accounts for their relationships through the mean solution arising from the original dynamical system. In this work we present the new mean field variational Bayesian approach, illustrating its performance on a range of classical data assimilation problems. We discuss the potential and limitations of the new approach.
We emphasise that the variational Bayesian approach we adopt, in contrast to other variational approaches, provides a bound on the marginal likelihood of the observations given the model parameters, which also allows inference of parameters such as observation errors and parameters in the model and model error representation, particularly if the latter is written in a deterministic form with small additive noise. We stress that our approach can address very long time windows and weak-constraint settings. Like traditional variational approaches, our Bayesian variational method has the benefit of being posed as an optimisation problem. We finish with a sketch of future directions for our approach.
High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization
NASA Astrophysics Data System (ADS)
Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan
2017-04-01
Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. The challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts when conventional analytical algorithms are used. To address this problem, we propose a motion-compensated total variation regularization approach that fully explores the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is 3D spatial total variation minimization combined with 1D temporal total variation minimization. We then construct a cost function for the reconstruction pass and minimize it using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that introducing additional temporal correlation along the phase direction can improve 4D-CBCT image quality.
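The combined regularizer described above (3D spatial TV plus 1D temporal TV along the phase axis) can be sketched for a 4D volume as follows; this is an illustrative implementation with an assumed (phases, z, y, x) layout, not the authors' code:

```python
import numpy as np

# Anisotropic 3D spatial TV within each phase plus 1D TV along the phase
# (temporal) axis of a 4D volume shaped (phases, z, y, x). beta weights the
# temporal term against the spatial term (an assumed trade-off parameter).

def spatiotemporal_tv(vol4d: np.ndarray, beta: float = 1.0) -> float:
    spatial = sum(
        np.abs(np.diff(vol4d, axis=ax)).sum() for ax in (1, 2, 3)
    )
    temporal = np.abs(np.diff(vol4d, axis=0)).sum()
    return float(spatial + beta * temporal)

# A volume that is constant in space and time has zero TV:
flat = np.ones((4, 2, 3, 3))
assert spatiotemporal_tv(flat) == 0.0
```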
Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations
NASA Astrophysics Data System (ADS)
Mirloo, Mahsa; Ebrahimnezhad, Hosein
2018-03-01
In this paper, a novel method is proposed to detect salient points of a 3D object that are robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points of the object's protrusion parts to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, in each iteration, a new point is added to this set according to the previously selected salient points. Each time a salient point is added, the decision function is updated; this creates a condition under which the next point is not extracted from the same protrusion part, so that drawing a representative point from every protrusion part is guaranteed. The method is stable against model variations involving isometric transformations, scaling, and noise of different strengths, owing to the use of a feature robust to isometric variations and to considering the relation between the salient points. In addition, the number of points used in the averaging process is reduced, which leads to lower computational complexity compared with other salient point detection algorithms.
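A simplified sketch of the iterative selection loop described above (the paper's actual decision function is more elaborate; `D` is assumed to be a precomputed pairwise geodesic distance matrix):

```python
import numpy as np

# The first salient point maximizes the average distance to all others; each
# subsequent point maximizes the minimum distance to the points already chosen,
# which discourages picking two points from the same protrusion part.

def select_salient(D: np.ndarray, k: int) -> list:
    first = int(np.argmax(D.mean(axis=1)))
    chosen = [first]
    while len(chosen) < k:
        min_dist = D[:, chosen].min(axis=1)   # distance to nearest chosen point
        min_dist[chosen] = -np.inf            # never re-pick a chosen point
        chosen.append(int(np.argmax(min_dist)))
    return chosen

# Points on a line at positions 0, 1, 10: the two extremes are selected.
D = np.array([[0.0, 1.0, 10.0],
              [1.0, 0.0, 9.0],
              [10.0, 9.0, 0.0]])
picks = select_salient(D, 2)   # -> [2, 0]
```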
Shaker, S B; Dirksen, A; Laursen, L C; Maltbaek, N; Christensen, L; Sander, U; Seersholm, N; Skovgaard, L T; Nielsen, L; Kok-Jensen, A
2004-07-01
To study the short-term reproducibility of lung density measurements by multi-slice computed tomography (CT) using three different radiation doses and three reconstruction algorithms. Twenty-five patients with smoker's emphysema and 25 patients with alpha1-antitrypsin deficiency underwent three scans at 2-week intervals. A low-dose protocol was applied, and images were reconstructed with bone, detail, and soft algorithms. Total lung volume (TLV), 15th percentile density (PD-15), and relative area at -910 Hounsfield units (RA-910) were obtained from the images using Pulmo-CMS software. The reproducibility of PD-15 and RA-910 and the influence of radiation dose, reconstruction algorithm, and type of emphysema were then analysed. The overall coefficient of variation of volume-adjusted PD-15 for all combinations of radiation dose and reconstruction algorithm was 3.7%. The overall standard deviation of volume-adjusted RA-910 was 1.7% (corresponding to a coefficient of variation of 6.8%). Radiation dose, reconstruction algorithm, and type of emphysema had no significant influence on the reproducibility of PD-15 and RA-910. However, the bone algorithm and a very low radiation dose result in overestimation of the extent of emphysema. Lung density measurement by CT is a sensitive marker for quantifying both subtypes of emphysema. A CT protocol with a radiation dose down to 16 mAs and a soft or detail reconstruction algorithm is recommended.
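The two density indices used above can be illustrated on synthetic data (this is not the Pulmo-CMS implementation): PD-15 is the 15th percentile of the voxel density histogram, and RA-910 is the relative area of voxels below -910 HU:

```python
import numpy as np

# Illustrative computation of the densitometric indices from a vector of lung
# voxel values in Hounsfield units; the synthetic distribution below is an
# assumption for demonstration only.

def pd15(hu: np.ndarray) -> float:
    """15th percentile density (HU)."""
    return float(np.percentile(hu, 15))

def ra910(hu: np.ndarray) -> float:
    """Relative area below -910 HU, as a percentage."""
    return float(np.mean(hu < -910) * 100)

rng = np.random.default_rng(1)
voxels = rng.normal(loc=-860, scale=40, size=100_000)  # synthetic lung HU values
p = pd15(voxels)    # roughly -900 HU for this distribution
r = ra910(voxels)   # roughly 10% of voxels fall below -910 HU
```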
Functional Validation and Comparison Framework for EIT Lung Imaging
Meybohm, Patrick; Weiler, Norbert; Frerichs, Inéz; Adler, Andy
2014-01-01
Introduction: Electrical impedance tomography (EIT) is an emerging clinical tool for monitoring ventilation distribution in mechanically ventilated patients, for which many image reconstruction algorithms have been suggested. We propose an experimental framework to assess such algorithms with respect to their ability to correctly represent well-defined physiological changes. We defined a set of clinically relevant ventilation conditions and induced them experimentally in 8 pigs by controlling three ventilator settings (tidal volume, positive end-expiratory pressure and the fraction of inspired oxygen). In this way, large and discrete shifts in global and regional lung air content were elicited. Methods: We use the framework to compare twelve 2D EIT reconstruction algorithms, including backprojection (the original and still most frequently used algorithm), GREIT (a more recent consensus algorithm for lung imaging), truncated singular value decomposition (TSVD), several variants of the one-step Gauss-Newton approach and two iterative algorithms. We consider the effects of using a 3D finite element model, assuming non-uniform background conductivity, noise modeling, reconstructing for electrode movement, total variation (TV) reconstruction, robust error norms, smoothing priors, and using difference vs. normalized difference data. Results and Conclusions: Our results indicate that, while variation in appearance of images reconstructed from the same data is not negligible, clinically relevant parameters do not vary considerably among the advanced algorithms. Among the analysed algorithms, several advanced algorithms perform well, while some others are significantly worse. Given its vintage and ad-hoc formulation, backprojection works surprisingly well, supporting the validity of previous studies in lung EIT. PMID:25110887
Li, Jun-qing; Pan, Quan-ke; Mao, Kun
2014-01-01
A hybrid algorithm combining particle swarm optimization (PSO) and iterated local search (ILS) is proposed for solving the hybrid flowshop scheduling (HFS) problem with preventive maintenance (PM) activities. In the proposed algorithm, different crossover and mutation operators are investigated. In addition, an efficient multiple insert mutation operator is developed to enhance the searching ability of the algorithm. Furthermore, an ILS-based local search procedure is embedded to improve the exploitation ability of the proposed algorithm. The parameters of the canonical PSO are tuned in detailed experiments. The proposed algorithm is tested on variations of the 77 Carlier and Néron benchmark problems. Detailed comparisons with existing efficient algorithms, including hGA, ILS, PSO, and IG, verify the efficiency and effectiveness of the proposed algorithm. PMID:24883414
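A hedged sketch of an insert-style mutation for permutation-encoded schedules (the paper's exact "multiple insert mutation" operator is not specified here, so the structure below is an illustrative assumption):

```python
import random

# Remove a randomly chosen job from the permutation and re-insert it at another
# random position; repeating this n_moves times gives a "multiple insert"
# mutation that always yields a valid permutation of the same jobs.

def multi_insert_mutation(perm: list, n_moves: int, rng: random.Random) -> list:
    p = perm[:]
    for _ in range(n_moves):
        i = rng.randrange(len(p))
        job = p.pop(i)
        j = rng.randrange(len(p) + 1)
        p.insert(j, job)
    return p

rng = random.Random(42)
child = multi_insert_mutation(list(range(8)), n_moves=3, rng=rng)
# The result is still a permutation of the same jobs:
assert sorted(child) == list(range(8))
```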
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tajaldeen, A; Ramachandran, P; Geso, M
2015-06-15
Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent Stereotactic Ablative Body Radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume (the same-dose method) and (ii) the same monitor units in all algorithms (the same-monitor-units method), were used to study the performance of the seven dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven algorithms were Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytical Algorithm (AAA), Acuros XB and Pencil Beam (PB). Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity and dose fall-off index. In addition, the dose to critical structures such as the lungs, heart, oesophagus and spinal cord was also studied. Statistical analysis was performed using Prism software. Results: The mean±SD conformity index for the Superposition, Fast Superposition, Clarkson and FFT Convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7 and 2.17±0.59 respectively, whereas for AAA, Pencil Beam and Acuros XB it was 1.4±0.27, 1.66±0.27 and 1.35±0.24 respectively. Conclusion: Our study showed significant variations among the seven algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters. The Clarkson, FFT Convolution and Pencil Beam algorithms showed large differences compared with the Superposition algorithm.
Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice in lung cancer radiotherapy involving small fields. However, further investigation by Monte Carlo simulation is required to confirm our results.
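The conformity index reported above depends on the exact definition used; one common simple variant (an RTOG-style volume ratio, shown here as an assumption rather than the study's exact definition) is:

```python
import numpy as np

# RTOG-style conformity index: ratio of the volume enclosed by the prescription
# isodose to the target volume. CI = 1 is ideal; CI > 1 means the prescription
# dose spills beyond the target.

def conformity_index(dose: np.ndarray, target_mask: np.ndarray,
                     prescription: float) -> float:
    v_iso = float((dose >= prescription).sum())   # voxels at/above prescription
    v_target = float(target_mask.sum())
    return v_iso / v_target

# Toy 2D example: prescription isodose covers 12 voxels, target is 9 voxels.
dose = np.zeros((5, 5))
dose[1:4, 1:4] = 50.0          # 9 voxels on the target
dose[0, 1:4] = 50.0            # 3 voxels spill outside the target
target = np.zeros((5, 5), dtype=bool)
target[1:4, 1:4] = True        # 9-voxel target
ci = conformity_index(dose, target, prescription=50.0)   # -> 12/9 ≈ 1.33
```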
Tang, Jie; Nett, Brian E; Chen, Guang-Hong
2009-10-07
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy, as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison between the three algorithms at several dose levels, for a constant undersampling factor, is presented. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our measurements are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because, while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
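The relative root mean square error figure of merit mentioned above has a standard definition, sketched here as an illustration against an assumed reference (e.g. full-dose, fully sampled) reconstruction:

```python
import numpy as np

# Relative RMSE: RMS of the voxelwise difference between a reconstruction and a
# reference image, normalized by the RMS of the reference.

def relative_rmse(img: np.ndarray, ref: np.ndarray) -> float:
    return float(np.sqrt(np.mean((img - ref) ** 2)) / np.sqrt(np.mean(ref ** 2)))

ref = np.array([1.0, 2.0, 3.0, 4.0])
img = ref + 0.1                      # a uniform 0.1 error everywhere
rrmse = relative_rmse(img, ref)      # 0.1 divided by the RMS of ref
```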
Metaphor Identification in Large Texts Corpora
Neuman, Yair; Assaf, Dan; Cohen, Yohai; Last, Mark; Argamon, Shlomo; Howard, Newton; Frieder, Ophir
2013-01-01
Identifying metaphorical language-use (e.g., sweet child) is one of the challenges facing natural language processing. This paper describes three novel algorithms for automatic metaphor identification. The algorithms are variations of the same core algorithm. We evaluate the algorithms on two corpora of Reuters and New York Times articles. The paper presents the most comprehensive study of metaphor identification in terms of the scope of metaphorical phrases and annotated corpora size. The algorithms' performance in identifying linguistic phrases as metaphorical or literal was compared to human judgment. Overall, the algorithms outperform the state-of-the-art algorithm with 71% precision and a 27% average improvement in prediction over the base rate of metaphors in the corpus. PMID:23658625
NASA Astrophysics Data System (ADS)
Svejkosky, Joseph
The spectral signatures of vehicles in hyperspectral imagery exhibit temporal variations due to the preponderance of surfaces with material properties that display non-Lambertian bi-directional reflectance distribution functions (BRDFs). These temporal variations are caused by changing illumination conditions, changing sun-target-sensor geometry, changing road surface properties, and changing vehicle orientations. To quantify these variations and determine their relative importance in a sub-pixel vehicle reacquisition and tracking scenario, a hyperspectral vehicle BRDF sampling experiment was conducted in which four vehicles were rotated at different orientations and imaged over a six-hour period. The hyperspectral imagery was calibrated using novel in-scene methods and converted to reflectance imagery. The resulting BRDF sampled time-series imagery showed a strong vehicle level BRDF dependence on vehicle shape in off-nadir imaging scenarios and a strong dependence on vehicle color in simulated nadir imaging scenarios. The imagery also exhibited spectral features characteristic of sampling the BRDF of non-Lambertian targets, which were subsequently verified with simulations. In addition, the imagery demonstrated that the illumination contribution from vehicle adjacent horizontal surfaces significantly altered the shape and magnitude of the vehicle reflectance spectrum. The results of the BRDF sampling experiment illustrate the need for a target vehicle BRDF model and detection scheme that incorporates non-Lambertian BRDFs. A new detection algorithm called Eigenvector Loading Regression (ELR) is proposed that learns a hyperspectral vehicle BRDF from a series of BRDF measurements using regression in a lower dimensional space and then applies the learned BRDF to make test spectrum predictions. 
In cases of non-Lambertian vehicle BRDF, this detection methodology performs favorably compared with subspace detection algorithms and graph-based detection algorithms that do not account for the target BRDF. The algorithms are compared using a test environment in which observed spectral reflectance signatures from the BRDF sampling experiment are implanted into aerial hyperspectral imagery containing large numbers of vehicles.
NASA Astrophysics Data System (ADS)
Qian, Tingting; Wang, Lianlian; Lu, Guanghua
2017-07-01
Radar correlated imaging (RCI) introduces optical correlated imaging technology into traditional microwave imaging and has attracted widespread attention recently. Conventional RCI methods neglect the structural information of complex extended targets, which limits the quality of the recovered result; thus, a novel combined negative exponential restraint and total variation (NER-TV) algorithm for extended target imaging is proposed in this paper. Sparsity is measured by a sequential order-one negative exponential function, and the 2D total variation technique is then introduced to design a novel optimization problem for extended target imaging. The proven alternating direction method of multipliers is applied to solve the new problem. Experimental results show that the proposed algorithm achieves efficient high-resolution imaging of extended targets.
The Relationship of Temporal Variations in SMAP Vegetation Optical Depth to Plant Hydraulic Behavior
NASA Astrophysics Data System (ADS)
Konings, A. G.
2016-12-01
The soil emissions measured by L-band radiometers such as that on the NASA Soil Moisture Active Passive (SMAP) mission are modulated by vegetation cover, as quantified by the vegetation scattering albedo and the vegetation optical depth (VOD). The VOD is linearly proportional to the total vegetation water content, which depends on both the biomass and the relative water content of the plant. Biomass is expected to vary more slowly than water content. Variations in vegetation water content are highly informative, as they are directly indicative of the degree of hydraulic stress (or lack thereof) experienced by the plant. However, robust retrievals are needed for SMAP VOD observations to be useful. This is complicated by the fact that multiple unknowns (soil moisture, VOD, and albedo) must be determined from two highly correlated polarizations. This presentation will discuss the application to SMAP of a recently developed time-series algorithm for VOD and albedo retrieval, the Multi-Temporal Dual Channel Algorithm (MT-DCA), and its interpretation for plant hydraulic applications. The MT-DCA is based on the assumption that, for consecutive overpasses at a given time of day, VOD varies more slowly than soil moisture. A two-overpass moving average can then be used to determine variations in VOD that are less sensitive to high-frequency noise than classical dual-channel algorithms. Seasonal variations of SMAP VOD are presented and compared to expected patterns based on rainfall and radiation seasonality. Taking advantage of the large diurnal variation (relative to the seasonal variation) of canopy water potential, diurnal variations (between 6AM and 6PM observations) of SMAP VOD are then used to calculate global variations in ecosystem-scale isohydricity, the degree of stomatal closure and xylem conductivity loss in response to water stress.
Lastly, the effect of satellite sensing frequency and overpass time on water content across canopies of different height will be discussed.
NASA Astrophysics Data System (ADS)
Sahadevan, R.; Rajakumar, S.
2008-03-01
A systematic investigation of finding bilinear or trilinear representations of fourth-order autonomous ordinary difference equations, x(n+4) = F(x(n), x(n+1), x(n+2), x(n+3)), is made. As an illustration, we consider the fourth-order symplectic integrable difference equations reported by [Capel and Sahadevan, Physica A 289, 86 (2001)] and derive their bilinear or trilinear forms. It is also shown that the obtained bilinear representations admit exact solutions of rational form.
Pedagogical introduction to the entropy of entanglement for Gaussian states
NASA Astrophysics Data System (ADS)
Demarie, Tommaso F.
2018-05-01
In quantum information theory, the entropy of entanglement is a standard measure of bipartite entanglement between two partitions of a composite system. For a particular class of continuous variable quantum states, the Gaussian states, the entropy of entanglement can be expressed elegantly in terms of symplectic eigenvalues, elements that characterise a Gaussian state and depend on the correlations of the canonical variables. We give a rigorous step-by-step derivation of this result and provide physical insights, together with an example that can be useful in practice for calculations.
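The symplectic-eigenvalue formula the abstract refers to can be written down directly: for a Gaussian state whose reduced covariance matrix has symplectic eigenvalues nu_k >= 1, the entropy of entanglement is S = sum_k [(nu_k+1)/2 ln((nu_k+1)/2) - (nu_k-1)/2 ln((nu_k-1)/2)]. A small sketch:

```python
import math

# Per-mode entropy contribution of a symplectic eigenvalue nu >= 1; a pure-state
# eigenvalue nu = 1 contributes zero entropy.

def f(nu: float) -> float:
    if nu <= 1.0:
        return 0.0
    a, b = (nu + 1) / 2, (nu - 1) / 2
    return a * math.log(a) - b * math.log(b)

def entanglement_entropy(symplectic_eigenvalues) -> float:
    return sum(f(nu) for nu in symplectic_eigenvalues)

assert entanglement_entropy([1.0]) == 0.0   # pure state: no entanglement entropy
s = entanglement_entropy([3.0])             # e.g. one mode of a two-mode squeezed state
```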
Disordered two-dimensional electron systems with chiral symmetry
NASA Astrophysics Data System (ADS)
Markoš, P.; Schweitzer, L.
2012-10-01
We review the results of our recent numerical investigations of the electronic properties of disordered two-dimensional systems with chiral unitary, chiral orthogonal, and chiral symplectic symmetry. Of particular interest is the behavior of the density of states and the logarithmic scaling of the smallest Lyapunov exponents in the vicinity of the chiral quantum critical point in the band center at E=0. The observed peaks or depressions in the density of states, the distribution of the critical conductances, and the possible non-universality of the critical exponents for certain chiral unitary models are discussed.
Boundary qKZ equation and generalized Razumov-Stroganov sum rules for open IRF models
NASA Astrophysics Data System (ADS)
Di Francesco, P.
2005-11-01
We find higher-rank generalizations of the Razumov-Stroganov sum rules at q = -e^{iπ/(k+1)} for A_{k-1} models with open boundaries, by constructing polynomial solutions of level-1 boundary quantum Knizhnik-Zamolodchikov equations for U_q(sl(k)). The result takes the form of a character of the symplectic group, which leads to a generalization of the number of vertically symmetric alternating sign matrices. We also investigate the other combinatorial point q = -1, presumably related to the geometry of nilpotent matrix varieties.
NASA Astrophysics Data System (ADS)
Kogan, Ian I.
We discuss a quantum U_q[sl(2)] symmetry in the Landau problem, which naturally arises due to the relation between U_q[sl(2)] and the group of magnetic translations. The latter is connected with W_∞ and area-preserving (symplectic) diffeomorphisms, which are the canonical transformations in the two-dimensional phase space. We shall discuss the hidden quantum symmetry in a 2+1 gauge theory with the Chern-Simons term and in a quantum Hall system, both of which are connected with the Landau problem.
An l1-TV algorithm for deconvolution with salt and pepper noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohlberg, Brendt; Rodriguez, Paul
2008-01-01
There has recently been considerable interest in applying Total Variation with an l1 data fidelity term to the denoising of images subject to salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention, most probably because most efficient algorithms for l1-TV denoising cannot handle more general inverse problems. We apply the Iteratively Reweighted Norm algorithm to this problem, and compare performance with an alternative algorithm based on the Mumford-Shah functional.
Consensus generation and variant detection by Celera Assembler.
Denisov, Gennady; Walenz, Brian; Halpern, Aaron L; Miller, Jason; Axelrod, Nelson; Levy, Samuel; Sutton, Granger
2008-04-15
We present an algorithm to identify allelic variation given a Whole Genome Shotgun (WGS) assembly of haploid sequences, and to produce a set of haploid consensus sequences rather than a single consensus sequence. Existing WGS assemblers take a column-by-column approach to consensus generation, and produce a single consensus sequence which can be inconsistent with the underlying haploid alleles, and inconsistent with any of the aligned sequence reads. Our new algorithm uses a dynamic windowing approach. It detects alleles by simultaneously processing the portions of aligned reads spanning a region of sequence variation, assigns reads to their respective alleles, phases adjacent variant alleles and generates a consensus sequence corresponding to each confirmed allele. This algorithm was used to produce the first diploid genome sequence of an individual human. It can also be applied to assemblies of multiple diploid individuals and hybrid assemblies of multiple haploid organisms. Applied to the individual human genome assembly, the new algorithm detects exactly two confirmed alleles and reports two consensus sequences in 98.98% of the 2,033,311 detected regions of sequence variation. In 33,269 out of 460,373 detected regions of size >1 bp, it corrects the errors of a mosaic haploid representation of a diploid locus as produced by the original Celera Assembler consensus algorithm. Using an optimized procedure calibrated against 1,506,344 known SNPs, it detects 438,814 new heterozygous SNPs with a false positive rate of 12%. The open source code is available at: http://wgs-assembler.cvs.sourceforge.net/wgs-assembler/
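The core of the dynamic-windowing idea can be reduced to a toy sketch. This is illustrative only, not the Celera Assembler implementation; `detect_alleles` and `min_support` are hypothetical names: the slices of aligned reads spanning one variant window are grouped, and an allele is confirmed when enough reads support it, yielding one consensus sequence per confirmed allele.

```python
# Toy sketch of window-based allele detection; not the Celera Assembler code.
from collections import Counter

def detect_alleles(read_slices, min_support=2):
    """read_slices: substrings of aligned reads covering one variant window."""
    counts = Counter(read_slices)
    # an allele is "confirmed" once enough independent reads agree on it
    return {allele: n for allele, n in counts.items() if n >= min_support}

# hypothetical window slices from five aligned reads
reads = ["ACGT", "ACGT", "ACCT", "ACCT", "ACAT"]
alleles = detect_alleles(reads)  # two confirmed alleles -> two consensus sequences
```

The real algorithm additionally assigns each read to its allele and phases adjacent variant windows; this sketch only shows the confirmation step.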
NASA Astrophysics Data System (ADS)
Krauze, W.; Makowski, P.; Kujawińska, M.
2015-06-01
Standard tomographic algorithms applied to optical limited-angle tomography yield reconstructions with highly anisotropic resolution, so special algorithms have been developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise-constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint (TVIC) method, which extends the applicability of TV regularization to non-piecewise-constant samples, such as biological cells. The approach consists of two parts. First, TV minimization is used as a strong regularizer to create a sharp-edged image, which is converted into a 3D binary mask; this mask is then applied iteratively in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (the SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any tomographic reconstruction algorithm that supports arbitrary geometries of plane-wave projection acquisition, including optical diffraction tomography solvers. The obtained reconstructions exhibit the resolution uniformity and overall shape accuracy expected from TV-regularization-based solvers, while preserving the smooth internal structures of the object. A comparison between three different patterns of object illumination shows a very small impact of the projection acquisition geometry on image quality.
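The two-part TVIC loop can be caricatured in a few lines of 1D code. Here a simple moving-average smoother stands in for real TV minimization (an assumption made purely for brevity), but the structure mirrors the description above: regularize strongly, threshold the result into a binary support mask, then re-impose that mask as an object-domain constraint on every subsequent iteration.

```python
# Conceptual 1D sketch of the TVIC structure; the smoother is a stand-in for
# true TV minimization, and all values below are illustrative.
def smooth(x, passes=3):
    for _ in range(passes):
        x = [x[0]] + [(x[i-1] + x[i] + x[i+1]) / 3
                      for i in range(1, len(x) - 1)] + [x[-1]]
    return x

def tvic_sketch(measured, threshold=0.5, iters=5):
    # part 1: strong regularization, thresholded into a binary support mask
    mask = [1.0 if v > threshold else 0.0 for v in smooth(list(measured))]
    # part 2: the mask is applied repeatedly as an object-domain constraint
    estimate = list(measured)
    for _ in range(iters):
        estimate = smooth(estimate)
        estimate = [e * m for e, m in zip(estimate, mask)]
    return estimate, mask

data = [0.0, 0.1, 0.9, 1.0, 0.8, 0.1, 0.0]
est, mask = tvic_sketch(data)
```

In the real method the data are 3D and the reconstruction step is a tomographic solver (e.g. SIRT3D); only the regularize-mask-constrain pattern carries over.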
Reconstructing cortical current density by exploring sparseness in the transform domain
NASA Astrophysics Data System (ADS)
Ding, Lei
2009-05-01
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
2011-01-01
Background Envenomation by crotaline snakes (rattlesnake, cottonmouth, copperhead) is a complex, potentially lethal condition affecting thousands of people in the United States each year. Treatment of crotaline envenomation is not standardized, and significant variation in practice exists. Methods A geographically diverse panel of experts was convened for the purpose of deriving an evidence-informed unified treatment algorithm. Research staff analyzed the extant medical literature and performed targeted analyses of existing databases to inform specific clinical decisions. A trained external facilitator used modified Delphi and structured consensus methodology to achieve consensus on the final treatment algorithm. Results A unified treatment algorithm was produced and endorsed by all nine expert panel members. This algorithm provides guidance about clinical and laboratory observations, indications for and dosing of antivenom, adjunctive therapies, post-stabilization care, and management of complications from envenomation and therapy. Conclusions Clinical manifestations and ideal treatment of crotaline snakebite differ greatly, and can result in severe complications. Using a modified Delphi method, we provide evidence-informed treatment guidelines in an attempt to reduce variation in care and possibly improve clinical outcomes. PMID:21291549
NASA Astrophysics Data System (ADS)
Liu, Huanlin; Wang, Chujun; Chen, Yong
2018-01-01
Large-capacity encoding fiber Bragg grating (FBG) sensor networks are widely used in modern long-term health monitoring systems. Encoding FBG sensors have greatly improved the capacity of distributed FBG sensor networks; however, addressing errors increase correspondingly as the capacity grows. To address this issue, an improved algorithm called the genetic tracking algorithm (GTA) is proposed in this paper. In the GTA, individuals are designed based on feasible matchings, improving the success rate of matching and reducing the large number of redundant matching operations generated by sequential matching. Two self-crossover schemes and a dynamic variation in the mutation process are then designed to increase the diversity of individuals and to avoid falling into local optima. Meanwhile, an assistant decision procedure is proposed to handle cases the GTA alone cannot resolve, where the variation of sensor information is highly overlapped. The simulation results indicate that the proposed GTA has higher accuracy than both the traditional tracking algorithm and the enhanced tracking algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Kevin J.; Wright, Bob W.; Jarman, Kristin H.
2003-05-09
A rapid retention time alignment algorithm was developed as a preprocessing utility to be used prior to chemometric analysis of large datasets of diesel fuel gas chromatographic profiles. Retention time variation from chromatogram-to-chromatogram has been a significant impediment against the use of chemometric techniques in the analysis of chromatographic data due to the inability of current multivariate techniques to correctly model information that shifts from variable to variable within a dataset. The algorithm developed is shown to increase the efficacy of pattern recognition methods applied to a set of diesel fuel chromatograms by retaining chemical selectivity while reducing chromatogram-to-chromatogram retention time variations and to do so on a time scale that makes analysis of large sets of chromatographic data practical.
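As a much-simplified illustration of retention-time alignment (not the paper's algorithm): find the integer shift that best matches a target chromatogram to a reference by maximizing their overlap, then shift the target accordingly.

```python
def shifted(seq, k):
    """Shift a signal left by k samples (right if k < 0), zero-padded."""
    if k >= 0:
        return seq[k:] + [0.0] * k
    return [0.0] * (-k) + seq[:k]

def best_shift(reference, target, max_shift=5):
    """Integer lag maximizing the dot product of reference and shifted target."""
    return max(range(-max_shift, max_shift + 1),
               key=lambda k: sum(r * t for r, t in zip(reference, shifted(target, k))))

ref = [0.0, 0.0, 1.0, 5.0, 1.0, 0.0, 0.0, 0.0]
tgt = [0.0, 0.0, 0.0, 0.0, 1.0, 5.0, 1.0, 0.0]  # same peak, eluting two samples late
lag = best_shift(ref, tgt)
aligned = shifted(tgt, lag)
```

Real chromatogram alignment must also preserve chemical selectivity (peaks must not be smeared or reordered), which a single global shift does not capture; piecewise or windowed variants address that.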
Fraction Reduction through Continued Fractions
ERIC Educational Resources Information Center
Carley, Holly
2011-01-01
This article presents a method of reducing fractions without factoring. The ideas presented may be useful as a project for motivated students in an undergraduate number theory course. The discussion is related to the Euclidean Algorithm and its variations may lead to projects or early examples involving efficiency of an algorithm.
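The idea can be made concrete in a few lines: expanding a/b as a continued fraction is the Euclidean Algorithm in disguise, and folding the quotients back up reproduces the fraction in lowest terms, with no factoring required. A minimal sketch:

```python
def continued_fraction(a, b):
    """Quotients of the continued fraction expansion of a/b (Euclidean steps)."""
    quotients = []
    while b:
        q, r = divmod(a, b)
        quotients.append(q)
        a, b = b, r
    return quotients

def reduce_fraction(a, b):
    """Rebuild a/b from its continued fraction; the result is in lowest terms."""
    num, den = 1, 0
    for q in reversed(continued_fraction(a, b)):
        num, den = q * num + den, num
    return num, den

print(reduce_fraction(1071, 462))  # -> (51, 22), since gcd(1071, 462) = 21
```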
A unified classifier for robust face recognition based on combining multiple subspace algorithms
NASA Astrophysics Data System (ADS)
Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad
2012-10-01
Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution: an algorithm may work very well on one set of images with, say, illumination changes, but fail on another set of variations, such as expression changes. This study is motivated by the fact that no single classifier can claim generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, including the question of which classifiers are suitable for the task. The study is based on the outcome of a comprehensive comparative analysis of a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or another. These classifiers are then combined into an ensemble classifier by two different strategies, weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that successfully handles varying facial image conditions of illumination, aging and facial expressions.
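A minimal sketch of the weighted-sum fusion strategy (the identities, scores and weights below are illustrative, not taken from the study): each base classifier produces per-identity distance scores, which are min-max normalized so they are comparable across classifiers, then combined as a weighted sum; the identity with the lowest fused distance wins.

```python
# Toy weighted-sum score fusion for two hypothetical subspace classifiers.
def normalize(scores):
    """Min-max normalize a dict of distance scores into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    return {k: (v - lo) / (hi - lo) for k, v in scores.items()}

def fuse(score_sets, weights):
    """Weighted sum of normalized scores; lowest fused distance wins."""
    fused = {}
    for scores, w in zip(map(normalize, score_sets), weights):
        for identity, s in scores.items():
            fused[identity] = fused.get(identity, 0.0) + w * s
    return min(fused, key=fused.get)

pca_scores = {"alice": 0.2, "bob": 0.9, "carol": 0.5}  # hypothetical distances
lda_scores = {"alice": 0.3, "bob": 0.8, "carol": 0.1}
best = fuse([pca_scores, lda_scores], weights=[0.5, 0.5])
```

The re-ranking strategy mentioned in the abstract would instead combine the per-classifier rank positions rather than the raw normalized scores.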
Revisiting negative selection algorithms.
Ji, Zhou; Dasgupta, Dipankar
2007-01-01
This paper reviews the progress of negative selection algorithms, an anomaly/change detection approach in Artificial Immune Systems (AIS). Following its initial model, we try to identify the fundamental characteristics of this family of algorithms and summarize their diversities. There exist various elements in this method, including data representation, coverage estimate, affinity measure, and matching rules, which are discussed for different variations. The various negative selection algorithms are categorized by different criteria as well. The relationship and possible combinations with other AIS or other machine learning methods are discussed. Prospective development and applicability of negative selection algorithms and their influence on related areas are then speculated based on the discussion.
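A toy negative selection sketch under an r-contiguous matching rule (all names and parameters are illustrative, not a specific published variant): randomly generated candidate detectors are censored if they match any "self" sample; the surviving detectors can then be used to flag non-self data.

```python
# Toy negative selection over binary strings with r-contiguous matching.
import random

def matches(detector, sample, r=3):
    """r-contiguous rule: some aligned window of length r is identical."""
    return any(detector[i:i + r] == sample[i:i + r]
               for i in range(len(sample) - r + 1))

def generate_detectors(self_set, n=20, length=8, r=3, seed=1):
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n:
        cand = "".join(rng.choice("01") for _ in range(length))
        if not any(matches(cand, s, r) for s in self_set):  # censoring step
            detectors.append(cand)
    return detectors

self_set = {"00000000", "00001111"}
detectors = generate_detectors(self_set)
# whether a given anomaly is flagged depends on detector coverage
flagged = any(matches(d, "11110000") for d in detectors)
```

Varying the representation, the matching rule, and the coverage estimate gives exactly the design axes the review above enumerates.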
Robust Optimization Design Algorithm for High-Frequency TWTs
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.; Chevalier, Christine T.
2010-01-01
Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design-optimization procedures, which ignore dimensional variations, produce inaccurate designs whose actual amplifier performance substantially under-performs the design prediction. Thus, this new, robust TWT optimization design algorithm was created to account for and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers; this regime is therefore known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.
New second order Mumford-Shah model based on Γ-convergence approximation for image processing
NASA Astrophysics Data System (ADS)
Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li
2016-05-01
In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneous image denoising and segmentation, which combines the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first order total variation and the edge blurring effect associated with the quadratic H1 regularization or the second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations and instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, the analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.
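The scalar soft-thresholding (shrinkage) step that split Bregman methods apply in closed form at each sweep is simple enough to state directly; this is a generic textbook sketch, not code from the paper.

```python
def shrink(v, threshold):
    """Soft thresholding: argmin over x of threshold*|x| + 0.5*(x - v)**2."""
    if v > threshold:
        return v - threshold
    if v < -threshold:
        return v + threshold
    return 0.0

# values inside [-threshold, threshold] are zeroed; others move toward zero
shrunk = [shrink(v, 1.0) for v in [3.0, -0.5, 1.2, -2.0]]
```

In the full MSTGV solver this shrinkage is applied component-wise to the auxiliary gradient variables introduced by the Bregman splitting, alternating with FFT and Gauss-Seidel sub-steps.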
Pu239 Cross-Section Variations Based on Experimental Uncertainties and Covariances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigeti, David Edward; Williams, Brian J.; Parsons, D. Kent
2016-10-18
Algorithms and software have been developed for producing variations in plutonium-239 neutron cross sections based on experimental uncertainties and covariances. The varied cross-section sets may be produced as random samples from the multivariate normal distribution defined by an experimental mean vector and covariance matrix, or they may be produced as Latin-Hypercube/Orthogonal-Array samples (based on the same means and covariances) for use in parametrized studies. The variations obey two classes of constraints that are obligatory for cross-section sets and which put related constraints on the mean vector and covariance matrix that determine the sampling. Because the experimental means and covariances do not obey some of these constraints to sufficient precision, imposing the constraints requires modifying the experimental mean vector and covariance matrix. Modification is done with an algorithm based on linear algebra that minimizes changes to the means and covariances while ensuring that the operations that impose the different constraints do not conflict with each other.
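The random-sampling half of the procedure can be sketched with toy numbers (the mean vector and covariance matrix below are illustrative, not evaluated Pu-239 data): a Cholesky factor L of the covariance turns independent standard normals z into correlated draws mean + L·z.

```python
# Sampling correlated cross-section variations from a multivariate normal.
import math
import random

def cholesky(cov):
    """Lower-triangular L with L @ L.T == cov (cov must be positive definite)."""
    n = len(cov)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(cov[i][i] - s) if i == j else (cov[i][j] - s) / L[j][j]
    return L

def sample(mean, L, rng):
    z = [rng.gauss(0.0, 1.0) for _ in mean]
    return [m + sum(L[i][k] * z[k] for k in range(len(z)))
            for i, m in enumerate(mean)]

mean = [1.8, 2.1]                    # hypothetical group cross sections (barns)
cov = [[0.04, 0.01], [0.01, 0.09]]   # hypothetical covariance
L = cholesky(cov)
rng = random.Random(0)
variations = [sample(mean, L, rng) for _ in range(3)]
```

The paper's additional machinery (constraint enforcement, Latin-Hypercube designs) modifies the mean and covariance before this sampling step rather than the sampler itself.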
Lisovskiĭ, A A; Pavlinov, I Ia
2008-01-01
Any morphospace is partitioned by the forms of group variation; its structure is described by a set of scalar (range, overlap) and vector (direction) characteristics. These are analyzed quantitatively for sex and age variation in a sample of 200 pine marten skulls described by 14 measurable traits. Standard dispersion and variance-components analyses are employed, accompanied by several resampling methods (randomization and bootstrap); effects of changes in the analysis design on the results of these methods are also considered. The maximum likelihood algorithm of variance-components analysis is shown to give adequate estimates of the portions of particular forms of group variation within the overall disparity. It is quite stable with respect to changes of the analysis design and therefore can be used to explore real data with variously unbalanced designs. A new algorithm for estimating the co-directionality of particular forms of group variation within the overall disparity is elaborated, based on angle measures between eigenvectors of the covariation matrices of group-variation effects calculated by dispersion analysis. A null hypothesis of a random portion of a given group variation can be tested by randomization of the respective grouping variable. A null hypothesis of equality of both portions and directionalities of different forms of group variation can be tested by means of the bootstrap procedure.
Solving TSP problem with improved genetic algorithm
NASA Astrophysics Data System (ADS)
Fu, Chunhua; Zhang, Lijun; Wang, Xiaojing; Qiao, Liying
2018-05-01
The TSP is a typical NP-hard problem. The vehicle routing problem (VRP) and city pipeline optimization can both be formulated as TSP instances; solving the TSP efficiently is therefore important. The genetic algorithm (GA) is one of the ideal methods for solving it, but the standard genetic algorithm has limitations. We improve the selection operator and introduce an elite retention strategy to ensure the quality of selection. In the mutation operation, adaptive operator selection improves the quality of the search and of the variation. After a chromosome evolves, a one-way evolutionary reverse operation is added, which gives offspring more opportunities to inherit high-quality parental genes and improves the algorithm's ability to find the optimal solution.
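A compact, self-contained sketch of such an improved GA (the city coordinates and parameters are illustrative, not the paper's): elite individuals are retained unchanged, children are produced by the segment-reversal operation, and a child replaces its parent only when it is no worse, the "one-way" acceptance described above.

```python
# Toy GA for TSP with elite retention, segment reversal, and one-way acceptance.
import random

CITIES = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]

def tour_length(tour):
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def evolve(generations=200, pop_size=30, elite=2, seed=7):
    rng = random.Random(seed)
    n = len(CITIES)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)
        nxt = pop[:elite]                                # elite retention
        while len(nxt) < pop_size:
            parent = list(rng.choice(pop[:pop_size // 2]))  # truncation selection
            i, j = sorted(rng.sample(range(n), 2))
            child = parent[:i] + parent[i:j + 1][::-1] + parent[j + 1:]  # reversal
            if rng.random() < 0.2:                       # occasional swap mutation
                a, b = rng.sample(range(n), 2)
                child[a], child[b] = child[b], child[a]
            # one-way evolution: keep the child only if it is no worse
            nxt.append(child if tour_length(child) <= tour_length(parent) else parent)
        pop = nxt
    return min(pop, key=tour_length)

best = evolve()
```

On these six grid cities the optimal tour has length 6 (the perimeter); the reversal operation is essentially a 2-opt move, which is why it converges quickly here.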
Yu, Hua-Gen
2002-01-01
We present a full dimensional variational algorithm to calculate vibrational energies of penta-atomic molecules. The quantum mechanical Hamiltonian of the system for J=0 is derived in a set of orthogonal polyspherical coordinates in the body-fixed frame without any dynamical approximation. Moreover, the vibrational Hamiltonian has been obtained in an explicitly Hermitian form. Variational calculations are performed in a direct product discrete variable representation basis set. The sine functions are used for the radial coordinates, whereas the Legendre polynomials are employed for the polar angles. For the azimuthal angles, the symmetrically adapted Fourier-Chebyshev basis functions are utilized. The eigenvalue problem is solved by a Lanczos iterative diagonalization algorithm. A preliminary application to methane is given, and a comparison with previous results is made.
Rigler, E. Joshua
2017-04-26
A theoretical basis and prototype numerical algorithm are provided that decompose regular time series of geomagnetic observations into three components: secular variation, solar quiet, and disturbance. Respectively, these three components correspond roughly to slow changes in the Earth's internal magnetic field, periodic daily variations caused by quasi-stationary (with respect to the sun) electrical current systems in the Earth's magnetosphere, and episodic perturbations to the geomagnetic baseline that are typically driven by fluctuations in the solar wind that interacts electromagnetically with the Earth's magnetosphere. In contrast to similar algorithms applied to geomagnetic data in the past, this one addresses real-time data acquisition directly by applying a time-causal, exponential smoother with "seasonal corrections" to the data as soon as they become available.
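A minimal time-causal sketch of this kind of decomposition (the smoothing parameters and the 24-sample "day" are illustrative, not the prototype's values): an exponential smoother tracks the slow baseline (secular variation), a bank of per-phase seasonal offsets absorbs the daily cycle (solar quiet), and the residual is reported as disturbance.

```python
# Time-causal exponential smoother with additive per-phase seasonal corrections.
import math

def decompose(stream, period=24, alpha=0.05, gamma=0.1):
    baseline, seasonal = stream[0], [0.0] * period
    out = []
    for t, x in enumerate(stream):
        s = seasonal[t % period]
        disturbance = x - baseline - s       # residual before the updates
        baseline += alpha * disturbance      # slow secular-variation update
        seasonal[t % period] = s + gamma * (x - baseline - s)  # seasonal correction
        out.append((baseline, s, disturbance))
    return out

# synthetic record: constant field of 100 nT plus a daily wave, no disturbance
data = [100 + 5 * math.sin(2 * math.pi * t / 24) for t in range(24 * 60)]
result = decompose(data)
```

Each sample is processed as soon as it arrives, which is the time-causal property the abstract emphasizes; after a warm-up of several "days" the disturbance channel settles near zero on this clean synthetic input.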
A Stochastic-Variational Model for Soft Mumford-Shah Segmentation
2006-01-01
In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
Digital health technology and trauma: development of an app to standardize care.
Hsu, Jeremy M
2015-04-01
Standardized practice results in less variation, therefore reducing errors and improving outcomes. Optimal trauma care is achieved through standardization, as is evidenced by the widespread adoption of the Advanced Trauma Life Support approach. The challenge for an individual institution is how to educate staff about, and promulgate, these standardized processes widely and efficiently. In today's world, digital health technology must be considered in the process. The aim of this study was to describe the process of developing an app which includes standardized trauma algorithms. The objective of the app was to allow easy, real-time access to trauma algorithms and therefore reduce omissions and errors. A set of trauma algorithms, relevant to the local setting, was derived from the best available evidence. After obtaining grant funding, a collaborative endeavour was undertaken with an external specialist app development company. The process required 6 months to translate the existing trauma algorithms into an app. The app contains 32 separate trauma algorithms, each formatted as a single-page flow diagram. It utilizes specific smartphone features such as 'pinch to zoom', jump-words and pop-ups to allow rapid access to the desired information. Improvements in trauma care outcomes result from reducing variation. By incorporating digital health technology, a trauma app has been developed, allowing easy and intuitive access to evidence-based algorithms. © 2015 Royal Australasian College of Surgeons.
PyCOOL — A Cosmological Object-Oriented Lattice code written in Python
NASA Astrophysics Data System (ADS)
Sainio, J.
2012-04-01
There are a number of different phenomena in the early universe that have to be studied numerically with lattice simulations. This paper presents a graphics processing unit (GPU) accelerated Python program called PyCOOL that solves the evolution of scalar fields in a lattice with very precise symplectic integrators. The program has been written with the intention to hit a sweet spot of speed, accuracy and user friendliness. This has been achieved by using the Python language with the PyCUDA interface to make a program that is easy to adapt to different scalar field models. In this paper we derive the symplectic dynamics that govern the evolution of the system and then present the implementation of the program in Python and PyCUDA. The functionality of the program is tested in a chaotic inflation preheating model, a single field oscillon case and in a supersymmetric curvaton model which leads to Q-ball production. We have also compared the performance of a consumer graphics card to a professional Tesla compute card in these simulations. We find that the program is not only accurate but also very fast. To further increase the usefulness of the program we have equipped it with numerous post-processing functions that provide useful information about the cosmological model. These include various spectra and statistics of the fields. The program can be additionally used to calculate the generated curvature perturbation. The program is publicly available under GNU General Public License at https://github.com/jtksai/PyCOOL. Some additional information can be found from http://www.physics.utu.fi/tiedostot/theory/particlecosmology/pycool/.
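As a minimal illustration of why a symplectic integrator is the right tool for this kind of long evolution (a generic textbook sketch, not PyCOOL's stepper): a leapfrog/velocity-Verlet step applied to a harmonic oscillator keeps the energy bounded over very long runs instead of drifting, as a naive explicit Euler scheme would.

```python
# Leapfrog (velocity Verlet), a second-order symplectic integrator.
def leapfrog(x, v, dt, steps, force=lambda x: -x):
    for _ in range(steps):
        v += 0.5 * dt * force(x)   # half kick
        x += dt * v                # drift
        v += 0.5 * dt * force(x)   # half kick
    return x, v

# harmonic oscillator, exact energy 0.5; integrate for 100,000 steps
x, v = leapfrog(1.0, 0.0, dt=0.05, steps=100_000)
energy = 0.5 * v * v + 0.5 * x * x
```

The energy error oscillates at O(dt^2) but does not grow secularly, which is exactly the property that makes symplectic schemes suitable for long lattice evolutions of scalar fields.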
Aspects géométriques et intégrables des modèles de matrices aléatoires
NASA Astrophysics Data System (ADS)
Marchal, Olivier
2010-12-01
This thesis deals with the geometric and integrable aspects associated with random matrix models. Its purpose is to provide various applications of random matrix theory, from algebraic geometry to partial differential equations of integrable systems. The variety of these applications shows why matrix models are important from a mathematical point of view. First, the thesis will focus on the study of the merging of two intervals of the eigenvalues density near a singular point. Specifically, we will show why this special limit gives universal equations from the Painlevé II hierarchy of integrable systems theory. Then, following the approach of (bi)orthogonal polynomials introduced by Mehta to compute partition functions, we will find Riemann-Hilbert and isomonodromic problems connected to matrix models, making the link with the theory of Jimbo, Miwa and Ueno. In particular, we will describe how the Hermitian two-matrix models provide a degenerate case of Jimbo-Miwa-Ueno's theory that we will generalize in this context. Furthermore, the loop equations method, with its central notions of spectral curve and topological expansion, will lead to the symplectic invariants of algebraic geometry recently proposed by Eynard and Orantin. This last point will be generalized to the case of non-Hermitian matrix models (arbitrary beta), paving the way to "quantum algebraic geometry" and to the generalization of symplectic invariants to "quantum curves". Finally, this set-up will be applied to combinatorics in the context of topological string theory, with the explicit computation of a Hermitian random matrix model enumerating the Gromov-Witten invariants of a toric Calabi-Yau threefold.
Spinor matter fields in SL(2,C) gauge theories of gravity: Lagrangian and Hamiltonian approaches
NASA Astrophysics Data System (ADS)
Antonowicz, Marek; Szczyrba, Wiktor
1985-06-01
We consider the SL(2,C)-covariant Lagrangian formulation of gravitational theories with the presence of spinor matter fields. The invariance properties of such theories give rise to the conservation laws (the contracted Bianchi identities) having in the presence of matter fields a more complicated form than those known in the literature previously. A general SL(2,C) gauge theory of gravity is cast into an SL(2,C)-covariant Hamiltonian formulation. Breaking the SL(2,C) symmetry of the system to the SU(2) symmetry, by introducing a spacelike slicing of spacetime, we get an SU(2)-covariant Hamiltonian picture. The qualitative analysis of SL(2,C) gauge theories of gravity in the SU(2)-covariant formulation enables us to define the dynamical symplectic variables and the gauge variables of the theory under consideration as well as to divide the set of field equations into the dynamical equations and the constraints. In the SU(2)-covariant Hamiltonian formulation the primary constraints, which are generic for first-order matter Lagrangians (Dirac, Weyl, Fierz-Pauli), can be reduced. The effective matter symplectic variables are given by SU(2)-spinor-valued half-forms on three-dimensional slices of spacetime. The coupled Einstein-Cartan-Dirac (Weyl, Fierz-Pauli) system is analyzed from the (3+1) point of view. This analysis is complete; the field equations of the Einstein-Cartan-Dirac theory split into 18 gravitational dynamical equations, 8 dynamical Dirac equations, and 7 first-class constraints. The system has 4+8=12 independent degrees of freedom in the phase space.
Answer Markup Algorithms for Southeast Asian Languages.
ERIC Educational Resources Information Center
Henry, George M.
1991-01-01
Typical markup methods for providing feedback to foreign language learners are not applicable to languages not written in a strictly linear fashion. A modification of Hart's edit markup software is described, along with a second variation based on a simple edit distance algorithm adapted to a general Southeast Asian font system. (10 references)…
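The "simple edit distance algorithm" referred to above is, in its textbook dynamic-programming form, only a few lines; markup software can backtrack through the same table to label which characters were inserted, deleted, or substituted in a learner's answer.

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (two rolling rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution (0 if equal)
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # -> 3
```

For non-linear Southeast Asian scripts the unit of comparison would be a grapheme cluster rather than a raw character, but the table itself is unchanged.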
A sequential coalescent algorithm for chromosomal inversions
Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M
2013-01-01
Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent, an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894
Determination of stores pointing error due to wing flexibility under flight load
NASA Technical Reports Server (NTRS)
Lokos, William A.; Bahm, Catherine M.; Heinle, Robert A.
1995-01-01
The in-flight elastic wing twist of a fighter-type aircraft was studied to provide for an improved on-board real-time computed prediction of pointing variations of three wing store stations. This is an important capability to correct sensor pod alignment variation or to establish initial conditions of iron bombs or smart weapons prior to release. The original algorithm was based upon coarse measurements. The electro-optical Flight Deflection Measurement System (FDMS) measured the deformed wing shape in flight under maneuver loads to provide a higher-resolution database from which an improved twist prediction algorithm could be developed. The FDMS produced excellent, repeatable data. In addition, a NASTRAN finite-element analysis was performed to provide additional elastic deformation data. The FDMS data combined with the NASTRAN analysis indicated that an improved prediction algorithm could be derived by using a different set of aircraft parameters, namely normal acceleration, stores configuration, Mach number, and gross weight.
Accurate mask-based spatially regularized correlation filter for visual tracking
NASA Astrophysics Data System (ADS)
Gu, Xiaodong; Xu, Xinping
2017-01-01
Recently, discriminative correlation filter (DCF)-based trackers have achieved extremely successful results in many competitions and benchmarks. These methods exploit a periodicity assumption on the training samples to learn a classifier efficiently. However, this assumption produces unwanted boundary effects, which severely degrade tracking performance. Correlation filters with limited boundaries and spatially regularized DCFs were proposed to reduce boundary effects. However, those methods use a fixed mask or a predesigned weight function, respectively, which is unsuitable for large appearance variation. We propose an accurate mask-based spatially regularized correlation filter for visual tracking. Our augmented objective reduces the boundary effect even under large appearance variation. In our algorithm, the masking matrix is converted into a regularization function that acts on the correlation filter in the frequency domain, which gives the algorithm fast convergence. Our online tracking algorithm performs favorably against state-of-the-art trackers on the OTB-2015 benchmark in terms of efficiency, accuracy, and robustness.
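The frequency-domain closed form that makes DCF trackers fast can be sketched as follows. This is a minimal MOSSE-style ridge regression on a single training patch, not the authors' mask-regularized filter; the patch size, desired-response width and regularization weight are illustrative assumptions:

```python
import numpy as np

def train_filter(x, y, lam=1e-2):
    # closed-form ridge-regression correlation filter in the Fourier domain:
    # H = conj(X) * Y / (conj(X) * X + lam)
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)

def respond(h_hat, x):
    # filter response on a patch; the maximum locates the target
    return np.real(np.fft.ifft2(h_hat * np.fft.fft2(x)))

n = 32
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
# desired Gaussian response peaked at the origin (circular distances)
d2 = np.minimum(yy, n - yy) ** 2 + np.minimum(xx, n - xx) ** 2
g = np.exp(-d2 / (2 * 2.0 ** 2))

rng = np.random.default_rng(1)
patch = rng.normal(size=(n, n))   # stand-in for image features
h_hat = train_filter(patch, g)
resp = respond(h_hat, patch)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

On the training patch itself the response reproduces the desired Gaussian, peaking at the origin; spatial regularization, as in the paper, modifies the denominator to down-weight filter energy outside the target mask.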
An Improved Binary Differential Evolution Algorithm to Infer Tumor Phylogenetic Trees.
Liang, Ying; Liao, Bo; Zhu, Wen
2017-01-01
Tumourigenesis is a mutation accumulation process, which is likely to start with a mutated founder cell. The evolutionary nature of tumor development makes phylogenetic models suitable for inferring tumor evolution through genetic variation data. Copy number variation (CNV) is a major genetic marker of the genome, involving many genes, disease loci, and functional elements. Fluorescence in situ hybridization (FISH) accurately measures the copy numbers of multiple genes in hundreds of single cells. We propose an improved binary differential evolution algorithm, BDEP, to infer tumor phylogenetic trees based on the FISH platform. The topology analysis of the tumor progression tree shows that the pathway of tumor subcell expansion varies greatly during different stages of tumor formation. The classification experiment shows that tree-based features are better than data-based features in distinguishing tumors. The constructed phylogenetic trees perform well in characterizing the tumor development process, outperforming other similar algorithms.
Pressure modulation algorithm to separate cerebral hemodynamic signals from extracerebral artifacts.
Baker, Wesley B; Parthasarathy, Ashwin B; Ko, Tiffany S; Busch, David R; Abramson, Kenneth; Tzeng, Shih-Yu; Mesquita, Rickson C; Durduran, Turgut; Greenberg, Joel H; Kung, David K; Yodh, Arjun G
2015-07-01
We introduce and validate a pressure measurement paradigm that reduces extracerebral contamination from superficial tissues in optical monitoring of cerebral blood flow with diffuse correlation spectroscopy (DCS). The scheme determines subject-specific contributions of extracerebral and cerebral tissues to the DCS signal by utilizing probe pressure modulation to induce variations in extracerebral blood flow. For analysis, the head is modeled as a two-layer medium and is probed with long and short source-detector separations. Then a combination of pressure modulation and a modified Beer-Lambert law for flow enables experimenters to linearly relate differential DCS signals to cerebral and extracerebral blood flow variation without a priori anatomical information. We demonstrate the algorithm's ability to isolate cerebral blood flow during a finger-tapping task and during graded scalp ischemia in healthy adults. Finally, we adapt the pressure modulation algorithm to ameliorate extracerebral contamination in monitoring of cerebral blood oxygenation and blood volume by near-infrared spectroscopy.
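The separation step described above can be sketched as a small synthetic example. Assuming (hypothetically) known sensitivity coefficients of the long and short source-detector separations to cerebral and extracerebral flow changes, the linearized modified Beer-Lambert relation reduces to a small linear solve; the matrix entries and flow changes below are invented for illustration:

```python
import numpy as np

# hypothetical sensitivity coefficients of the long/short separations
# to cerebral (c) and extracerebral (ec) fractional flow changes
A = np.array([[0.70, 0.30],    # long separation: senses both layers
              [0.05, 0.95]])   # short separation: mostly scalp

true = np.array([0.20, -0.10])  # (dF_c, dF_ec) fractional flow changes
signals = A @ true              # simulated differential DCS signals

# recover layer-specific flow changes from the two measurements
recovered = np.linalg.solve(A, signals)
```

In the actual paradigm, probe pressure modulation supplies the calibration data that pins down the extracerebral sensitivity without a priori anatomical information; here that calibration is simply assumed.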
Correction of rotational distortion for catheter-based en face OCT and OCT angiography
Ahsen, Osman O.; Lee, Hsiang-Chieh; Giacomelli, Michael G.; Wang, Zhao; Liang, Kaicheng; Tsai, Tsung-Han; Potsaid, Benjamin; Mashimo, Hiroshi; Fujimoto, James G.
2015-01-01
We demonstrate a computationally efficient method for correcting the nonuniform rotational distortion (NURD) in catheter-based imaging systems to improve endoscopic en face optical coherence tomography (OCT) and OCT angiography. The method performs nonrigid registration using fiducial markers on the catheter to correct rotational speed variations. Algorithm performance is investigated with an ultrahigh-speed endoscopic OCT system and micromotor catheter. Scan nonuniformity is quantitatively characterized, and artifacts from rotational speed variations are significantly reduced. Furthermore, we present endoscopic en face OCT and OCT angiography images of human gastrointestinal tract in vivo to demonstrate the image quality improvement using the correction algorithm. PMID:25361133
AgRISTARS. Supporting research: Algorithms for scene modelling
NASA Technical Reports Server (NTRS)
Rassbach, M. E. (Principal Investigator)
1982-01-01
The requirements for a comprehensive analysis of LANDSAT or other visual data scenes are defined. The development of a general model of a scene and a computer algorithm for finding the particular model for a given scene is discussed. The modelling system includes a boundary analysis subsystem, which detects all the boundaries and lines in the image and builds a boundary graph; a continuous variation analysis subsystem, which finds gradual variations not well approximated by a boundary structure; and a miscellaneous features analysis, which includes texture, line parallelism, etc. The noise reduction capabilities of this method and its use in image rectification and registration are discussed.
NASA Astrophysics Data System (ADS)
Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng
2017-05-01
Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates dictionary sparse coding (DSC) into a total variation minimization-based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.
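The ADMM building block mentioned above can be illustrated in one dimension. This is a generic total-variation denoiser (minimizing 0.5‖x−y‖² + λ‖Dx‖₁ via alternating direction method of multipliers), not the full PET reconstruction; the signal, λ and ρ values are illustrative:

```python
import numpy as np

def soft(v, t):
    # soft-thresholding: the proximal operator of the L1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_admm(y, lam=1.0, rho=1.0, iters=200):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)      # forward-difference operator
    A = np.eye(n) + rho * D.T @ D       # x-update system matrix
    x = y.copy()
    z = np.zeros(n - 1)                 # auxiliary variable for Dx
    u = np.zeros(n - 1)                 # scaled dual variable
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        z = soft(D @ x + u, lam / rho)
        u = u + D @ x - z
    return x

# piecewise-constant signal plus noise: TV preserves the jump
rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.normal(size=100)
den = tv_denoise_admm(noisy, lam=0.5)
```

The same alternating structure, solve a quadratic subproblem, apply a proximal step, update the dual, underlies the coupled activity/parametric-map iterations in the paper.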
Measuring Leaf Area in Soy Plants by HSI Color Model Filtering and Mathematical Morphology
NASA Astrophysics Data System (ADS)
Benalcázar, M.; Padín, J.; Brun, M.; Pastore, J.; Ballarin, V.; Peirone, L.; Pereyra, G.
2011-12-01
There has lately been significant progress in automating tasks for the agricultural sector. One of the advances is the development of robots, based on computer vision, applied to the care and management of soy crops. In this task, digital image processing plays an important role, but it must solve some difficult problems, such as those associated with variations in lighting conditions during image acquisition. Such variations directly influence the brightness level of the images to be processed. In this paper we propose an algorithm to segment and automatically measure the leaf area of soy plants. This information is used by specialists to evaluate and compare the growth of different soy genotypes. The algorithm, based on color filtering using the HSI model, detects green objects against the image background. The segmentation of leaves (foliage) is performed by applying Mathematical Morphology. The foliage area is estimated by counting the pixels that belong to the segmented leaves. In several experiments, consisting of applying the algorithm to measure the foliage of about fifty plants of various soy genotypes at different growth stages, we obtained successful results, despite the high brightness variations and shadows in the processed images.
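The pipeline described (color filtering, morphological opening, pixel counting) can be sketched with a simple green-dominance mask standing in for a full HSI conversion; the margin threshold and the synthetic image are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def erode(mask):
    # 3x3 binary erosion via shifted logical ANDs (padded borders)
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + mask.shape[0],
                     1 + dx:1 + dx + mask.shape[1]]
    return out

def dilate(mask):
    # 3x3 binary dilation via shifted logical ORs
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + mask.shape[0],
                     1 + dx:1 + dx + mask.shape[1]]
    return out

def leaf_area(rgb, margin=20):
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (g > r + margin) & (g > b + margin)  # green-dominant pixels
    mask = dilate(erode(mask))                  # opening removes speckle
    return int(mask.sum())                      # pixel count = area proxy

# synthetic "plant": a green square on gray background plus one noise pixel
img = np.full((64, 64, 3), 120, dtype=np.uint8)
img[16:48, 16:48] = (40, 180, 40)
img[2, 2] = (40, 180, 40)   # isolated green pixel, removed by the opening
area = leaf_area(img)
```

The opening deletes the isolated pixel while restoring the solid square exactly, so the count equals the true foliage area of the synthetic image.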
SU-F-J-180: A Reference Data Set for Testing Two Dimension Registration Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dankwa, A; Castillo, E; Guerrero, T
Purpose: To create and characterize a reference data set for testing image registration algorithms that transform portal images (PI) to digitally reconstructed radiographs (DRR). Methods: Anterior-posterior (AP) and lateral (LAT) projection and DRR image pairs from nine cases representing four different anatomical sites (head and neck, thoracic, abdominal, and pelvis) were selected for this study. Five experts will perform manual registration by placing landmark points (LMPs) on the DRR and finding their corresponding points on the PI using the computer-assisted manual point selection tool (CAMPST), a custom MATLAB software tool developed in house. The landmark selection process will be repeated on both the PI and the DRR in order to characterize inter- and intra-observer variations associated with the point selection process. Inter- and intra-observer variation in LMPs was assessed using Bland-Altman (B&A) analysis and one-way analysis of variance (ANOVA). We set our limit such that the absolute value of the mean difference between the readings should not exceed 3 mm. Later in this project we will test different two-dimensional (2D) image registration algorithms and quantify the uncertainty associated with their registration. Results: Using one-way ANOVA, there were no significant variations within the readers. When Bland-Altman analysis was used, the variation within the readers was acceptable. The variation was higher in the PI compared to the DRR. Conclusion: The variation seen for the PI arises because, although the PI has much better spatial resolution, the poor resolution of the DRR makes it difficult to locate the corresponding anatomical feature on the PI. We hope this becomes more evident when all the readers complete the point selection. Quantifying inter- and intra-observer variation tells us to what degree of accuracy a manual registration can be done. Research supported by William Beaumont Hospital Research Start Up Fund.
Comparison of atmospheric correction algorithms for the Coastal Zone Color Scanner
NASA Technical Reports Server (NTRS)
Tanis, F. J.; Jain, S. C.
1984-01-01
Before Nimbus-7 Coastal Zone Color Scanner (CZCS) data can be used to distinguish between coastal water types, methods must be developed for the removal of spatial variations in aerosol path radiance, which can dominate radiance measurements made by the satellite. An assessment is presently made of the ability of four different algorithms to quantitatively remove haze effects; each was adapted for the extraction of the required scene-dependent parameters during an initial pass through the data set. The CZCS correction algorithms considered are (1) the Gordon (1981, 1983) algorithm; (2) the Smith and Wilson (1981) iterative algorithm; (3) the pseudo-optical depth method; and (4) the residual component algorithm.
Path connectivity based spectral defragmentation in flexible bandwidth networks.
Wang, Ying; Zhang, Jie; Zhao, Yongli; Zhang, Jiawei; Zhao, Jie; Wang, Xinbo; Gu, Wanyi
2013-01-28
Optical networks with flexible bandwidth provisioning have become a very promising networking architecture. They enable efficient resource utilization and support heterogeneous bandwidth demands. In this paper, two novel spectrum defragmentation approaches, the Maximum Path Connectivity (MPC) algorithm and the Path Connectivity Triggering (PCT) algorithm, are proposed based on the notion of path connectivity, which is defined to represent the maximum variation of node switching ability along a path in flexible bandwidth networks. A cost-performance-ratio based profitability model is given to capture the pros and cons of spectrum defragmentation. We compare the two proposed algorithms with a non-defragmentation algorithm in terms of blocking probability, and then analyze the differences in defragmentation profitability between the MPC and PCT algorithms.
Game theory-based visual tracking approach focusing on color and texture features.
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Chen, Chuanhua; Wang, Xin
2017-07-20
It is difficult for a single-feature tracking algorithm to achieve strong robustness in a complex environment. To solve this problem, we propose a multifeature fusion tracking algorithm based on game theory. Treating the color and texture features as two players, the algorithm accomplishes tracking by using a mean shift iterative formula to search for the Nash equilibrium of the game. The contributions of the different features are kept in optimal balance, so that the algorithm can take full advantage of feature fusion. According to the experimental results, the algorithm performs well, especially under scene variation, target occlusion, and interference from similar objects.
Mechanic: The MPI/HDF code framework for dynamical astronomy
NASA Astrophysics Data System (ADS)
Słonina, Mariusz; Goździewski, Krzysztof; Migaszewski, Cezary
2015-01-01
We introduce Mechanic, a new open-source code framework. It is designed to reduce the development effort of scientific applications by providing a unified API (Application Programming Interface) for configuration, data storage and task management. The communication layer is based on the well-established Message Passing Interface (MPI) standard, which is widely used on a variety of parallel computers and CPU clusters. The data storage is performed within the Hierarchical Data Format (HDF5). The design of the code follows a core-module approach which reduces the user's codebase and makes it portable for single- and multi-CPU environments. The framework may be used in a local user's environment, without administrative access to the cluster, under the PBS or Slurm job schedulers. It may become a helper tool for a wide range of astronomical applications, particularly those focused on processing large data sets, such as dynamical studies of the long-term orbital evolution of planetary systems with Monte Carlo methods, dynamical maps or evolutionary algorithms. It has already been applied in numerical experiments conducted for the Kepler-11 (Migaszewski et al., 2012) and νOctantis (Goździewski et al., 2013) planetary systems. In this paper we describe the basics of the framework, including code listings for the implementation of a sample user's module. The code is illustrated on a model Hamiltonian introduced by Froeschlé et al. (2000) presenting the Arnold diffusion. The Arnold web is shown with the help of the MEGNO (Mean Exponential Growth of Nearby Orbits) fast indicator (Goździewski et al., 2008a) applied to the symplectic SABAn integrator family (Laskar and Robutel, 2001).
Mercury - A New Software Package for Orbital Integrations
NASA Astrophysics Data System (ADS)
Chambers, J. E.; Migliorini, F.
1997-07-01
We present Mercury: a new general-purpose software package for carrying out orbital integrations for problems in solar-system dynamics. Suitable applications include studying the long-term stability of the planetary system, investigating the orbital evolution of comets, asteroids or meteoroids, and simulating planetary accretion. Mercury is designed to be versatile and easy to use, accepting initial conditions in either Cartesian coordinates or Keplerian elements in ``cometary'' or ``asteroidal'' format, with different epochs of osculation for different objects. Output from an integration consists of either osculating or averaged (``proper'') elements, written in a machine-independent compressed format, which allows the results of a calculation performed on one platform to be transferred (e.g. via FTP) and decoded on another. Mercury itself is platform independent, and can be run on machines using DEC Unix, Open VMS, HP Unix, Solaris, Linux or DOS. During an integration, Mercury monitors and records details of close encounters, sungrazing events, ejections and collisions between objects. The effects of non-gravitational forces on comets can also be modelled. Additional effects such as Poynting-Robertson drag, post-Newtonian corrections, oblateness of the primary, and the galactic potential will be incorporated in future. The package currently supports integrations using a mixed-variable symplectic routine, the Bulirsch-Stoer method, and a hybrid code for planetary accretion calculations; with Everhart's popular RADAU algorithm and a symmetric multistep routine to be added shortly. Our presentation will include a demonstration of the latest version of Mercury, with the explicit aim of getting feedback from potential users and incorporating these suggestions into a final version that will be made available to everybody.
Reduced projection angles for binary tomography with particle aggregation.
Al-Rifaie, Mohammad Majid; Blackwell, Tim
This paper extends the particle aggregate reconstruction technique (PART), a reconstruction algorithm for binary tomography based on the movement of particles. PART supposes that pixel values are particles, and that particles diffuse through the image, staying together in regions of uniform pixel value known as aggregates. In this work, a variation of this algorithm is proposed, with a focus on reducing the number of projections and whether this impacts the reconstruction of images. The algorithm is tested on three phantoms of varying sizes and numbers of forward projections, and compared to filtered back projection, a random search algorithm and SART, a standard algebraic reconstruction method. It is shown that the proposed algorithm outperforms the aforementioned algorithms on small numbers of projections, which potentially makes it attractive in scenarios where collecting fewer projection data is unavoidable.
NASA Astrophysics Data System (ADS)
Rimlinger, Thomas; Hamilton, Douglas; Hahn, Joseph M.
2017-06-01
We are in the process of developing a useful extension to the N-body integrator HNBody (Rauch & Hamilton 2002), enabling it to simulate a viscous, self-gravitating ring orbiting an oblate body. Our algorithm follows that used in the symplectic integrator epi_int (Hahn & Spitale 2013), in which the ring is simulated as many (~100) interacting, elliptic, confocal streamlines. This idea was first introduced in an analytic context by Goldreich & Tremaine (1979) and enabled rapid progress in the theory of ring evolution; since then, such discretization has been standard in the literature. While we adopt epi_int’s streamline formalism, we nevertheless improve upon its design in several ways. Epi_int uses epicyclic elements in its drift step; approximating these elements introduces small, systematic errors that build up with time. We sidestep this problem by instead using the more traditional Keplerian osculating elements. In addition, epi_int uses several particles per wire to effectively calculate the inter-gravitational forces everywhere along each streamline. We replicate this ability but can often gain a speed boost by using a single tracer particle per streamline; while this restricts us to simulating rings dominated by the m = 1 mode, this is typical of most observed narrow eccentric ringlets. We have also extended epi_int’s two dimensional algorithm into 3D. Finally, whereas epi_int is written in IDL, HNBody is written in C, which yields considerably faster integrations.Braga-Ribas et al. (2014) reported a set of narrow rings orbiting the Centaur Chariklo, but neither their investigation nor that of Pan & Wu (2016) yielded a satisfactory origin and evolution scenario. Eschewing the assumption that such rings must be short-lived, we instead argue (as in Rimlinger et al. 2016) that sufficiently eccentric rings can self-confine for hundreds of millions of years while circularizing. In this case, Chariklo may have formed rings as a KBO. 
We are working towards demonstrating both the feasibility of this theory and the utility of the HNBody extension by using it to simulate such a ring around Chariklo.
Algorithms, complexity, and the sciences
Papadimitriou, Christos
2014-01-01
Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382
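The MWU rule referred to above is simple to state: each allele's frequency is multiplied by (1 + ε·fitness) and the distribution is renormalized every generation. A minimal sketch, with illustrative fitness values and learning rate (not from the paper), shows the fittest allele coming to dominate:

```python
import numpy as np

def mwu(payoffs, eps=0.1, rounds=200):
    # multiplicative weights update: each weight grows in proportion
    # to (1 + eps * payoff) per round, then the vector is renormalized
    n = payoffs.shape[1]
    w = np.ones(n) / n                  # uniform initial frequencies
    for t in range(rounds):
        w = w * (1.0 + eps * payoffs[t % len(payoffs)])
        w = w / w.sum()
    return w

# three "alleles" with constant illustrative fitnesses
payoffs = np.array([[0.1, 0.5, 0.3]])
freq = mwu(payoffs)
```

With constant payoffs the update reduces to replicator-style selection; with small ε it trades off cumulative fitness against the entropy of the distribution, which is the convex-combination interpretation mentioned in the abstract.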
Imaging tilted transversely isotropic media with a generalised screen propagator
NASA Astrophysics Data System (ADS)
Shin, Sung-Il; Byun, Joongmoo; Seol, Soon Jee
2015-01-01
One-way wave equation migration is computationally efficient compared with reverse time migration, and it provides a better subsurface image than ray-based migration algorithms when imaging complex structures. Among many one-way wave-based migration algorithms, we adopted the generalised screen propagator (GSP) to build the migration algorithm. When the wavefield propagates through large lateral velocity variations or steeply dipping structures, GSP increases the wide-angle accuracy of the wavefield by including higher-order terms from the Taylor-series expansion of the vertical slowness in each perturbation term. To apply the migration algorithm to more realistic geological structures, we considered tilted transversely isotropic (TTI) media. The new GSP, which contains the tilting angle as the symmetry axis of the anisotropic media, was derived by modifying the GSP designed for vertical transversely isotropic (VTI) media. To verify the developed TTI-GSP, we analysed the accuracy of wave propagation, especially for the new perturbation parameters and the tilting angle; the results clearly showed that the perturbation term of the tilting angle in TTI media has considerable effects on proper propagation. In addition, through numerical tests, we demonstrated that the developed TTI-GSP migration algorithm can successfully image a steeply dipping salt flank with high velocity variation around anisotropic layers.
Wang, Jin; Zhang, Chen; Wang, Yuanyuan
2017-05-30
In photoacoustic tomography (PAT), total variation (TV) based iteration algorithms are reported to perform well in PAT image reconstruction. However, the classical TV-based algorithm fails to preserve the edges and texture details of the image because it is not sensitive to the direction of the image. It is therefore of great significance to develop a new PAT reconstruction algorithm that effectively addresses this drawback of TV. In this paper, a directional total variation with adaptive directivity (DDTV) model-based PAT image reconstruction algorithm, which weightedly sums the image gradients based on the spatially varying directivity pattern of the image, is proposed to overcome the shortcomings of TV. The orientation field of the image is adaptively estimated through a gradient-based approach. The image gradients are weighted at every pixel based on both its anisotropic direction and another parameter that evaluates the reliability of the estimated orientation field. An efficient algorithm is derived to solve the iteration problem associated with DDTV, with the directivity of the image adaptively updated at each iteration step. Several texture images with various directivity patterns are chosen as the phantoms for the numerical simulations. The 180-, 90- and 30-view circular scans are conducted. Results obtained show that the DDTV-based PAT reconstruction algorithm outperforms the filtered back-projection method (FBP) and TV algorithms in the quality of reconstructed images, with peak signal-to-noise ratios (PSNR) exceeding those of TV and FBP by about 10 and 18 dB, respectively, for all cases. The Shepp-Logan phantom is studied with further discussion of multimode scanning, convergence speed, robustness and universality aspects. In-vitro experiments are performed for both sparse-view circular scanning and linear scanning. The results further prove the effectiveness of the DDTV, which shows better results than the TV, with sharper image edges and clearer texture details. Both the numerical simulations and the in-vitro experiments confirm that the DDTV provides a significant quality improvement of PAT reconstructed images for various directivity patterns.
Reconstruction of multiple cracks from experimental electrostatic boundary measurements
NASA Technical Reports Server (NTRS)
Bryan, Kurt; Liepa, Valdis; Vogelius, Michael
1993-01-01
An algorithm for recovering a collection of linear cracks in a homogeneous electrical conductor from boundary measurements of voltages induced by specified current fluxes is described. The technique is a variation of Newton's method and is based on taking weighted averages of the boundary data. An apparatus that was constructed specifically for generating laboratory data on which to test the algorithm is also described. The algorithm is applied to a number of different test cases and the results are discussed.
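The Newton-type iteration underlying such parameter recovery can be sketched generically: hidden parameters are recovered from boundary measurements by repeatedly solving the linearized system. The two-parameter forward model below is hypothetical, standing in for the crack-to-boundary-data map; only the iteration structure matches the abstract:

```python
import numpy as np

def newton(residual, jacobian, p0, iters=20):
    # basic Newton iteration: p <- p - J(p)^{-1} r(p)
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        p = p - np.linalg.solve(jacobian(p), residual(p))
    return p

# hypothetical forward model: two "boundary measurements" depending
# nonlinearly on two crack parameters (a, b)
def forward(p):
    a, b = p
    return np.array([a**2 + b, np.sin(a) + b**2])

target = forward(np.array([0.5, 0.3]))   # synthetic measured data
res = lambda p: forward(p) - target
jac = lambda p: np.array([[2 * p[0], 1.0],
                          [np.cos(p[0]), 2 * p[1]]])
sol = newton(res, jac, [0.8, 0.5])       # start from a nearby guess
```

From a nearby starting guess the iteration converges quadratically to parameters that reproduce the measurements; the paper's variation replaces the raw residual with weighted averages of the boundary data.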
The simulation of magnetic resonance elastography through atherosclerosis.
Thomas-Seale, L E J; Hollis, L; Klatt, D; Sack, I; Roberts, N; Pankaj, P; Hoskins, P R
2016-06-14
The clinical diagnosis of atherosclerosis via the measurement of stenosis size is widely acknowledged as an imperfect criterion. The vulnerability of an atherosclerotic plaque to rupture is associated with its mechanical properties. The potential to image these mechanical properties using magnetic resonance elastography (MRE) was investigated through synthetic datasets. An image of the steady-state wave propagation, equivalent to the first harmonic, can be extracted directly from finite element analysis. Inversion of this displacement data yields a map of the shear modulus, known as an elastogram. The variation of plaque composition, stenosis size, Gaussian noise, filter thresholds and excitation frequency were explored. A decreasing mean shear modulus with increasing lipid composition was identified across all stenosis sizes. However, the inversion algorithm showed sensitivity to parameter variation, leading to artefacts which disrupted both the elastograms and the quantitative trends. As noise was increased up to a realistic level, the contrast was maintained between the fully fibrous and lipid plaques but lost between the interim compositions. Although incorporating a Butterworth filter improved the performance of the algorithm, restrictive filter thresholds reduced the sensitivity of the algorithm to composition and noise variation. Increasing the excitation frequency improved the technique's ability to image the magnitude of the shear modulus and to identify a contrast between compositions. In conclusion, whilst the technique has the potential to image the shear modulus of atherosclerotic plaques, future research will require the integration of a heterogeneous inversion algorithm.
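The inversion step (displacement field to shear-modulus elastogram) can be illustrated in one dimension under the usual local-homogeneity assumption μ∇²u = −ρω²u, so μ = −ρω²u/∇²u. The density, excitation frequency and wave speed below are illustrative; for a plane shear wave the inversion recovers μ = ρc²:

```python
import numpy as np

rho, freq, c = 1000.0, 50.0, 3.0     # density (kg/m^3), Hz, shear speed (m/s)
omega = 2 * np.pi * freq
k = omega / c
x = np.linspace(0.0, 1.0, 2001)
u = np.sin(k * x)                    # steady-state displacement field

# algebraic Helmholtz inversion: mu = -rho * omega^2 * u / laplacian(u)
d2u = np.gradient(np.gradient(u, x), x)
interior = slice(10, -10)            # discard boundary stencil artifacts
u_i, d2u_i = u[interior], d2u[interior]
good = np.abs(d2u_i) > 1e-3 * np.abs(d2u_i).max()   # avoid near-zero division
mu = -rho * omega**2 * u_i[good] / d2u_i[good]
mu_est = np.median(mu)               # pixel-wise estimates -> robust summary
```

The recovered modulus matches ρc² = 9 kPa to within the finite-difference error; noise amplification by the double derivative is exactly why the paper needs filtering and why restrictive thresholds degrade sensitivity.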
Parallel algorithm of real-time infrared image restoration based on total variation theory
NASA Astrophysics Data System (ADS)
Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei
2015-10-01
Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove the noise but over-penalize the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional, converting the restoration process into the optimization of a functional comprising a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model fully exploits the remote sensing data and preserves edge information caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm is easily parallelized. Therefore a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. The massive computation on image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis of the restored image quality relative to the input image is presented. Experimental results show that the TV-L1 filter can restore the varying background image reasonably, and that its performance meets the requirements of real-time image processing.
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
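The classical baseline that the quantum algorithm improves upon can be made concrete with a small sketch: estimating the mean output of a bounded-variance randomized subroutine by plain sampling, whose additive error shrinks only like 1/sqrt(N) in the number of subroutine runs (the Bernoulli subroutine and all numbers below are illustrative; the quantum algorithm achieves roughly 1/N scaling in the number of subroutine uses):

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_MEAN = 0.7

def subroutine(n):
    # stand-in for n runs of an arbitrary randomized subroutine with
    # bounded variance: here Bernoulli(0.7) outputs in {0, 1}
    return (rng.random(n) < TRUE_MEAN).astype(float)

def classical_estimate(n):
    # plain Monte Carlo: additive error eps needs O(sigma^2 / eps^2) runs;
    # the quantum algorithm needs only about O(sigma / eps) subroutine uses
    return subroutine(n).mean()

def rmse(n, trials=200):
    ests = np.array([classical_estimate(n) for _ in range(trials)])
    return np.sqrt(np.mean((ests - TRUE_MEAN) ** 2))

r_small, r_large = rmse(100), rmse(10000)   # error shrinks like 1/sqrt(n)
```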
Quantum speedup of Monte Carlo methods
Montanaro, Ashley
2015-01-01
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079
A Family of Algorithms for Computing Consensus about Node State from Network Data
Brush, Eleanor R.; Krakauer, David C.; Flack, Jessica C.
2013-01-01
Biological and social networks are composed of heterogeneous nodes that contribute differentially to network structure and function. A number of algorithms have been developed to measure this variation. These algorithms have proven useful for applications that require assigning scores to individual nodes, from ranking websites to determining critical species in ecosystems, yet the mechanistic basis for why they produce good rankings remains poorly understood. We show that a unifying property of these algorithms is that they quantify consensus in the network about a node's state or capacity to perform a function. The algorithms capture consensus either by taking into account the number of a target node's direct connections and, when the edges are weighted, the uniformity of its weighted in-degree distribution (breadth), or by measuring net flow into a target node (depth). Using data from communication, social, and biological networks we find that how an algorithm measures consensus, through breadth or depth, impacts its ability to correctly score nodes. We also observe variation in sensitivity to source biases in interaction/adjacency matrices: errors arising from systematic error at the node level or direct manipulation of network connectivity by nodes. Our results indicate that the breadth algorithms, which are derived from information theory, correctly score nodes (assessed using independent data) and are robust to errors. However, in cases where nodes "form opinions" about other nodes using indirect information, like reputation, depth algorithms, like Eigenvector Centrality, are required. One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative. In these cases the network structure allows the depth algorithms to effectively capture breadth as well as depth. Finally, we discuss the algorithms' cognitive and computational demands. This is an important consideration in systems in which individuals use the collective opinions of others to make decisions. PMID:23874167
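The breadth/depth distinction can be sketched on a toy graph: a breadth-style score is simply the (weighted) in-degree, while a depth-style score such as Eigenvector Centrality follows from power iteration on the adjacency matrix (the example graph is illustrative, not taken from the paper's datasets):

```python
import numpy as np

def eigenvector_centrality(A, n_iter=200, tol=1e-10):
    """Depth-style consensus score: power iteration converges to the
    leading eigenvector of the adjacency matrix."""
    x = np.ones(A.shape[0])
    for _ in range(n_iter):
        x_new = A @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# toy undirected graph: triangle 0-1-2 plus a pendant node 3 attached to 0
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)

depth = eigenvector_centrality(A)   # depth-style score
breadth = A.sum(axis=0)             # breadth-style score: weighted in-degree
```

On this graph both scores agree that the hub (node 0) ranks highest; the two notions diverge on graphs where indirect, reputation-like information matters.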
Xu, Q; Yang, D; Tan, J; Anastasio, M
2012-06-01
To improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction based on a fast-convergence iterative algorithm accelerated by multiple GPUs. An iterative image reconstruction algorithm that minimizes a weighted least-squares cost function with total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively. The RMSE was reduced to 10^-4 and 10^-2, respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose and preserve good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast-convergence iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, from the few-view study, the iterative algorithm has shown great potential for significantly reducing imaging dose. We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
2012-08-15
Environmental Model ( GDEM ) 72 levels) was conserved in the interpolated profiles and small variations in the vertical field may have led to large...Planner ETKF Ensemble Transform Kalman Filter G8NCOM 1/8° Global NCOM GA Genetic Algorithm GDEM Generalized Digital Environmental Model GOST
Intra-pulse modulation recognition using short-time Ramanujan Fourier transform spectrogram
NASA Astrophysics Data System (ADS)
Ma, Xiurong; Liu, Dan; Shan, Yunlong
2017-12-01
Intra-pulse modulation recognition under a negative signal-to-noise ratio (SNR) environment is a research challenge. This article presents a robust algorithm for the recognition of 5 types of radar signals with a large variation range in the signal parameters at low SNR, using the combination of the short-time Ramanujan Fourier transform (ST-RFT) and pseudo-Zernike moment invariant features. The ST-RFT provides the time-frequency distribution features for the 5 modulations. The pseudo-Zernike moments provide invariance properties that make it possible to recognize different modulation schemes under different parameter variation conditions from the ST-RFT spectrograms. Simulation results demonstrate that the proposed algorithm achieves a probability of successful recognition (PSR) of over 90% when the SNR is above -5 dB with a large variation range in the signal parameters: carrier frequency (CF) for all considered signals, hop size (HS) for frequency shift keying (FSK) signals, and the time-bandwidth product for linear frequency modulation (LFM) signals.
Regularization of Mickelsson generators for nonexceptional quantum groups
NASA Astrophysics Data System (ADS)
Mudrov, A. I.
2017-08-01
Let g' ⊂ g be a pair of Lie algebras of either symplectic or orthogonal infinitesimal endomorphisms of the complex vector spaces C^(N-2) ⊂ C^N, and let U_q(g') ⊂ U_q(g) be a pair of quantum groups with a triangular decomposition U_q(g) = U_q(g_-)U_q(g_+)U_q(h). Let Z_q(g, g') be the corresponding step algebra. We assume that its generators are rational trigonometric functions h* → U_q(g_±). We describe their regularization such that the resulting generators do not vanish for any choice of the weight.
NASA Astrophysics Data System (ADS)
Nutku, Y.
1985-06-01
We point out a class of nonlinear wave equations which admit infinitely many conserved quantities. These equations are characterized by a pair of exact one-forms. The implication that they are closed gives rise to equations, the characteristics and Riemann invariants of which are readily obtained. The construction of the conservation laws requires the solution of a linear second-order equation which can be reduced to canonical form using the Riemann invariants. The hodograph transformation results in a similar linear equation. We discuss also the symplectic structure and Bäcklund transformations associated with these equations.
How many invariant polynomials are needed to decide local unitary equivalence of qubit states?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maciążek, Tomasz; Faculty of Physics, University of Warsaw, ul. Hoża 69, 00-681 Warszawa; Oszmaniec, Michał
2013-09-15
Given L-qubit states with fixed spectra of the reduced one-qubit density matrices, we find a formula for the minimal number of invariant polynomials needed for solving the local unitary (LU) equivalence problem, that is, the problem of deciding whether two states can be connected by local unitary operations. Interestingly, this number is not the same for every collection of the spectra. Some spectra require fewer polynomials to solve the LU equivalence problem than others. The result is obtained using geometric methods, i.e., by calculating the dimensions of reduced spaces stemming from the symplectic reduction procedure.
Regularization of the Perturbed Spatial Restricted Three-Body Problem by L-Transformations
NASA Astrophysics Data System (ADS)
Poleshchikov, S. M.
2018-03-01
Equations of motion for the perturbed circular restricted three-body problem have been regularized in canonical variables in a moving coordinate system. Two different L-matrices of the fourth order are used in the regularization. Conditions for generalized symplecticity of the constructed transform have been checked. In the unperturbed case, the regular equations have a polynomial structure. The regular equations have been numerically integrated using the Runge-Kutta-Fehlberg method. The results of numerical experiments are given for the Earth-Moon system parameters taking into account the perturbation of the Sun for different L-matrices.
A Hamilton-Jacobi theory for implicit differential systems
NASA Astrophysics Data System (ADS)
Esen, Oǧul; de León, Manuel; Sardón, Cristina
2018-02-01
In this paper, we propose a geometric Hamilton-Jacobi theory for systems of implicit differential equations. In particular, we are interested in implicit Hamiltonian systems, described in terms of Lagrangian submanifolds of TT*Q generated by Morse families. The implicit character implies the nonexistence of a Hamiltonian function describing the dynamics. This fact is here amended by a generating family of Morse functions which plays the role of a Hamiltonian. A Hamilton-Jacobi equation is obtained with the aid of this generating family of functions. To conclude, we apply our results to singular Lagrangians by employing the construction of special symplectic structures.
Equivariant branes and equivariant homological mirror symmetry
NASA Astrophysics Data System (ADS)
Ashwinkumar, Meer; Tan, Meng-Chwan
2018-03-01
We describe supersymmetric A-branes and B-branes in open N =(2 ,2 ) dynamically gauged nonlinear sigma models (GNLSM), placing emphasis on toric manifold target spaces. For a subset of toric manifolds, these equivariant branes have a mirror description as branes in gauged Landau-Ginzburg models with neutral matter. We then study correlation functions in the topological A-twisted version of the GNLSM and identify their values with open Hamiltonian Gromov-Witten invariants. Supersymmetry breaking can occur in the A-twisted GNLSM due to nonperturbative open symplectic vortices, and we canonically Becchi-Rouet-Stora-Tyutin quantize the mirror theory to analyze this phenomenon.
NASA Astrophysics Data System (ADS)
Dvorak, R.; Henrard, J.
1993-06-01
Topics addressed include planetary theories, the Sitnikov problem, asteroids, resonance, general dynamical systems, and chaos and stability. Particular attention is given to recent progress in the theory and application of symplectic integrators, a computer-aided analysis of the Sitnikov problem, the chaotic behavior of trajectories for the asteroidal resonances, and the resonant motion in the restricted three-body problem. Also discussed are the second order long-period motion of Hyperion, meteorites from the asteroid 6 Hebe, and least squares parameter estimation in chaotic differential equations.
Polynomial approximation of Poincare maps for Hamiltonian system
NASA Technical Reports Server (NTRS)
Froeschle, Claude; Petit, Jean-Marc
1992-01-01
Different methods are proposed and tested for transforming a non-linear differential system, and more particularly a Hamiltonian one, into a map without integrating the whole orbit as in the well-known Poincare return map technique. We construct piecewise polynomial maps by coarse-graining the phase-space surface of section into parallelograms and using either only values of the Poincare maps at the vertices or also the gradient information at the nearest neighbors to define a polynomial approximation within each cell. The numerical experiments are in good agreement with both the real symplectic and Poincare maps.
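The coarse-graining idea can be sketched with a toy map: tabulate the map at grid vertices and evaluate a piecewise-bilinear interpolant inside each cell using vertex values only (the map, the rectangular cells in place of general parallelograms, and all parameters are illustrative assumptions; the paper also considers fits that use gradient information at neighboring points):

```python
import numpy as np

def T(x, y):
    # stand-in two-dimensional map playing the role of the Poincare map
    return y, -x + y**2

# tabulate T at the vertices of a coarse grid over [-1, 1]^2
n = 21
xs = np.linspace(-1, 1, n)
TX = np.empty((n, n))
TY = np.empty((n, n))
for i, xv in enumerate(xs):
    for j, yv in enumerate(xs):
        TX[i, j], TY[i, j] = T(xv, yv)

def T_interp(x, y):
    """Piecewise-bilinear approximation of T built from vertex values only."""
    h = xs[1] - xs[0]
    i = min(int((x - xs[0]) / h), n - 2)       # cell containing (x, y)
    j = min(int((y - xs[0]) / h), n - 2)
    s = (x - xs[i]) / h
    t = (y - xs[j]) / h
    wts = ((1 - s) * (1 - t), s * (1 - t), (1 - s) * t, s * t)
    cnr = ((i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1))
    return (sum(wk * TX[c] for wk, c in zip(wts, cnr)),
            sum(wk * TY[c] for wk, c in zip(wts, cnr)))

exact = T(0.33, -0.41)
approx = T_interp(0.33, -0.41)
err = max(abs(exact[0] - approx[0]), abs(exact[1] - approx[1]))
```

For a smooth map the interpolation error shrinks like the square of the cell size, which is why a modest grid already reproduces the tabulated map closely.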
Exponential Methods for the Time Integration of Schroedinger Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cano, B.; Gonzalez-Pachon, A.
2010-09-30
We consider exponential methods of second order in time in order to integrate the cubic nonlinear Schroedinger equation. We are interested in exploiting the special structure of this equation. Therefore, we look at symmetry, symplecticity and approximation of invariants of the proposed methods. This allows integration over long times with reasonable accuracy. Computational efficiency is also our aim. Therefore, we perform numerical computations in order to compare the methods considered, and we conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool to integrate this equation.
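A minimal sketch of this idea: a second-order Lawson-type (exponential RK2) step for the cubic NLS on a periodic grid, with the solution projected back onto the norm of the initial datum after each step. The particular scheme, equation normalization, and parameters here are assumptions, not the paper's exact choices:

```python
import numpy as np

# cubic NLS  i u_t = -u_xx - |u|^2 u  on a periodic interval, advanced by a
# second-order Lawson-type (exponential RK2) scheme with norm projection
M = 256
Lx = 8 * np.pi
x = np.linspace(0, Lx, M, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(M, d=Lx / M)
h = 1e-3

E_half = np.exp(-1j * k**2 * h / 2)      # exact half-step linear propagator
E_full = E_half**2

def N(u):
    # nonlinear part of u_t for this normalization of the equation
    return 1j * np.abs(u)**2 * u

u = 1.0 / np.cosh(x - Lx / 2) + 0j       # smooth localized initial datum
norm0 = np.linalg.norm(u)
for _ in range(1000):                    # advance to t = 1
    u_mid = np.fft.ifft(E_half * np.fft.fft(u + 0.5 * h * N(u)))
    u = np.fft.ifft(E_full * np.fft.fft(u) + h * E_half * np.fft.fft(N(u_mid)))
    u *= norm0 / np.linalg.norm(u)       # projection onto the L2 norm
```

The linear part is integrated exactly in Fourier space, so only the nonlinear term limits the step size, and the projection enforces conservation of the discrete L2 norm exactly.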
Qudit quantum computation on matrix product states with global symmetry
NASA Astrophysics Data System (ADS)
Wang, Dongsheng; Stephen, David; Raussendorf, Robert
Resource states that contain nontrivial symmetry-protected topological order are identified for universal measurement-based quantum computation. Our resource states fall into two classes: one as the qudit generalizations of the qubit cluster state, and the other as the higher-symmetry generalizations of the spin-1 Affleck-Kennedy-Lieb-Tasaki (AKLT) state, namely, with unitary, orthogonal, or symplectic symmetry. The symmetry in cluster states protects information propagation (identity gate), while the higher symmetry in AKLT-type states enables nontrivial gate computation. This work demonstrates a close connection between measurement-based quantum computation and symmetry-protected topological order.
Qudit quantum computation on matrix product states with global symmetry
NASA Astrophysics Data System (ADS)
Wang, Dong-Sheng; Stephen, David T.; Raussendorf, Robert
2017-03-01
Resource states that contain nontrivial symmetry-protected topological order are identified for universal single-qudit measurement-based quantum computation. Our resource states fall into two classes: one as the qudit generalizations of the one-dimensional qubit cluster state, and the other as the higher-symmetry generalizations of the spin-1 Affleck-Kennedy-Lieb-Tasaki (AKLT) state, namely, with unitary, orthogonal, or symplectic symmetry. The symmetry in cluster states protects information propagation (identity gate), while the higher symmetry in AKLT-type states enables nontrivial gate computation. This work demonstrates a close connection between measurement-based quantum computation and symmetry-protected topological order.
Magneto-acousto-electrical Measurement Based Electrical Conductivity Reconstruction for Tissues.
Zhou, Yan; Ma, Qingyu; Guo, Gepu; Tu, Juan; Zhang, Dong
2018-05-01
Based on the interaction of ultrasonic excitation and magnetoelectrical induction, magneto-acousto-electrical (MAE) technology was demonstrated to have the capability of differentiating conductivity variations along the acoustic transmission path. By applying the characteristics of the MAE voltage, a simplified algorithm for MAE measurement based conductivity reconstruction was developed. With the analyses of acoustic vibration, ultrasound propagation, Hall effect, and magnetoelectrical induction, theoretical and experimental studies of MAE measurement and conductivity reconstruction were performed. The formula of MAE voltage was derived and simplified for the transducer with strong directivity. MAE voltage was simulated for a three-layer gel phantom and the conductivity distribution was reconstructed using the modified Wiener inverse filter and Hilbert transform, which was also verified by experimental measurements. The experimental results are basically consistent with the simulations, and demonstrate that the wave packets of MAE voltage are generated at tissue interfaces with the amplitudes and vibration polarities representing the values and directions of conductivity variations. With the proposed algorithm, the amplitude and polarity of the conductivity gradient can be restored and the conductivity distribution can also be reconstructed accurately. The favorable results demonstrate the feasibility of accurate conductivity reconstruction with improved spatial resolution using MAE measurement for tissues with conductivity variations, especially suitable for nondispersive tissues with abrupt conductivity changes. This study demonstrates that the MAE measurement based conductivity reconstruction algorithm can be applied as a new strategy for nondestructive real-time monitoring of conductivity variations in biomedical engineering.
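The envelope-extraction step can be sketched with an FFT-based Hilbert transform: the magnitude of the analytic signal localizes the wave packets that mark conductivity interfaces (the voltage trace below is synthetic, and the paper's modified Wiener inverse filter is omitted):

```python
import numpy as np

def envelope(v):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert
    transform), used to localize wave packets in a voltage trace."""
    n = len(v)
    V = np.fft.fft(v)
    H = np.zeros(n)
    H[0] = 1
    H[1:n // 2] = 2          # keep positive frequencies, doubled
    H[n // 2] = 1            # Nyquist bin (n even)
    return np.abs(np.fft.ifft(V * H))

# toy MAE-like trace: two wave packets of opposite polarity marking
# two tissue interfaces (all shapes and positions are illustrative)
t = np.linspace(0, 1, 1000, endpoint=False)
packet = lambda t0: (np.exp(-((t - t0) / 0.02) ** 2)
                     * np.sin(2 * np.pi * 100 * (t - t0)))
v = packet(0.3) - 0.5 * packet(0.7)

env = envelope(v)
packet_time = t[np.argmax(env)]   # strongest interface location
```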
Plasmodium copy number variation scan: gene copy numbers evaluation in haploid genomes.
Beghain, Johann; Langlois, Anne-Claire; Legrand, Eric; Grange, Laura; Khim, Nimol; Witkowski, Benoit; Duru, Valentine; Ma, Laurence; Bouchier, Christiane; Ménard, Didier; Paul, Richard E; Ariey, Frédéric
2016-04-12
In eukaryotic genomes, deletion or amplification rates have been estimated to be a thousand times more frequent than single nucleotide variation. In Plasmodium falciparum, relatively few transcription factors have been identified, and the regulation of transcription is seemingly largely influenced by gene amplification events. Thus copy number variation (CNV) is a major mechanism enabling parasite genomes to adapt to new environmental changes. Currently, the detection of CNVs is based on quantitative PCR (qPCR), which is significantly limited by the relatively small number of genes that can be analysed at any one time. Technological advances that facilitate whole-genome sequencing, such as next generation sequencing (NGS), enable deeper analyses of genomic variation to be performed. Because the characteristics of Plasmodium CNVs need special consideration in algorithms and strategies, for which classical CNV detection programs are not suited, a dedicated algorithm to detect CNVs across the entire exome of P. falciparum was developed. This algorithm is based on a custom read-depth strategy applied to NGS data and is called PlasmoCNVScan. The analysis of CNV identification on three genes known to have different levels of amplification, located in the nuclear, apicoplast or mitochondrial genomes, is presented. The results are correlated with the qPCR experiments usually used for identification of locus-specific amplification/deletion. This tool will facilitate the study of P. falciparum genomic adaptation in response to ecological changes: drug pressure, decreased transmission, and reduction of the parasite population size (transition to a pre-elimination endemic area).
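The read-depth idea behind such a detector can be sketched as follows: in a haploid genome, the per-window read depth divided by the genome-wide median approximates the copy number directly (the data are simulated, and PlasmoCNVScan's actual normalization and calling rules are not specified here):

```python
import numpy as np

rng = np.random.default_rng(2)

# simulated per-window read depths for a haploid genome: ~50x baseline
# coverage, with windows 40-59 carrying a three-copy amplification
depth = rng.poisson(50, size=100).astype(float)
depth[40:60] = rng.poisson(150, size=20)

# read-depth CNV call (normalization assumed): divide each window by the
# genome-wide median depth and round the ratio to an integer copy number
ratio = depth / np.median(depth)
copy_number = np.round(ratio).astype(int)
amplified = np.flatnonzero(copy_number >= 2)   # candidate CNV windows
```

Real pipelines additionally correct for GC content and mappability before forming the ratio, which this sketch omits.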
Hamiltonian structure of real Monge-Ampère equations
NASA Astrophysics Data System (ADS)
Nutku, Y.
1996-06-01
The variational principle for the real homogeneous Monge-Ampère equation in two dimensions is shown to contain three arbitrary functions of four variables. There exist two different specializations of this variational principle where the Lagrangian is degenerate and furthermore contains an arbitrary function of two variables. The Hamiltonian formulation of these degenerate Lagrangian systems requires the use of Dirac's theory of constraints. As in the case of most completely integrable systems, the constraints are second class and Dirac brackets directly yield the Hamiltonian operators. Thus the real homogeneous Monge-Ampère equation in two dimensions admits two classes of infinitely many Hamiltonian operators: a family of local Hamiltonian operators, as well as a family of non-local Hamiltonian operators and symplectic 2-forms which depend on arbitrary functions of two variables. The simplest non-local Hamiltonian operator corresponds to the Kac-Moody algebra of vector fields and functions on the unit circle. Hamiltonian operators that belong to either class are compatible with each other, but between classes there is only one compatible pair. In the case of real Monge-Ampère equations with constant right-hand side, this compatible pair is the only pair of Hamiltonian operators that survives. Then the complete integrability of all these real Monge-Ampère equations follows by Magri's theorem. Some of the remarkable properties we have obtained for the Hamiltonian structure of the real homogeneous Monge-Ampère equation in two dimensions turn out to be generic to the real homogeneous Monge-Ampère equation and the geodesic flow for the complex homogeneous Monge-Ampère equation in an arbitrary number of dimensions. Hence among all integrable nonlinear evolution equations in one space and one time dimension, the real homogeneous Monge-Ampère equation is distinguished as one that retains its character as an integrable system in multiple dimensions.
Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.
1997-01-01
The authors conducted further research on cueing algorithms for the control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e., the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA Langley, the Visual Motion Simulator (VMS). The authors' proposed future developments in cueing algorithms are outlined. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.
ComprehensiveBench: a Benchmark for the Extensive Evaluation of Global Scheduling Algorithms
NASA Astrophysics Data System (ADS)
Pilla, Laércio L.; Bozzetti, Tiago C.; Castro, Márcio; Navaux, Philippe O. A.; Méhaut, Jean-François
2015-10-01
Parallel applications that present tasks with imbalanced loads or complex communication behavior usually do not exploit the underlying resources of parallel platforms to their full potential. In order to mitigate this issue, global scheduling algorithms are employed. As finding the optimal task distribution is an NP-Hard problem, identifying the most suitable algorithm for a specific scenario and comparing algorithms are not trivial tasks. In this context, this paper presents ComprehensiveBench, a benchmark for global scheduling algorithms that enables the variation of a vast range of parameters that affect performance. ComprehensiveBench can be used to assist in the development and evaluation of new scheduling algorithms, to help choose a specific algorithm for an arbitrary application, to emulate other applications, and to enable statistical tests. We illustrate its use in this paper with an evaluation of Charm++ periodic load balancers that stresses their characteristics.
A Hybrid Adaptive Routing Algorithm for Event-Driven Wireless Sensor Networks
Figueiredo, Carlos M. S.; Nakamura, Eduardo F.; Loureiro, Antonio A. F.
2009-01-01
Routing is a basic function in wireless sensor networks (WSNs). For these networks, routing algorithms depend on the characteristics of the applications and, consequently, there is no self-contained algorithm suitable for every case. In some scenarios, the network behavior (traffic load) may vary a lot, such as an event-driven application, favoring different algorithms at different instants. This work presents a hybrid and adaptive algorithm for routing in WSNs, called Multi-MAF, that adapts its behavior autonomously in response to the variation of network conditions. In particular, the proposed algorithm applies both reactive and proactive strategies for routing infrastructure creation, and uses an event-detection estimation model to change between the strategies and save energy. To show the advantages of the proposed approach, it is evaluated through simulations. Comparisons with independent reactive and proactive algorithms show improvements on energy consumption. PMID:22423207
A hybrid adaptive routing algorithm for event-driven wireless sensor networks.
Figueiredo, Carlos M S; Nakamura, Eduardo F; Loureiro, Antonio A F
2009-01-01
Routing is a basic function in wireless sensor networks (WSNs). For these networks, routing algorithms depend on the characteristics of the applications and, consequently, there is no self-contained algorithm suitable for every case. In some scenarios, the network behavior (traffic load) may vary a lot, such as an event-driven application, favoring different algorithms at different instants. This work presents a hybrid and adaptive algorithm for routing in WSNs, called Multi-MAF, that adapts its behavior autonomously in response to the variation of network conditions. In particular, the proposed algorithm applies both reactive and proactive strategies for routing infrastructure creation, and uses an event-detection estimation model to change between the strategies and save energy. To show the advantages of the proposed approach, it is evaluated through simulations. Comparisons with independent reactive and proactive algorithms show improvements on energy consumption.
Gradient descent learning algorithm overview: a general dynamical systems perspective.
Baldi, P
1995-01-01
Gives a unified treatment of gradient descent learning algorithms for neural networks using a general framework of dynamical systems. This general approach organizes and simplifies all the known algorithms and results which have been originally derived for different problems (fixed point/trajectory learning), for different models (discrete/continuous), for different architectures (forward/recurrent), and using different techniques (backpropagation, variational calculus, adjoint methods, etc.). The general approach can also be applied to derive new algorithms. The author then briefly examines some of the complexity issues and limitations intrinsic to gradient descent learning. Throughout the paper, the author focuses on the problem of trajectory learning.
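The dynamical-systems view treats learning as iterating the discrete-time system w_{t+1} = w_t - eta * grad E(w_t). A minimal fixed-point-learning instance for a single linear neuron trained by least squares (the data, architecture, and hyperparameters are all illustrative):

```python
import numpy as np

# gradient descent as a discrete-time dynamical system
#   w_{t+1} = w_t - eta * grad E(w_t)
# for a single linear neuron fit by least squares on toy data
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.standard_normal(200)

w = np.zeros(3)
eta = 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)   # gradient of E(w) = mean sq. err / 2
    w -= eta * grad                     # one step of the gradient dynamics
```

For this quadratic energy the dynamics contract toward the unique fixed point whenever eta is below 2 divided by the largest Hessian eigenvalue; the same framing carries over to recurrent architectures and trajectory learning, as the abstract notes.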
INTEGRAL/SPI data segmentation to retrieve source intensity variations
NASA Astrophysics Data System (ADS)
Bouchet, L.; Amestoy, P. R.; Buttari, A.; Rouet, F.-H.; Chauvin, M.
2013-07-01
Context. The INTEGRAL/SPI X/γ-ray spectrometer (20 keV-8 MeV) is an instrument for which recovering source intensity variations is not straightforward and can constitute a difficulty for data analysis. In most cases, determining the source intensity changes between exposures is largely based on a priori information. Aims: We propose techniques that help to overcome the difficulty related to source intensity variations and make this step more systematic. In addition, the constructed "synthetic" light curves should permit us to obtain a sky model that describes the data better and optimizes the source signal-to-noise ratios. Methods: For this purpose, the time intensity variation of each source was modeled as a combination of piecewise segments of time during which a given source exhibits a constant intensity. To optimize the signal-to-noise ratios, the number of segments was minimized. We present a first method that takes advantage of previous time series that can be obtained from another instrument on board the INTEGRAL observatory. A data segmentation algorithm was then used to synthesize the time series into segments. The second method no longer needs external light curves, but solely SPI raw data. For this, we developed a specific algorithm that involves the SPI transfer function. Results: The time segmentation algorithms that were developed solve a difficulty inherent to the SPI instrument, namely the intensity variations of sources between exposures, and they allow us to obtain more information about the sources' behavior. Based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Spain, and Switzerland), Czech Republic and Poland with participation of Russia and the USA.
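The segmentation step can be illustrated with a generic penalized least-squares dynamic program that partitions a series into constant-intensity segments while penalizing the number of segments (a stand-in for the paper's SPI-specific algorithms, which additionally involve the instrument transfer function; data and penalty are illustrative):

```python
import numpy as np

def segment_series(y, penalty):
    """Optimal partition of y into constant-intensity segments: minimize
    residual sum of squares + penalty * (number of segments) by DP."""
    n = len(y)
    s = np.concatenate(([0.0], np.cumsum(y)))        # prefix sums
    s2 = np.concatenate(([0.0], np.cumsum(y**2)))

    def rss(i, j):  # residual of fitting one constant to y[i..j] inclusive
        m = j - i + 1
        return s2[j + 1] - s2[i] - (s[j + 1] - s[i]) ** 2 / m

    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + rss(i, j - 1) + penalty
            if c < best[j]:
                best[j], back[j] = c, i
    bounds = []
    j = n
    while j > 0:                                     # backtrack the segments
        bounds.append((back[j], j - 1))
        j = back[j]
    return bounds[::-1]                              # (start, end) inclusive

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0.0, 0.1, 50),        # three constant levels
                    rng.normal(1.0, 0.1, 30),
                    rng.normal(0.3, 0.1, 40)])
segs = segment_series(y, penalty=1.0)
```

The penalty trades goodness of fit against the number of segments, which is exactly the minimization the abstract describes for optimizing signal-to-noise ratios.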
Castellana, Stefano; Fusilli, Caterina; Mazzoccoli, Gianluigi; Biagini, Tommaso; Capocefalo, Daniele; Carella, Massimo; Vescovi, Angelo Luigi; Mazza, Tommaso
2017-06-01
There are 24,189 possible non-synonymous amino acid changes potentially affecting the human mitochondrial DNA. Only a tiny subset has been functionally evaluated with certainty so far, while the pathogenicity of the vast majority has only been assessed in silico by software predictors. Since these tools proved to be rather incongruent, we have designed and implemented APOGEE, a machine-learning algorithm that outperforms all existing prediction methods in estimating the harmfulness of mitochondrial non-synonymous genome variations. We provide a detailed description of the underlying algorithm, of the selected and manually curated training and test sets of variants, and of its classification ability.
Stable orthogonal local discriminant embedding for linear dimensionality reduction.
Gao, Quanxue; Ma, Jingjie; Zhang, Hailin; Gao, Xinbo; Liu, Yamin
2013-07-01
Manifold learning is widely used in machine learning and pattern recognition. However, manifold learning considers only the similarity of samples belonging to the same class and ignores the within-class variation of the data, which impairs the generalization and stability of the algorithms. To address this, we construct an adjacency graph to model the intraclass variation that characterizes the most important properties, such as the diversity of patterns, and then incorporate this diversity into the discriminant objective function for linear dimensionality reduction. Finally, we introduce an orthogonality constraint on the basis vectors and propose an orthogonal algorithm called stable orthogonal local discriminant embedding. Experimental results on several standard image databases demonstrate the effectiveness of the proposed dimensionality reduction approach.
A variational dynamic programming approach to robot-path planning with a distance-safety criterion
NASA Technical Reports Server (NTRS)
Suh, Suk-Hwan; Shin, Kang G.
1988-01-01
An approach to robot-path planning is developed by considering both the traveling distance and the safety of the robot. A computationally-efficient algorithm is developed to find a near-optimal path with a weighted distance-safety criterion by using a variational calculus and dynamic programming (VCDP) method. The algorithm is readily applicable to any factory environment by representing the free workspace as channels. A method for deriving these channels is also proposed. Although it is developed mainly for two-dimensional problems, this method can be easily extended to a class of three-dimensional problems. Numerical examples are presented to demonstrate the utility and power of this method.
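The weighted distance-safety criterion can be illustrated with a small stage-wise dynamic program on a grid (a toy sketch, not the original VCDP formulation; the grid, move set, and cost weighting are assumptions for illustration):

```python
import numpy as np

def vcdp_path(safety, w=0.5):
    """Stage-wise dynamic programming on a grid: each stage moves one column
    right, to the same row or a neighbouring one.  The stage cost is
    w * step_length + (1 - w) * safety_penalty(next cell), a toy version of
    a weighted distance-safety criterion (not the original VCDP scheme)."""
    rows, cols = safety.shape
    cost = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    cost[:, 0] = (1 - w) * safety[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            for pr in (r - 1, r, r + 1):          # admissible previous rows
                if 0 <= pr < rows:
                    step = 1.0 if pr == r else 2 ** 0.5   # diagonal is longer
                    cand = cost[pr, c - 1] + w * step + (1 - w) * safety[r, c]
                    if cand < cost[r, c]:
                        cost[r, c], back[r, c] = cand, pr
    r = int(np.argmin(cost[:, -1]))               # cheapest cell in last column
    total, path = float(cost[r, -1]), [r]
    for c in range(cols - 1, 0, -1):
        r = back[r, c]
        path.append(r)
    return path[::-1], total
```

With a "channel" of low penalty along the middle row, the recovered path stays in the channel, trading off length against clearance exactly as the criterion intends.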
Efficient iris recognition by characterizing key local variations.
Ma, Li; Tan, Tieniu; Wang, Yunhong; Zhang, Dexin
2004-06-01
Unlike other biometrics such as fingerprints and faces, the distinctiveness of the iris comes from its randomly distributed features. This leads to its high reliability for personal identification and, at the same time, to the difficulty of effectively representing such details in an image. This paper describes an efficient algorithm for iris recognition that characterizes key local variations. The basic idea is that local sharp variation points, denoting the appearance or disappearance of an important image structure, are used to represent the characteristics of the iris. The feature-extraction procedure has two steps: 1) a set of one-dimensional intensity signals is constructed to characterize the most important information of the original two-dimensional image; 2) using a particular class of wavelets, a position sequence of local sharp variation points in these signals is recorded as features. We also present a fast matching scheme based on the exclusive-OR operation to compute the similarity between a pair of position sequences. Experimental results on 2255 iris images show that the performance of the proposed method is encouraging and comparable to the best iris recognition algorithms in the current literature.
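Exclusive-OR style matching of binary feature sequences can be sketched as follows (an assumed simplification: features as a flat binary vector and a small circular-shift search to absorb rotation; not the authors' exact scheme):

```python
import numpy as np

def xor_similarity(code_a, code_b, max_shift=2):
    """Fraction of agreeing bits between two binary feature sequences
    (1 - normalized Hamming distance), taking the best score over small
    circular shifts.  A sketch in the spirit of XOR-based matching, not the
    paper's exact matcher."""
    best = 0.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(code_b, s)
        best = max(best, float(np.mean(code_a == shifted)))
    return best
```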
1994-04-01
a variation of Ziv-Lempel compression [ZL77]. We found that using a standard compression algorithm rather than semantic compression allowed simplified...mentation. In Proceedings of the Conference on Programming Language Design and Implementation, 1993. [ZL77] J. Ziv and A. Lempel. A universal algorithm ...required by adaptable binaries. Our ABS stores adaptable binary information using the conventional binary symbol table and compresses this data using
Multiresolution image registration in digital x-ray angiography with intensity variation modeling.
Nejati, Mansour; Pourghassem, Hossein
2014-02-01
Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motion, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for digital X-ray angiography images, particularly of the coronary region, is proposed. The algorithm uses a multiresolution search strategy in which a global transformation is calculated iteratively from local searches in coarse and fine sub-image blocks. The local searches are performed in a differential multiscale framework that captures both large- and small-scale transformations. The local registration transformation also explicitly accounts for local variations in the image intensities, which are incorporated into our model as changes of local contrast and brightness. These local transformations are then smoothly interpolated using a thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction.
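The final step, thin-plate spline smoothing of sparse local shifts into a global field, can be sketched with a minimal numpy implementation (the kernel U(r) = r² log r and the linear solve follow the standard TPS formulation; this is not the authors' code, and the sample points are invented):

```python
import numpy as np

def tps_fit(centres, values):
    """Fit a 2-D thin-plate spline f with f(centres[i]) = values[i],
    kernel U(r) = r^2 log r plus an affine part.  Minimal numpy version of
    the smooth-interpolation step described in the abstract."""
    n = len(centres)
    d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    K = d**2 * np.log(np.where(d > 0, d, 1.0))   # U(0) = 0 by convention
    P = np.hstack([np.ones((n, 1)), centres])    # affine basis [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.concatenate([values, np.zeros(3)])
    return np.linalg.solve(A, rhs)

def tps_eval(coef, centres, x):
    """Evaluate the fitted spline at a single 2-D point x."""
    n = len(centres)
    d = np.linalg.norm(x[None, :] - centres, axis=-1)
    U = d**2 * np.log(np.where(d > 0, d, 1.0))
    return U @ coef[:n] + coef[n] + coef[n + 1:] @ x
```

In a registration setting one would fit two such splines, one per displacement component, and warp the mask image with the resulting field.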
NASA Astrophysics Data System (ADS)
Garnier, Romain; Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier
2016-05-01
A new variational algorithm called adaptive vibrational configuration interaction (A-VCI), intended for the solution of the vibrational Schrödinger equation, was developed. The main advantage of this approach is that it efficiently reduces the dimension of the active space generated by the configuration interaction (CI) process. Here, we assume that the Hamiltonian is written as a sum of products of operators. The adaptive algorithm is built on three interrelated ingredients: a suitable starting space, a convergence criterion, and a procedure to expand the approximate space. The speed of the algorithm is increased by using an a posteriori error estimator (the residue) to select the most relevant directions in which to enlarge the space. Two benchmark examples were selected. In the case of H2CO, we mainly study the performance of the A-VCI algorithm: comparison with the variation-perturbation method, choice of the initial space, and residual contributions. For CH3CN, we compare the A-VCI results with a reference spectrum computed using the same potential energy surface and an active space reduced by about 90%.
NASA Astrophysics Data System (ADS)
Marzbanrad, Javad; Tahbaz-zadeh Moghaddam, Iman
2016-09-01
The main purpose of this paper is to design a self-tuning control algorithm for an adaptive cruise control (ACC) system that can adapt its behaviour to variations in vehicle dynamics and uncertain road grade. To this aim, a short-time linear quadratic form (STLQF) estimation technique is developed to simultaneously track, with a small delay, the trend of the time-varying parameters of vehicle longitudinal dynamics: vehicle mass, road grade, and aerodynamic drag-area coefficient. Next, the estimated parameter values are used to tune the throttle and brake control inputs and to regulate the throttle/brake switching logic. The performance of the designed STLQF-based self-tuning control (STLQF-STC) algorithm for the ACC system is compared with a conventional method based on a fixed control structure for the speed/distance tracking control modes. Simulation results show that the proposed control algorithm improves the performance of the throttle and brake controllers, providing more comfort while travelling, enhancing driving safety, and giving satisfactory performance in the presence of different payloads and road grade variations.
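Tracking slowly varying parameters such as mass and drag is commonly done with recursive least squares; the sketch below uses a generic RLS update with a forgetting factor as a hedged stand-in (it is not the STLQF estimator, and the regressor model is invented for illustration):

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive-least-squares update with forgetting factor lam, a
    generic way to track slowly varying parameters of a linear-in-parameters
    model y = phi^T theta."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector
    theta = theta + k * (y - phi @ theta)    # correct by the prediction error
    P = (P - np.outer(k, Pphi)) / lam        # covariance update
    return theta, P
```

Fed noiseless measurements from a fixed parameter vector, the estimate converges to the true parameters within a few tens of steps.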
NASA Astrophysics Data System (ADS)
Zhou, Lifan; Chai, Dengfeng; Xia, Yu; Ma, Peifeng; Lin, Hui
2018-01-01
Phase unwrapping (PU) is one of the key processes in reconstructing the digital elevation model of a scene from its interferometric synthetic aperture radar (InSAR) data. It is known that two-dimensional (2-D) PU problems can be formulated as maximum a posteriori estimation of Markov random fields (MRFs). However, because the traditional MRF algorithm is usually defined on a rectangular grid, it fails easily if large parts of the wrapped data are dominated by noise caused by large low-coherence areas or rapid topography variation. A PU solution based on sparse MRFs is presented that extends the traditional MRF algorithm to sparse data, which allows the unwrapping of InSAR data dominated by high phase noise. To speed up the graph-cuts algorithm for sparse MRFs, we designed dual elementary graphs and merged them to obtain the Delaunay triangle graph, which is used to minimize the energy function efficiently. Experiments on simulated and real data, compared with other existing algorithms, confirm the effectiveness of the proposed MRF approach, which suffers less from decorrelation effects caused by large low-coherence areas or rapid topography variation.
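The neighbour-consistency idea behind phase unwrapping is easiest to see in one dimension (Itoh's classical method, shown below for contrast; the MRF formulation in the abstract generalizes this consistency idea to noisy, sparse 2-D grids):

```python
import numpy as np

def unwrap_1d(psi):
    """Itoh's 1-D phase unwrapping: estimate the integer number of 2*pi
    jumps between neighbouring samples and remove their running sum.
    Valid when the true phase gradient stays below pi per sample."""
    jumps = np.round(np.diff(psi) / (2 * np.pi))
    return np.concatenate([[psi[0]], psi[1:] - 2 * np.pi * np.cumsum(jumps)])
```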
4D inversion of time-lapse magnetotelluric data sets for monitoring geothermal reservoir
NASA Astrophysics Data System (ADS)
Nam, Myung Jin; Song, Yoonho; Jang, Hannuree; Kim, Bitnarae
2017-06-01
The productivity of a geothermal reservoir, which is a function of the pore space and fluid-flow paths of the reservoir, varies as the properties of the reservoir change with production. Because variations in the reservoir properties cause changes in electrical resistivity, time-lapse (TL) three-dimensional (3D) magnetotelluric (MT) methods can be applied to monitor the productivity of a geothermal reservoir, owing not only to their sensitivity to electrical resistivity but also to their deep penetration depth. For an accurate interpretation of TL MT data sets, a four-dimensional (4D) MT inversion algorithm has been developed that simultaneously inverts all vintage data while considering time-coupling between vintages. However, the changes in electrical resistivity of deep geothermal reservoirs are usually small, generating only minimal variation in TL MT responses. Maximizing the sensitivity of the inversion to the changes in resistivity is therefore critical to the success of 4D MT inversion. Thus, we further developed a focused 4D MT inversion method that considers not only the location of the reservoir but also the distribution of newly generated fractures during production. For evaluation, we tested our 4D inversion algorithms on synthetic TL MT data sets.
Statistical image reconstruction from correlated data with applications to PET
Alessio, Adam; Sauer, Ken; Kinahan, Paul
2008-01-01
Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576
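The weighting idea, using the inverse of a non-diagonal data covariance in the fit term, can be shown in closed form for a toy problem (the paper's reconstructions are iterative and large-scale; the matrices and sizes below are purely illustrative):

```python
import numpy as np

def pwls(A, y, C, R, beta):
    """Penalized weighted least squares with a full (non-diagonal) data
    covariance C: minimize (y - Ax)^T C^{-1} (y - Ax) + beta * x^T R x.
    Direct normal-equation solve, feasible only at toy scale."""
    W = np.linalg.inv(C)                     # inverse covariance as weights
    H = A.T @ W @ A + beta * R
    return np.linalg.solve(H, A.T @ W @ y)
```

With `beta = 0` and `A` the identity, the estimate reduces to the data themselves regardless of the correlation structure, a quick sanity check on the formulation.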
Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner
NASA Technical Reports Server (NTRS)
Tanis, Fred J.
1984-01-01
A series of experiments was conducted in the Great Lakes to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite-measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric correction algorithm compared favorably with existing algorithms, was the only one found to adequately predict the radiance variations in the 670 nm band, and was designed to extract the needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.
Du, Tingsong; Hu, Yang; Ke, Xianting
2015-01-01
An improved quantum artificial fish swarm algorithm (IQAFSA) for solving distributed network programming considering distributed generation is proposed in this work. The IQAFSA is based on quantum computing, which offers exponential acceleration for heuristic algorithms: quantum bits are used to encode the artificial fish, and a quantum rotation gate, together with the preying, following, and variation behaviors of the quantum artificial fish, updates the population in its search for the optimal value. We then apply the proposed new algorithm, the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global edition artificial fish swarm algorithm (GAFSA) in simulation experiments on some typical test functions. The simulation results demonstrate that the proposed algorithm can escape from local extrema effectively and has higher convergence speed and better accuracy. Finally, applying IQAFSA to distributed network problems, the simulation results for a 33-bus radial distribution network system show that IQAFSA attains the minimum power loss in comparison with BAFSA, GAFSA, and QAFSA.
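Quantum-bit coding and the rotation-gate update can be sketched in a few lines (a minimal illustration of quantum-coded populations only; the fish behaviours of IQAFSA are not modelled here, and the step size is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def observe(angles):
    """Collapse quantum-bit angles to a binary string: bit i is 1 with
    probability sin^2(angle_i)."""
    return (rng.random(angles.shape) < np.sin(angles) ** 2).astype(int)

def rotate(angles, best_bits, step=0.05 * np.pi):
    """Quantum rotation gate: nudge each angle toward the value that makes
    the current best solution's bit more likely (pi/2 for a 1, 0 for a 0)."""
    target = np.where(best_bits == 1, np.pi / 2, 0.0)
    return angles + np.clip(target - angles, -step, step)
```

Repeated rotation toward an all-ones best solution drives every angle to π/2, after which observation yields that solution with certainty.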
Novel and efficient tag SNPs selection algorithms.
Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2014-01-01
SNPs are the most abundant form of genetic variation among species, and association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, that represent the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than existing tag SNP selection algorithms; in most cases it is at least ten times faster than existing methods, and when the redundant ratio of the block is high it can be thousands of times faster than previously known methods. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, were also developed using the proposed algorithm as the computation kernel.
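A simple greedy coverage heuristic conveys the tag-SNP idea (illustrative only; the paper's algorithm and its MapReduce implementation differ, and the r² threshold is an assumption):

```python
import numpy as np

def greedy_tag_snps(genotypes, r2_thresh=0.8):
    """Greedy tag-SNP selection: repeatedly pick the SNP whose squared
    correlation (r^2, a linkage-disequilibrium proxy) covers the most
    still-untagged SNPs, until every SNP is covered by some tag."""
    r2 = np.corrcoef(genotypes) ** 2
    n = genotypes.shape[0]
    uncovered, tags = set(range(n)), []
    while uncovered:
        best = max(range(n),
                   key=lambda s: sum(r2[s, t] >= r2_thresh for t in uncovered))
        tags.append(best)
        uncovered = {t for t in uncovered if r2[best, t] < r2_thresh}
    return tags
```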
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, Lijie, E-mail: bxhanm@126.com; Wu, Xinyuan, E-mail: xywu@nju.edu.cn
In general, extended Runge–Kutta–Nyström (ERKN) methods are more effective than traditional Runge–Kutta–Nyström (RKN) methods in dealing with oscillatory Hamiltonian systems. However, the theoretical analysis for ERKN methods, such as the order conditions, the symplectic conditions and the symmetric conditions, becomes much more complicated than that for RKN methods. Therefore, it is a bottleneck to construct high-order ERKN methods efficiently. In this paper, we first establish the ERKN group Ω for ERKN methods and the RKN group G for RKN methods, respectively. We then rigorously show that ERKN methods are a natural extension of RKN methods, that is, there exists an epimorphism η of the ERKN group Ω onto the RKN group G. This epimorphism gives a global insight into the structure of the ERKN group by the analysis of its kernel and the corresponding RKN group G. Meanwhile, we establish a particular mapping φ of G into Ω so that each image element is an ideal representative element of the congruence class in Ω. Furthermore, an elementary theoretical analysis shows that this map φ can preserve many structure-preserving properties, such as the order, the symmetry and the symplecticity. From the epimorphism η together with its section φ, we may gain knowledge about the structure of the ERKN group Ω via the RKN group G. In light of the theoretical analysis of this paper, we obtain high-order structure-preserving ERKN methods in an effective way for solving oscillatory Hamiltonian systems. Numerical experiments are carried out and the results are very promising, which strongly support our theoretical analysis presented in this paper.
Actions, topological terms and boundaries in first-order gravity: A review
NASA Astrophysics Data System (ADS)
Corichi, Alejandro; Rubalcava-García, Irais; Vukašinac, Tatjana
2016-03-01
In this review, we consider first-order gravity in four dimensions. In particular, we focus our attention on formulations where the fundamental variables are a tetrad eaI and a SO(3, 1) connection ωaIJ. We study the most general action principle compatible with diffeomorphism invariance. This implies, in particular, considering besides the standard Einstein-Hilbert-Palatini term, other terms that either do not change the equations of motion, or are topological in nature. Having a well-defined action principle sometimes involves the need for additional boundary terms, whose detailed form may depend on the particular boundary conditions at hand. In this work, we consider spacetimes that include a boundary at infinity, satisfying asymptotically flat boundary conditions and/or an internal boundary satisfying isolated horizons boundary conditions. We focus on the covariant Hamiltonian formalism where the phase space Γ is given by solutions to the equations of motion. For each of the possible terms contributing to the action, we consider the well-posedness of the action, its finiteness, the contribution to the symplectic structure, and the Hamiltonian and Noether charges. For the chosen boundary conditions, standard boundary terms warrant a well-posed theory. Furthermore, the boundary and topological terms contribute neither to the symplectic structure nor to the Hamiltonian conserved charges. The Noether conserved charges, on the other hand, do depend on such additional terms. The aim of this manuscript is to present a comprehensive and self-contained treatment of the subject, so the style is somewhat pedagogical. Furthermore, along the way, we point out and clarify some issues that have not been clearly understood in the literature.
Continuum limit and symmetries of the periodic gℓ(1|1) spin chain
NASA Astrophysics Data System (ADS)
Gainutdinov, A. M.; Read, N.; Saleur, H.
2013-06-01
This paper is the first in a series devoted to the study of logarithmic conformal field theories (LCFT) in the bulk. Building on earlier work in the boundary case, our general strategy consists in analyzing the algebraic properties of lattice regularizations (quantum spin chains) of these theories. In the boundary case, a crucial step was the identification of the space of states as a bimodule over the Temperley-Lieb (TL) algebra and the quantum group Uqsℓ(2). The extension of this analysis in the bulk case involves considerable difficulties, since the Uqsℓ(2) symmetry is partly lost, while the TL algebra is replaced by a much richer version (the Jones-Temperley-Lieb — JTL — algebra). Even the simplest case of the gℓ(1|1) spin chain — corresponding to the c=-2 symplectic fermions theory in the continuum limit — presents very rich aspects, which we will discuss in several papers. In this first work, we focus on the symmetries of the spin chain, that is, the centralizer of the JTL algebra in the alternating tensor product of the gℓ(1|1) fundamental representation and its dual. We prove that this centralizer is only a subalgebra of Uqsℓ(2) at q=i that we dub Uqoddsℓ(2). We then begin the analysis of the continuum limit of the JTL algebra: using general arguments about the regularization of the stress-energy tensor, we identify families of JTL elements going over to the Virasoro generators Ln and L̄n in the continuum limit. We then discuss the sℓ(2) symmetry of the (continuum limit) symplectic fermions theory from the lattice and JTL point of view. The analysis of the spin chain as a bimodule over Uqoddsℓ(2) and JTLN is discussed in the second paper of this series.
NASA Astrophysics Data System (ADS)
de Guillebon, L.; Vittot, M.
2013-10-01
Guiding-center reduction is studied using gyro-gauge-independent coordinates. The Lagrangian 1-form of charged particle dynamics is Lie transformed without introducing a gyro-gauge, but using directly the unit vector of the component of the velocity perpendicular to the magnetic field as the coordinate corresponding to Larmor gyration. The reduction is shown to provide a maximal reduction for the Lagrangian and to work for all orders in the Larmor radius, following exactly the same procedure as when working with the standard gauge-dependent coordinate. The gauge-dependence is removed from the coordinate system by using a constrained variable for the gyro-angle. The closed 1-form dθ is replaced by a more general non-closed 1-form, which is equal to dθ in the gauge-dependent case. The gauge vector is replaced by a more general connection in the definition of the gradient, which behaves as a covariant derivative, in perfect agreement with the circle-bundle picture. This explains some results of previous works, whose gauge-independent expressions did not correspond to gauge fixing but did indeed correspond to connection fixing. In addition, some general results are obtained for the guiding-center reduction. The expansion is polynomial in the cotangent of the pitch-angle as an effect of the structure of the Lagrangian, preserved by Lie derivatives. The induction for the reduction is shown to rely on the inversion of a matrix, which is the same for all orders higher than three. It is inverted and explicit induction relations are obtained to go to an arbitrary order in the perturbation expansion. The Hamiltonian and symplectic representations of the guiding-center reduction are recovered, but conditions for the symplectic representation at each order are emphasized.
Quantitative analysis of eyes and other optical systems in linear optics.
Harris, William F; Evans, Tanya; van Gool, Radboud D
2017-05-01
To show that 14-dimensional spaces of augmented point P and angle Q characteristics, matrices obtained from the ray transference, are suitable for quantitative analysis, although only the latter define an inner-product space and only on it can one define distances and angles. The paper examines the nature of the spaces and their relationships to other spaces including symmetric dioptric power space. The paper makes use of linear optics, a three-dimensional generalization of Gaussian optics. Symmetric 2 × 2 dioptric power matrices F define a three-dimensional inner-product space which provides a sound basis for quantitative analysis (calculation of changes, arithmetic means, etc.) of refractive errors and thin systems. For general systems the optical character is defined by the dimensionally-heterogeneous 4 × 4 symplectic matrix S, the transference, or if explicit allowance is made for heterocentricity, the 5 × 5 augmented symplectic matrix T. Ordinary quantitative analysis cannot be performed on them because matrices of neither type constitute a vector space. Suitable transformations have been proposed, but because the transforms are dimensionally heterogeneous the spaces are not naturally inner-product spaces. The paper obtains 14-dimensional spaces of augmented point P and angle Q characteristics. The 14-dimensional space defined by the augmented angle characteristics Q is dimensionally homogeneous and an inner-product space. A 10-dimensional subspace of the space of augmented point characteristics P is also an inner-product space. The spaces are suitable for quantitative analysis of the optical character of eyes and many other systems. Distances and angles can be defined in the inner-product spaces. The optical systems may have multiple separated astigmatic and decentred refracting elements. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
Choi, Jaewon; Jung, Hyung-Sup; Yun, Sang-Ho
2015-03-09
As the aerospace industry grows, images obtained from Earth observation satellites have been successfully used in various fields. In particular, the demand for high-resolution (HR) optical images is steadily increasing, and the generation of high-quality mosaic images has accordingly become an important issue. In this paper, we propose an efficient mosaicking algorithm for HR optical images that differ significantly due to seasonal change. The algorithm consists of three main steps: (1) seamline extraction from gradient magnitude and seam images; (2) histogram matching; and (3) image feathering. Eleven Kompsat-2 images characterized by seasonal variations are used to validate the performance of the proposed method. The test results show that the proposed method effectively mosaics adjacent Kompsat-2 images with severe seasonal changes. Moreover, the results indicate that the proposed method is applicable to HR optical images such as GeoEye, IKONOS, QuickBird, RapidEye, SPOT, WorldView, etc.
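The histogram-matching step (2) can be sketched with a quantile mapping in numpy (a minimal sketch of the classic technique, not the paper's pipeline):

```python
import numpy as np

def match_histogram(src, ref):
    """Map src grey levels so their empirical CDF matches ref's, the classic
    histogram-matching operation used to equalize radiometry between scenes
    before mosaicking."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # quantile-to-quantile lookup
    return mapped[s_idx].reshape(src.shape)
```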
Generalising Ward's Method for Use with Manhattan Distances.
Strauss, Trudie; von Maltitz, Michael Johan
2017-01-01
The claim that Ward's linkage algorithm in hierarchical clustering is limited to use with Euclidean distances is investigated. In this paper, Ward's clustering algorithm is generalised for use with the l1 norm, or Manhattan, distance. We argue that the generalisation of Ward's linkage method to incorporate Manhattan distances is theoretically sound and provide an example where this method outperforms the method using Euclidean distances. As an application, we perform statistical analyses on languages using methods normally applied to biology and genetic classification. We aim to quantify differences in character traits between languages and use a statistical language signature based on relative bi-gram (sequence of two letters) frequencies to calculate a distance matrix between 32 Indo-European languages. We then use Ward's method of hierarchical clustering to classify the languages, using both the Euclidean distance and the Manhattan distance. Results obtained with the different distance metrics are compared to show that Ward's algorithm's characteristic of minimising intra-cluster variation and maximising inter-cluster variation is not violated when using the Manhattan metric.
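The l1 analogue of Ward's criterion, merging the pair of clusters whose union least increases the summed Manhattan distance to the cluster median, can be sketched directly (a didactic O(n³) agglomeration under that assumed criterion, not the paper's method):

```python
import numpy as np

def l1_cost(cluster):
    """Within-cluster spread under the l1 norm: sum of Manhattan distances
    to the coordinate-wise median (the l1 analogue of Ward's variance,
    since the median minimizes summed absolute deviations)."""
    med = np.median(cluster, axis=0)
    return np.abs(cluster - med).sum()

def ward_l1(points, k):
    """Greedy agglomeration: at each step merge the pair of clusters whose
    merge least increases the total l1 cost, until k clusters remain."""
    clusters = [points[i:i + 1] for i in range(len(points))]
    while len(clusters) > k:
        best, pair = None, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                merged = np.vstack([clusters[i], clusters[j]])
                inc = l1_cost(merged) - l1_cost(clusters[i]) - l1_cost(clusters[j])
                if best is None or inc < best:
                    best, pair = inc, (i, j)
        i, j = pair
        clusters[i] = np.vstack([clusters[i], clusters[j]])
        del clusters[j]
    return clusters
```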
An experimental phylogeny to benchmark ancestral sequence reconstruction
Randall, Ryan N.; Radford, Caelan E.; Roof, Kelsey A.; Natarajan, Divya K.; Gaucher, Eric A.
2016-01-01
Ancestral sequence reconstruction (ASR) is a still-burgeoning method that has revealed many key mechanisms of molecular evolution. One criticism of the approach is an inability to validate its algorithms within a biological context as opposed to a computer simulation. Here we build an experimental phylogeny using the gene of a single red fluorescent protein to address this criticism. The evolved phylogeny consists of 19 operational taxonomic units (leaves) and 17 ancestral bifurcations (nodes) that display a wide variety of fluorescent phenotypes. The 19 leaves then serve as ‘modern' sequences that we subject to ASR analyses using various algorithms and to benchmark against the known ancestral genotypes and ancestral phenotypes. We confirm computer simulations that show all algorithms infer ancient sequences with high accuracy, yet we also reveal wide variation in the phenotypes encoded by incorrectly inferred sequences. Specifically, Bayesian methods incorporating rate variation significantly outperform the maximum parsimony criterion in phenotypic accuracy. Subsampling of extant sequences had minor effect on the inference of ancestral sequences. PMID:27628687
Adaptive phase k-means algorithm for waveform classification
NASA Astrophysics Data System (ADS)
Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin
2018-01-01
Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method, called adaptive phase k-means, is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure with variable phase as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpreted horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results show that the proposed method tolerates a certain amount of waveform phase variation and is a good tool for seismic facies analysis.
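The shift-tolerant distance at the heart of such a scheme can be sketched as follows (a simplified stand-in using circular sample shifts; the paper's adaptive phase distance and its model-trace updates are richer):

```python
import numpy as np

def phase_distance(x, c, max_shift=3):
    """Euclidean distance minimised over small circular shifts of the trace,
    so that two waveforms differing only by a small phase/time shift are
    considered close."""
    return min(np.linalg.norm(np.roll(x, s) - c)
               for s in range(-max_shift, max_shift + 1))
```

Plugging this distance into the assignment step of ordinary k-means makes the clustering tolerant to the phase jitter a horizon pick introduces.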
Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle
Yang, Zhi -Cheng; Rahmani, Armin; Shabani, Alireza; ...
2017-05-18
We use Pontryagin's minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with a characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.
Markov-modulated Markov chains and the covarion process of molecular evolution.
Galtier, N; Jean-Marie, A
2004-01-01
The covarion (or site specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among site rate variation (ASRV) models.
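The generator of a Markov-modulated Markov chain of the kind described can be assembled with Kronecker products: it acts on (rate class, state) pairs, combining the substitution generator scaled by each class rate with a rate-switching generator. A minimal sketch; the matrices and rates below are toy placeholders, not fitted values:

```python
import numpy as np

def mm_generator(Q, H, rates):
    # G acts on (rate class, state) pairs: substitution scaled by the
    # class rate, plus rate-class switching that leaves the state fixed
    n = Q.shape[0]
    R = np.diag(rates)
    return np.kron(R, Q) + np.kron(H, np.eye(n))

# Jukes-Cantor-like nucleotide generator and a 2-class rate switch
Q = np.full((4, 4), 0.25)
np.fill_diagonal(Q, -0.75)
H = np.array([[-0.1, 0.1],
              [0.1, -0.1]])
G = mm_generator(Q, H, rates=[0.2, 1.8])
```

Diagonalizing G, the step the paper's fast algorithm addresses, then yields the transition probabilities exp(tG) needed for phylogenetic likelihood calculation.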
Recognizing Disguised Faces: Human and Machine Evaluation
Dhamecha, Tejas Indulal; Singh, Richa; Vatsa, Mayank; Kumar, Ajay
2014-01-01
Face verification, though an easy task for humans, is a long-standing open research area. This is largely due to challenging covariates, such as disguise and aging, which make it very hard to accurately verify the identity of a person. This paper investigates human and machine performance in recognizing and verifying disguised faces. Performance is also evaluated under familiarity and match/mismatch with the ethnicity of observers. The findings of this study are used to develop an automated algorithm to verify faces presented under disguise variations. We use automatically localized feature descriptors that can identify disguised face patches and account for this information to achieve improved matching accuracy. The performance of the proposed algorithm is evaluated on the IIIT-Delhi Disguise database, which contains images of 75 subjects with different kinds of disguise variations. The experiments suggest that the proposed algorithm outperforms a popular commercial system and compares favorably with human observers in matching disguised face images. PMID:25029188
A comparison of algorithms for inference and learning in probabilistic graphical models.
Frey, Brendan J; Jojic, Nebojsa
2005-09-01
Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
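Of the reviewed techniques, the sum-product algorithm is exact on tree-structured models. A minimal sketch on a binary chain, checked against brute-force enumeration; the model size and potentials are arbitrary illustrations:

```python
import numpy as np
from itertools import product

# pairwise chain model p(x) ~ prod_i phi_i(x_i) * prod_i psi(x_i, x_{i+1})
n = 4
rng = np.random.default_rng(1)
phi = rng.uniform(0.5, 1.5, size=(n, 2))   # unary potentials
psi = np.array([[1.2, 0.8],
                [0.8, 1.2]])               # pairwise coupling

def marginals_bp():
    # forward/backward sum-product messages along the chain
    fwd = [np.ones(2)]
    for i in range(1, n):
        m = psi.T @ (phi[i - 1] * fwd[-1])
        fwd.append(m / m.sum())
    bwd = [np.ones(2)]
    for i in range(n - 2, -1, -1):
        m = psi @ (phi[i + 1] * bwd[0])
        bwd.insert(0, m / m.sum())
    marg = np.array([phi[i] * fwd[i] * bwd[i] for i in range(n)])
    return marg / marg.sum(axis=1, keepdims=True)

def marginals_brute():
    # exact marginals by summing over all 2^n configurations
    marg = np.zeros((n, 2))
    for x in product([0, 1], repeat=n):
        p = np.prod([phi[i, x[i]] for i in range(n)])
        p *= np.prod([psi[x[i], x[i + 1]] for i in range(n - 1)])
        for i in range(n):
            marg[i, x[i]] += p
    return marg / marg.sum(axis=1, keepdims=True)
```

On loopy graphs the same message updates give only the approximate ("loopy" belief propagation) marginals discussed in the paper; on this chain they are exact.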
Real-time segmentation of burst suppression patterns in critical care EEG monitoring
Westover, M. Brandon; Shafi, Mouhsin M.; Ching, ShiNung; Chemali, Jessica J.; Purdon, Patrick L.; Cash, Sydney S.; Brown, Emery N.
2014-01-01
Objective: Develop a real-time algorithm to automatically discriminate suppressions from non-suppressions (bursts) in electroencephalograms of critically ill adult patients. Methods: A real-time method for segmenting adult ICU EEG data into bursts and suppressions is presented based on thresholding local voltage variance. Results are validated against manual segmentations by two experienced human electroencephalographers. We compare inter-rater agreement between manual EEG segmentations by experts with inter-rater agreement between human and automatic segmentations, and investigate the robustness of segmentation quality to variations in algorithm parameter settings. We further compare the results of using these segmentations as input for calculating the burst suppression probability (BSP), a continuous measure of depth of suppression. Results: Automated segmentation was comparable to manual segmentation, i.e. algorithm-vs-human agreement was comparable to human-vs-human agreement, as judged by comparing raw EEG segmentations or the derived BSP signals. Results were robust to modest variations in algorithm parameter settings. Conclusions: Our automated method satisfactorily segments burst suppression data across a wide range of adult ICU EEG patterns. Performance is comparable to or exceeds that of manual segmentation by human electroencephalographers. Significance: Automated segmentation of burst suppression EEG patterns is an essential component of quantitative brain activity monitoring in critically ill and anesthetized adults. The segmentations produced by our algorithm provide a basis for accurate tracking of suppression depth. PMID:23891828
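The segmentation principle the abstract describes, thresholding local voltage variance, can be sketched in a few lines. The window length and threshold below are illustrative assumptions, not the paper's validated settings:

```python
import numpy as np

def segment_suppressions(eeg, fs, win_s=0.5, thresh_uv=5.0):
    # label a sample as suppression when the local variance in a sliding
    # window falls below thresh_uv**2 (values here are assumptions)
    w = int(win_s * fs)
    pad = np.pad(eeg, (w // 2, w - w // 2 - 1), mode="edge")
    # running mean and variance via cumulative sums
    c1 = np.cumsum(np.insert(pad, 0, 0.0))
    c2 = np.cumsum(np.insert(pad ** 2, 0, 0.0))
    mean = (c1[w:] - c1[:-w]) / w
    var = (c2[w:] - c2[:-w]) / w - mean ** 2
    return var < thresh_uv ** 2
```

The returned boolean mask (one entry per sample) is exactly the kind of segmentation that can be fed into a BSP estimator downstream.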
Real-time segmentation of burst suppression patterns in critical care EEG monitoring.
Brandon Westover, M; Shafi, Mouhsin M; Ching, Shinung; Chemali, Jessica J; Purdon, Patrick L; Cash, Sydney S; Brown, Emery N
2013-09-30
Develop a real-time algorithm to automatically discriminate suppressions from non-suppressions (bursts) in electroencephalograms of critically ill adult patients. A real-time method for segmenting adult ICU EEG data into bursts and suppressions is presented based on thresholding local voltage variance. Results are validated against manual segmentations by two experienced human electroencephalographers. We compare inter-rater agreement between manual EEG segmentations by experts with inter-rater agreement between human and automatic segmentations, and investigate the robustness of segmentation quality to variations in algorithm parameter settings. We further compare the results of using these segmentations as input for calculating the burst suppression probability (BSP), a continuous measure of depth of suppression. Automated segmentation was comparable to manual segmentation, i.e. algorithm-vs-human agreement was comparable to human-vs-human agreement, as judged by comparing raw EEG segmentations or the derived BSP signals. Results were robust to modest variations in algorithm parameter settings. Our automated method satisfactorily segments burst suppression data across a wide range of adult ICU EEG patterns. Performance is comparable to or exceeds that of manual segmentation by human electroencephalographers. Automated segmentation of burst suppression EEG patterns is an essential component of quantitative brain activity monitoring in critically ill and anesthetized adults. The segmentations produced by our algorithm provide a basis for accurate tracking of suppression depth. Copyright © 2013 Elsevier B.V. All rights reserved.
2010-01-01
Background: The information provided by dense genome-wide markers using high throughput technology is of considerable potential in human disease studies and livestock breeding programs. Genome-wide association studies relate individual single nucleotide polymorphisms (SNP) from dense SNP panels to individual measurements of complex traits, with the underlying assumption being that any association is caused by linkage disequilibrium (LD) between SNP and quantitative trait loci (QTL) affecting the trait. Often SNP are in genomic regions of no trait variation. Whole genome Bayesian models are an effective way of incorporating this and other important prior information into modelling. However a full Bayesian analysis is often not feasible due to the large computational time involved. Results: This article proposes an expectation-maximization (EM) algorithm called emBayesB which allows only a proportion of SNP to be in LD with QTL and incorporates prior information about the distribution of SNP effects. The posterior probability of being in LD with at least one QTL is calculated for each SNP along with estimates of the hyperparameters for the mixture prior. A simulated example of genomic selection from an international workshop is used to demonstrate the features of the EM algorithm. The accuracy of prediction is comparable to a full Bayesian analysis but the EM algorithm is considerably faster. The EM algorithm was accurate in locating QTL which explained more than 1% of the total genetic variation. A computational algorithm for very large SNP panels is described. Conclusions: emBayesB is a fast and accurate EM algorithm for implementing genomic selection and predicting complex traits by mapping QTL in genome-wide dense SNP marker data. Its accuracy is similar to Bayesian methods but it takes only a fraction of the time. PMID:20969788
Empirical correction for earth sensor horizon radiance variation
NASA Technical Reports Server (NTRS)
Hashmall, Joseph A.; Sedlak, Joseph; Andrews, Daniel; Luquette, Richard
1998-01-01
A major limitation on the use of infrared horizon sensors for attitude determination is the variability of the height of the infrared Earth horizon. This variation includes a climatological component and a stochastic component of approximately equal importance. The climatological component shows regular variation with season and latitude. Models based on historical measurements have been used to compensate for these systematic changes. The stochastic component is analogous to tropospheric weather. It can cause extreme, localized changes that, for a period of days, overwhelm the climatological variation. An algorithm has been developed to compensate partially for the climatological variation of horizon height and at least to mitigate the stochastic variation. This method uses attitude and horizon sensor data from spacecraft to update a horizon height history as a function of latitude. For spacecraft that depend on horizon sensors for their attitudes (such as the Total Ozone Mapping Spectrometer-Earth Probe, TOMS-EP), a batch least-squares attitude determination system is used. It is assumed that minimizing the average sensor residual throughout a full orbit of data results in attitudes that are nearly independent of local horizon height variations. The method depends on the additional assumption that the mean horizon height over all latitudes is approximately independent of season. Using these assumptions, the method yields the latitude-dependent portion of local horizon height variations. This paper describes the algorithm used to generate an empirical horizon height. Ideally, an international horizon height database could be established that would rapidly merge data from various spacecraft to provide timely corrections that could be used by all.
Remote assessment of ocean color for interpretation of satellite visible imagery: A review
NASA Technical Reports Server (NTRS)
Gordon, H. R.; Morel, A. Y.
1983-01-01
An assessment is presented of the state of the art of remote (satellite-based) sensing of color variations in the ocean due to phytoplankton, with emphasis on the Coastal Zone Color Scanner (CZCS). Attention is given to physical problems associated with ocean color remote sensing, in-water algorithms, the correction of atmospheric effects, constituent retrieval algorithms, and the application of the algorithms to CZCS imagery. The applicability of the CZCS to both near-coast and mid-ocean waters is considered, and it is concluded that, while differences between the two environments are complex, universal algorithms can be used for mid-ocean waters, whereas site-specific algorithms are adequate for CZCS imaging of the near-coast oceanic environment. A short description of the CZCS and some sample photographs are provided in an appendix.
Analysis of retinal and cortical components of Retinex algorithms
NASA Astrophysics Data System (ADS)
Yeonan-Kim, Jihyun; Bertalmío, Marcelo
2017-05-01
Following Land and McCann's first proposal of the Retinex theory, numerous Retinex algorithms have been developed that differ considerably both algorithmically and functionally. We clarify the relationships among the various Retinex families by associating their spatial processing structures with the neural organizations of the retina and the primary visual cortex. Some Retinex algorithms have a retina-like processing structure (Land's designator idea and NASA Retinex), and some show a close connection with the cortical structures of the primary visual area of the brain (two-dimensional L&M Retinex). A third group of Retinexes (the variational Retinex) manifests an explicit algorithmic relation to Wilson-Cowan's physiological model. We overview these three groups of Retinexes within the frame of reference of biological visual mechanisms.
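As a concrete reference point for the retina-like family, a center/surround Retinex in the single-scale style can be sketched as log(image) minus log(smoothed surround). The repeated box-blur surround and its radius below are simplifications for illustration, not any specific published variant:

```python
import numpy as np

def single_scale_retinex(img, radius=15, passes=3):
    # center/surround Retinex: log(image) minus log of a smoothed
    # surround; a box blur applied 3 times approximates a Gaussian
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    surround = img.astype(float)
    for _ in range(passes):
        surround = np.apply_along_axis(
            lambda a: np.convolve(a, k, mode="same"), 0, surround)
        surround = np.apply_along_axis(
            lambda a: np.convolve(a, k, mode="same"), 1, surround)
    eps = 1e-6  # avoid log(0)
    return np.log(img + eps) - np.log(surround + eps)
```

Uniformly rescaling the illumination leaves the output unchanged (the log difference cancels the scale factor), which is the constancy property the Retinex family targets.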
NASA Astrophysics Data System (ADS)
Bal, A.; Alam, M. S.; Aslan, M. S.
2006-05-01
Often sensor ego-motion or fast target movement causes the target to temporarily go out of the field of view, leading to the reappearing-target detection problem in target tracking applications. Since the target leaves the current frame and reenters at a later frame, the reentering location and the variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. A detection algorithm has been developed using the Fukunaga-Koontz transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then passed to the second algorithm, the DCCF-based clutter rejection module; once the target coordinates are detected, the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward-looking infrared (FLIR) video sequences.
Quantum Google in a Complex Network
Paparo, Giuseppe Davide; Müller, Markus; Comellas, Francesc; Martin-Delgado, Miguel Angel
2013-01-01
We investigate the behaviour of the recently proposed Quantum PageRank algorithm in large complex networks. We find that the algorithm is able to univocally reveal the underlying topology of the network and to identify and order the most relevant nodes. Furthermore, it is capable of clearly highlighting the structure of secondary hubs and of resolving the degeneracy in importance of the low-lying part of the list of rankings. The quantum algorithm displays increased stability with respect to a variation of the damping parameter present in the Google algorithm, and a more clearly pronounced power-law behaviour in the distribution of importance, as compared to the classical algorithm. We test the performance and confirm the listed features by applying it to real-world examples from the WWW. Finally, we raise, and partially address, the question of whether the increased sensitivity of the quantum algorithm persists under coordinated attacks in scale-free and random networks. PMID:24091980
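For contrast with the quantum walk version, the classical Google algorithm referenced here is power iteration on the damped link matrix. A minimal sketch; the damping value and the example graph are illustrative:

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10):
    # power iteration on the Google matrix; alpha is the damping parameter
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # row-normalize; dangling nodes link uniformly to everyone
    P = np.where(out > 0, adj / np.where(out == 0, 1, out), 1.0 / n)
    G = alpha * P + (1 - alpha) / n
    r = np.full(n, 1.0 / n)
    while True:
        r_new = r @ G
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# star graph: spokes all link to node 0, node 0 links back to each spoke
A = np.zeros((5, 5))
A[1:, 0] = 1
A[0, 1:] = 1
r = pagerank(A)
```

Varying `alpha` and re-ranking is the damping-parameter sensitivity experiment against which the quantum algorithm's increased stability is measured.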
Color constancy by characterization of illumination chromaticity
NASA Astrophysics Data System (ADS)
Nikkanen, Jarno T.
2011-05-01
Computational color constancy algorithms play a key role in achieving the desired color reproduction in digital cameras. Failure to estimate illumination chromaticity correctly will result in an incorrect overall color cast in the image that is easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and a low memory requirement make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on a characterization of the range of possible illumination chromaticities in terms of camera sensor response. The fact that only illumination chromaticity is characterized, instead of the full color gamut, for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used in order to demonstrate the good performance of the algorithm in comparison to state-of-the-art color constancy algorithms.
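The paper's estimator depends on its own sensor characterization data, but the overall flow, estimate an illumination chromaticity and then constrain it to a characterized feasible range, can be illustrated with a gray-world stand-in. The chromaticity bounds below are invented placeholders, not the paper's characterization:

```python
import numpy as np

def estimate_illum_chromaticity(img, r_range=(0.2, 0.5), b_range=(0.15, 0.45)):
    # gray-world estimate of the illuminant RGB, projected to (r, b)
    # chromaticity and clipped to a characterized feasible range
    # (these ranges are made-up placeholders, not measured sensor data)
    rgb = img.reshape(-1, 3).mean(axis=0)
    r = rgb[0] / rgb.sum()
    b = rgb[2] / rgb.sum()
    return float(np.clip(r, *r_range)), float(np.clip(b, *b_range))
```

Clipping to a characterized chromaticity range is what keeps a cheap estimator from returning physically implausible illuminants on difficult scenes.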
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2004-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
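Two ingredients from the abstract, Pareto-optimal filtering and the masking array that removes genes from the design space, can be sketched in isolation. Function names are illustrative and the surrounding GA loop is omitted:

```python
import numpy as np

def pareto_front(costs):
    # indices of nondominated designs (minimization in every objective)
    keep = []
    for i, c in enumerate(costs):
        dominated = any(np.all(d <= c) and np.any(d < c)
                        for j, d in enumerate(costs) if j != i)
        if not dominated:
            keep.append(i)
    return keep

def apply_mask(child, parent, mask):
    # masking array: genes with mask 0 are eliminated as decision
    # variables, i.e. frozen at the parent's value
    return np.where(mask, child, parent)
```

Freezing a gene via the mask and re-running the optimization is how the effect of that gene on the Pareto optimal solution can be isolated.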
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2005-01-01
A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
An effective one-dimensional anisotropic fingerprint enhancement algorithm
NASA Astrophysics Data System (ADS)
Ye, Zhendong; Xie, Mei
2012-01-01
Fingerprint identification is one of the most important biometric technologies. The performance of minutiae extraction and the speed of a fingerprint verification system rely heavily on the quality of the input fingerprint images, so enhancement of low-quality fingerprints is a critical and difficult step in a fingerprint verification system. In this paper we propose an effective algorithm for fingerprint enhancement. First, we use a normalization algorithm to reduce the variations in gray-level values along ridges and valleys. Then we utilize the structure tensor approach to estimate the ridge orientation at each pixel of the fingerprint. Finally, we propose a novel algorithm that combines the advantages of the one-dimensional Gabor filtering method and the anisotropic method to enhance the fingerprint in the recoverable region. The proposed algorithm has been evaluated on the database of the Fingerprint Verification Competition 2004, and the results show that our algorithm performs well while requiring less time.
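The orientation-estimation step can be sketched with a structure tensor; for brevity this version averages the tensor over the whole image, whereas a fingerprint system would estimate it blockwise:

```python
import numpy as np

def dominant_orientation(img):
    # structure-tensor estimate of the dominant gradient orientation
    # (radians from the x-axis); ridges run perpendicular to it
    gy, gx = np.gradient(img.astype(float))
    jxx = (gx * gx).mean()
    jyy = (gy * gy).mean()
    jxy = (gx * gy).mean()
    # principal eigenvector angle of the 2x2 tensor [[jxx, jxy], [jxy, jyy]]
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)
```

Smoothing the tensor components before taking the angle (rather than the angle itself) is what makes this estimate stable near ridge endings, which is why the tensor form is preferred over raw gradient angles.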
An effective one-dimensional anisotropic fingerprint enhancement algorithm
NASA Astrophysics Data System (ADS)
Ye, Zhendong; Xie, Mei
2011-12-01
Fingerprint identification is one of the most important biometric technologies. The performance of minutiae extraction and the speed of a fingerprint verification system rely heavily on the quality of the input fingerprint images, so enhancement of low-quality fingerprints is a critical and difficult step in a fingerprint verification system. In this paper we propose an effective algorithm for fingerprint enhancement. First, we use a normalization algorithm to reduce the variations in gray-level values along ridges and valleys. Then we utilize the structure tensor approach to estimate the ridge orientation at each pixel of the fingerprint. Finally, we propose a novel algorithm that combines the advantages of the one-dimensional Gabor filtering method and the anisotropic method to enhance the fingerprint in the recoverable region. The proposed algorithm has been evaluated on the database of the Fingerprint Verification Competition 2004, and the results show that our algorithm performs well while requiring less time.
A variational technique for smoothing flight-test and accident data
NASA Technical Reports Server (NTRS)
Bach, R. E., Jr.
1980-01-01
The problem of determining aircraft motions along a trajectory is solved using a variational algorithm that generates unmeasured states and forcing functions, and estimates instrument bias and scale-factor errors. The problem is formulated as a nonlinear fixed-interval smoothing problem, and is solved as a sequence of linear two-point boundary value problems, using a sweep method. The algorithm has been implemented for use in flight-test and accident analysis. Aircraft motions are assumed to be governed by a six-degree-of-freedom kinematic model; forcing functions consist of body accelerations and winds, and the measurement model includes aerodynamic and radar data. Examples of the determination of aircraft motions from typical flight-test and accident data are presented.
Empirical mode decomposition-based facial pose estimation inside video sequences
NASA Astrophysics Data System (ADS)
Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing
2010-03-01
We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images in order to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all the negative effects can be minimized. Extensive experiments were carried out in comparison with existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
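The similarity measure used for pose matching, mutual information, can be sketched with a joint histogram. The bin count is an arbitrary choice, and real systems typically add smoothing:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # histogram-based mutual information between two equal-size images,
    # in nats: sum p(x,y) * log(p(x,y) / (p(x) p(y)))
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1)
    py = p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])))
```

An image is maximally informative about itself and nearly independent of unrelated noise, which is why MI peaks at the correctly matched pose.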
Cellular Precipitates Of Iron Oxide in Olivine in a Stratospheric Interplanetary Dust Particle
NASA Technical Reports Server (NTRS)
Rietmeijer, Frans J. M.
1996-01-01
The petrology of a massive olivine-sulphide interplanetary dust particle shows melting of Fe,Ni-sulphide plus complete loss of sulphur and subsequent quenching to a mixture of iron oxides and Fe,Ni-metal. Oxidation of the fayalite component in olivine produced maghemite discs and cellular intergrowths with olivine and rare andradite-rich garnet. Cellular reactions require no long-range solid-state diffusion and are kinetically favourable during pyrometamorphic oxidation. Local melting of the cellular intergrowths resulted in three-dimensional symplectic textures. Dynamic pyrometamorphism of this asteroidal particle occurred at approximately 1100 °C during atmospheric entry flash (5-15 s) heating.
Symplectic analysis of three-dimensional Abelian topological gravity
NASA Astrophysics Data System (ADS)
Cartas-Fuentevilla, R.; Escalante, Alberto; Herrera-Aguilar, Alfredo
2017-02-01
A detailed Faddeev-Jackiw quantization of an Abelian topological gravity is performed; we show that this formalism is equivalent to, and more economical than, Dirac's method. In particular, we identify the complete set of constraints of the theory, from which the number of physical degrees of freedom is explicitly computed. We prove that the generalized Faddeev-Jackiw brackets and the Dirac brackets coincide with each other. Moreover, we perform the Faddeev-Jackiw analysis of the theory at the chiral point, and the full set of constraints and the generalized Faddeev-Jackiw brackets are constructed. Finally, we compare our results with those found in the literature and discuss some remarks and prospects.
Projective limits of state spaces I. Classical formalism
NASA Astrophysics Data System (ADS)
Lanéry, Suzanne; Thiemann, Thomas
2017-01-01
In this series of papers, we investigate the projective framework initiated by Jerzy Kijowski (1977) and Andrzej Okołów (2009, 2013, 2014), which describes the states of a quantum (field) theory as projective families of density matrices. A short reading guide to the series can be found in [27]. The present first paper aims at clarifying the classical structures that underlie this formalism, namely projective limits of symplectic manifolds [27, subsection 2.1]. In particular, this allows us to discuss accurately the issues hindering an easy implementation of the dynamics in this context, and to formulate a strategy for overcoming them [27, subsection 4.1].
Computational tools and lattice design for the PEP-II B-Factory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Irwin, J.; Nosochkov, Y.
1997-02-01
Several accelerator codes were used to design the PEP-II lattices, ranging from matrix-based codes, such as MAD and DIMAD, to symplectic-integrator codes, such as TRACY and DESPOT. In addition to element-by-element tracking, we constructed maps to determine aberration strengths. Furthermore, we have developed a fast and reliable method (nPB tracking) to track particles with a one-turn map. This new technique allows us to evaluate the performance of the lattices on the entire tune plane. Recently, we designed and implemented an object-oriented code in C++ called LEGO which integrates and expands upon TRACY and DESPOT. © 1997 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Hansen, J. E.; Judd, B. R.; Raassen, A. J. J.; Uylings, P. H. M.
1997-04-01
Small discrepancies in the fitted energy levels of the configurations 3dN of transition-metal ions are ascribed to effective three-electron magnetic operators yi. Surprisingly, it has been found that, of the 16 possible operators with ranks 1 in both the spin and orbital spaces, four operators labeled by the irreducible representation (irrep) (11) of SO(5) are sufficient to obtain results which appear to be limited by the errors in the experimental energy levels. An interpretation is given involving products of operators labeled by the irreps of SO(5) and the symplectic group Sp(10).
Point form relativistic quantum mechanics and relativistic SU(6)
NASA Technical Reports Server (NTRS)
Klink, W. H.
1993-01-01
The point form is used as a framework for formulating a relativistic quantum mechanics, with the mass operator carrying the interactions of underlying constituents. A symplectic Lie algebra of mass operators is introduced from which a relativistic harmonic oscillator mass operator is formed. Mass splittings within the degenerate harmonic oscillator levels arise from relativistically invariant spin-spin, spin-orbit, and tensor mass operators. Internal flavor (and color) symmetries are introduced which make it possible to formulate a relativistic SU(6) model of baryons (and mesons). Careful attention is paid to the permutation symmetry properties of the hadronic wave functions, which are written as polynomials in Bargmann spaces.
The Shannon entropy as a measure of diffusion in multidimensional dynamical systems
NASA Astrophysics Data System (ADS)
Giordano, C. M.; Cincotta, P. M.
2018-05-01
In the present work, we introduce two new estimators of chaotic diffusion based on the Shannon entropy. Using theoretical, heuristic, and numerical arguments, we show that the entropy, S, provides a measure of the extent of diffusion of a given small initial ensemble of orbits, while an indicator related to the time derivative of the entropy, S', estimates the diffusion rate. We show that in the limiting case of near ergodicity, after an appropriate normalization, S' coincides with the standard homogeneous diffusion coefficient. The very first application of this formulation to a 4D symplectic map and to the Arnold Hamiltonian yields very successful and encouraging results.
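The entropy estimator S can be illustrated on the 2D Chirikov standard map (not one of the paper's test systems): partition phase space into cells, evolve a small initial ensemble, and track the Shannon entropy of the occupancy distribution, which grows as the ensemble diffuses. All parameters below are illustrative:

```python
import numpy as np

def ensemble_entropy(p, theta, n_cells=50):
    # Shannon entropy of the ensemble's occupancy of an n_cells x n_cells
    # partition of the (theta, p) torus
    h, _, _ = np.histogram2d(theta % (2 * np.pi), p % (2 * np.pi),
                             bins=n_cells, range=[[0, 2 * np.pi]] * 2)
    q = h[h > 0] / h.sum()
    return float(-(q * np.log(q)).sum())

def standard_map(p, theta, K):
    p = (p + K * np.sin(theta)) % (2 * np.pi)
    return p, (theta + p) % (2 * np.pi)

# small initial ensemble in a strongly chaotic regime (large K)
rng = np.random.default_rng(0)
theta = 3.0 + 1e-3 * rng.standard_normal(2000)
p = 2.0 + 1e-3 * rng.standard_normal(2000)
S = []
for t in range(200):
    if t % 50 == 0:
        S.append(ensemble_entropy(p, theta))
    p, theta = standard_map(p, theta, K=7.0)
```

A finite-difference slope of S over such checkpoints plays the role of the paper's S' indicator; after normalization it estimates the diffusion rate.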