Sample records for vector space decomposition

  1. Generalized decompositions of dynamic systems and vector Lyapunov functions

    NASA Astrophysics Data System (ADS)

    Ikeda, M.; Siljak, D. D.

    1981-10-01

    The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.

  2. Dynamics in the Decompositions Approach to Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Harding, John

    2017-12-01

    In Harding (Trans. Amer. Math. Soc. 348(5), 1839-1862 1996) it was shown that the direct product decompositions of any non-empty set, group, vector space, and topological space X form an orthomodular poset Fact X. This is the basis for a line of study in foundational quantum mechanics replacing Hilbert spaces with other types of structures. Here we develop dynamics and an abstract version of a time independent Schrödinger's equation in the setting of decompositions by considering representations of the group of real numbers in the automorphism group of the orthomodular poset Fact X of decompositions.

  3. Manifolds for pose tracking from monocular video

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).

  4. Unitary Operators on the Document Space.

    ERIC Educational Resources Information Center

    Hoenkamp, Eduard

    2003-01-01

    Discusses latent semantic indexing (LSI) that would allow search engines to reduce the dimension of the document space by mapping it into a space spanned by conceptual indices. Topics include vector space models; singular value decomposition (SVD); unitary operators; the Haar transform; and new algorithms. (Author/LRW)
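
    The dimension reduction described here can be sketched with a small singular value decomposition; the term-document matrix below is an invented toy example, not data from the article.

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
A = np.array([
    [2.0, 0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 2.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 2.0],
])

# SVD: A = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rank-k "conceptual" approximation (k = 2): documents mapped into a
# 2-dimensional concept space spanned by the top singular vectors.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Document coordinates in concept space.
doc_coords = np.diag(s[:k]) @ Vt[:k, :]
print(doc_coords.shape)  # (2, 4)
```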

  5. Gibbsian Stationary Non-equilibrium States

    NASA Astrophysics Data System (ADS)

    De Carlo, Leonardo; Gabrielli, Davide

    2017-09-01

    We study the structure of stationary non-equilibrium states for interacting particle systems from a microscopic viewpoint. In particular, we discuss two different discrete geometric constructions. We apply both of them to determine non-reversible transition rates corresponding to a fixed invariant measure. The first one uses the equivalence of this problem with the construction of divergence-free flows on the transition graph. Since divergence-free flows are characterized by cyclic decompositions, we can generate families of models from elementary cycles on the configuration space. The second construction is a functional discrete Hodge decomposition for translation-covariant discrete vector fields. According to this, for example, the instantaneous current of any interacting particle system on a finite torus can be canonically decomposed into a gradient part, a circulation term and a harmonic component. All three components are associated with functions on the configuration space. This decomposition is unique and constructive. The stationary condition can be interpreted as an orthogonality condition with respect to a harmonic discrete vector field, and we use this decomposition to construct models having a fixed invariant measure.
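
    The gradient-plus-circulation split of a discrete vector field can be illustrated on a toy transition graph; the 4-node cycle and edge flow below are invented for illustration and are far simpler than the configuration-space graphs the paper treats.

```python
import numpy as np

# Edge flow on a 4-node cycle graph: edges (0,1), (1,2), (2,3), (3,0).
# Incidence matrix B: B[e, u] = -1, B[e, v] = +1 for edge e = (u, v).
B = np.array([
    [-1,  1,  0,  0],
    [ 0, -1,  1,  0],
    [ 0,  0, -1,  1],
    [ 1,  0,  0, -1],
], dtype=float)

f = np.array([3.0, 1.0, 2.0, 0.0])   # a discrete "vector field" on edges

# Gradient part: f_grad = B @ phi, with phi a least-squares node potential.
phi, *_ = np.linalg.lstsq(B, f, rcond=None)
f_grad = B @ phi
f_circ = f - f_grad                   # divergence-free (cyclic) remainder

# The remainder has zero divergence at every node: B.T @ f_circ = 0.
print(np.round(B.T @ f_circ, 10))
```

    On this single cycle the divergence-free part is a constant circulation around the loop, which is the discrete analogue of the harmonic/circulation terms in the abstract.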

  6. Extended vector-tensor theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Rampei; Naruko, Atsushi; Yoshida, Daisuke, E-mail: rampei@th.phys.titech.ac.jp, E-mail: naruko@th.phys.titech.ac.jp, E-mail: yoshida@th.phys.titech.ac.jp

    Recently, several extensions of massive vector theory in curved space-time have been proposed in the literature. In this paper, we consider the most general vector-tensor theories that contain up to two derivatives with respect to the metric and the vector field. By imposing a degeneracy condition on the Lagrangian in the context of the ADM decomposition of space-time to eliminate an unwanted mode, we construct a new class of massive vector theories in which five degrees of freedom can propagate: three for the massive vector modes and two for the massless tensor modes. We find that the generalized Proca and the beyond-generalized-Proca theories up to the quartic Lagrangian, which should be included in this formulation, are degenerate theories even in curved space-time. Finally, introducing new metric and vector field transformations, we investigate the properties of the theories thus obtained under such transformations.

  7. Decomposition of a symmetric second-order tensor

    NASA Astrophysics Data System (ADS)

    Heras, José A.

    2018-05-01

    In the three-dimensional space there are different definitions for the dot and cross products of a vector with a second-order tensor. In this paper we show how these products can uniquely be defined for the case of symmetric tensors. We then decompose a symmetric second-order tensor into its ‘dot’ part, which involves the dot product, and the ‘cross’ part, which involves the cross product. For some physical applications, this decomposition can be interpreted as one in which the dot part identifies with the ‘parallel’ part of the tensor and the cross part identifies with the ‘perpendicular’ part. This decomposition of a symmetric second-order tensor may be suitable for undergraduate courses of vector calculus, mechanics and electrodynamics.

  8. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
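
    Away from DMRG specifics, the linear-equation route to a correction vector can be sketched on a small dense "Hamiltonian"; the matrix, the operator-applied vector, and the broadening below are all invented stand-ins, not any of the models studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.normal(size=(6, 6))
H = (H + H.T) / 2                       # small symmetric "Hamiltonian"

E, V = np.linalg.eigh(H)
gs = V[:, 0]                            # ground state, energy E[0]
A_gs = rng.normal(size=6)               # stand-in for A|gs>

omega, eta = 1.0, 0.1                   # frequency and Lorentzian broadening
# Correction vector: solve the shifted, complex linear equation
# (E0 + omega + i*eta - H) |c> = A|gs>.
c = np.linalg.solve((E[0] + omega + 1j * eta) * np.eye(6) - H, A_gs)

# Spectral weight at this frequency (up to normalization).
print(-c.imag @ A_gs / np.pi)
```

    The Krylov-space alternative discussed in the paper computes the same quantity without a direct dense solve.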

  10. Killing-Yano tensors in spaces admitting a hypersurface orthogonal Killing vector

    NASA Astrophysics Data System (ADS)

    Garfinkle, David; Glass, E. N.

    2013-03-01

    Methods are presented for finding Killing-Yano tensors, conformal Killing-Yano tensors, and conformal Killing vectors in spacetimes with a hypersurface orthogonal Killing vector. These methods are similar to a method developed by the authors for finding Killing tensors. In all cases one decomposes both the tensor and the equation it satisfies into pieces along the Killing vector and pieces orthogonal to the Killing vector. Solving the separate equations that result from this decomposition requires less computing than integrating the original equation. In each case, examples are given to illustrate the method.
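
    The core step, splitting a field into pieces along and orthogonal to a preferred vector, reduces in flat space to a simple projection; the vectors below are illustrative, and the Euclidean dot product stands in for the spacetime metric used in the paper.

```python
import numpy as np

xi = np.array([0.0, 0.0, 2.0])       # stand-in for the Killing direction
v  = np.array([1.0, 2.0, 3.0])       # field to decompose

n = xi / np.linalg.norm(xi)          # unit vector along xi
v_par  = np.dot(v, n) * n            # piece along the Killing vector
v_perp = v - v_par                   # piece orthogonal to it

print(v_par, v_perp)
```

    Each piece can then be handled by its own, simpler equation, which is the computational saving the abstract describes.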

  11. Material decomposition in an arbitrary number of dimensions using noise compensating projection

    NASA Astrophysics Data System (ADS)

    O'Donnell, Thomas; Halaweish, Ahmed; Cormode, David; Cheheltani, Rabee; Fayad, Zahi A.; Mani, Venkatesh

    2017-03-01

    Purpose: Multi-energy CT (e.g., dual energy or photon counting) facilitates the identification of certain compounds via data decomposition. However, the standard approach to decomposition (i.e., solving a system of linear equations) fails if, due to noise, a pixel's vector of HU values falls outside the boundary of values describing possible pure or mixed basis materials. Typically, this is addressed by either discarding those pixels or projecting them onto the closest point on this boundary. However, when acquiring four (or more) energy volumes, the space bounded by three (or more) materials that may be found in the human body (either naturally or through injection) can be quite small. Noise may significantly limit the number of pixels that fall within it. Therefore, projection onto the boundary becomes an important option. But projection in more than three dimensions is not possible with standard vector algebra, because the cross product is not defined there. Methods: We describe a technique that employs Clifford algebra to perform projection in an arbitrary number of dimensions. Clifford algebra describes a manipulation of vectors that incorporates the concepts of addition, subtraction, multiplication, and division; vectors may thereby be operated on like scalars, forming a true algebra. Results: We tested our approach on a phantom containing inserts of calcium, gadolinium, iodine, gold nanoparticles and mixtures of pairs thereof. Images were acquired on a prototype photon counting CT scanner under a range of threshold combinations. Comparisons of the accuracy of different threshold combinations versus ground truth are presented. Conclusions: Material decomposition is possible with three or more materials and four or more energy thresholds, using Clifford algebra projection to mitigate noise.
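
    The linear-equation decomposition the authors start from can be sketched as a least-squares solve; the HU signatures and the measurement below are invented, and plain clipping stands in for the paper's Clifford-algebra projection onto the boundary.

```python
import numpy as np

# Columns: HU signatures of two basis materials at three energies
# (illustrative values, not calibrated data).
M = np.array([
    [300.0,  80.0],
    [250.0,  60.0],
    [200.0,  40.0],
])

hu = np.array([190.0, 155.0, 120.0])   # measured HU vector for one pixel

# Least-squares decomposition: fractions c minimizing ||M @ c - hu||.
c, *_ = np.linalg.lstsq(M, hu, rcond=None)

# Noise can push fractions outside [0, 1]; clip back onto the physical
# boundary (a crude stand-in for the higher-dimensional projection).
c_phys = np.clip(c, 0.0, 1.0)
print(np.round(c, 3), np.round(c_phys, 3))
```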

  12. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, A.S.; Khemaissia, S.

    2000-01-01

    A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array, and its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.
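
    The least-squares weight-estimation step that INVQR accelerates can be sketched with an ordinary QR factorization (this is the plain batch solve, not the inverse-QR systolic formulation of the paper); the data below are synthetic and noiseless.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))            # data matrix (input features)
w_true = np.array([1.0, -2.0, 0.5])     # "unknown" network weights
y = X @ w_true                          # noiseless targets for illustration

# QR-based least squares: X = Q R, then solve R w = Q.T y.
Q, R = np.linalg.qr(X)
w = np.linalg.solve(R, Q.T @ y)
print(np.round(w, 6))
```

    The back-substitution in the last line is exactly the sequential step the INVQR formulation avoids by recursing on the inverse factor directly.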

  13. Adjustable vector Airy light-sheet single optical tweezers: negative radiation forces on a subwavelength spheroid and spin torque reversal

    NASA Astrophysics Data System (ADS)

    Mitri, Farid G.

    2018-01-01

    Generalized solutions of vector Airy light-sheets, adjustable per their derivative order m, are introduced stemming from the Lorenz gauge condition and Maxwell's equations using the angular spectrum decomposition method. The Cartesian components of the incident radiated electric, magnetic and time-averaged Poynting vector fields in free space (excluding evanescent waves) are determined and computed with particular emphasis on the derivative order of the Airy light-sheet and the polarization of the magnetic vector potential forming the beam. Negative transverse time-averaged Poynting vector components can arise, while the longitudinal counterparts are always positive. Moreover, the analysis is extended to compute the optical radiation force and spin torque vector components on a lossless dielectric prolate subwavelength spheroid in the framework of the electric dipole approximation. The results show that negative forces and spin torque sign reversal arise depending on the derivative order of the beam, the polarization of the magnetic vector potential, and the orientation of the subwavelength prolate spheroid in space. The spin torque sign reversal suggests that counter-clockwise or clockwise rotations around the center of mass of the subwavelength spheroid can occur. The results find use in single Airy light-sheet tweezers and in particle manipulation, handling, and rotation, to name a few examples.

  14. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.

  15. Rortex—A new vortex vector definition and vorticity tensor and vector decompositions

    NASA Astrophysics Data System (ADS)

    Liu, Chaoqun; Gao, Yisheng; Tian, Shuling; Dong, Xiangrui

    2018-03-01

    A vortex is intuitively recognized as the rotational/swirling motion of a fluid. However, an unambiguous and universally accepted definition of a vortex is yet to be achieved in the field of fluid mechanics, which is probably one of the major obstacles causing considerable confusion and misunderstanding in turbulence research. In our previous work, a new vector quantity, called the vortex vector, was proposed to accurately describe the local fluid rotation and clearly display vortical structures. In this paper, the definition of the vortex vector, named Rortex here, is revisited from the mathematical perspective. The existence of the possible rotational axis is proved through real Schur decomposition. Based on real Schur decomposition, a fast algorithm for calculating Rortex is also presented. In addition, new vorticity tensor and vector decompositions are introduced: the vorticity tensor is decomposed into a rigidly rotational part and a non-rotational anti-symmetric part, and the vorticity vector is decomposed into a rigidly rotational vector, called the Rortex vector, and a non-rotational vector, called the shear vector. Several cases, including 2D Couette flow, 2D rigid rotational flow, and 3D boundary layer transition on a flat plate, are studied to justify the definition of Rortex. It can be observed that Rortex identifies both the precise swirling strength and the rotational axis, and thus can reasonably represent the local fluid rotation and provide a new powerful tool for vortex dynamics and turbulence research.
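
    The real Schur decomposition used to establish the rotational axis can be sketched with SciPy; the velocity-gradient tensor below is an invented example with one rotational (complex-eigenvalue) pair, not data from the paper's test cases.

```python
import numpy as np
from scipy.linalg import schur

# Velocity-gradient tensor with a rotational (complex-eigenvalue) part.
A = np.array([
    [0.0, -2.0, 0.0],
    [1.0,  0.0, 0.0],
    [0.0,  0.0, 0.5],
])

# Real Schur form: A = Q T Q^T with T quasi-upper-triangular. A 2x2 block
# on the diagonal of T signals local rotation; the remaining Schur vector
# gives the candidate rotation axis.
T, Q = schur(A, output='real')
print(np.round(T, 3))
```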

  16. Wiimote Experiments: 3-D Inclined Plane Problem for Reinforcing the Vector Concept

    ERIC Educational Resources Information Center

    Kawam, Alae; Kouh, Minjoon

    2011-01-01

    In an introductory physics course where students first learn about vectors, they oftentimes struggle with the concept of vector addition and decomposition. For example, the classic physics problem involving a mass on an inclined plane requires the decomposition of the force of gravity into two directions that are parallel and perpendicular to the…

  17. Investigations on the hierarchy of reference frames in geodesy and geodynamics

    NASA Technical Reports Server (NTRS)

    Grafarend, E. W.; Mueller, I. I.; Papo, H. B.; Richter, B.

    1979-01-01

    Problems related to reference directions were investigated. Space- and time-variant angular parameters are illustrated in hierarchic structures or towers. Using least-squares techniques, model towers of triads are presented which allow the formation of linear observation equations. Translational and rotational degrees of freedom (origin and orientation) are discussed, along with the notion of length and scale degrees of freedom. According to the notion of scale parallelism, scale factors with respect to a unit length are given. Three-dimensional geodesy was constructed from the set of three base vectors (gravity, earth-rotation and the ecliptic normal vector). Space and time variations are given with respect to a polar and singular value decomposition or in terms of changes in translation, rotation, and deformation (shear, dilatation or angular and scale distortions).

  18. Fault Detection of Bearing Systems through EEMD and Optimization Algorithm

    PubMed Central

    Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2017-01-01

    This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD)-based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner-race, outer-race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, the PCA and Isomap algorithms are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visualization effect of separating and grouping parameter vectors in three-dimensional space. PMID:29143772
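
    The PCA stage, projecting damage-sensitive feature vectors into a low-dimensional space for visualization, can be sketched with a plain eigendecomposition; the two synthetic "health states" below are invented stand-ins for EEMD-derived features.

```python
import numpy as np

rng = np.random.default_rng(1)
# 40 feature vectors (6 features each) from two synthetic "health states".
healthy = rng.normal(0.0, 1.0, size=(20, 6))
damaged = rng.normal(3.0, 1.0, size=(20, 6))
features = np.vstack([healthy, damaged])

# PCA via eigendecomposition of the covariance matrix.
Xc = features - features.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
vals, vecs = np.linalg.eigh(cov)
order = np.argsort(vals)[::-1]          # components by descending variance
proj = Xc @ vecs[:, order[:3]]          # 3-D visualization coordinates
print(proj.shape)  # (40, 3)
```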

  19. Gordan—Capelli series in superalgebras

    PubMed Central

    Brini, Andrea; Palareti, Aldopaolo; Teolis, Antonio G. B.

    1988-01-01

    We derive two Gordan—Capelli series for the supersymmetric algebra of the tensor product of two Z2-graded K-vector spaces U and V, K being a field of characteristic zero. These expansions yield complete decompositions of the supersymmetric algebra regarded as a pl(U)- and a pl(V)-module, where pl(U) and pl(V) are the general linear Lie superalgebras of U and V, respectively. PMID:16593911

  20. Separable decompositions of bipartite mixed states

    NASA Astrophysics Data System (ADS)

    Li, Jun-Li; Qiao, Cong-Feng

    2018-04-01

    We present a practical scheme for the decomposition of a bipartite mixed state into a sum of direct products of local density matrices, using the technique developed in Li and Qiao (Sci. Rep. 8:1442, 2018). In the scheme, the correlation matrix which characterizes the bipartite entanglement is first decomposed into two matrices composed of the Bloch vectors of local states. Then, we show that the symmetries of Bloch vectors are consistent with that of the correlation matrix, and the magnitudes of the local Bloch vectors are lower bounded by the correlation matrix. Concrete examples for the separable decompositions of bipartite mixed states are presented for illustration.
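
    The correlation matrix T_ij = Tr(rho sigma_i (x) sigma_j) that the scheme decomposes can be computed directly; the product state below is a minimal invented example whose correlation matrix is exactly the outer product of the two local Bloch vectors.

```python
import numpy as np

# Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

# Product state rho = |0><0| (x) |+><+|: Bloch vectors (0,0,1) and (1,0,0).
rho_a = np.array([[1, 0], [0, 0]], dtype=complex)
rho_b = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
rho = np.kron(rho_a, rho_b)

# Correlation matrix T_ij = Tr(rho (sigma_i (x) sigma_j)).
T = np.array([[np.trace(rho @ np.kron(si, sj)).real
               for sj in paulis] for si in paulis])
print(np.round(T, 3))
```

    For a separable mixed state, the paper's scheme decomposes such a T into a sum of outer products of local Bloch vectors; here there is a single term by construction.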

  1. The Local Stellar Velocity Field via Vector Spherical Harmonics

    NASA Technical Reports Server (NTRS)

    Makarov, V. V.; Murphy, D. W.

    2007-01-01

    We analyze the local field of stellar tangential velocities for a sample of 42,339 nonbinary Hipparcos stars with accurate parallaxes, using a vector spherical harmonic formalism. We derive simple relations between the parameters of the classical linear model (Ogorodnikov-Milne) of the local systemic field and low-degree terms of the general vector harmonic decomposition. Taking advantage of these relationships, we determine the solar velocity with respect to the local stars of (V(sub X), V(sub Y), V(sub Z)) = (10.5, 18.5, 7.3) +/- 0.1 km s(exp -1), not corrected for the asymmetric drift with respect to the local standard of rest. If only stars more distant than 100 pc are considered, the peculiar solar motion is (V(sub X), V(sub Y), V(sub Z)) = (9.9, 15.6, 6.9) +/- 0.2 km s(exp -1). The adverse effects of harmonic leakage, which occurs between the reflex solar motion represented by the three electric vector harmonics in the velocity space and higher degree harmonics in the proper-motion space, are eliminated in our analysis by direct subtraction of the reflex solar velocity in its tangential components for each star...

  2. Elastic and acoustic wavefield decompositions and application to reverse time migrations

    NASA Astrophysics Data System (ADS)

    Wang, Wenlong

    P- and S-waves coexist in elastic wavefields, and separation between them is an essential step in elastic reverse-time migrations (RTMs). Unlike the traditional separation methods that use curl and divergence operators, which do not preserve the wavefield vector component information, we propose and compare two vector decomposition methods, which preserve the same vector components that exist in the input elastic wavefield. The amplitude and phase information is automatically preserved, so no amplitude or phase corrections are required. The decoupled propagation method is extended from elastic to viscoelastic wavefields. To use the decomposed P and S vector wavefields and generate PP and PS images, we create a new 2D migration context for isotropic, elastic RTM which includes PS vector decomposition; the propagation directions of both incident and reflected P- and S-waves are calculated directly from the stress and particle velocity definitions of the decomposed P- and S-wave Poynting vectors. Then an excitation-amplitude image condition that scales the receiver wavelet by the source vector magnitude produces angle-dependent images of PP and PS reflection coefficients with the correct polarities, polarization, and amplitudes. It thus simplifies the process of obtaining PP and PS angle-domain common-image gathers (ADCIGs); it takes less effort to generate ADCIGs from vector data than from scalar data. Besides P- and S-wave decomposition, separation of up- and down-going waves is also part of the processing of multi-component recorded data and propagating wavefields. A complex-trace-based up/down separation approach is extended from acoustic to elastic wavefields and combined with P- and S-wave decomposition by decoupled propagation. This eliminates the need for a Fourier transform over time, thereby significantly reducing the storage cost and improving computational efficiency.
Wavefield decomposition is applied to both synthetic elastic VSP data and propagating wavefield snapshots. Poynting vectors obtained from the particle-velocity and stress fields after P/S and up/down decompositions are much more accurate than those obtained without them. The up/down separation algorithm is also applicable in acoustic RTMs, where both the (forward-time extrapolated) source and (reverse-time extrapolated) receiver wavefields are decomposed into up-going and down-going parts. Together with the cross-correlation imaging condition, four images (down-up, up-down, up-up and down-down) are generated, which facilitate the analysis of the artifacts and imaging ability of each. Artifacts may exist in all the decomposed images, but their positions and types differ. The causes of artifacts in the different images are explained and illustrated with sketches and numerical tests.

  3. Amino acid "little Big Bang": representing amino acid substitution matrices as dot products of Euclidian vectors.

    PubMed

    Zimmermann, Karel; Gibrat, Jean-François

    2010-01-04

    Sequence comparisons make use of a one-letter representation for amino acids, the necessary quantitative information being supplied by the substitution matrices. This paper deals with the problem of finding a representation that provides a comprehensive description of amino acid intrinsic properties consistent with the substitution matrices. We present a Euclidian vector representation of the amino acids, obtained by the singular value decomposition of the substitution matrices. The substitution matrix entries correspond to the dot product of amino acid vectors. We apply this vector encoding to the study of the relative importance of various amino acid physicochemical properties upon the substitution matrices. We also characterize and compare the PAM and BLOSUM series substitution matrices. This vector encoding introduces a Euclidian metric in the amino acid space, consistent with substitution matrices. Such a numerical description of the amino acid is useful when intrinsic properties of amino acids are necessary, for instance, building sequence profiles or finding consensus sequences, using machine learning algorithms such as Support Vector Machine and Neural Networks algorithms.
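
    The vector encoding can be sketched on a small symmetric positive semidefinite matrix, where an exact Euclidean representation exists (real substitution matrices are generally not positive semidefinite and need the more careful treatment described in the paper); the matrix below is invented.

```python
import numpy as np

# Small symmetric positive definite "substitution-like" matrix (a Gram
# matrix, so an exact vector representation exists).
S = np.array([
    [4.0, 2.0, 0.0],
    [2.0, 3.0, 1.0],
    [0.0, 1.0, 2.0],
])

# Eigendecomposition S = V diag(w) V^T; the "amino acid" vectors are the
# rows of V * sqrt(w), so that S[i, j] = <vec_i, vec_j>.
w, V = np.linalg.eigh(S)
vecs = V * np.sqrt(np.clip(w, 0.0, None))   # clip guards tiny negatives

print(np.round(vecs @ vecs.T, 6))           # reproduces S
```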

  4. Quantization of Electromagnetic Fields in Cavities

    NASA Technical Reports Server (NTRS)

    Kakazu, Kiyotaka; Oshiro, Kazunori

    1996-01-01

    A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.
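
    As a numerical aside, the resonant frequencies that index the cavity's vector mode functions follow the standard rectangular-cavity formula f_lmn = (c/2) sqrt((l/Lx)^2 + (m/Ly)^2 + (n/Lz)^2); the cavity dimensions below are arbitrary, not taken from the paper.

```python
import numpy as np

c = 299792458.0                         # speed of light, m/s
Lx, Ly, Lz = 0.02, 0.03, 0.04           # cavity dimensions, m (illustrative)

def mode_freq(l, m, n):
    """Resonant frequency (Hz) of mode (l, m, n) in a rectangular cavity."""
    return (c / 2.0) * np.sqrt((l / Lx)**2 + (m / Ly)**2 + (n / Lz)**2)

print(f"{mode_freq(1, 1, 0) / 1e9:.3f} GHz")
```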

  5. On the use of the singular value decomposition for text retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Husbands, P.; Simon, H.D.; Ding, C.

    2000-12-04

    The use of the Singular Value Decomposition (SVD) has been proposed for text retrieval in several recent works. This technique uses the SVD to project very high dimensional document and query vectors into a low dimensional space. In this new space it is hoped that the underlying structure of the collection is revealed, thus enhancing retrieval performance. Theoretical results have provided some evidence for this claim and to some extent experiments have confirmed this. However, these studies have mostly used small test collections and simplified document models. In this work we investigate the use of the SVD on large document collections. We show that, if interpreted as a mechanism for representing the terms of the collection, this technique alone is insufficient for dealing with the variability in term occurrence. Section 2 introduces the text retrieval concepts necessary for our work. A short description of our experimental architecture is presented in Section 3. Section 4 describes how term occurrence variability affects the SVD and then shows how the decomposition influences retrieval performance. A possible way of improving SVD-based techniques is presented in Section 5, and conclusions are given in Section 6.

  6. Simplified moment tensor analysis and unified decomposition of acoustic emission source: Application to in situ hydrofracturing test

    NASA Astrophysics Data System (ADS)

    Ohtsu, Masayasu

    1991-04-01

    An application of a moment tensor analysis to acoustic emission (AE) is studied to elucidate crack types and orientations of AE sources. In the analysis, simplified treatment is desirable, because hundreds of AE records are obtained from just one experiment and thus sophisticated treatment is realistically cumbersome. Consequently, a moment tensor inversion based on P wave amplitude is employed to determine six independent tensor components. Selecting only the P wave portion from the full-space Green's function of homogeneous and isotropic material, a computer code named SiGMA (simplified Green's functions for the moment tensor analysis) is developed for the AE inversion analysis. To classify crack type and to determine crack orientation from moment tensor components, a unified decomposition of eigenvalues into a double-couple (DC) part, a compensated linear vector dipole (CLVD) part, and an isotropic part is proposed. The aim of the decomposition is to determine the proportion of shear contribution (DC) and tensile contribution (CLVD + isotropic) on AE sources and to classify cracks into a crack type of the dominant motion. Crack orientations determined from eigenvectors are presented as crack-opening vectors for tensile cracks and fault motion vectors for shear cracks, instead of stereonets. The SiGMA inversion and the unified decomposition are applied to synthetic data and AE waveforms detected during an in situ hydrofracturing test. To check the accuracy of the procedure, numerical experiments are performed on the synthetic waveforms, including cases with 10% random noise added. Results show reasonable agreement with assumed crack configurations. Although the maximum error is approximately 10% with respect to the ratios, the differences on crack orientations are less than 7°. AE waveforms detected by eight accelerometers deployed during the hydrofracturing test are analyzed. 
Crack types and orientations determined are in reasonable agreement with a predicted failure plane from borehole TV observation. The results suggest that tensile cracks are generated first at weak seams and then shear cracks follow on the opened joints.
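
    As a hedged illustration of the eigenvalue split described above, the sketch below implements one common isotropic/DC/CLVD convention (epsilon taken as the smallest-magnitude deviatoric eigenvalue over the largest, CLVD fraction 2|epsilon|); the exact normalization used by SiGMA may differ, and the function name and output keys are illustrative.

```python
import numpy as np

def dc_clvd_iso(M):
    """Split a nonzero symmetric moment tensor into iso/DC/CLVD fractions.

    One common convention (not necessarily SiGMA's exact normalization):
    iso = tr(M)/3; the deviatoric part is split using
    epsilon = (smallest-|.| deviatoric eigenvalue) / (largest-|.| one)."""
    lam = np.linalg.eigvalsh(M)
    iso = lam.sum() / 3.0
    dev = lam - iso
    dev = dev[np.argsort(np.abs(dev))]      # |dev[0]| <= |dev[1]| <= |dev[2]|
    eps = dev[0] / abs(dev[2]) if dev[2] != 0 else 0.0
    m0 = abs(iso) + abs(dev[2])             # crude size normalization
    return {"iso": abs(iso) / m0,
            "clvd": 2 * abs(eps) * abs(dev[2]) / m0,
            "dc": (1 - 2 * abs(eps)) * abs(dev[2]) / m0}

# A pure double-couple (shear) source: eigenvalues (1, 0, -1).
shear = dc_clvd_iso(np.diag([1.0, 0.0, -1.0]))
```

    The three fractions sum to one by construction, so a source can be classified by its dominant component, as in the unified decomposition above.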

  7. On the Hilbert-Huang Transform Data Processing System Development

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Huang, Norden E.; Cornwell, Evette; Smith, Darell

    2003-01-01

    One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high-performance digital equivalent, the Fast Fourier Transform (FFT). The long-standing Fourier view of nonlinear mechanics, and the associated FFT (a fairly recent development), carry strong a priori assumptions about the source data, such as linearity and stationarity. Natural phenomena measurements are essentially nonlinear and nonstationary. A very recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution of the nonlinear class of spectrum analysis problems. Using the Empirical Mode Decomposition (EMD) followed by the Hilbert Transform (HT) of the empirical decomposition data, the HHT allows spectrum analysis of nonlinear and nonstationary data by means of an engineering a posteriori data processing method based on the EMD algorithm. This results in a non-constrained decomposition of a source real-valued data vector into a finite set of Intrinsic Mode Functions (IMF) that can be further analyzed for spectrum interpretation by the classical Hilbert Transform. This paper describes phase one of the development of a new engineering tool, the HHT Data Processing System (HHTDPS). The HHTDPS allows applying the HHT to a data vector in a fashion similar to the heritage FFT. It is a generic, low-cost, high-performance personal computer (PC) based system that implements the HHT computational algorithms in a user-friendly, file-driven environment. This paper also presents a quantitative analysis for a complex waveform data sample, a summary of technology commercialization efforts and the lessons learned from this new technology development.
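
    The Hilbert-transform stage of the HHT can be sketched in a few lines. The minimal NumPy snippet below (not the HHTDPS itself; the EMD sifting stage is omitted) builds the analytic signal via the FFT and extracts instantaneous amplitude and frequency, applied here to a pure tone rather than to an IMF.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero negative frequencies, double positives."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# A pure 50 Hz tone sampled at 1 kHz: amplitude and frequency are constant.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * 50.0 * t)
z = analytic_signal(x)
amplitude = np.abs(z)                               # instantaneous amplitude
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)       # instantaneous frequency
```

    For a genuine HHT, the same two lines of amplitude/frequency extraction would be applied to each IMF produced by the EMD sifting procedure.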

  8. Geometric decompositions of collective motion

    NASA Astrophysics Data System (ADS)

    Mischiati, Matteo; Krishnaprasad, P. S.

    2017-04-01

    Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes-including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots.

  9. Geometric decompositions of collective motion

    PubMed Central

    Krishnaprasad, P. S.

    2017-01-01

    Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes—including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots. PMID:28484319

  10. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, since Shannon entropy provides an incomplete description, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
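
    A hedged sketch of the energy-rate feature idea: the snippet below runs a Haar wavelet packet decomposition (the paper's wavelet and WTFER normalization may differ) and returns the fraction of signal energy in each terminal sub-band; such a vector would then feed a classifier such as a random forest.

```python
import numpy as np

def haar_wpd_energy_rates(x, depth):
    """Haar wavelet packet decomposition; per-sub-band energy fractions.

    Requires len(x) divisible by 2**depth; returns 2**depth fractions."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        nxt = []
        for c in nodes:
            a = (c[0::2] + c[1::2]) / np.sqrt(2)   # low-pass (approximation)
            d = (c[0::2] - c[1::2]) / np.sqrt(2)   # high-pass (detail)
            nxt.extend([a, d])
        nodes = nxt
    energies = np.array([np.sum(c ** 2) for c in nodes])
    return energies / energies.sum()

# A low-frequency vibration-like test signal plus a little noise.
rng = np.random.default_rng(0)
t = np.arange(1024) / 1024.0
signal = np.sin(2 * np.pi * 8 * t) + 0.1 * rng.standard_normal(1024)
rates = haar_wpd_energy_rates(signal, depth=3)     # 8 sub-band energy rates
```

    Because the Haar transform is orthonormal, the energy rates sum to one, which makes them comparable across signals of different overall amplitude.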

  11. Reduced Order Model Basis Vector Generation: Generates Basis Vectors for ROMs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arrighi, Bill

    2016-03-03

    libROM is a library that implements order reduction via singular value decomposition (SVD) of sampled state vectors. It implements 2 parallel, incremental SVD algorithms and one serial, non-incremental algorithm. It also provides a mechanism for adaptive sampling of basis vectors.
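
    libROM itself is a parallel C++ library, but its serial, non-incremental path can be illustrated in NumPy: collect sampled state vectors as columns of a snapshot matrix, take an SVD, and keep the leading singular vectors as the reduced-order basis. The sizes and the 99.99% energy threshold below are illustrative choices, not libROM defaults.

```python
import numpy as np

# Snapshot matrix: each column is a sampled state vector of the full model.
rng = np.random.default_rng(1)
n_state, n_snap = 200, 40
modes = rng.standard_normal((n_state, 3))            # 3 dominant directions
coeffs = rng.standard_normal((3, n_snap))
snapshots = modes @ coeffs + 1e-6 * rng.standard_normal((n_state, n_snap))

# Serial, non-incremental SVD: keep modes capturing 99.99% of the energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999) + 1)
basis = U[:, :r]                                     # reduced-order basis
```

    The incremental variants update this basis one snapshot at a time instead of storing the full snapshot matrix, which is what makes them practical in parallel, memory-constrained settings.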

  12. On Electromagnetic Power Waves and Power Density Components.

    NASA Astrophysics Data System (ADS)

    Petzold, Donald Wayne

    1980-12-01

    On January 10, 1884 Lord Rayleigh presented a paper entitled "On the Transfer of Energy in the Electromagnetic Field" to the Royal Society of London. This paper had been authored by the late Fellow of Trinity College, Cambridge, Professor J. H. Poynting and in it he claimed that there was a general law for the transfer of electromagnetic energy. He argued that associated with each point in space is a quantity, that has since been called the Poynting vector, that is a measure of the rate of energy flow per unit area. His analysis was concerned with the integration of this power density vector at all points over an enclosing surface of a specific volume. The interpretation of this Poynting vector as a true measure of the local power density was viewed with great skepticism unless the vector was integrated over a closed surface, as the development of the concept required. However, within the last decade or so Shadowitz indicates that a number of prominent authors have argued that the criticism of the interpretation of Poynting's vector as a local power density vector is unjustified. The present paper is not concerned with these arguments but instead is concerned with a decomposition of Poynting's power density vector into two and only two components: one vector which has the same direction as Poynting's vector and which is called the forward power density vector, and another vector, directed opposite to the Poynting vector and called the reverse power density vector. These new local forward and reverse power density vectors will be shown to be dependent upon forward and reverse power wave vectors and these vectors in turn will be related to newly defined forward and reverse components of the electric and magnetic fields. The sum of these forward and reverse power density vectors, which is simply the original Poynting vector, is associated with the total electromagnetic energy traveling past the local point. 
Another vector, which is the difference between the forward and reverse power density vectors and which will be shown to be associated with the total electric and magnetic field energy densities existing at a local point, will also be introduced. These local forward and reverse power density vectors may be integrated over a surface to determine the forward and reverse powers, and from these results problems related to maximum power transfer or the efficiency of electromagnetic energy transmission in space may be studied in a manner similar to that presently being done with transmission lines, waveguides, and more recently with two-port and multiport lumped-parameter systems. These new forward and reverse power density vectors at a point in space are analogous to the forward and reverse voltages, currents, and power waves used with a transmission line, waveguide, or port. These power wave vectors in space are a generalization of the power waves as developed by Penfield, Youla, and Kurokawa and used with the scattering parameters associated with transmission lines, waveguides and ports.

  13. Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Meadows, Steven

    1997-10-01

    Color image coding at low bit rates is an area of research that is only now being addressed in the literature, since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp (0.16 bpp for each color plane or monochrome image; CR 50:1) by using the RGB color space. Further tuning of the AFLC-VQ and the addition of an entropy coder module after the VQ stage yield extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or the HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reduction.
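
    For readers unfamiliar with the VQ building block, the sketch below trains a plain Lloyd (k-means) codebook on training vectors; this is the generic technique only, not the adaptive AFLC-VQ of the paper, and the deterministic initialization and data are illustrative.

```python
import numpy as np

def lloyd_vq(vectors, k, iters=20):
    """Plain Lloyd (k-means) codebook training for vector quantization."""
    idx = np.linspace(0, len(vectors) - 1, k).astype(int)  # deterministic init
    codebook = vectors[idx].astype(float).copy()
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # Assign each training vector to its nearest codeword.
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each codeword to the centroid of its assigned vectors.
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = vectors[labels == j].mean(axis=0)
    return codebook, labels

# 4-dimensional training vectors drawn from two well-separated clusters.
rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0.0, 0.1, (100, 4)),
                  rng.normal(5.0, 0.1, (100, 4))])
codebook, labels = lloyd_vq(data, k=2)
```

    In subband VQ coders, the training vectors would be blocks of subband coefficients, and each block is transmitted as the index of its nearest codeword.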

  14. Using domain decomposition in the multigrid NAS parallel benchmark on the Fujitsu VPP500

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J.C.H.; Lung, H.; Katsumata, Y.

    1995-12-01

    In this paper, we demonstrate how domain decomposition can be applied to the multigrid algorithm to convert the code for MPP architectures. We also discuss the performance and scalability of this implementation on the new product line of Fujitsu's vector parallel computer, the VPP500. This computer uses Fujitsu's well-known vector processor as the PE, each rated at 1.6 GFLOPS. A high-speed crossbar network rated at 800 MB/s provides the inter-PE communication. The results show that physical domain decomposition is the best way to solve MG problems on the VPP500.

  15. Neurocomputing strategies in decomposition based structural design

    NASA Technical Reports Server (NTRS)

    Szewczyk, Z.; Hajela, P.

    1993-01-01

    The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.

  16. Protein folding: complex potential for the driving force in a two-dimensional space of collective variables.

    PubMed

    Chekmarev, Sergei F

    2013-10-14

    Using the Helmholtz decomposition of the vector field of folding fluxes in a two-dimensional space of collective variables, a potential of the driving force for protein folding is introduced. The potential has two components. One component is responsible for the source and sink of the folding flows, which represent, respectively, the unfolded states and the native state of the protein; the other, which accounts for the flow vorticity inherently generated at the periphery of the flow field, is responsible for the canalization of the flow between the source and sink. The theoretical consideration is illustrated by calculations for a model β-hairpin protein.
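
    The Helmholtz split itself is easy to demonstrate numerically. The sketch below (a generic FFT-based projection for a periodic 2-D field, not the paper's flux-field computation) separates a vector field into its curl-free and divergence-free parts and checks it on a pure gradient field.

```python
import numpy as np

def helmholtz_2d(u, v):
    """FFT-based Helmholtz split of a periodic 2-D vector field (u, v)."""
    n = u.shape[0]
    kx = np.fft.fftfreq(n)[None, :]     # x-frequencies vary along columns
    ky = np.fft.fftfreq(n)[:, None]     # y-frequencies vary along rows
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                      # avoid 0/0 at the mean mode
    proj = (uh * kx + vh * ky) / k2     # projection of the field onto k
    u_cf = np.real(np.fft.ifft2(proj * kx))   # curl-free (longitudinal) part
    v_cf = np.real(np.fft.ifft2(proj * ky))
    return (u_cf, v_cf), (u - u_cf, v - v_cf)  # plus divergence-free remainder

# Test field: the gradient of phi = sin(2*pi*x)*cos(2*pi*y) is purely curl-free.
n = 64
x = np.arange(n) / n
X, Y = np.meshgrid(x, x)                # X varies along columns (axis 1)
u = 2 * np.pi * np.cos(2 * np.pi * X) * np.cos(2 * np.pi * Y)
v = -2 * np.pi * np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)
(u_cf, v_cf), (u_df, v_df) = helmholtz_2d(u, v)
```

    For a gradient field the divergence-free remainder vanishes; in the folding-flux setting, the curl-free part carries the source/sink structure and the remainder carries the vorticity.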

  17. On Fock-space representations of quantized enveloping algebras related to noncommutative differential geometry

    NASA Astrophysics Data System (ADS)

    Jurčo, B.; Schlieker, M.

    1995-07-01

    In this paper, Fock-space representations (contragradient Verma modules) of the quantized enveloping algebras that are natural from the geometrical point of view are constructed explicitly. To do so, one starts from the Gauss decomposition of the quantum group and introduces differential operators on the corresponding q-deformed flag manifold (considered as a left comodule for the quantum group) by projecting onto it the right action of the quantized enveloping algebra on the quantum group. Finally, the representatives of the elements of the quantized enveloping algebra corresponding to the left-invariant vector fields on the quantum group are expressed as first-order differential operators on the q-deformed flag manifold.

  18. Polar decomposition for attitude determination from vector observations

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.

    1993-01-01

    This work treats the problem of weighted least-squares fitting of a 3D Euclidean-coordinate transformation matrix to a set of unit vectors measured in the reference and transformed coordinates. A closed-form analytic solution to the problem is re-derived, and it is clearly demonstrated that the solution is the closest orthogonal matrix to a matrix defined by the measured vectors and their weights. Several known algorithms for computing the analytic closed-form solution are considered. An algorithm is discussed that is based on the polar decomposition of a matrix into the product of the closest unitary matrix and a Hermitian matrix. A somewhat longer, improved algorithm is suggested as well. A comparison of several algorithms is carried out using simulated data as well as real data from the Upper Atmosphere Research Satellite, based on accuracy and time consumption. It is concluded that the algorithms based on polar decomposition yield a simple although somewhat less accurate solution; the precision of these algorithms increases with the number of measured vectors and with the accuracy of their measurement.
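
    The "closest orthogonal matrix" step can be sketched directly: the orthogonal factor of the polar decomposition of the weighted attitude profile matrix is obtained from its SVD. The snippet below is a generic illustration (the rotation, weights, and noise level are made up, and the paper's specific algorithms are not reproduced).

```python
import numpy as np

def closest_rotation(B):
    """Orthogonal (proper-rotation) factor of the polar decomposition of B."""
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))          # enforce det = +1
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Simulated attitude determination: body vectors obs ~ A_true @ refs + noise.
rng = np.random.default_rng(3)
angle = 0.3
A_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
refs = rng.standard_normal((3, 8))
refs /= np.linalg.norm(refs, axis=0)            # unit reference vectors
obs = A_true @ refs + 1e-3 * rng.standard_normal((3, 8))
B = obs @ refs.T                                # (equal-weight) profile matrix
A_est = closest_rotation(B)
```

    Unequal measurement weights would simply scale the columns before forming B; the closest-rotation step is unchanged.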

  19. Application of wavelet-based multi-model Kalman filters to real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Chou, Chien-Ming; Wang, Ru-Yih

    2004-04-01

    This paper presents the application of a multimodel method using a wavelet-based Kalman filter (WKF) bank to simultaneously estimate decomposed state variables and unknown parameters for real-time flood forecasting. Applying the Haar wavelet transform alters the state vector and input vector of the state space. In this way, each new state vector and input vector is described by an overall detail plus approximation, which allows the WKF to simultaneously estimate and decompose the state variables. The wavelet-based multimodel Kalman filter (WMKF) is a multimodel Kalman filter (MKF) in which the Kalman filter has been replaced by a WKF. The WMKF then obtains M estimated state vectors. Next, the M state estimates, each weighted by its probability, which is also determined on-line, are combined to form an optimal estimate. Validations conducted for the Wu-Tu watershed, a small watershed in Taiwan, have demonstrated that the method is effective because of the decomposition afforded by the wavelet transform, the adaptation of the time-varying Kalman filter and the characteristics of the multimodel method. Validation results also reveal that the resulting method enhances the accuracy of the runoff prediction of the rainfall-runoff process in the Wu-Tu watershed.
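
    The Haar split of a state vector into an approximation plus detail, which is what the WKF operates on, can be sketched in a few lines (the Kalman filtering itself is omitted; the state values below are arbitrary).

```python
import numpy as np

def haar_step(x):
    """One-level orthonormal Haar transform: approximation and detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail coefficients
    return a, d

def haar_inverse(a, d):
    """Exact inverse of one Haar level."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

state = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 6.0, 0.0])
a, d = haar_step(state)
recovered = haar_inverse(a, d)
```

    Because the transform is orthonormal and invertible, a Kalman filter run on the transformed state estimates the decomposed components without losing any information about the original state.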

  20. Vector autoregressive models: A Gini approach

    NASA Astrophysics Data System (ADS)

    Mussard, Stéphane; Ndiaye, Oumar Hamady

    2018-02-01

    In this paper, it is proven that the usual VAR models may be formulated in the Gini sense, that is, on an ℓ1 metric space. The Gini regression is robust to outliers; as a consequence, when the data are contaminated by extreme values, we show that semi-parametric VAR-Gini regressions may be used to obtain robust estimators. Inference about the estimators is made with the ℓ1 norm. Impulse response functions and Gini decompositions for prediction errors are also introduced. Finally, Granger causality tests are properly derived based on U-statistics.

  1. Efficient morse decompositions of vector fields.

    PubMed

    Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene

    2008-01-01

    Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories such as fixed points, periodic orbits, and separatrices, which are sensitive to noise and to errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretation. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structure of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful in applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural trade-off between the fineness of the MCGs and the computational cost. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach to constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces, including engine simulation data sets.
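
    The core idea, recurrent dynamics live in the strongly connected components of a cell-to-cell transition graph, can be sketched very crudely. The snippet below uses a single Euler step from each cell center as a non-rigorous stand-in for the tau-map outer approximation (the paper uses rigorous enclosures on triangle meshes); grid size and step are illustrative.

```python
import numpy as np

def morse_sets(field, n=16, h=0.5, lo=-1.0, hi=1.0):
    """Crude Morse decomposition: SCCs of a cell-to-cell transition graph.

    Each grid cell points to the cell reached by one Euler step from its
    center; recurrent (Morse) sets are the SCCs of size > 1 or self-loops."""
    w = (hi - lo) / n
    def cell_of(p):
        i = int(np.clip((p[0] - lo) // w, 0, n - 1))
        j = int(np.clip((p[1] - lo) // w, 0, n - 1))
        return i * n + j
    succ = {}
    for i in range(n):
        for j in range(n):
            c = np.array([lo + (i + 0.5) * w, lo + (j + 0.5) * w])
            succ[i * n + j] = [cell_of(c + h * field(c))]
    # Tarjan's algorithm for strongly connected components.
    index, low, on, st, comps = {}, {}, set(), [], []
    def strong(v, counter=[0]):
        index[v] = low[v] = counter[0]; counter[0] += 1
        st.append(v); on.add(v)
        for u in succ[v]:
            if u not in index:
                strong(u); low[v] = min(low[v], low[u])
            elif u in on:
                low[v] = min(low[v], index[u])
        if low[v] == index[v]:
            comp = []
            while True:
                u = st.pop(); on.discard(u); comp.append(u)
                if u == v:
                    break
            comps.append(comp)
    for v in succ:
        if v not in index:
            strong(v)
    return [c for c in comps if len(c) > 1 or c[0] in succ[c[0]]]

# Linear sink at the origin: the recurrent cells should surround (0, 0).
sets_found = morse_sets(lambda p: -p, n=16, h=0.5)
```

    Connecting the recurrent components by graph reachability would yield the directed MCG described above.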

  2. Biquaternion beamspace with its application to vector-sensor array direction findings and polarization estimations

    NASA Astrophysics Data System (ADS)

    Li, Dan; Xu, Feng; Jiang, Jing Fei; Zhang, Jian Qiu

    2017-12-01

    In this paper, a biquaternion beamspace, constructed by projecting the original data of an electromagnetic vector-sensor array into a subspace of a lower dimension via a quaternion transformation matrix, is first proposed. To estimate the direction and polarization angles of sources, biquaternion beamspace multiple signal classification (BB-MUSIC) estimators are then formulated. The analytical results show that the biquaternion beamspaces offer us some additional degrees of freedom to simultaneously achieve three goals. One is to save the memory spaces for storing the data covariance matrix and reduce the computation efforts of the eigen-decomposition. Another is to decouple the estimations of the sources' polarization parameters from those of their direction angles. The other is to blindly whiten the coherent noise of the six constituent antennas in each vector-sensor. It is also shown that the existing biquaternion multiple signal classification (BQ-MUSIC) estimator is a specific case of our BB-MUSIC ones. The simulation results verify the correctness and effectiveness of the analytical ones.

  3. Current harmonics elimination control method for six-phase PM synchronous motor drives.

    PubMed

    Yuan, Lei; Chen, Ming-liang; Shen, Jian-qing; Xiao, Fei

    2015-11-01

    To reduce the undesired 5th and 7th stator harmonic currents in the six-phase permanent magnet synchronous motor (PMSM), an improved vector control algorithm was proposed based on the vector space decomposition (VSD) transformation method, which controls the fundamental and harmonic subspaces separately. To improve the traditional VSD technique, a novel synchronous rotating coordinate transformation matrix is presented in this paper, so that a traditional PI controller in the d-q subspace suffices for zero steady-state error; the controller parameter design method is given by employing the internal model principle. Moreover, a current PI controller in parallel with a resonant controller is employed in the x-y subspace to compensate the specific 5th and 7th harmonic components. In addition, a new six-phase SVPWM algorithm based on VSD transformation theory is also proposed. Simulation and experimental results verify the effectiveness of the current decoupling vector controller. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Decomposition-aggregation stability analysis. [for large scale dynamic systems with application to spinning Skylab control system

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Weissenberger, S.; Cuk, S. M.

    1973-01-01

    This report presents the development and description of the decomposition aggregation approach to stability investigations of high dimension mathematical models of dynamic systems. The high dimension vector differential equation describing a large dynamic system is decomposed into a number of lower dimension vector differential equations which represent interconnected subsystems. Then a method is described by which the stability properties of each subsystem are aggregated into a single vector Liapunov function, representing the aggregate system model, consisting of subsystem Liapunov functions as components. A linear vector differential inequality is then formed in terms of the vector Liapunov function. The matrix of the model, which reflects the stability properties of the subsystems and the nature of their interconnections, is analyzed to conclude over-all system stability characteristics. The technique is applied in detail to investigate the stability characteristics of a dynamic model of a hypothetical spinning Skylab.

  5. Tensor gauge condition and tensor field decomposition

    NASA Astrophysics Data System (ADS)

    Zhu, Ben-Chao; Chen, Xiang-Song

    2015-10-01

    We discuss various proposals of separating a tensor field into pure-gauge and gauge-invariant components. Such tensor field decomposition is intimately related to the effort of identifying the real gravitational degrees of freedom out of the metric tensor in Einstein’s general relativity. We show that as for a vector field, the tensor field decomposition has exact correspondence to and can be derived from the gauge-fixing approach. The complication for the tensor field, however, is that there are infinitely many complete gauge conditions in contrast to the uniqueness of Coulomb gauge for a vector field. The cause of such complication, as we reveal, is the emergence of a peculiar gauge-invariant pure-gauge construction for any gauge field of spin ≥ 2. We make an extensive exploration of the complete tensor gauge conditions and their corresponding tensor field decompositions, regarding mathematical structures, equations of motion for the fields and nonlinear properties. Apparently, no single choice is superior in all aspects, due to an awkward fact that no gauge-fixing can reduce a tensor field to be purely dynamical (i.e. transverse and traceless), as can the Coulomb gauge in a vector case.

  6. Domain decomposition methods in aerodynamics

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.; Saltz, Joel

    1990-01-01

    The compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second-order accuracy in space. Two ways to achieve parallelism are tested: one makes use of the parallelism inherent in triangular solves, and the other employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering, which involves interpreting the triangular matrix as a directed graph and analyzing the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performance of two ways of partitioning the domain, strips and slabs, is compared. Results on a Cray Y-MP are reported for an inviscid transonic test case. The performance of linear algebra kernels is also reported.
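
    The domain decomposition idea is easiest to see on a toy problem. The sketch below (a 1-D Poisson model problem, not the paper's Euler solver) runs alternating overlapping Schwarz sweeps: each subdomain is solved exactly, using the other subdomain's latest values as interface boundary conditions. Grid size, overlap, and sweep count are illustrative.

```python
import numpy as np

# 1-D Poisson: -u'' = f on [0, 1], u(0) = u(1) = 0.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)                    # exact solution: u(x) = x(1 - x)/2
u = np.zeros(n)

def solve_sub(u, lo, hi):
    """Exact solve on the subdomain (lo, hi), Dirichlet data from u[lo], u[hi]."""
    m = hi - lo - 1               # interior unknowns of the subdomain
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h ** 2
    b = f[lo + 1:hi].copy()
    b[0] += u[lo] / h ** 2        # interface values enter the right-hand side
    b[-1] += u[hi] / h ** 2
    u[lo + 1:hi] = np.linalg.solve(A, b)

for _ in range(30):               # alternating Schwarz sweeps
    solve_sub(u, 0, 60)           # left subdomain,  [0, 0.6]
    solve_sub(u, 40, n - 1)       # right subdomain, [0.4, 1.0]

exact = x * (1 - x) / 2
```

    The overlap region [0.4, 0.6] is what drives the geometric convergence of the iteration; shrinking it slows convergence, mirroring the strips-versus-slabs partitioning trade-offs discussed above.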

  7. Propagation and wavefront ambiguity of linear nondiffracting beams

    NASA Astrophysics Data System (ADS)

    Grunwald, R.; Bock, M.

    2014-02-01

    Ultrashort-pulsed Bessel and Airy beams in free space are often interpreted as "linear light bullets". Usually, interconnected intensity profiles are considered to "propagate" along arbitrary pathways, which can even follow curved trajectories. A more detailed analysis, however, shows that this picture is adequate only in situations that do not require consideration of the transport of optical signals or of causality. To also cover these special cases, a generalization of the terms "beam" and "propagation" is necessary. The problem becomes clearer when the angular spectra of the propagating wave fields are represented by rays or Poynting vectors. It is known that quasi-nondiffracting beams can be described as caustics of ray bundles. Their decomposition into Poynting vectors by Shack-Hartmann sensors indicates that, within their classical definition, the corresponding local wavefronts are ambiguous and concepts based on energy density are not appropriate for describing the propagation completely. For this reason, quantitative parameters like the beam propagation factor have to be treated with caution as well. For applications like communication or optical computing, alternative descriptions are required. A heuristic approach based on vector-field-based information transport and Fourier analysis is proposed here. Continuity and discontinuity of far-field distributions in space and time are discussed. Quantum aspects of propagation are briefly addressed.

  8. Multiple image encryption scheme based on pixel exchange operation and vector decomposition

    NASA Astrophysics Data System (ADS)

    Xiong, Y.; Quan, C.; Tay, C. J.

    2018-02-01

    We propose a new multiple-image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, the original images are passed through a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. The scrambled images are encrypted into phase information using the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and the synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as the input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, the pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.

  9. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    PubMed Central

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-01-01

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431

  10. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array.

    PubMed

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-04-27

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.
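The CPD baseline that this record compares against can be sketched with a minimal alternating-least-squares (ALS) solver for a 3-way tensor. This is a generic textbook CPD, not the paper's Type-2 BCD; all names are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (IJ x R)."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cpd_als(T, rank, iters=500, seed=0):
    """Alternating least squares for a rank-R canonical polyadic decomposition."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in T.shape)
    for _ in range(iters):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Recover a synthetic rank-2 tensor built from known factors
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cpd_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
```

For an exact low-rank tensor the ALS iteration drives the reconstruction error toward zero; the BCD of the paper generalizes the rank-one terms to rank-(L1,L2,·) blocks.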

  11. Regularization of Mickelsson generators for nonexceptional quantum groups

    NASA Astrophysics Data System (ADS)

    Mudrov, A. I.

    2017-08-01

    Let g' ⊂ g be a pair of Lie algebras of either symplectic or orthogonal infinitesimal endomorphisms of the complex vector spaces C^(N-2) ⊂ C^N, and let U_q(g') ⊂ U_q(g) be a pair of quantum groups with a triangular decomposition U_q(g) = U_q(g_-)U_q(g_+)U_q(h). Let Z_q(g, g') be the corresponding step algebra. We assume that its generators are rational trigonometric functions h* → U_q(g_±). We describe their regularization such that the resulting generators do not vanish for any choice of the weight.

  12. Global Solutions to Repulsive Hookean Elastodynamics

    NASA Astrophysics Data System (ADS)

    Hu, Xianpeng; Masmoudi, Nader

    2017-01-01

    The global existence of classical solutions to three-dimensional repulsive Hookean elastodynamics around an equilibrium is considered. By linearization and Hodge's decomposition, the compressible part of the velocity, the density, and the compressible part of the transpose of the deformation gradient satisfy Klein-Gordon equations with speed √2, while the incompressible parts of the velocity and of the transpose of the deformation gradient satisfy wave equations with speed one. The space-time resonance method combined with the vector field method is used in a novel way to obtain the decay of the solution and hence global existence.

  13. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    NASA Astrophysics Data System (ADS)

    Loring, B.; Karimabadi, H.; Rortershteyn, V.

    2015-10-01

    The surface line integral convolution (LIC) visualization technique produces dense visualizations of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer, and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL, we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.

  14. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim

    2014-07-01

    The surface line integral convolution (LIC) visualization technique produces dense visualizations of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer, and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL, we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
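The core LIC operation described in these two records — averaging a noise texture along streamlines of the vector field — can be sketched in a few lines of numpy. This is a minimal CPU illustration on a periodic 2D grid, not the authors' screen-space GPU algorithm; the circular test field is an assumption.

```python
import numpy as np

def lic(vx, vy, noise, length=10, step=0.5):
    """Minimal line integral convolution: average a noise texture along
    streamlines of the (vx, vy) field, traced with Euler steps (toroidal wrap)."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            acc, cnt = 0.0, 0
            for sign in (1.0, -1.0):              # trace forward and backward
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(py) % h, int(px) % w
                    acc += noise[i, j]
                    cnt += 1
                    mag = float(np.hypot(vx[i, j], vy[i, j]))
                    if mag == 0.0:                # stagnation point: stop tracing
                        break
                    px += sign * step * vx[i, j] / mag
                    py += sign * step * vy[i, j] / mag
            out[y, x] = acc / cnt
    return out

ys, xs = np.mgrid[0:64, 0:64] - 32.0              # circular flow about the center
noise = np.random.default_rng(1).random((64, 64))
img = lic(-ys, xs, noise)
```

Averaging along streamlines correlates pixel intensities in the flow direction, so the output shows the field's integral curves as smeared noise streaks.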

  15. Scalar/Vector potential formulation for compressible viscous unsteady flows

    NASA Technical Reports Server (NTRS)

    Morino, L.

    1985-01-01

    A scalar/vector potential formulation for unsteady viscous compressible flows is presented. The formulation is based on the classical Helmholtz decomposition of any vector field into the sum of an irrotational field and a solenoidal field. The formulation is derived from fundamental principles of mechanics and thermodynamics. The governing equations for the scalar potential and the vector potential are obtained without restrictive assumptions on the equation of state or on the constitutive relations for the stress tensor and the heat flux vector.

  16. Clustering Tree-structured Data on Manifold

    PubMed Central

    Lu, Na; Miao, Hongyu

    2016-01-01

    Tree-structured data usually contain both topological and geometrical information, and are necessarily considered on a manifold instead of in Euclidean space for appropriate data parameterization and analysis. In this study, we propose a novel tree-structured data parameterization, called the Topology-Attribute matrix (T-A matrix), so that the data clustering task can be conducted on a matrix manifold. We incorporate the structure constraints embedded in the data into the non-negative matrix factorization method to determine meta-trees from the T-A matrix, and the signature vector of each single tree can then be extracted by meta-tree decomposition. The meta-tree space turns out to be a cone space, in which we explore the distance metric and implement the clustering algorithm based on concepts like the Fréchet mean. Finally, the T-A matrix based clustering (TAMBAC) framework is evaluated and compared using both simulated data and real retinal images to illustrate its efficiency and accuracy. PMID:26660696
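The non-negative matrix factorization step used to determine meta-trees can be illustrated with the standard Lee-Seung multiplicative updates — a plain NMF sketch, without the paper's structure constraints:

```python
import numpy as np

def nmf(X, r, iters=500, seed=0):
    """Lee-Seung multiplicative updates for X ≈ W @ H under Frobenius loss.
    Non-negativity of W and H is preserved by the multiplicative form."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-9                                   # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Exact non-negative rank-2 matrix as a toy stand-in for a T-A matrix
rng = np.random.default_rng(2)
X = rng.random((6, 2)) @ rng.random((2, 5))
W, H = nmf(X, r=2)
```

In the paper's setting the columns of W would play the role of meta-trees, and each tree's signature vector would come from its coefficients in H.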

  17. Utilizing the Structure and Content Information for XML Document Clustering

    NASA Astrophysics Data System (ADS)

    Tran, Tien; Kutty, Sangeetha; Nayak, Richi

    This paper reports on the experiments and results of a clustering approach used in the INEX 2008 document mining challenge. The clustering approach utilizes both the structure and the content information of the Wikipedia XML document collection. A latent semantic kernel (LSK) is used to measure the semantic similarity between XML documents based on their content features. The construction of a latent semantic kernel involves computing the singular value decomposition (SVD). On a large feature space matrix, the computation of the SVD is very expensive in terms of time and memory requirements. Thus, in this clustering approach, the dimension of the document space of the term-document matrix is reduced before performing the SVD. The document space reduction is based on the common structural information of the Wikipedia XML document collection. The proposed clustering approach has been shown to be effective on the Wikipedia collection in the INEX 2008 document mining challenge.
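A latent semantic kernel of the kind described can be sketched by truncating the SVD of a term-document matrix and taking inner products in the latent space. The tiny matrix below is an invented example, not INEX data:

```python
import numpy as np

def latent_semantic_kernel(A, k):
    """Truncated SVD of a term-document matrix A (terms x documents);
    the kernel entry (i, j) is the inner product of documents i and j
    in the k-dimensional latent space."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    docs = s[:k, None] * Vt[:k, :]        # k-dim document representations
    return docs.T @ docs                  # document-document similarity matrix

A = np.array([[2., 0., 1., 0.],          # two topic blocks: docs {0,2} share
              [1., 0., 2., 0.],          # terms 0-1, docs {1,3} share terms 2-3
              [0., 3., 0., 1.],
              [0., 1., 0., 2.]])
K = latent_semantic_kernel(A, k=2)
```

Documents 0 and 2 share vocabulary, so their kernel value exceeds that of the unrelated pair (0, 1); this similarity matrix is what the clustering then operates on.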

  18. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.

    PubMed

    Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong

    2018-05-11

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states show a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training iterations (14). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.

  19. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN

    PubMed Central

    Cheng, Gang; Chen, Xihui

    2018-01-01

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states show a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training iterations (14). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears. PMID:29751671
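The "partition the mode matrix, take each submatrix's singular values" step of this method can be sketched directly. The random "mode matrix" below is a stand-in for real VMD output, and the partition count is an assumption:

```python
import numpy as np

def singular_value_features(mode_matrix, n_parts=8):
    """Partition a (modes x samples) matrix column-wise into submatrices and
    take each submatrix's singular values as a local feature vector."""
    features = [np.linalg.svd(sub, compute_uv=False)
                for sub in np.array_split(mode_matrix, n_parts, axis=1)]
    return np.stack(features)              # one row of singular values per submatrix

rng = np.random.default_rng(0)
modes = rng.standard_normal((4, 1024))     # e.g. 4 VMD mode components
F = singular_value_features(modes, n_parts=8)
```

Each row of F is one singular value vector; stacking them by submatrix location yields the singular value vector matrix that feeds the CNN.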

  20. Visualization of x-ray computer tomography using computer-generated holography

    NASA Astrophysics Data System (ADS)

    Daibo, Masahiro; Tayama, Norio

    1998-09-01

    A theory for converting x-ray projection data directly into a hologram, obtained by combining computed tomography (CT) with the computer-generated hologram (CGH), is proposed. The purpose of this study is to offer a theory for realizing an all-electronic, high-speed, see-through 3D visualization system, for application to medical diagnosis and non-destructive testing. First, the CT reconstruction is expressed using the pseudo-inverse matrix, which is obtained by the singular value decomposition. The CGH is expressed in matrix form. Next, the 'projection to hologram conversion' (PTHC) matrix is calculated by multiplying the phase matrix of the CGH with the pseudo-inverse matrix of the CT. Finally, the projection vector is converted directly into the hologram vector by multiplying the PTHC matrix with the projection vector. By incorporating holographic analog computation into CT reconstruction, the amount of calculation is drastically reduced. We demonstrate a CT cross section reconstructed by a He-Ne laser in 3D space from real x-ray projection data acquired with x-ray television equipment, using our direct conversion technique.
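The matrix pipeline described — pseudo-inverse CT reconstruction folded into the CGH phase matrix so that projections map straight to the hologram — can be sketched with random stand-in matrices (the sizes and matrices are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_ray, n_holo = 16, 24, 32
A = rng.random((n_ray, n_pix))      # CT projection (system) matrix: rays x pixels
H = rng.random((n_holo, n_pix))     # CGH phase matrix: hologram samples x pixels

A_pinv = np.linalg.pinv(A)          # pseudo-inverse via singular value decomposition
PTHC = H @ A_pinv                   # 'projection to hologram conversion' matrix

x_true = rng.random(n_pix)          # unknown cross-section
p = A @ x_true                      # measured projection vector
hologram = PTHC @ p                 # projections -> hologram, one multiplication
```

Because the PTHC matrix is precomputed, the per-frame cost collapses to a single matrix-vector product, which is the calculation saving the abstract refers to.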

  1. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges, such as the tree-like structure of the coronary vessels in digital angiograms. The method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the discrete cosine transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.
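The codebook training behind vector quantization is typically a k-means (generalized Lloyd) iteration. The sketch below shows plain codebook training on toy 2D vectors — not the paper's directional multiresolution codebook, whose codewords are constrained to vertical/horizontal edge shapes:

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    """Plain k-means (generalized Lloyd) codebook training for VQ."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(vectors[:, None, :] - codebook[None], axis=2)
        labels = d.argmin(axis=1)                  # nearest-codeword assignment
        for c in range(k):
            if np.any(labels == c):                # centroid update step
                codebook[c] = vectors[labels == c].mean(axis=0)
    return codebook

# Two well-separated clusters standing in for subband coefficient vectors
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(10, 0.1, (20, 2))])
cb = train_codebook(pts, k=2)
labels = np.linalg.norm(pts[:, None, :] - cb[None], axis=2).argmin(axis=1)
```

Quantization then replaces each vector by the index of its nearest codeword; the decoder looks the codeword back up, which is where the rate saving comes from.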

  2. About decomposition approach for solving the classification problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.

    2016-11-01

    This article describes the features of applying an algorithm that uses decomposition methods to the binary classification problem of constructing a linear classifier based on the support vector machine method. Decomposition reduces the volume of calculation, in particular by opening up possibilities for building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. The results of computational experiments conducted using the decomposition approach are analyzed. The experiments use a known data set for the binary classification problem.
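For orientation, the underlying linear-SVM training problem can be sketched with a simple full-gradient subgradient descent on the regularized hinge loss. This is the undecomposed baseline, not the decomposition solver the record describes; data and hyperparameters are invented:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Subgradient descent on the regularized hinge loss
    (lam/2)||w||^2 + mean(max(0, 1 - y(Xw + b))); labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mask = y * (X @ w + b) < 1                 # margin violators
        gw = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / len(X)
        gb = -y[mask].sum() / len(X)
        w -= lr * gw
        b -= lr * gb
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.4, (40, 2)), rng.normal(2, 0.4, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

Decomposition methods (e.g. working-set schemes like SMO) replace the full-gradient step with updates over small subsets of variables, which is what makes the parallel, big-data variants discussed above possible.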

  3. River flow prediction using hybrid models of support vector regression with the wavelet transform, singular spectrum analysis and chaotic approach

    NASA Astrophysics Data System (ADS)

    Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal

    2018-06-01

    Prediction of the amount of water that will enter reservoirs in the following month is of vital importance, especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey has been used for training the method, which then has been applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolves the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. These three methods for producing the input matrix for the SVR proved successful, while the SVR-WT combination resulted in the highest coefficient of determination and the lowest mean absolute error.
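The chaotic-approach input matrix mentioned above is built by time-delay (phase-space) embedding. A minimal sketch, with a sine series standing in for the observed monthly flow and the embedding dimension and delay chosen arbitrarily:

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Time-delay embedding: row t is (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}),
    one reconstructed phase-space state per row."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

flow = np.sin(np.linspace(0, 20 * np.pi, 500))   # stand-in for monthly river flow
X = delay_embed(flow, dim=3, tau=2)              # SVR input matrix
y = flow[(3 - 1) * 2 + 1:]                       # next-value targets for rows X[:-1]
```

Each row of X is one state vector; pairing X[:-1] with y gives the (input, target) samples on which the SVR is trained to predict the next month's value.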

  4. AGT relations for abelian quiver gauge theories on ALE spaces

    NASA Astrophysics Data System (ADS)

    Pedrini, Mattia; Sala, Francesco; Szabo, Richard J.

    2016-05-01

    We construct level one dominant representations of the affine Kac-Moody algebra gl̂k on the equivariant cohomology groups of moduli spaces of rank one framed sheaves on the orbifold compactification of the minimal resolution Xk of the Ak-1 toric singularity C2 /Zk. We show that the direct sum of the fundamental classes of these moduli spaces is a Whittaker vector for gl̂k, which proves the AGT correspondence for pure N = 2 U(1) gauge theory on Xk. We consider Carlsson-Okounkov type Ext-bundles over products of the moduli spaces and use their Euler classes to define vertex operators. Under the decomposition gl̂k ≃ h ⊕sl̂k, these vertex operators decompose as products of bosonic exponentials associated to the Heisenberg algebra h and primary fields of sl̂k. We use these operators to prove the AGT correspondence for N = 2 superconformal abelian quiver gauge theories on Xk.

  5. Decomposition of the complex system into nonlinear spatio-temporal modes: algorithm and application to climate data mining

    NASA Astrophysics Data System (ADS)

    Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry

    2015-04-01

    Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing adequate, and concurrently simplest, models both of the corresponding sub-systems and of the system as a whole. In recent works two new methods of decomposition of the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectral Analysis) [4] for linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delays in the correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that are the "structural material" for linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of nonlinear modes falls essentially more sharply [5-7]. However, neglecting time-lag correlations introduces a mode-selection error that is uncontrolled and increases with the growth of the mode time scale. In this report we combine these two methods in such a way that the developed algorithm allows the construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) multi-hundred-year globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different methods of decomposition and discuss the ability of nonlinear spatio-temporal modes to yield adequate and concurrently simplest ("optimal") models of climate systems.
    1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574.
    2. Feigin A., Mukhin D., Gavrilov A., Volodin E., and Loskutov E. (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877.
    3. Mukhin D., Kondrashov D., Loskutov E., Gavrilov A., Feigin A., and Ghil M. (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1).
    4. Ghil M., Allen R.M., Dettinger M.D., Ide K., Kondrashov D., et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41.
    5. Mukhin D., Gavrilov A., Loskutov E.M., and Feigin A.M. (2014) "Nonlinear Decomposition of Climate Data: a New Method for Reconstruction of Dynamical Modes", AGU 2014 Fall Meeting, Abstract NG43A-3752.
    6. Gavrilov A., Mukhin D., Loskutov E., and Feigin A. (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627.
    7. Mukhin D., Gavrilov A., Loskutov E., Feigin A., and Kurths J. (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729.
    8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm
    9. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/.
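The SSA machinery underlying the MSSA method cited above can be illustrated in its single-channel form: build the trajectory matrix, take its SVD, and map each rank-one term back to a series by anti-diagonal averaging. A generic sketch with an invented trend-plus-cycle series:

```python
import numpy as np

def diagonal_average(M):
    """Map a (window x k) matrix back to a series of length window + k - 1
    by averaging over anti-diagonals (entry (i, j) belongs to index i + j)."""
    w, k = M.shape
    out = np.zeros(w + k - 1)
    cnt = np.zeros(w + k - 1)
    for i in range(w):
        out[i:i + k] += M[i]
        cnt[i:i + k] += 1
    return out / cnt

def ssa(series, window):
    """Basic singular spectrum analysis: SVD of the trajectory matrix,
    one reconstructed component per singular triple."""
    k = len(series) - window + 1
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return np.array([diagonal_average(s[r] * np.outer(U[:, r], Vt[r]))
                     for r in range(len(s))])

t = np.arange(60.0)
series = 0.05 * t + np.sin(2 * np.pi * t / 12)   # trend + annual cycle
comps = ssa(series, window=12)
```

Because SVD is exact, the components sum back to the original series; grouping the leading components separates the trend and the oscillation, which is the mode-separation idea the abstract builds on.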

  6. A Random Algorithm for Low-Rank Decomposition of Large-Scale Matrices With Missing Entries.

    PubMed

    Liu, Yiguang; Lei, Yinjie; Li, Chunguang; Xu, Wenzheng; Pu, Yifei

    2015-11-01

    A random submatrix method (RSM) is proposed to calculate the low-rank decomposition U_{m×r} V_{n×r}^T (r < m, n) of a matrix Y ∈ R^{m×n} (assuming m > n generally) with known entry percentage 0 < ρ ≤ 1. RSM is very fast, as only O(mr²ρ^r) or O(n³ρ^{3r}) floating-point operations (flops) are required, comparing favorably with the O(mnr + r²(m+n)) flops required by the state-of-the-art algorithms. Meanwhile, RSM has the advantage of a small memory requirement, as only max(n², mr+nr) real values need to be saved. Under the assumption that known entries are uniformly distributed in Y, submatrices formed by known entries are randomly selected from Y with statistical size k×nρ^k or mρ^l×l, where k or l usually takes the value r+1. We propose and prove a theorem: under random noise, the probability that the subspace associated with a smaller singular value will turn into the space associated with any of the r largest singular values is smaller. Based on the theorem, the nρ^k − k null vectors or the l − r right singular vectors associated with the minor singular values are calculated for each submatrix. These vectors ought to be the null vectors of the submatrix formed by the chosen nρ^k or l columns of the ground truth of V^T. If enough submatrices are randomly chosen, V and U can be estimated accordingly. The experimental results on random synthetic matrices with sizes such as 131072×1024 and on real data sets such as dinosaur indicate that RSM is 4.30∼197.95 times faster than the state-of-the-art algorithms, while achieving or approximating the best precision.
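The key structural fact RSM exploits — a null vector of any r+1 fully-known columns of a rank-r matrix is also a null vector of the corresponding columns of the ground-truth V^T — can be checked numerically in a few lines (fully-observed toy setting, not the partially-known case the paper handles):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 30, 20, 3
U0 = rng.standard_normal((m, r))
V0 = rng.standard_normal((n, r))
Y = U0 @ V0.T                                 # ground-truth rank-r matrix

cols = rng.choice(n, r + 1, replace=False)    # pick r+1 random columns
sub = Y[:, cols]                              # m x (r+1) submatrix, rank r
w = np.linalg.svd(sub)[2][-1]                 # its null vector (smallest sigma)

# Since sub = U0 @ V0[cols].T and U0 has full column rank,
# w must also annihilate the corresponding columns of V0^T:
residual = float(np.abs(V0.T[:, cols] @ w).max())
```

Collecting such null vectors over many random column subsets pins down the row space of V^T, after which U follows by least squares — the estimation route the abstract describes.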

  7. Young—Capelli symmetrizers in superalgebras†

    PubMed Central

    Brini, Andrea; Teolis, Antonio G. B.

    1989-01-01

    Let Supern[U ⊗ V] be the nth homogeneous subspace of the supersymmetric algebra of U ⊗ V, where U and V are Z2-graded vector spaces over a field K of characteristic zero. The actions of the general linear Lie superalgebras pl(U) and pl(V) span two finite-dimensional K-subalgebras B and [unk] of EndK(Supern[U ⊗ V]) that are the centralizers of each other. Young—Capelli symmetrizers and Young—Capelli *-symmetrizers give rise to K-linear bases of B and [unk] containing orthogonal systems of idempotents; thus they yield complete decompositions of B and [unk] into minimal left and right ideals, respectively. PMID:16594014

  8. Combined empirical mode decomposition and texture features for skin lesion classification using quadratic support vector machine.

    PubMed

    Wahba, Maram A; Ashour, Amira S; Napoleon, Sameh A; Abd Elnaby, Mustafa M; Guo, Yanhui

    2017-12-01

    Basal cell carcinoma is one of the most common malignant skin lesions. Automated lesion identification and classification using image processing techniques is highly desirable to reduce diagnosis errors. In this study, a novel technique is applied to classify skin lesion images into two classes, namely the malignant basal cell carcinoma and the benign nevus. A hybrid combination of bi-dimensional empirical mode decomposition and gray-level difference method features is proposed after hair removal. The combined features are further classified using a quadratic support vector machine (Q-SVM). The proposed system has achieved an outstanding performance of 100% accuracy, sensitivity and specificity, compared to other support vector machine procedures as well as to different extracted features. Basal cell carcinoma is effectively classified using the Q-SVM with the proposed combined features.

  9. Applications of squeezed states: Bogoliubov transformations and wavelets to the statistical mechanics of water and its bubbles

    NASA Technical Reports Server (NTRS)

    Defacio, Brian; Kim, S.-H.; Vannevel, A.

    1994-01-01

    The squeezed states or Bogoliubov transformations and wavelets are applied to two problems in nonrelativistic statistical mechanics: the dielectric response of liquid water, ε(q, ω), and bubble formation in water during insonification. The wavelets are special phase-space windows which cover the domain and range of L¹ ∩ L² classical causal, finite-energy solutions. The multiresolution of discrete wavelets in phase space gives a decomposition into regions of time and scales of frequency, thereby allowing the renormalization group to be applied to new systems in addition to the tired 'usual suspects' of the Ising models and lattice gases. The Bogoliubov (squeeze) transformation is applied to the dipolaron collective mode in water and to the gas produced by the explosive cavitation process in bubble formation.

  10. A Systolic Architecture for Singular Value Decomposition,

    DTIC Science & Technology

    1983-01-01

    Presented at the 1st International Colloquium on Vector and Parallel Computing in Scientific Applications, Paris, March 1983. Contract N00014-82-K-0703. Related reference: G. H. Golub and F. T. Luk, "Singular Value Decomposition".

  11. Generalized Finsler geometric continuum physics with applications in fracture and phase transformations

    NASA Astrophysics Data System (ADS)

    Clayton, J. D.

    2017-02-01

    A theory of deformation of continuous media based on concepts from Finsler differential geometry is presented. The general theory accounts for finite deformations, nonlinear elasticity, and changes in internal state of the material, the latter represented by elements of a state vector of generalized Finsler space whose entries consist of one or more order parameter(s). Two descriptive representations of the deformation gradient are considered. The first invokes an additive decomposition and is applied to problems involving localized inelastic deformation mechanisms such as fracture. The second invokes a multiplicative decomposition and is applied to problems involving distributed deformation mechanisms such as phase transformations or twinning. Appropriate free energy functions are posited for each case, and Euler-Lagrange equations of equilibrium are derived. Solutions are obtained for specific problems of tensile fracture of an elastic cylinder and for amorphization of a crystal under spherical and uniaxial compression. The Finsler-based approach is demonstrated to be more general and potentially more physically descriptive than existing hyperelasticity models couched in Riemannian geometry or Euclidean space, without incorporation of supplementary ad hoc equations or spurious fitting parameters. Predictions for single crystals of boron carbide ceramic agree qualitatively, and in many instances quantitatively, with results from physical experiments and atomic simulations involving structural collapse and failure of the crystal along its c-axis.

  12. Extracting fingerprint of wireless devices based on phase noise and multiple level wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Zhao, Weichen; Sun, Zhuo; Kong, Song

    2016-10-01

    Wireless devices can be identified by a fingerprint extracted from the transmitted signal, which is useful in wireless communication security and other fields. This paper presents a method that extracts a fingerprint based on the phase noise of the signal and multiple-level wavelet decomposition. The phase of the signal is extracted first and then decomposed by multiple-level wavelet decomposition. The statistics of each wavelet coefficient vector are used to construct the fingerprint. In addition, the relationship between the wavelet decomposition level and the recognition accuracy is simulated, and a recommended decomposition level is identified. Compared with previous methods, our method is simpler, and the recognition accuracy remains high when the signal-to-noise ratio (SNR) is low.
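The decompose-then-summarize pipeline can be sketched with a numpy-only Haar wavelet transform standing in for a DWT library; the toy phase trajectory, the level count, and the choice of mean/std statistics are assumptions:

```python
import numpy as np

def haar_multilevel(signal, levels):
    """Multiple-level Haar wavelet decomposition (orthonormal filters);
    returns the detail coefficient vectors and the final approximation.
    Signal length must be divisible by 2**levels."""
    approx = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2.0))   # detail coefficients
        approx = (even + odd) / np.sqrt(2.0)          # coarser approximation
    return details, approx

rng = np.random.default_rng(0)
phase = np.cumsum(0.01 * rng.standard_normal(256))    # toy phase-noise trajectory
details, approx = haar_multilevel(phase, levels=4)
# Statistics of each coefficient vector, concatenated as the device fingerprint
fp = np.array([(c.mean(), c.std()) for c in details + [approx]]).ravel()
```

Because the Haar filters are orthonormal, the coefficients preserve the signal's energy, so the per-level statistics capture how the phase noise distributes across scales.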

  13. Dominant modal decomposition method

    NASA Astrophysics Data System (ADS)

    Dombovari, Zoltan

    2017-03-01

    The paper deals with the automatic decomposition of experimental frequency response functions (FRFs) of mechanical structures. The decomposition of FRFs is based on the Green function representation of free vibratory systems. After determination of the impulse dynamic subspace, the system matrix is formulated and the poles are calculated directly. By means of the corresponding eigenvectors, the contribution of each element of the impulse dynamic subspace is determined, and a sufficient decomposition of the corresponding FRF is carried out. With the presented dominant modal decomposition (DMD) method, the mode shapes, the modal participation vectors and the modal scaling factors are identified using the decomposed FRFs. An analytical example is presented along with experimental case studies taken from the machine tool industry.

  14. Generalized Weyl–Heisenberg Algebra, Qudit Systems and Entanglement Measure of Symmetric States via Spin Coherent States

    NASA Astrophysics Data System (ADS)

    Daoud, Mohammed; Kibler, Maurice

    2018-04-01

    A relation is established in the present paper between Dicke states in a d-dimensional space and vectors in the representation space of a generalized Weyl-Heisenberg algebra of finite dimension d. This provides a natural way to deal with the separable and entangled states of a system of N = d-1 symmetric qubit states. Using the decomposition property of Dicke states, it is shown that the separable states coincide with the Perelomov coherent states associated with the generalized Weyl-Heisenberg algebra considered in this paper. In the so-called Majorana scheme, the qudit (d-level) states are represented by N points on the Bloch sphere; roughly speaking, it can be said that a qudit (in a d-dimensional space) is describable by an N-qubit vector (in an N-dimensional space). In such a scheme, the permanent of the matrix describing the overlap between the N qubits makes it possible to measure the entanglement between the N qubits forming the qudit. This is confirmed by a Fubini-Study metric analysis. A new parameter, proportional to the permanent and called perma-concurrence, is introduced for characterizing the entanglement of a symmetric qudit arising from N qubits. For d=3 (i.e., N = 2), this parameter constitutes an alternative to the concurrence for two qubits. Other examples are given for d=4 and 5. A connection between Majorana stars and zeros of a Bargmann function for qudits closes this article.
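Since the perma-concurrence is proportional to a matrix permanent, it may help to recall how a permanent is evaluated. The sketch below is a direct (exponential-time) implementation of Ryser's inclusion-exclusion formula, applied to an invented 2-qubit overlap matrix; it illustrates the quantity only, not the paper's normalization:

```python
import numpy as np
from itertools import combinations

def permanent(M):
    """Ryser's formula: per(M) = (-1)^n * sum over nonempty column subsets S
    of (-1)^|S| * prod_i sum_{j in S} M[i, j]. Works for real or complex M."""
    n = M.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(M[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

# Overlap (Gram) matrix of two non-orthogonal single-qubit states
u = np.array([1.0, 0.0])
v = np.array([np.cos(0.3), np.sin(0.3)])
G = np.array([[u @ u, u @ v], [v @ u, v @ v]])
perma = permanent(G)
```

Ryser's formula costs O(2^n · n) rather than the naive O(n!·n), which is tractable for the small N relevant to symmetric qudits.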

  15. Vector and axial-vector decomposition of Einstein's gravitational action

    NASA Astrophysics Data System (ADS)

    Soh, Kwang S.

    1991-08-01

    Vector and axial-vector gravitational fields are introduced to express the Einstein action in the manner of electromagnetism. Their conformal scaling properties are examined, and the resemblance between the general coordinate and electromagnetic gauge transformation is elucidated. The chiral formulation of the gravitational action is constructed. I am deeply grateful to Professor S. Hawking, and Professor G. Lloyd for warm hospitality at DAMTP, and Darwin College, University of Cambridge, respectively. I also appreciate much help received from Dr. Q.-H. Park.

  16. Structure disordering and thermal decomposition of manganese oxalate dihydrate, MnC2O4·2H2O

    NASA Astrophysics Data System (ADS)

    Puzan, Anna N.; Baumer, Vyacheslav N.; Lisovytskiy, Dmytro V.; Mateychenko, Pavel V.

    2018-04-01

    It is found that the known regular structures of MnC2O4·2H2O (I) do not allow the powder X-ray pattern of (I) to be refined properly using the Rietveld method. Implementation of an order-disorder scheme [28], via the inclusion of an appropriate displacement vector, improves the refinement results. It is also found that, in the case of (I), a similar improvement may be achieved using data on two phases of (I) obtained as a result of the decomposition of a MnC2O4·3H2O single crystal in the mother solution after growth. Thermal decomposition of (I) produces the anhydrous γ-MnC2O4 (II), the structure of which differs from the known α- and β-modifications of VIIIb transition metal oxalates. The structure of (II), solved ab initio from the powder pattern (space group Pmna, a = 7.1333 (1), b = 5.8787 (1), c = 9.0186 (2) Å, V = 378.19 (1) Å3, Z = 4 and Dx = 2.511 Mg m-3), contains seven-coordinated Mn atoms with Mn-O distances of 2.110-2.358 Å, and is not close-packed. Thermal decomposition of (II) in air proceeds via the formation of amorphous MnO, the heating of which up to 723 K is accompanied by oxidation of MnO to Mn2O3 and further recrystallization of the latter.

  17. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this paper, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set N_b ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5~3}) versus the O(N_b^{3~4}) cost of a single CD in most other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.
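The compound CD + SVD idea can be sketched on a generic symmetric positive semidefinite matrix standing in for the matricized two-electron integral tensor. This is a minimal illustration under that stand-in assumption, not the authors' implementation; `pivoted_cholesky` and the thresholds are ours:

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-10):
    """Pivoted incomplete Cholesky: returns L with A ~= L @ L.T,
    stopping when the largest residual diagonal falls below tol."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    for k in range(n):
        d = np.diag(A) - np.sum(L[:, :k] ** 2, axis=1)  # residual diagonal
        j = int(np.argmax(d))
        if d[j] <= tol:
            return L[:, :k]                             # numerical rank reached
        L[j, k] = np.sqrt(d[j])
        r = A[:, j] - L[:, :k] @ L[j, :k]               # residual column j
        others = np.arange(n) != j
        L[others, k] = r[others] / L[j, k]
    return L

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
A = X @ X.T                      # rank-3 PSD stand-in for the integral matrix

L = pivoted_cholesky(A)          # step 1: CD detects the numerical rank (3)
U, s, _ = np.linalg.svd(L, full_matrices=False)
keep = s > 1e-8 * s[0]           # step 2: truncated SVD of the Cholesky factor
B = U[:, keep] * s[keep]         # low-rank vectors: A ~= B @ B.T
```

Since A = L L^T and L = U diag(s) V^T, the product B B^T = U diag(s)^2 U^T reproduces A up to the truncation threshold, which is the sense in which the two-step factorization stores only low-rank vectors.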

  18. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations.

    PubMed

    Peng, Bo; Kowalski, Karol

    2017-09-12

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, N_b, ranging from ∼100 up to ∼2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5~3}) versus the O(N_b^{3~4}) cost of performing single CD on the two-electron integral tensor in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.

  19. Heterogeneous Tensor Decomposition for Clustering via Manifold Optimization.

    PubMed

    Sun, Yanfeng; Gao, Junbin; Hong, Xia; Mishra, Bamdev; Yin, Baocai

    2016-03-01

    Tensor clustering is an important tool that exploits intrinsically rich structures in real-world multiarray or tensor datasets. Often in dealing with those datasets, standard practice is to use subspace clustering based on vectorizing the multiarray data. However, vectorization of tensorial data does not exploit the complete structure information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms that are based on tensor factorization.
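The paper's heterogeneous Tucker model is more elaborate, but the basic idea of factorizing a tensor mode by mode without ever vectorizing it can be sketched with a plain truncated HOSVD (a stand-in for illustration, not the authors' algorithm; the shapes and ranks below are arbitrary):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product: multiply matrix M into tensor T along `mode`."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 5, 6))
ranks = (2, 3, 3)

# one factor matrix per mode, from the leading left singular vectors
factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
           for m, r in enumerate(ranks)]

# core tensor: project T onto the factors (no vectorization anywhere)
core = T
for m, U in enumerate(factors):
    core = mode_mult(core, U.T, m)

# low-rank reconstruction from core + factors
approx = core
for m, U in enumerate(factors):
    approx = mode_mult(approx, U, m)
```

In the paper one of these modes would additionally encode cluster membership and be updated over the multinomial manifold rather than by an SVD.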

  20. Regular Decompositions for H(div) Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolev, Tzanio; Vassilevski, Panayot

    We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.

  1. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    NASA Astrophysics Data System (ADS)

    Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng

    2016-01-01

    In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is first to decompose the chaotic signals and construct multidimensional input vectors, based on EMD and its translation invariance. Secondly, independent component analysis is performed on the input vectors, which amounts to a self-adapting denoising of the intrinsic mode functions (IMFs) of the chaotic signals. Finally, all IMFs are combined into the new denoised chaotic signal. Experiments were carried out on a Lorenz chaotic signal contaminated with different levels of Gaussian noise and on the monthly observed chaotic sunspot sequence. The results prove that the method proposed in this paper is effective in denoising chaotic signals. Moreover, it can correct the center point in the phase space effectively, which makes the result approach the real track of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No. 2015III015-B02).

  2. On bipartite pure-state entanglement structure in terms of disentanglement

    NASA Astrophysics Data System (ADS)

    Herbut, Fedor

    2006-12-01

    Schrödinger's disentanglement [E. Schrödinger, Proc. Cambridge Philos. Soc. 31, 555 (1935)], i.e., remote state decomposition, as a physical way to study entanglement, is carried one step further with respect to previous work in investigating the qualitative side of entanglement in any bipartite state vector. Remote measurement (or, equivalently, remote orthogonal state decomposition) from previous work is generalized to remote linearly independent complete state decomposition both in the nonselective and the selective versions. The results are displayed in terms of commutative square diagrams, which show the power and beauty of the physical meaning of the (antiunitary) correlation operator inherent in the given bipartite state vector. This operator, together with the subsystem states (reduced density operators), constitutes the so-called correlated subsystem picture. It is the central part of the antilinear representation of a bipartite state vector, and it is a kind of core of its entanglement structure. The generalization of previously elaborated disentanglement expounded in this article is a synthesis of the antilinear representation of bipartite state vectors, which is reviewed, and the relevant results of [Cassinelli et al., J. Math. Anal. Appl. 210, 472 (1997)] in mathematical analysis, which are summed up. Linearly independent bases (finite or infinite) are shown to be almost as useful in some quantum mechanical studies as orthonormal ones. Finally, it is shown that linearly independent remote pure-state preparation carries the highest probability of occurrence. This singles out linearly independent remote influence from all possible ones.

  3. Mach's principle: Exact frame-dragging via gravitomagnetism in perturbed Friedmann-Robertson-Walker universes with K=({+-}1,0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmid, Christoph

    We show that there is exact dragging of the axis directions of local inertial frames by a weighted average of the cosmological energy currents via gravitomagnetism for all linear perturbations of all Friedmann-Robertson-Walker (FRW) universes and of Einstein's static closed universe, for all energy-momentum-stress tensors, and in the presence of a cosmological constant. This includes FRW universes arbitrarily close to the Milne universe and the de Sitter universe. Hence the postulate formulated by Ernst Mach about the physical cause for the time evolution of inertial axes is shown to hold in general relativity for linear perturbations of FRW universes. The time evolution of local inertial axes (relative to given local fiducial axes) is given experimentally by the precession angular velocity ω_gyro of local gyroscopes, which in turn gives the operational definition of the gravitomagnetic field: B_g ≡ -2ω_gyro. The gravitomagnetic field is caused by energy currents J_ε via the momentum constraint, Einstein's G^{0̂}_{î} equation, (-Δ + μ²)A_g = -16πG_N J_ε with B_g = curl A_g. This equation is analogous to Ampère's law, but it holds for all time-dependent situations. Δ is the de Rham-Hodge Laplacian, and Δ = -curl curl for the vorticity sector in Riemannian 3-space. In the solution for an open universe the 1/r² force of Ampère is replaced by a Yukawa force Y_μ(r) = (-d/dr)[(1/R) exp(-μr)], form-identical for FRW backgrounds with K = (-1, 0). Here r is the measured geodesic distance from the gyroscope to the cosmological source, and 2πR is the measured circumference of the sphere centered at the gyroscope and going through the source point.
    The scale of the exponential cutoff is the H-dot radius, where H is the Hubble rate, dot denotes the derivative with respect to cosmic time, and μ² = -4(dH/dt). Analogous results hold in closed FRW universes and in Einstein's closed static universe. We list six fundamental tests for the principle formulated by Mach: all of them are explicitly fulfilled by our solutions. We show that only energy currents in the toroidal vorticity sector with l = 1 can affect the precession of gyroscopes. We show that the harmonic decomposition of toroidal vorticity fields in terms of vector spherical harmonics X^(-)_lm has radial functions which are form-identical for the 3-sphere, hyperbolic 3-space, and Euclidean 3-space, and are form-identical with the spherical Bessel, Neumann, and Hankel functions. The Appendix gives the de Rham-Hodge Laplacian on vorticity fields in Riemannian 3-spaces by equations connecting the calculus of differential forms with the curl notation. We also give the derivation of the Weitzenböck formula for the difference between the de Rham-Hodge Laplacian Δ and the "rough" Laplacian ∇² on vector fields.

  4. A comparison of linear approaches to filter out environmental effects in structural health monitoring

    NASA Astrophysics Data System (ADS)

    Deraemaeker, A.; Worden, K.

    2018-05-01

    This paper discusses the possibility of using the Mahalanobis squared-distance to perform robust novelty detection in the presence of important environmental variability in a multivariate feature vector. By performing an eigenvalue decomposition of the covariance matrix used to compute that distance, it is shown that the Mahalanobis squared-distance can be written as the sum of independent terms which result from a transformation from the feature vector space to a space of independent variables. In general, especially when the size of the feature vector is large, there are dominant eigenvalues and eigenvectors associated with the covariance matrix, so that a set of principal components can be defined. Because the associated eigenvalues are high, their contribution to the Mahalanobis squared-distance is low, while the contribution of the other components is high due to the low value of the associated eigenvalues. This analysis shows that the Mahalanobis distance naturally filters out the variability in the training data. This property can be used to remove the effect of the environment in damage detection, in much the same way as two other established techniques, principal component analysis and factor analysis. The three techniques are compared here using real experimental data from a wooden bridge, for which the feature vector consists of eigenfrequencies and mode shapes collected under changing environmental conditions, as well as damaged conditions simulated with an added mass. The results confirm the similarity between the three techniques and their ability to filter out environmental effects while keeping a high sensitivity to structural changes. The results also show that even after filtering out the environmental effects, the normality assumption cannot be made for the residual feature vector.
    An alternative based on extreme value statistics is demonstrated here, which results in a much better threshold that avoids false positives in the training data while allowing detection of all damaged cases.
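The eigenvalue-decomposition view of the Mahalanobis squared-distance described above is an exact identity and can be checked numerically (the feature dimensions and variance scales below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# training features with one dominant (environment-like) direction
X = rng.normal(size=(500, 4)) * np.array([10.0, 1.0, 0.5, 0.2])
mu, C = X.mean(axis=0), np.cov(X, rowvar=False)

w, V = np.linalg.eigh(C)            # eigenvalues w, eigenvectors V
x = rng.normal(size=4)              # a new feature vector to score

z = V.T @ (x - mu)                  # transform to independent variables

# MSD as a sum of independent terms z_i^2 / w_i ...
msd_terms = z**2 / w
# ... equals the usual definition (x - mu)^T C^{-1} (x - mu)
msd = (x - mu) @ np.linalg.solve(C, x - mu)
print(np.allclose(msd_terms.sum(), msd))   # True

# the high-variance (environmental) direction contributes little,
# because its large eigenvalue divides its squared projection down
```

This is exactly why the distance "filters out" directions of large training variability: each term is normalized by its eigenvalue, so environment-dominated components are automatically deweighted.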

  5. Definition of Contravariant Velocity Components

    NASA Technical Reports Server (NTRS)

    Hung, Ching-moa; Kwak, Dochan (Technical Monitor)

    2002-01-01

    In this paper we have reviewed the basics of tensor analysis in an attempt to clarify some misconceptions regarding contravariant and covariant vector components as used in fluid dynamics. We have indicated that contravariant components are the components of a given vector expressed as a unique combination of the covariant base vector system and, vice versa, that the covariant components are the components of a vector expressed with the contravariant base vector system. Mathematically, expressing a vector as a combination of base vectors is a decomposition process for a specific base vector system. Hence, the contravariant velocity components are the decomposed components of the velocity vector along the directions of the coordinate lines, with respect to the covariant base vector system. However, the contravariant (and covariant) components are not physical quantities. Their magnitudes and dimensions are controlled by their corresponding covariant (and contravariant) base vectors.
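The decomposition described above can be made concrete in a small numeric sketch (the skewed 2-D basis here is arbitrary): contravariant components are the coefficients of the vector in the covariant basis, and rescaling a base vector rescales its component inversely, which is why the components are not physical magnitudes.

```python
import numpy as np

# a non-orthogonal covariant base vector system in 2-D
g1 = np.array([1.0, 0.0])
g2 = np.array([1.0, 1.0])
G = np.column_stack([g1, g2])

v = np.array([2.0, 3.0])

# contravariant components: coefficients in v = v^1 g_1 + v^2 g_2
v_contra = np.linalg.solve(G, v)
assert np.allclose(v_contra[0] * g1 + v_contra[1] * g2, v)

# contravariant base vectors g^i are the rows of G^{-1} (g^i . g_j = delta^i_j);
# covariant components are projections onto the covariant basis: v_i = g_i . v
g_upper = np.linalg.inv(G)
v_cov = np.array([g1 @ v, g2 @ v])
assert np.allclose(g_upper @ G, np.eye(2))

# doubling a base vector halves the corresponding contravariant component
v_contra2 = np.linalg.solve(np.column_stack([2 * g1, g2]), v)
print(v_contra2[0], v_contra[0] / 2)   # equal: the component is basis-dependent
```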

  6. A projection method for low speed flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colella, P.; Pao, K.

    The authors propose a decomposition applicable to low speed, inviscid flows of all Mach numbers less than 1. By using the Hodge decomposition, they may write the velocity field as the sum of a divergence-free vector field and a gradient of a scalar function. Evolution equations for these parts are presented. A numerical procedure based on this decomposition is designed, using projection methods for solving the incompressible variables and a backward-Euler method for solving the potential variables. Numerical experiments are included to illustrate various aspects of the algorithm.
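The Hodge decomposition underlying the method can be illustrated on a periodic 2-D velocity field, splitting it into a divergence-free part plus a gradient via an FFT Poisson solve. This is a spectral sketch of the decomposition itself, under assumed periodic boundary conditions; the paper's projection method uses its own discretization:

```python
import numpy as np

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# test velocity = divergence-free swirl + gradient of phi = sin(x)
u = -np.sin(X) * np.cos(Y) + np.cos(X)
v = np.cos(X) * np.sin(Y)

k = 2.0 * np.pi * np.fft.fftfreq(n, d=2.0 * np.pi / n)   # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")

div_hat = 1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                       # avoid 0/0; the mean of phi is arbitrary
phi_hat = -div_hat / K2              # solve  laplacian(phi) = div(velocity)

gx = np.real(np.fft.ifft2(1j * KX * phi_hat))   # gradient (potential) part
gy = np.real(np.fft.ifft2(1j * KY * phi_hat))
wx, wy = u - gx, v - gy                         # divergence-free part

div_w = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(wx) + 1j * KY * np.fft.fft2(wy)))
print(np.abs(div_w).max() < 1e-10)   # True: w is divergence-free to round-off
```

The two pieces are then evolved separately, with a projection step keeping the incompressible part divergence-free.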

  7. On deformation of complex continuum immersed in a plane space

    NASA Astrophysics Data System (ADS)

    Kovalev, V. A.; Murashkin, E. V.; Radayev, Y. N.

    2018-05-01

    The present paper is devoted to the mathematical modelling of deformations of complex continua considered as immersed in an external plane space. The complex continuum is defined as a differential manifold supplied with a metric induced by the external space. A systematic derivation of strain tensors by the notion of isometric immersion of the complex continuum into a plane space of higher dimension is proposed. The problem of establishing complete systems of irreducible objective strain and extra-strain tensors for a complex continuum immersed in an external plane space is resolved. The solution to the problem is obtained by methods of field theory and the theory of rational algebraic invariants. Strain tensors of the complex continuum are derived as irreducible algebraic invariants of contravariant vectors of the external space emerging as functional arguments in the complex continuum action density. The present analysis is restricted to rational algebraic invariants. Completeness of the considered systems of rational algebraic invariants is established for micropolar elastic continua. Rational syzygies for non-quadratic invariants are discussed. Objective strain tensors (indifferent to frame rotations in the external plane space) for the micropolar continuum are alternatively obtained by properly combining the multipliers of polar decompositions of the deformation and extra-deformation gradients. The latter is realized only for continua immersed in a plane space of equal mathematical dimension.
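As a small aside on the polar decompositions mentioned above: a deformation gradient F factors as F = RU, with R a rotation and U the symmetric right stretch tensor, and the factors are easily obtained from the SVD. This is a generic sketch of that factorization (our example matrix), not the paper's micropolar machinery:

```python
import numpy as np

def polar_decomposition(F):
    """F = R @ U with R orthogonal (a rotation when det F > 0) and U
    symmetric positive definite (the right stretch tensor)."""
    W, s, Vt = np.linalg.svd(F)
    R = W @ Vt
    U = Vt.T @ np.diag(s) @ Vt
    return R, U

F = np.array([[2.0, 0.5],
              [0.1, 1.5]])              # a deformation gradient with det F > 0
R, U = polar_decomposition(F)
print(np.allclose(R @ U, F))            # True
print(np.allclose(R.T @ R, np.eye(2)))  # True: R is a pure rotation
print(np.allclose(U, U.T))              # True: U is the symmetric stretch
```

Objective strain measures are built from U (or C = U², which is indifferent to the rotation R), which is the sense in which combining polar-decomposition multipliers yields frame-indifferent tensors.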

  8. The Effects of City Streets on an Urban Disease Vector

    PubMed Central

    Barbu, Corentin M.; Hong, Andrew; Manne, Jennifer M.; Small, Dylan S.; Quintanilla Calderón, Javier E.; Sethuraman, Karthik; Quispe-Machaca, Víctor; Ancca-Juárez, Jenny; Cornejo del Carpio, Juan G.; Málaga Chavez, Fernando S.; Náquira, César; Levy, Michael Z.

    2013-01-01

    With increasing urbanization, vector-borne diseases are quickly developing in cities, and urban control strategies are needed. If streets are shown to be barriers to disease vectors, city blocks could be used as a convenient and relevant spatial unit of study and control. Unfortunately, existing spatial analysis tools do not allow for assessment of the impact of an urban grid on the presence of disease agents. Here, we first propose a method to test for the significance of the impact of streets on vector infestation based on a decomposition of Moran's spatial autocorrelation index; and second, develop a Gaussian Field Latent Class model to finely describe the effect of streets while controlling for cofactors and imperfect detection of vectors. We apply these methods to cross-sectional data of infestation by the Chagas disease vector Triatoma infestans in the city of Arequipa, Peru. Our Moran's decomposition test reveals that the distribution of T. infestans in this urban environment is significantly constrained by streets (p<0.05). With the Gaussian Field Latent Class model we confirm that streets provide a barrier against infestation and further show that greater than 90% of the spatial component of the probability of vector presence is explained by the correlation among houses within city blocks. The city block is thus likely to be an appropriate spatial unit to describe and control T. infestans in an urban context. Characteristics of the urban grid can influence the spatial dynamics of vector-borne disease and should be considered when designing public health policies. PMID:23341756
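The flavor of the Moran's index decomposition can be sketched on a toy infestation pattern: the cross-product numerator of Moran's I splits into contributions from same-block pairs and across-street pairs. The weight matrix and data below are toy assumptions of ours; the paper's test additionally builds a null distribution by permutation.

```python
import numpy as np

# six houses on two city blocks; infestation perfectly clustered by block
x = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
block = np.array([0, 0, 0, 1, 1, 1])

W = 1.0 - np.eye(6)                  # toy weights: every distinct pair is a neighbour
z = x - x.mean()
cross = np.outer(z, z) * W           # cross-products entering Moran's numerator

same_block = (block[:, None] == block[None, :]) & (W > 0)
contrib_same = cross[same_block].sum()                 # pairs on the same block
contrib_across = cross[~same_block & (W > 0)].sum()    # pairs separated by a street

I = len(x) / W.sum() * (contrib_same + contrib_across) / (z @ z)
print(contrib_same > 0 > contrib_across)  # True: the street pairs carry the
                                          # negative (dissimilar) contribution
```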

  9. Repeated decompositions reveal the stability of infomax decomposition of fMRI data

    PubMed Central

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2010-01-01

    In this study, we decomposed 12 fMRI data sets from six subjects each 101 times using the infomax algorithm. The first decomposition was taken as a reference decomposition; the others were used to form a component matrix of 100 by 100 components. Equivalence relations between components in this matrix, defined as maximum spatial correlations to the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even using vector angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453
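Matching components across decompositions by maximum spatial correlation is an assignment problem. A brute-force version for a handful of components might look like the sketch below (our toy data; the study's 100 x 100 case needs the Hungarian algorithm proper, e.g. scipy.optimize.linear_sum_assignment):

```python
import numpy as np
from itertools import permutations

def match_components(ref, other):
    """Assign each component map in `other` to one in `ref`, maximizing the
    total absolute spatial correlation (brute force; fine for small k)."""
    k = len(ref)
    C = np.abs(np.corrcoef(np.vstack([ref, other]))[:k, k:])
    return max(permutations(range(k)),
               key=lambda p: sum(C[i, p[i]] for i in range(k)))

rng = np.random.default_rng(0)
ref = rng.normal(size=(3, 200))                           # 3 reference maps
other = ref[[1, 2, 0]] + 0.1 * rng.normal(size=(3, 200))  # shuffled, noisy rerun

print(match_components(ref, other))   # (2, 0, 1): the inverse of the shuffle
```

The matched pairs then form the equivalence classes whose reproducibility is scored with the uncertainty measures described above.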

  10. MEMS-Based Satellite Micropropulsion Via Catalyzed Hydrogen Peroxide Decomposition

    NASA Technical Reports Server (NTRS)

    Hitt, Darren L.; Zakrzwski, Charles M.; Thomas, Michael A.; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    Micro-electromechanical systems (MEMS) techniques offer great potential in satisfying the mission requirements for the next generation of "micro-scale" satellites being designed by NASA and Department of Defense agencies. More commonly referred to as "nanosats", these miniature satellites feature masses in the range of 10-100 kg and therefore have unique propulsion requirements. The propulsion systems must be capable of providing extremely low levels of thrust and impulse while also satisfying stringent demands on size, mass, power consumption and cost. We begin with an overview of micropropulsion requirements and some current MEMS-based strategies being developed to meet these needs. The remainder of the article focuses on the progress being made at NASA Goddard Space Flight Center towards the development of a prototype monopropellant MEMS thruster which uses the catalyzed chemical decomposition of high-concentration hydrogen peroxide as a propulsion mechanism. The products of decomposition are delivered to a micro-scale converging/diverging supersonic nozzle which produces the thrust vector; the targeted thrust level is approximately 500 μN with a specific impulse of 140-180 seconds. Macro-scale hydrogen peroxide thrusters have been used for satellite propulsion for decades; however, the implementation of traditional thruster designs on a MEMS scale has uncovered new challenges in fabrication, materials compatibility, and combustion and hydrodynamic modeling. A summary of the achievements of the project to date is given, as is a discussion of remaining challenges and future prospects.

  11. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    PubMed

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2018-07-01

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this problem class, and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. In order to coordinate with the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach. Through this, feasible solutions under constraints can be generated step by step, following the permutation-tree-shaped structure, and problem-related heuristic information is introduced in the constructive approach for efficiency. In order to address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.
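The decomposition step, converting the multiobjective problem into single-objective subproblems via weight vectors, can be sketched independently of the PSO machinery. A weighted Tchebycheff scalarization is shown here as one common choice; MS-PSO/D's exact scalarization and weights are not specified by this abstract, so everything below is illustrative:

```python
def tchebycheff(f, weight, ideal):
    """Scalarize an objective vector f for one weight vector: the subproblem
    minimizes the weighted worst-case gap to the ideal point."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weight, f, ideal))

# a spread of weight vectors defines one single-objective subproblem each
weights = [(1.0, 0.0), (0.7, 0.3), (0.3, 0.7), (0.0, 1.0)]
ideal = (0.0, 0.0)                    # ideal point (assumed known here)

# two candidate tours with different objective trade-offs
f_a = (2.0, 6.0)                      # good on objective 1, poor on objective 2
f_b = (5.0, 3.0)

for w in weights:
    better = "a" if tchebycheff(f_a, w, ideal) < tchebycheff(f_b, w, ideal) else "b"
    print(w, "prefers", better)       # extreme weights prefer a; later ones prefer b
```

Different weight vectors prefer different solutions, which is how a single run of the decomposed algorithm spreads its population across the Pareto front.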

  12. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).

    PubMed

    Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-05-16

    A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by showing that the output of the process still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency space search acquisition. A sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) of the transmitted signal and calculating its left singular vectors using the SVD. Next, the M-dimensional observation vectors are obtained from the left singular vectors of the SVD, which are equivalent to the sampler operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect obtained by retaining the useful signal and filtering out noise, by projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete time domain.
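The core linear-algebra steps, building a Toeplitz-style matrix of shifted code replicas, taking its left singular vectors as the sampler, and projecting onto the dominant PODs, can be sketched as follows. The code length, noise level, and random stand-in for the spreading code are our toy assumptions, and the ℓ1 reconstruction stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=64)     # stand-in for one spreading-code period

# rectangular matrix whose columns are shifted code replicas
T = np.column_stack([np.roll(code, k) for k in range(32)])
U, s, Vt = np.linalg.svd(T, full_matrices=False)   # left singular vectors U (64 x 32)

received = np.roll(code, 7)                 # delayed replica (delay of 7 samples)
noisy = received + 0.5 * rng.normal(size=64)

Phi = U.T                                   # sampler built from left singular vectors
y = Phi @ noisy                             # 32 measurements instead of 64 samples
projected = U @ y                           # projection onto the signal's POD subspace

err_before = np.linalg.norm(noisy - received)
err_after = np.linalg.norm(projected - received)
print(err_after < err_before)   # True: noise outside the code subspace is removed
```

Because every delayed replica lies in the column space of T, the projection keeps the detection-relevant signal intact while discarding the noise component orthogonal to that subspace, which is the acquisition-enhancement effect described above.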

  13. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD)

    PubMed Central

    Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-01-01

    A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by showing that the output of the process still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency space search acquisition. A sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) of the transmitted signal and calculating its left singular vectors using the SVD. Next, the M-dimensional observation vectors are obtained from the left singular vectors of the SVD, which are equivalent to the sampler operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect obtained by retaining the useful signal and filtering out noise, by projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete time domain. PMID:29772731

  14. Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics

    NASA Astrophysics Data System (ADS)

    Kordy, Michal Adam

    The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency-domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed at low frequency. In the second paper, we consider hexahedral finite element approximation of an electric field for the magnetotelluric forward problem. The near-null space of the system matrix at low frequencies makes the numerical solution unstable in the air. We show that the proper solution may be obtained by applying a correction on the null space of the curl. It is done by solving a Poisson equation using a discrete Helmholtz decomposition. We parallelize the forward code on a multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done using the second norm of the logarithm of conductivity. The data-space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method on a number of synthetic inversions and apply it to real data collected in the Cascade Mountains. The last paper considers cross-frequency interpolation of the forward response as well as the Jacobian. We consider Padé approximation through model order reduction and rational Krylov subspaces. The interpolating frequencies are chosen adaptively in order to minimize the maximum error of interpolation. Two error indicator functions are compared.
    We prove a theorem of almost-always-lucky failure in the case of the right-hand side depending analytically on frequency. The operator's null space is treated by decomposing the solution into the part in the null space and the part orthogonal to it.

  15. D → π and D → K semileptonic form factors with Nf = 2 + 1 + 1 twisted mass fermions

    NASA Astrophysics Data System (ADS)

    Lubicz, Vittorio; Riggio, Lorenzo; Salerno, Giorgio; Simula, Silvano; Tarantino, Cecilia

    2018-03-01

We present a lattice determination of the vector and scalar form factors of the D → π(K)lν semileptonic decays, which are relevant for the extraction of the CKM matrix elements |Vcd| and |Vcs| from experimental data. Our analysis is based on the gauge configurations produced by the European Twisted Mass Collaboration with Nf = 2 + 1 + 1 flavors of dynamical quarks. We simulated at three different values of the lattice spacing and with pion masses as small as 210 MeV. The matrix elements of both vector and scalar currents are determined for a variety of kinematical conditions in which parent and child mesons are either moving or at rest. Lorentz symmetry breaking due to hypercubic effects is clearly observed in the data and included in the decomposition of the current matrix elements in terms of additional form factors. After extrapolation to the physical pion mass and to the continuum limit, the vector and scalar form factors are determined in the whole kinematical region accessible in experiments, from q2 = 0 up to qmax2 = (MD - Mπ(K))2, obtaining good overall agreement with experiment except at high values of q2, where some deviations are visible.

  16. Fuzzy scalar and vector median filters based on fuzzy distances.

    PubMed

    Chatzis, V; Pitas, I

    1999-01-01

In this paper, the fuzzy scalar median (FSM) is proposed, defined by ordering fuzzy numbers through fuzzy minimum and maximum operations obtained from the extension principle. Alternatively, the FSM is defined through the minimization of a fuzzy distance measure, and the equivalence of the two definitions is proven. Then, the fuzzy vector median (FVM) is proposed as an extension of the vector median, based on a novel distance definition for fuzzy vectors which satisfies the property of angle decomposition. By properly defining the fuzziness of a value, the basic properties of the classical scalar median and vector median (VM) filters can be combined with other desirable characteristics.
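The classical vector median that the FVM generalizes picks, from a window of vectors, the member minimizing the summed distance to all the others. A minimal numpy sketch with plain Euclidean distances (the paper's fuzzy distance measure is not reproduced here):

```python
import numpy as np

def vector_median(window):
    """Classical vector median: the window member minimizing the
    sum of Euclidean distances to all other members."""
    v = np.asarray(window, dtype=float)
    # pairwise distances between every pair of vectors in the window
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    return v[d.sum(axis=1).argmin()]

window = [(1, 2), (2, 2), (100, 100), (2, 3), (1, 3)]
print(vector_median(window))  # always an actual window member; the outlier is rejected
```

Unlike a componentwise median, the output is guaranteed to be one of the input vectors, which is what makes the filter attractive for color and flow data.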

  17. Anisotropic responses and initial decomposition of condensed-phase β-HMX under shock loadings via molecular dynamics simulations in conjunction with multiscale shock technique.

    PubMed

    Ge, Ni-Na; Wei, Yong-Kai; Song, Zhen-Fei; Chen, Xiang-Rong; Ji, Guang-Fu; Zhao, Feng; Wei, Dong-Qing

    2014-07-24

Molecular dynamics simulations in conjunction with the multiscale shock technique (MSST) are performed to study the initial chemical processes and the anisotropy of shock sensitivity of condensed-phase HMX under shock loadings applied along the a, b, and c lattice vectors. A self-consistent charge density-functional tight-binding (SCC-DFTB) method was employed. Our results show that the response to a shock wave velocity of 11 km/s differs between lattice vector a (or c) and lattice vector b, which is investigated through the reaction temperature and the relative sliding rate between adjacent slipping planes. The responses along lattice vectors a and c are similar to each other, with reaction temperatures reaching 7000 K, but quite different along lattice vector b, where the reaction temperature reaches only 4000 K. Compared with shock wave propagation along lattice vectors a (18 Å/ps) and c (21 Å/ps), the relative sliding rate between adjacent slipping planes along lattice vector b is only 0.2 Å/ps. This small relative sliding rate causes the temperature and energy under shock loading to increase more slowly, which is the main reason for the lower sensitivity under shock wave compression along lattice vector b. In addition, C-H bond dissociation is the primary pathway for HMX decomposition in the early stages under high shock loading from all directions. Compared with the observations for shock velocities V(imp) = 10 and 11 km/s, homolytic cleavage of the N-NO2 bond is clearly suppressed with increasing pressure.

  18. Multi-color incomplete Cholesky conjugate gradient methods for vector computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poole, E.L.

    1986-01-01

This research is concerned with the solution on vector computers of linear systems of equations Ax = b, where A is a large, sparse, symmetric positive definite matrix with non-zero elements lying only along a few diagonals. The system is solved using the incomplete Cholesky conjugate gradient method (ICCG). Multi-color orderings of the unknowns are used to obtain p-color matrices, for which a no-fill block ICCG method is implemented on the CYBER 205 with O(N/p)-length vector operations both in the decomposition of A and, more importantly, in the forward and back solves necessary at each iteration of the method (N is the number of unknowns and p is a small constant). A p-color matrix is a matrix that can be partitioned into a p x p block matrix in which the diagonal blocks are diagonal matrices. The matrix is stored by diagonals, and matrix multiplication by diagonals is used to carry out the decomposition of A and the forward and back solves. Additionally, if the vectors across adjacent blocks line up, some of the overhead associated with vector startups can be eliminated in the matrix-vector multiplication necessary at each conjugate gradient iteration. Necessary and sufficient conditions are given to determine which multi-color orderings of the unknowns correspond to p-color matrices, and a process is indicated for choosing multi-color orderings.
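The p-color idea can be seen in miniature with numpy (a toy 1-D Laplacian, not the CYBER 205 code): under a red-black (2-color) ordering, a tridiagonal system becomes a 2 x 2 block matrix whose diagonal blocks are themselves diagonal, so the forward and back solves turn into long vector operations.

```python
import numpy as np

n = 8
# tridiagonal model problem: each unknown couples only to its neighbours
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# red-black (2-color) ordering: even-indexed unknowns first, then odd
perm = list(range(0, n, 2)) + list(range(1, n, 2))
P = A[np.ix_(perm, perm)]

half = n // 2
# both diagonal blocks of the permuted matrix are diagonal matrices,
# so triangular solves on them reduce to vector divisions
print(np.allclose(P[:half, :half], 2 * np.eye(half)))  # True
print(np.allclose(P[half:, half:], 2 * np.eye(half)))  # True
```

For a 5-point or 9-point stencil the same construction yields p-color matrices with more colors, which is what the necessary and sufficient conditions in the paper characterize.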

  19. Binary black hole spacetimes with a helical Killing vector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Christian

Binary black hole spacetimes with a helical Killing vector, which are discussed as an approximation for the early stage of a binary system, are studied in a projection formalism. In this setting the four-dimensional Einstein equations are equivalent to a three-dimensional gravitational theory with an SL(2,R)/SO(1,1) sigma model as the material source. The sigma model is determined by a complex Ernst equation. 2+1 decompositions of the three-metric are used to establish the field equations on the orbit space of the Killing vector. The two Killing horizons of spherical topology which characterize the black holes, the cylinder of light where the Killing vector changes from timelike to spacelike, and infinity are singular points of the equations. The horizon and the light cylinder are shown to be regular singularities, i.e., the metric functions can be expanded in a formal power series in their vicinity. The behavior of the metric at spatial infinity is studied in terms of formal series solutions to the linearized Einstein equations. It is shown that the spacetime is not asymptotically flat in the strong sense of having a smooth null infinity, under the assumption that the metric tends asymptotically to the Minkowski metric. In this case the metric functions have an oscillatory behavior in the radial coordinate in a nonaxisymmetric setting, and the asymptotic multipoles are not defined. The asymptotic behavior of the Weyl tensor near infinity shows that there is no smooth null infinity.

  20. Performance of Scattering Matrix Decomposition and Color Spaces for Synthetic Aperture Radar Imagery

    DTIC Science & Technology

    2010-03-01

Only table-of-contents fragments survive in this record. The report covers color spaces and Synthetic Aperture Radar (SAR) multicolor imaging, colorimetry, decomposition techniques applied to polarimetric SAR imagery, and the fundamentals of the RGB and CMY color spaces.

  1. Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding

    PubMed Central

    Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping

    2015-01-01

Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually lie on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so implementing feature extraction and dimensionality reduction while improving recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach is proposed, based on a statistical locally linear embedding (S-LLE) algorithm which extends LLE by exploiting fault class label information. The approach first extracts intrinsic manifold features from the high-dimensional feature vectors, which are obtained from vibration signals by feature extraction in the time domain, the frequency domain, and empirical mode decomposition (EMD), and then translates the complex mode space into a salient low-dimensional feature space with the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, in the reduced feature space, pattern classification and fault diagnosis are carried out easily and rapidly by a classifier. Rolling bearing fault signals are used to validate the proposed approach. The results indicate that it markedly improves the classification performance of fault pattern recognition and outperforms the other traditional approaches. PMID:26153771

  2. Lossless and Sufficient - Invariant Decomposition of Deterministic Target

    NASA Astrophysics Data System (ADS)

    Paladini, Riccardo; Ferro Famil, Laurent; Pottier, Eric; Martorella, Marco; Berizzi, Fabrizio

    2011-03-01

The symmetric radar scattering matrix of a reciprocal target is projected on the circular polarization basis and decomposed into four orientation-invariant parameters, a relative phase, and a relative orientation. The physical interpretation of these results is found in the wave-particle nature of radar scattering, due to the circular polarization nature of elemental packets of energy. The proposed decomposition is based on a left-orthogonal-to-left Special Unitary basis, providing the target description in terms of a unitary vector. A comparison between the proposed CTD and the Cameron, Kennaugh and Krogager decompositions is also presented. Validation using both anechoic chamber data and airborne EMISAR data from DTU shows the effectiveness of this decomposition for the analysis of coherent targets. In a second paper we will show the application of the rotation group U(3) to the decomposition of distributed targets into nine meaningful parameters.

  3. Two Dimensional Finite Element Based Magnetotelluric Inversion using Singular Value Decomposition Method on Transverse Electric Mode

    NASA Astrophysics Data System (ADS)

    Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy

    2018-04-01

In this work, an inversion scheme was performed using vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used to decompose the Jacobian matrices. The singular values obtained from the decomposition process were analyzed, which enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. Truncating singular values in the inversion process could improve the resulting model.
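The effect of singular-value truncation can be sketched with numpy (a toy 2-parameter "Jacobian", not the VFE code): directions with negligible singular values carry little data importance and are dropped before the model update is formed, which stabilizes the inverse.

```python
import numpy as np

# ill-conditioned toy Jacobian: two nearly dependent columns
J = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-10], [1.0, 1.0 - 1e-10]])
d = np.array([2.0, 2.0, 2.0])  # data vector

U, s, Vt = np.linalg.svd(J, full_matrices=False)
threshold = 1e-6 * s.max()   # truncation threshold on singular values
keep = s > threshold         # small singular values are discarded
m = Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])  # truncated-SVD model update

print(s)  # one singular value is ~1e-10: a poorly constrained direction
print(m)  # ≈ [1. 1.]
```

A naive least-squares solve would amplify noise along the tiny singular direction by a factor ~1e10; the truncated solve simply ignores it.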

  4. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    NASA Astrophysics Data System (ADS)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include the linear, polynomial, and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel on three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.
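The role of the kernel can be illustrated with numpy (a toy feature set, unrelated to the UAVSAR data): the linear Gram matrix is rank-limited by the input dimension, while the RBF kernel implicitly maps into an infinite-dimensional Hilbert space and yields a full-rank Gram matrix for distinct samples.

```python
import numpy as np

def linear_kernel(X):
    # Gram matrix of inner products in the original input space
    return X @ X.T

def rbf_kernel(X, gamma=1.0):
    # Gaussian kernel exp(-gamma * ||x_i - x_j||^2)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))  # 6 samples, 2 polarimetric features

# linear Gram rank is bounded by the input dimension (2 here),
# while the RBF Gram matrix is full rank for distinct points
print(np.linalg.matrix_rank(linear_kernel(X)),
      np.linalg.matrix_rank(rbf_kernel(X)))
```

This extra representational capacity is why the RBF kernel can separate crop classes that are not linearly separable in the raw feature space.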

  5. Machine Learning Techniques for Global Sensitivity Analysis in Climate Models

    NASA Astrophysics Data System (ADS)

    Safta, C.; Sargsyan, K.; Ricciuto, D. M.

    2017-12-01

Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments such as model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g., neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.

  6. Local structure-based image decomposition for feature extraction with applications to face recognition.

    PubMed

    Qian, Jianjun; Yang, Jian; Xu, Yong

    2013-09-01

    This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
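The linear-representation step of IDLS can be sketched with numpy (hypothetical 3-pixel macro-pixels and an assumed regularization constant `lam`): the central patch is regressed on its neighbours' patches, and the ridge coefficients form the local-structure feature.

```python
import numpy as np

def ridge_coefficients(center, neighbors, lam=0.1):
    """Represent the central macro-pixel as a linear combination of its
    neighbours' macro-pixels via closed-form ridge regression."""
    A = np.asarray(neighbors, dtype=float).T  # columns = neighbour patches
    y = np.asarray(center, dtype=float)
    # ridge solution: (A^T A + lam*I)^-1 A^T y
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

center = [1.0, 2.0, 3.0]
neighbors = [[1.0, 2.0, 3.1], [0.9, 2.1, 2.9]]
w = ridge_coefficients(center, neighbors)
print(w)  # one coefficient per neighbour: the local-structure feature
```

The regularization term keeps the coefficients stable when neighbouring patches are nearly collinear, which is common in smooth image regions.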

  7. Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar

    DOE PAGES

    Sen, Satyabrata

    2015-08-04

We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum in the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite matrix and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response vector that has a sparse support on the spatio-temporal plane. We use convex-relaxation-based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches in both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. As a result, the low-rank matrix decomposition based solution requires secondary measurements as many as twice the clutter rank to attain near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.
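The matrix shrinkage operator used alongside the SVD can be sketched in numpy (synthetic data, not the STAP covariances): soft-thresholding the singular values suppresses the noise directions and leaves a low-rank estimate.

```python
import numpy as np

def svd_shrink(M, tau):
    """Matrix shrinkage operator: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(2)
# synthetic "clutter covariance": rank-2 structure plus small noise
L = rng.normal(size=(8, 2))
M = L @ L.T + 0.01 * rng.normal(size=(8, 8))

R = svd_shrink(M, tau=0.5)
# singular values below tau are zeroed, so the noise directions vanish
print(np.linalg.matrix_rank(R, tol=1e-6))
```

By Weyl's inequality the noise can perturb each singular value by at most the noise spectral norm, so a threshold above that norm recovers exactly the structured rank.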

  8. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  9. Single block three-dimensional volume grids about complex aerodynamic vehicles

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Weilmuenster, K. James

    1993-01-01

    This paper presents an alternate approach for the generation of volumetric grids for supersonic and hypersonic flows about complex configurations. The method uses parametric two dimensional block face grid definition within the framework of GRIDGEN2D. The incorporation of face decomposition reduces complex surfaces to simple shapes. These simple shapes are combined to obtain the final face definition. The advantages of this method include the reduction of overall grid generation time through the use of vectorized computer code, the elimination of the need to generate matching block faces, and the implementation of simplified boundary conditions. A simple axisymmetric grid is used to illustrate this method. In addition, volume grids for two complex configurations, the Langley Lifting Body (HL-20) and the Space Shuttle Orbiter, are shown.

  10. Kinematics of reflections in subsurface offset and angle-domain image gathers

    NASA Astrophysics Data System (ADS)

    Dafni, Raanan; Symes, William W.

    2018-05-01

Seismic migration in the angle domain generates multiple images of the earth's interior in which reflection takes place at different scattering angles. Mechanically, angle-dependent reflection is restricted to happen instantaneously and at a fixed point in space: an incident wave hits a discontinuity in the subsurface media and instantly generates a scattered wave at the same common point of interaction. Alternatively, the angle-domain image may be associated with space-shift (subsurface-offset) extended migration that artificially splits the reflection geometry, meaning that incident and scattered waves interact at some offset distance. The geometric differences between the two approaches amount to contradictory angle-domain behaviour and unlike kinematic descriptions. We present a phase-space depiction of migration methods extended by this peculiar subsurface-offset split and stress its profound dissimilarity. In spite of being in radical contradiction with the underlying physics, the subsurface offset reveals a link to some valuable angle-domain quantities via post-migration transformations. The angle quantities are indicated by the direction normal to the subsurface-offset extended image. They specifically define the local dip and scattering angles if the velocity at the split reflection coordinates is the same for the incident and scattered wave pairs. Otherwise, the reflector normal is not a bisector of the opening angle, but of the corresponding slowness vectors. This evidence, together with the distinguished geometric configuration, fundamentally differentiates the angle-domain decomposition based on the subsurface-offset split from the conventional decomposition at a common reflection point. An asymptotic simulation of angle-domain moveout curves in layered media exposes the notion of split versus common-reflection-point geometry.
Traveltime inversion methods that involve subsurface-offset extended migration must accommodate the split geometry in the inversion scheme for robust and successful convergence to the optimal velocity model.

  11. Segmentation of discrete vector fields.

    PubMed

    Li, Hongyu; Chen, Wenbin; Shen, I-Fan

    2006-01-01

    In this paper, we propose an approach for 2D discrete vector field segmentation based on the Green function and normalized cut. The method is inspired by discrete Hodge Decomposition such that a discrete vector field can be broken down into three simpler components, namely, curl-free, divergence-free, and harmonic components. We show that the Green Function Method (GFM) can be used to approximate the curl-free and the divergence-free components to achieve our goal of the vector field segmentation. The final segmentation curves that represent the boundaries of the influence region of singularities are obtained from the optimal vector field segmentations. These curves are composed of piecewise smooth contours or streamlines. Our method is applicable to both linear and nonlinear discrete vector fields. Experiments show that the segmentations obtained using our approach essentially agree with human perceptual judgement.

  12. TE/TM decomposition of electromagnetic sources

    NASA Technical Reports Server (NTRS)

    Lindell, Ismo V.

    1988-01-01

    Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match to an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be more straightforwardly obtained through the present decomposition method.

  13. Decompositions of the polyhedral product functor with applications to moment-angle complexes and related spaces

    PubMed Central

    Bahri, A.; Bendersky, M.; Cohen, F. R.; Gitler, S.

    2009-01-01

    This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley–Reisner ring of a finite simplicial complex, and natural generalizations. PMID:19620727

  14. Decompositions of the polyhedral product functor with applications to moment-angle complexes and related spaces.

    PubMed

    Bahri, A; Bendersky, M; Cohen, F R; Gitler, S

    2009-07-28

    This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley-Reisner ring of a finite simplicial complex, and natural generalizations.

  15. On the Hilbert-Huang Transform Theoretical Developments

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Blank, Karin; Flatley, Thomas; Huang, Norden E.; Patrick, David; Hestnes, Phyllis

    2005-01-01

One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high-performance digital equivalent, the Fast Fourier Transform (FFT). Both carry strong a priori assumptions about the source data, such as linearity, stationarity, and satisfaction of the Dirichlet conditions. A recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution of the nonlinear class of spectrum analysis problems. Using a posteriori data processing based on the Empirical Mode Decomposition (EMD) sifting process (algorithm), followed by the normalized Hilbert Transform of the decomposition data, the HHT allows spectrum analysis of nonlinear and nonstationary data. The EMD sifting process results in a non-constrained decomposition of a source real-valued data vector into a finite set of Intrinsic Mode Functions (IMFs). These functions form a near-orthogonal adaptive basis, a basis derived from the data. The IMFs can be further analyzed for spectrum interpretation by the classical Hilbert Transform. A new engineering spectrum analysis tool using the HHT, the HHT Data Processing System (HHT-DPS), has been developed at NASA GSFC. As the HHT-DPS has been successfully used and commercialized, new applications pose additional questions about the theoretical basis behind the HHT and EMD algorithms. Why is the fastest-changing component of a composite signal sifted out first in the EMD sifting process? Why does the EMD sifting process seemingly converge, and why does it converge rapidly? Does an IMF have a distinctive structure? Why are the IMFs nearly orthogonal? We address these questions and develop the initial theoretical background for the HHT.
This will contribute to the development of new HHT processing options, such as real-time and 2-D processing using Field Programmable Gate Array (FPGA) computational resources and enhanced HHT synthesis, and broaden the scope of HHT applications for signal processing.
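One pass of the sifting process can be sketched in numpy, with the cubic-spline envelopes of EMD simplified to linear interpolation (an assumption for brevity): subtracting the mean of the upper and lower extrema envelopes removes the slow component, which is why the fastest oscillation is sifted out first.

```python
import numpy as np

def sift_once(x):
    """One EMD sifting pass with (simplified) linear envelopes:
    subtract the mean of the upper and lower extrema envelopes."""
    idx = np.arange(len(x))
    # interior local maxima and minima
    maxima = [i for i in idx[1:-1] if x[i] > x[i - 1] and x[i] > x[i + 1]]
    minima = [i for i in idx[1:-1] if x[i] < x[i - 1] and x[i] < x[i + 1]]
    upper = np.interp(idx, maxima, x[maxima])  # upper envelope
    lower = np.interp(idx, minima, x[minima])  # lower envelope
    return x - (upper + lower) / 2.0

t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 3 * t)  # fast + slow
h = sift_once(x)  # the slow component is largely removed
```

The envelope mean tracks the slow trend (upper ≈ trend + 1, lower ≈ trend - 1 here), so the first candidate IMF is dominated by the 40 Hz oscillation; real EMD repeats this pass until the IMF criteria are met.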

  16. Rapid Transient Pressure Field Computations in the Nearfield of Circular Transducers using Frequency Domain Time-Space Decomposition

    PubMed Central

    Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.

    2013-01-01

    The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476

  17. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  18. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition.

    PubMed

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-03-27

Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme generates an adaptive dictionary whose atoms reveal the underlying structure of the raw signals; essentially, dictionary learning is employed as an adaptive feature extraction method requiring no prior knowledge. Second, the singular value sequence of the learned dictionary matrix is used to construct the feature vector. Since this vector is generally of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce the dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary-learning-based matrix construction approach outperforms the mode-decomposition-based methods in terms of capacity and adaptability for feature extraction.
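The feature-and-classify pipeline can be sketched with numpy (synthetic signal matrices standing in for the learned dictionaries, and the PCA step omitted since the toy features are already short): singular value sequences separate structured "healthy" matrices from noisier "faulty" ones, and a plain KNN vote classifies them.

```python
import numpy as np

rng = np.random.default_rng(4)

def sv_features(M):
    # feature vector = singular value sequence of the matrix
    return np.linalg.svd(M, compute_uv=False)

def make_sample(faulty):
    base = np.outer(np.ones(6), np.ones(6))  # rank-1 "healthy" structure
    noise = 0.1 * rng.normal(size=(6, 6))
    return base + (rng.normal(size=(6, 6)) if faulty else 0) + noise

# small labelled training set: even samples healthy (0), odd faulty (1)
X = np.array([sv_features(make_sample(i % 2 == 1)) for i in range(20)])
y = np.array([i % 2 for i in range(20)])

def knn_predict(x, k=3):
    d = np.linalg.norm(X - x, axis=1)      # distances to training features
    votes = y[np.argsort(d)[:k]]           # labels of the k nearest
    return np.bincount(votes).argmax()     # majority vote

print(knn_predict(sv_features(make_sample(True))))  # classify an unseen sample
```

The fault adds energy to the trailing singular values, so the two classes occupy well-separated regions of the feature space.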

  19. Classification of subsurface objects using singular values derived from signal frames

    DOEpatents

    Chambers, David H; Paglieroni, David W

    2014-05-06

    The classification system represents a detected object with a feature vector derived from the return signals acquired by an array of N transceivers operating in multistatic mode. The classification system generates the feature vector by transforming the real-valued return signals into complex-valued spectra, using, for example, a Fast Fourier Transform. The classification system then generates a feature vector of singular values for each user-designated spectral sub-band by applying a singular value decomposition (SVD) to the N×N square complex-valued matrix formed from sub-band samples associated with all possible transmitter-receiver pairs. The resulting feature vector of singular values may be transformed into a feature vector of singular value likelihoods and then subjected to a multi-category linear or neural network classifier for object classification.
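    A minimal numpy sketch of the feature-vector construction; the signal values, the array size N, and the chosen sub-band bin are hypothetical, and the likelihood transformation and classifier stages are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4          # transceivers in the multistatic array (value hypothetical)
T = 256        # time samples per return signal

# Hypothetical real-valued return signals for all N*N transmitter-receiver pairs.
signals = rng.normal(size=(N, N, T))

# Transform each real-valued return into a complex-valued spectrum.
spectra = np.fft.rfft(signals, axis=-1)

# Pick one user-designated spectral sub-band (bin 10, chosen arbitrarily) and
# form the N x N complex-valued matrix over all transmitter-receiver pairs.
subband = spectra[:, :, 10]

# Feature vector for this sub-band: the singular values from the SVD.
feature = np.linalg.svd(subband, compute_uv=False)
```

    numpy returns the singular values sorted in descending order, so the feature vector has a canonical ordering regardless of how the transceiver pairs were indexed.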

  20. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
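    The decompose-predict-ensemble principle can be illustrated with a deliberately simplified stand-in: a moving-average split replaces CEEMD, and a least-squares AR(1) predictor replaces the GWO-tuned SVR (all data are synthetic and all choices hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical daily PM2.5 series: a seasonal component plus noise.
t = np.arange(200, dtype=float)
series = 50 + 10 * np.sin(2 * np.pi * t / 30) + rng.normal(scale=2.0, size=t.size)

# Step 1 (stand-in for CEEMD): split into a smooth component and a residual.
kernel = np.ones(7) / 7
smooth = np.convolve(series, kernel, mode="same")
components = [smooth, series - smooth]

# Step 2 (stand-in for GWO-tuned SVR): predict each component one step ahead
# with a trivial least-squares AR(1) fit on that component alone.
def predict_next(x):
    a = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])  # AR(1) coefficient
    return a * x[-1]

# Step 3: the ensemble forecast combines the per-component forecasts
# (a plain sum here; the paper trains a second SVR for this step).
forecast = sum(predict_next(c) for c in components)
```

    The key structural point survives the simplifications: the components sum back to the original series exactly, and the final forecast is assembled from per-component forecasts rather than one model on the raw series.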

  1. Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition

    NASA Astrophysics Data System (ADS)

    Weinstein, Marvin; Chandra, Ravi; Auerbach, Assa

    2012-02-01

    We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. In contrast to variational approaches and most implementations of DMRG, Lanczos rotations towards the ground state do not involve incremental minimizations (e.g. sweeping procedures), which may get stuck in false local minima. The lattice of size N is partitioned into two subclusters. At each iteration the rotating Lanczos vector is compressed into two sets of nsvd small subcluster vectors using singular value decomposition. For low entanglement entropy See (satisfied by short-range Hamiltonians), the truncation error is bounded by exp(-nsvd^(1/See)). Convergence is tested for the Heisenberg model on Kagomé clusters of 24, 30 and 36 sites, with no lattice symmetries exploited, using less than 15GB of dynamical memory. Generalization of the Lanczos-SVD algorithm to multiple partitioning is discussed, and comparisons to other techniques are given. Reference: arXiv:1105.0007

  2. Truncated feature representation for automatic target detection using transformed data-based decomposition

    NASA Astrophysics Data System (ADS)

    Riasati, Vahid R.

    2016-05-01

    In this work, the data covariance matrix is diagonalized to provide an orthogonal basis set using the eigenvectors of the data. The eigenvector decomposition of the data is transformed and filtered in the transform domain to truncate the data to robust features related to a specified set of targets. These truncated eigen-features are then combined and reconstructed for use in a composite filter, which is consequently utilized for automatic target detection of the same class of targets. The results of testing the current technique are evaluated using the peak-correlation and peak-correlation energy metrics and are presented in this work. The inverse-transformed eigen-bases of the current technique may be thought of as an injected sparsity that minimizes the data needed to represent the skeletal data-structure information associated with the set of targets under consideration.
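    A small numpy sketch of the covariance diagonalization and eigen-feature truncation; the data, dimensions, and retained count k are hypothetical, and the transform-domain filtering and composite-filter stages are omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical target chips: 100 samples of a 64-dimensional signature
# with correlated components.
data = rng.normal(size=(100, 64)) @ rng.normal(size=(64, 64)) * 0.1

# Diagonalize the data covariance matrix to get an orthogonal eigenbasis.
cov = np.cov(data, rowvar=False)
evals, evecs = np.linalg.eigh(cov)          # eigh returns ascending order
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# Truncate: keep the k leading eigenvectors and reconstruct the data from
# this reduced basis (the "skeletal" structure of the targets).
k = 8
centered = data - data.mean(axis=0)
coeffs = centered @ evecs[:, :k]
reconstruction = coeffs @ evecs[:, :k].T + data.mean(axis=0)
```

    Because the reconstruction is an orthogonal projection onto the leading eigenspace, its residual is never larger than the spread of the centered data.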

  3. Adaptive-projection intrinsically transformed multivariate empirical mode decomposition in cooperative brain-computer interface applications.

    PubMed

    Hemakom, Apit; Goverdovsky, Valentin; Looney, David; Mandic, Danilo P

    2016-04-13

    An extension to multivariate empirical mode decomposition (MEMD), termed adaptive-projection intrinsically transformed MEMD (APIT-MEMD), is proposed to cater for power imbalances and inter-channel correlations in real-world multichannel data. It is shown that APIT-MEMD exhibits similar or better performance than MEMD for a large number of projection vectors, whereas it outperforms MEMD for the critical case of a small number of projection vectors within the sifting algorithm. We also employ the noise-assisted APIT-MEMD within our proposed intrinsic multiscale analysis framework and illustrate the advantages of such an approach in notoriously noise-dominated cooperative brain-computer interfaces (BCIs) based on steady-state visual evoked potentials and P300 responses. Finally, we show that for a joint cognitive BCI task, the proposed intrinsic multiscale analysis framework improves system performance in terms of the information transfer rate. © 2016 The Author(s).

  4. Warps, grids and curvature in triple vector bundles

    NASA Astrophysics Data System (ADS)

    Flari, Magdalini K.; Mackenzie, Kirill

    2018-06-01

    A triple vector bundle is a cube of vector bundle structures which commute in the (strict) categorical sense. A grid in a triple vector bundle is a collection of sections of each bundle structure with certain linearity properties. A grid provides two routes around each face of the triple vector bundle, and six routes from the base manifold to the total manifold; the warps measure the lack of commutativity of these routes. In this paper we first prove that the sum of the warps in a triple vector bundle is zero. The proof we give is intrinsic and, we believe, clearer than the proof using decompositions given earlier by one of us. We apply this result to the triple tangent bundle T^3M of a manifold and deduce (as earlier) the Jacobi identity. We further apply the result to the triple vector bundle T^2A for a vector bundle A using a connection in A to define a grid in T^2A . In this case the curvature emerges from the warp theorem.

  5. Hidden Surface Removal through Object Space Decomposition.

    DTIC Science & Technology

    1982-01-01

    [OCR residue from DTIC report front matter. Recoverable details: an Air Force Institute of Technology (Wright-Patterson AFB) thesis titled "Hidden Surface Removal Through Object Space Decomposition", author Robert …; the table of contents includes "2.1 Methods of Subdividing the Object Space" and "2.2 Accessing…".]

  6. Classification of Partial Discharge Signals by Combining Adaptive Local Iterative Filtering and Entropy Features

    PubMed Central

    Morison, Gordon; Boreham, Philip

    2018-01-01

    Electromagnetic Interference (EMI) is a technique for capturing Partial Discharge (PD) signals in High-Voltage (HV) power plant apparatus. EMI signals can be non-stationary, which makes their analysis difficult, particularly for pattern recognition applications. This paper elaborates upon a previously developed software condition-monitoring model for improved EMI event classification based on time-frequency signal decomposition and entropy features. The idea of the proposed method is to map multiple discharge source signals captured by EMI and labelled by experts, including PD, from the time domain to a feature space, which aids the interpretation of subsequent fault information. Here, instead of using only one permutation entropy measure, a more robust measure, called Dispersion Entropy (DE), is added to the feature vector. Multi-Class Support Vector Machine (MCSVM) methods are utilized for classification of the different discharge sources. Results show improved classification accuracy compared to previously proposed methods, supporting the development of an expert-knowledge-based intelligent system. Since the method is demonstrated to be successful with real field data, it offers the possibility of real-world application for EMI condition monitoring. PMID:29385030
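    Of the entropy features mentioned, permutation entropy is the simplest to sketch. The following numpy implementation (order and delay values hypothetical; Dispersion Entropy omitted) shows why such measures help separate broadband discharge-like signals from smooth ones:

```python
import numpy as np
from collections import Counter
from math import log, factorial

def permutation_entropy(x, m=3, tau=1):
    # Normalized permutation entropy of a 1-D signal (order m, delay tau):
    # count ordinal patterns of m-sample windows, then take the Shannon
    # entropy of the pattern distribution, normalized by log(m!).
    patterns = Counter()
    n = len(x) - (m - 1) * tau
    for i in range(n):
        window = x[i : i + m * tau : tau]
        patterns[tuple(np.argsort(window))] += 1
    probs = np.array(list(patterns.values()), dtype=float) / n
    return float(-(probs * np.log(probs)).sum() / log(factorial(m)))

rng = np.random.default_rng(5)
pe_noise = permutation_entropy(rng.normal(size=4000))                     # broadband noise
pe_tone = permutation_entropy(np.sin(np.linspace(0, 40 * np.pi, 4000)))   # smooth tone
```

    Broadband noise scores near 1 while a smooth tone scores much lower, which is what makes such entropies informative entries in the feature vector fed to the classifier.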

  7. Invariant object recognition based on the generalized discrete radon transform

    NASA Astrophysics Data System (ADS)

    Easley, Glenn R.; Colonna, Flavia

    2004-04-01

    We introduce a method for classifying objects based on special cases of the generalized discrete Radon transform. We adjust the transform and the corresponding ridgelet transform by means of circular shifting and a singular value decomposition (SVD) to obtain a translation, rotation and scaling invariant set of feature vectors. We then use a back-propagation neural network to classify the input feature vectors. We conclude with experimental results and compare these with other invariant recognition methods.

  8. Discriminative Dictionary Learning With Two-Level Low Rank and Group Sparse Decomposition for Image Classification.

    PubMed

    Wen, Zaidao; Hou, Zaidao; Jiao, Licheng

    2017-11-01

    Discriminative dictionary learning (DDL) frameworks have been widely used in image classification, where the aim is to learn class-specific feature vectors as well as a representative dictionary from a set of labeled training samples. However, interclass similarities and intraclass variances among input samples and learned features generally weaken the representability of the dictionary and the discrimination of the feature vectors, degrading classification performance. How to explicitly represent them therefore becomes an important issue. In this paper, we present a novel DDL framework with a two-level low-rank and group-sparse decomposition model. In the first level, we learn a class-shared and several class-specific dictionaries, where a low-rank and a group-sparse regularization are, respectively, imposed on the corresponding feature matrices. In the second level, each class-specific feature matrix is further decomposed into a low-rank and a sparse matrix so that intraclass variances can be separated to concentrate the corresponding feature vectors. Extensive experimental results demonstrate the effectiveness of our model. Compared with other state-of-the-art methods on several popular image databases, our model achieves competitive or better classification accuracy.

  9. Protein sequence comparison based on K-string dictionary.

    PubMed

    Yu, Chenglong; He, Rong L; Yau, Stephen S-T

    2013-10-25

    The current K-string-based protein sequence comparisons require large amounts of computer memory because the dimension of the protein vector representation grows exponentially with K. In this paper, we propose a novel concept, the "K-string dictionary", to solve this high-dimensional problem. It allows us to use a much lower dimensional K-string-based frequency or probability vector to represent a protein, and thus significantly reduce the computer memory requirements for their implementation. Furthermore, based on this new concept, we use Singular Value Decomposition to analyze real protein datasets, and the improved protein vector representation allows us to obtain accurate gene trees. © 2013.
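    A toy numpy sketch of the K-string frequency representation followed by SVD; a 4-letter alphabet stands in for the 20 amino acids, and the sequences are made up. The paper's "K-string dictionary" further trims unused strings to tame the exponential growth of the dimension with K:

```python
import numpy as np
from itertools import product

# Toy alphabet (real protein work uses the 20 amino acids, dimension 20**K).
alphabet = "ACGT"
K = 2
kmers = ["".join(p) for p in product(alphabet, repeat=K)]
index = {k: i for i, k in enumerate(kmers)}

def kstring_vector(seq):
    # K-string frequency vector of a sequence (dimension |alphabet|**K).
    v = np.zeros(len(kmers))
    for i in range(len(seq) - K + 1):
        v[index[seq[i : i + K]]] += 1
    return v / max(1, len(seq) - K + 1)

# Hypothetical sequences: the first two are similar, the third is not.
seqs = ["ACGTACGTAC", "ACGTACGAAC", "TTTTGGGTTT"]
M = np.stack([kstring_vector(s) for s in seqs])

# SVD gives a low-dimensional representation of each sequence vector.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
reduced = U * s      # each row: sequence coordinates in the SVD basis
```

    Because the rows of Vt are orthonormal, pairwise distances between sequences survive the change of basis, so similar sequences stay close in the reduced representation used to build gene trees.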

  10. An Efficient and Robust Singular Value Method for Star Pattern Recognition and Attitude Determination

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Kim, Hye-Young; Junkins, John L.

    2003-01-01

    A new star pattern recognition method is developed using singular value decomposition of a measured unit column vector matrix in a measurement frame and the corresponding cataloged vector matrix in a reference frame. It is shown that singular values and right singular vectors are invariant with respect to coordinate transformation and robust under uncertainty. One advantage of singular value comparison is that a pairing process for individual measured and cataloged stars is not necessary, and the attitude estimation and pattern recognition process are not separated. An associated method for mission catalog design is introduced and simulation results are presented.
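    The invariance property at the heart of the method is easy to verify numerically: multiplying the measured unit-vector matrix by any rotation (here a random orthogonal matrix standing in for the unknown attitude between frames) leaves the singular values unchanged:

```python
import numpy as np

rng = np.random.default_rng(6)

# Unit column vectors to 5 stars in a (hypothetical) measurement frame.
stars = rng.normal(size=(3, 5))
stars /= np.linalg.norm(stars, axis=0)

# A random orthogonal matrix stands in for the unknown coordinate
# transformation between the measurement and reference frames.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

s_meas = np.linalg.svd(stars, compute_uv=False)
s_cat = np.linalg.svd(Q @ stars, compute_uv=False)   # same stars, rotated frame
```

    The right singular vectors are likewise invariant (up to sign), which is what lets pattern recognition proceed without first pairing individual measured and cataloged stars.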

  11. Visualization of the energy flow for guided forward and backward waves in and around a fluid-loaded elastic cylindrical shell via the Poynting vector field

    NASA Astrophysics Data System (ADS)

    Dean, Cleon E.; Braselton, James P.

    2004-05-01

    Color-coded and vector-arrow grid representations of the Poynting vector field are used to show the energy flow in and around a fluid-loaded elastic cylindrical shell for both forward- and backward-propagating waves. The present work uses a method adapted from a simpler technique due to Kaduchak and Marston [G. Kaduchak and P. L. Marston, ``Traveling-wave decomposition of surface displacements associated with scattering by a cylindrical shell: Numerical evaluation displaying guided forward and backward wave properties,'' J. Acoust. Soc. Am. 98, 3501-3507 (1995)] to isolate unidirectional energy flows.

  12. Fundamental Principles of Classical Mechanics: a Geometrical Perspective

    NASA Astrophysics Data System (ADS)

    Lam, Kai S.

    2014-07-01

    Classical mechanics is the quantitative study of the laws of motion for macroscopic physical systems with mass. The fundamental laws of this subject, known as Newton's Laws of Motion, are expressed in terms of second-order differential equations governing the time evolution of vectors in a so-called configuration space of a system (see Chapter 12). In an elementary setting, these are usually vectors in 3-dimensional Euclidean space, such as position vectors of point particles; but typically they can be vectors in higher dimensional and more abstract spaces. A general knowledge of the mathematical properties of vectors, not only in their most intuitive incarnations as directed arrows in physical space but as elements of abstract linear vector spaces, and those of linear operators (transformations) on vector spaces as well, is then indispensable in laying the groundwork for both the physical and the more advanced mathematical (more precisely, topological and geometrical) concepts that will prove to be vital in our subject. In this beginning chapter we will review these properties, and introduce the all-important related notions of dual spaces and tensor products of vector spaces. The notational convention for vectorial and tensorial indices used for the rest of this book (except when otherwise specified) will also be established...

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Ning; Shen, Tielong; Kurtz, Richard

    The properties of nano-scale interstitial dislocation loops under the coupled effects of stress and temperature are studied using atomistic simulation methods and experiments. The decomposition of a loop by the emission of smaller loops is identified as one of the major mechanisms that release the localized stress induced by this coupling, as validated by TEM observations. The classical conservation law of the Burgers vector cannot be applied during such a decomposition process. A dislocation network is formed from the decomposed loops, which may initiate irradiation creep much earlier than expected through the mechanism of climb-controlled glide of dislocations.

  14. Solving the multi-frequency electromagnetic inverse source problem by the Fourier method

    NASA Astrophysics Data System (ADS)

    Wang, Guan; Ma, Fuming; Guo, Yukun; Li, Jingzhi

    2018-07-01

    This work is concerned with an inverse problem of identifying the current source distribution of the time-harmonic Maxwell's equations from multi-frequency measurements. Motivated by the Fourier method for the scalar Helmholtz equation and the polarization vector decomposition, we propose a novel method for determining the source function in the full vector Maxwell's system. Rigorous mathematical justifications of the method are given and numerical examples are provided to demonstrate the feasibility and effectiveness of the method.

  15. FIDEP2 User Manual to Micromechanical Models for Thermoviscoplastic Behavior of Metal Matrix Composites

    DTIC Science & Technology

    1998-09-01

    [OCR residue of Fortran source from the FIDEP2 code. Recoverable details: after adding terms for interface nodes (with BMAT(NTOT-1) = SR under radial loading), the routine calls LUBKSB(AMAT, NRA, LDA, IPVT, …, BMAT, XSOL) to back-substitute against the L-U decomposition of AMAT, yielding XSOL, the vector of radial and hoop stresses; boundary conditions S(1,NTOT2) = SR and S(2,1) = S(1,1) are then applied before the (truncated) computation of the total axial stresses.]

  16. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition

    PubMed Central

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-01-01

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme generates an adaptive dictionary whose atoms reveal the underlying structure of the raw signals; in essence, dictionary learning serves as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix serves as the feature vector. Since this vector is of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce its dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms mode decomposition-based methods in terms of capacity and adaptability for feature extraction. PMID:28346385

  17. Decomposition of group-velocity-locked-vector-dissipative solitons and formation of the high-order soliton structure by the product of their recombination.

    PubMed

    Wang, Xuan; Li, Lei; Geng, Ying; Wang, Hanxiao; Su, Lei; Zhao, Luming

    2018-02-01

    By using a polarization manipulation and projection system, we numerically decomposed the group-velocity-locked vector dissipative solitons (GVLVDSs) from a normal dispersion fiber laser and studied the combination of the projections of the phase-modulated components of the GVLVDS through a polarization beam splitter. Pulses with a structure similar to a high-order vector soliton could be obtained, which can be considered a pseudo-high-order GVLVDS. It is found that, although GVLVDSs are intrinsically different from group-velocity-locked vector solitons generated in fiber lasers operating in the anomalous dispersion regime, similar characteristics for the generation of pseudo-high-order GVLVDSs are obtained. However, pulse chirp plays a significant role in the generation of pseudo-high-order GVLVDSs.

  18. Separation of spatial-temporal patterns ('climatic modes') by combined analysis of really measured and generated numerically vector time series

    NASA Astrophysics Data System (ADS)

    Feigin, A. M.; Mukhin, D.; Volodin, E. M.; Gavrilov, A.; Loskutov, E. M.

    2013-12-01

    The new method of decomposition of the Earth's climate system into well-separated spatial-temporal patterns ('climatic modes') is discussed. The method is based on: (i) generalization of the MSSA (Multichannel Singular Spectral Analysis) [1] for expanding vector (space-distributed) time series in a basis of spatial-temporal empirical orthogonal functions (STEOFs), which makes allowance for delayed correlations between processes recorded at spatially separated points; (ii) expanding both real SST data and several-times-longer SST data generated numerically in the STEOF basis; (iii) use of the numerically produced STEOF basis for the exclusion of 'too slow' (and thus not correctly represented) processes from the real data. Applying the method to vector time series generated numerically by the INM RAS Coupled Climate Model [2] allows us to separate from real SST anomaly data [3] two climatic modes with noticeably different time scales: 3-5 and 9-11 years. Relations of the separated modes to ENSO and PDO are investigated. Possible applications of the spatial-temporal climatic pattern concept to the prognosis of climate system evolution are discussed. 1. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 2. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm 3. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/

  19. Multiscale vector fields for image pattern recognition

    NASA Technical Reports Server (NTRS)

    Low, Kah-Chan; Coggins, James M.

    1990-01-01

    A uniform processing framework for low-level vision computing in which a bank of spatial filters maps the image intensity structure at each pixel into an abstract feature space is proposed. Some properties of the filters and the feature space are described. Local orientation is measured by a vector sum in the feature space as follows: each filter's preferred orientation along with the strength of the filter's output determine the orientation and the length of a vector in the feature space; the vectors for all filters are summed to yield a resultant vector for a particular pixel and scale. The orientation of the resultant vector indicates the local orientation, and the magnitude of the vector indicates the strength of the local orientation preference. Limitations of the vector sum method are discussed. Investigations show that the processing framework provides a useful, redundant representation of image structure across orientation and scale.
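    The vector-sum rule for local orientation can be sketched directly; the filter orientations, output strengths, and the pixel in question are all hypothetical:

```python
import numpy as np

# Preferred orientations of a hypothetical filter bank (degrees) and the
# strength of each filter's output at one pixel and scale.
orientations = np.deg2rad([0.0, 45.0, 90.0, 135.0])
strengths = np.array([0.2, 1.0, 0.3, 0.1])   # strongest response near 45 deg

# Each filter contributes a vector along its preferred orientation, scaled
# by its output strength; the resultant vector's angle is the local
# orientation estimate and its magnitude the strength of the preference.
resultant = np.sum(
    strengths[:, None] *
    np.stack([np.cos(orientations), np.sin(orientations)], axis=1),
    axis=0)
angle = np.rad2deg(np.arctan2(resultant[1], resultant[0]))
magnitude = np.linalg.norm(resultant)
```

    Note one limitation of the naive sum, of the kind the paper discusses: contributions from opposite orientations (0° and 180°) cancel rather than reinforce, even though they describe the same undirected orientation.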

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seljak, Uroš; McDonald, Patrick, E-mail: useljak@berkeley.edu, E-mail: pvmcdonald@lbl.gov

    We develop a phase space distribution function approach to redshift space distortions (RSD), in which the redshift space density can be written as a sum over velocity moments of the distribution function. These moments are density weighted and have well defined physical interpretation: their lowest orders are density, momentum density, and stress energy density. The series expansion is convergent if kμu/aH < 1, where k is the wavevector, H the Hubble parameter, u the typical gravitational velocity and μ = cos θ, with θ being the angle between the Fourier mode and the line of sight. We perform an expansion of these velocity moments into helicity modes, which are eigenmodes under rotation around the axis of the Fourier mode direction, generalizing the scalar, vector, tensor decomposition of perturbations to an arbitrary order. We show that only equal helicity moments correlate and derive the angular dependence of the individual contributions to the redshift space power spectrum. We show that the dominant term of μ² dependence on large scales is the cross-correlation between the density and the scalar part of momentum density, which can be related to the time derivative of the matter power spectrum. Additional terms contributing to μ² and dominating on small scales are the vector part of momentum density-momentum density correlations, the energy density-density correlations, and the scalar part of anisotropic stress density-density correlations. The second term is what is usually associated with the small scale Fingers-of-God damping and always suppresses power, but the first term comes with the opposite sign and always adds power. Similarly, we identify 7 terms contributing to μ⁴ dependence. Some of the advantages of the distribution function approach are that the series expansion converges on large scales and remains valid in multi-stream situations. We finish with a brief discussion of implications for RSD in galaxies relative to dark matter, highlighting the issue of scale dependent bias of velocity moments correlators.

  1. Vector calculus in non-integer dimensional space and its applications to fractal media

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2015-02-01

    We suggest a generalization of vector calculus for the case of non-integer dimensional space. The first- and second-order operations, such as the gradient, divergence, and the scalar and vector Laplace operators, are defined for non-integer dimensional space. For simplification we consider scalar and vector fields that are independent of angles. We formulate a generalization of vector calculus for rotationally covariant scalar and vector functions. This generalization allows us to describe fractal media and materials in the framework of continuum models with non-integer dimensional space. As examples of application of the suggested calculus, we consider the elasticity of fractal materials (a fractal hollow ball and a fractal cylindrical pipe with pressure inside and outside), the steady distribution of heat in fractal media, and the electric field of a fractal charged cylinder. We solve the corresponding equations for non-integer dimensional space models.

  2. Method of assessing the state of a rolling bearing based on the relative compensation distance of multiple-domain features and locally linear embedding

    NASA Astrophysics Data System (ADS)

    Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.

    2017-03-01

    To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes can be calculated for the original vibration signal and each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix can be extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix can be constructed by arranging each feature vector of multiple samples, the dimensions of each row vector of the feature matrix can be reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing can be calculated using the support vector machine. Finally, the relative distance between different fault locations and different degrees of performance degradation and the normal-state optimal classification surface can be compensated, and on the basis of the proposed relative compensation distance, the assessment model can be constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.

  3. Extending the length and time scales of Gram-Schmidt Lyapunov vector computations

    NASA Astrophysics Data System (ADS)

    Costa, Anthony B.; Green, Jason R.

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
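    The Gram-Schmidt/QR iteration at the core of such computations can be shown on a tiny system; here a Benettin-style loop estimates the Lyapunov exponents of the Hénon map, a stand-in for the Lennard-Jones systems in the paper that is small enough to need no Scalapack or GPU acceleration:

```python
import numpy as np

# Benettin-style QR iteration for the Lyapunov exponents of the Henon map.
a, b = 1.4, 0.3

def step(x, y):
    return 1 - a * x * x + y, b * x

def jacobian(x, y):
    return np.array([[-2 * a * x, 1.0],
                     [b, 0.0]])

x, y = 0.1, 0.1
Q = np.eye(2)          # orthonormal tangent vectors
log_r = np.zeros(2)
n_steps = 20000
for _ in range(n_steps):
    J = jacobian(x, y)
    x, y = step(x, y)
    # Re-orthonormalize the tangent vectors (Gram-Schmidt == QR) and
    # accumulate the log of the diagonal stretching factors of R.
    Q, R = np.linalg.qr(J @ Q)
    log_r += np.log(np.abs(np.diag(R)))

lyapunov = log_r / n_steps
```

    The sum of the exponents recovers the average log-Jacobian determinant, log b, exactly; the leading exponent converges to the known value of about 0.42 for these parameters.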

  4. Decomposition of Fuzzy Soft Sets with Finite Value Spaces

    PubMed Central

    Jun, Young Bae

    2014-01-01

    The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter. PMID:24558342

  5. Decomposition of fuzzy soft sets with finite value spaces.

    PubMed

    Feng, Feng; Fujita, Hamido; Jun, Young Bae; Khan, Madad

    2014-01-01

    The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter.

  6. De Rham-Hodge decomposition and vanishing of harmonic forms by derivation operators on the Poisson space

    NASA Astrophysics Data System (ADS)

    Privault, Nicolas

    2016-05-01

    We construct differential forms of all orders and a covariant derivative together with its adjoint on the probability space of a standard Poisson process, using derivation operators. In this framework we derive a de Rham-Hodge-Kodaira decomposition as well as Weitzenböck and Clark-Ocone formulas for random differential forms. As in the Wiener space setting, this construction provides two distinct approaches to the vanishing of harmonic differential forms.

  7. Integral transformation solution of free-space cylindrical vector beams and prediction of modified Bessel-Gaussian vector beams.

    PubMed

    Li, Chun-Fang

    2007-12-15

    A unified description of free-space cylindrical vector beams is presented that is an integral transformation solution to the vector Helmholtz equation and the transversality condition. Under the paraxial condition, this solution not only includes the known J(1) Bessel-Gaussian vector beam and the axisymmetric Laguerre-Gaussian vector beam that were obtained by solving the paraxial wave equations but also predicts two new kinds of vector beam, called modified Bessel-Gaussian vector beams.

  8. Hydrologic Process Parameterization of Electrical Resistivity Imaging of Solute Plumes Using POD McMC

    NASA Astrophysics Data System (ADS)

    Awatey, M. T.; Irving, J.; Oware, E. K.

    2016-12-01

    Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple, equally plausible geologic features that honor the limited noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example, when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated, assuming a Gaussian prior. The sampling window grows at a specified rate as the number of iterations increases, starting from the coefficients corresponding to the highest-ranked basis vectors and moving toward those of the least informative ones. We found this gradual increase of the sampling window to be more stable than resampling all the coefficients from the first iteration. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs, whereas bimodality in plume morphology was not theorized. We show that uncertainty quantification using McMC can proceed in the reduced-dimensionality space while accounting for the physics of the underlying process.
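
    The POD step described above can be sketched in a few lines of NumPy. The Gaussian "plume" training images, the 99% energy threshold, and the basis size are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "training images": snapshots of a smooth plume-like field,
# flattened into the columns of a snapshot matrix (n_pixels x n_TIs).
x = np.linspace(0.0, 1.0, 40)
tis = np.stack([np.exp(-((x - c) ** 2) / 0.02)
                for c in rng.uniform(0.2, 0.8, 200)], axis=1)

# POD basis vectors = left singular vectors of the snapshot matrix.
u, s, _ = np.linalg.svd(tis, full_matrices=False)

# A small number of basis vectors captures most of the variability.
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99) + 1)   # basis size for 99% energy
basis = u[:, :k]

# Project a new (unseen) plume onto the reduced basis and reconstruct.
new_plume = np.exp(-((x - 0.55) ** 2) / 0.02)
coeffs = basis.T @ new_plume          # starting POD coefficients
recon = basis @ coeffs

rel_err = np.linalg.norm(recon - new_plume) / np.linalg.norm(new_plume)
print(k, rel_err)
```

    In the full algorithm these `coeffs` would then be perturbed within the growing sampling window of the McMC chain rather than used directly.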

  9. Domain decomposition for a mixed finite element method in three dimensions

    USGS Publications Warehouse

    Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.

    2003-01-01

    We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.

  10. A Domain Decomposition Parallelization of the Fast Marching Method

    NASA Technical Reports Server (NTRS)

    Herrmann, M.

    2003-01-01

    In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets has been presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition cases. The parallel performance of the proposed method depends strongly on separately load balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher-order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G_0-based parallelization will be investigated.

  11. Quadrature demultiplexing using a degenerate vector parametric amplifier.

    PubMed

    Lorences-Riesgo, Abel; Liu, Lan; Olsson, Samuel L I; Malik, Rohit; Kumpera, Aleš; Lundström, Carl; Radic, Stojan; Karlsson, Magnus; Andrekson, Peter A

    2014-12-01

    We report on quadrature demultiplexing of a quadrature phase-shift keying (QPSK) signal into two cross-polarized binary phase-shift keying (BPSK) signals with negligible penalty at a bit-error rate (BER) equal to 10^-9. The all-optical quadrature demultiplexing is achieved using a degenerate vector parametric amplifier operating in phase-insensitive mode. We also propose and demonstrate the use of a novel and simple phase-locked loop (PLL) scheme based on detecting the envelope of one of the signals after demultiplexing in order to achieve stable quadrature decomposition.

  12. Research on the application of a decoupling algorithm for structure analysis

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1980-01-01

    The mathematical theory for decoupling mth-order matrix differential equations is presented. It is shown that the decoupling procedure can be developed from the algebraic theory of matrix polynomials. The role of eigenprojectors and latent projectors in the decoupling process is discussed, and the mathematical relationships between eigenvalues, eigenvectors, latent roots, and latent vectors are developed. It is shown that the eigenvectors of the companion form of a matrix contain the latent vectors as a subset. The spectral decomposition of a matrix and its application to differential equations are given.
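
    The companion-form relationship between eigenpairs and latent roots/vectors can be illustrated with a small NumPy sketch; the second-order matrix polynomial here is hypothetical random data, not taken from the report:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# Second-order matrix polynomial Q(s) = A2 s^2 + A1 s + A0.
A2, A1, A0 = np.eye(n), rng.standard_normal((n, n)), rng.standard_normal((n, n))

# Block companion form: its eigenvalues are the latent roots of Q, and the
# top half of each eigenvector is the corresponding latent vector.
C = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(A2, A0), -np.linalg.solve(A2, A1)]])

lam, V = np.linalg.eig(C)

# Check: Q(lam) annihilates the latent vector v (top block of the eigenvector).
v = V[:n, 0]
residual = (A2 * lam[0]**2 + A1 * lam[0] + A0) @ v
print(np.max(np.abs(residual)))
```

    The eigenvector of `C` has the structure [v; lam*v], which is exactly the sense in which the companion eigenvectors "contain the latent vectors as a subset".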

  13. Progressive Vector Quantization on a massively parallel SIMD machine with application to multispectral image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
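
    The progressive (residual) VQ idea can be sketched as a toy two-level coder; the tiny k-means trainer, the codebook sizes, and the synthetic 4-band pixel vectors are all illustrative assumptions, and a real coder would add more levels and code the final residual losslessly:

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans_codebook(vecs, k, iters=20):
    """Tiny full-search k-means, standing in for proper codebook training."""
    book = vecs[rng.choice(len(vecs), k, replace=False)]
    for _ in range(iters):
        idx = np.argmin(((vecs[:, None, :] - book[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(idx == j):
                book[j] = vecs[idx == j].mean(axis=0)
    return book

def quantize(vecs, book):
    idx = np.argmin(((vecs[:, None, :] - book[None]) ** 2).sum(-1), axis=1)
    return idx, book[idx]

# Hypothetical "multispectral" data: 4-dimensional pixel vectors.
data = rng.normal(size=(500, 4))

# Level 1: coarse VQ of the data; level 2: VQ of the level-1 residuals.
book1 = kmeans_codebook(data, 8)
_, approx1 = quantize(data, book1)
resid = data - approx1
book2 = kmeans_codebook(resid, 8)
_, approx2 = quantize(resid, book2)

err1 = np.mean((data - approx1) ** 2)
err2 = np.mean((data - approx1 - approx2) ** 2)
print(err1, err2)
```

    Each additional level reduces the distortion, which is what makes the reconstruction progressive: decoding can stop at any level with a usable image.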

  14. Morphometry of network and nonnetwork space of basins

    NASA Astrophysics Data System (ADS)

    Chockalingam, L.; Daya Sagar, B. S.

    2005-08-01

    Morphometric analysis of the channel network of a basin provides several scale-independent measures. To better characterize basin morphology, one requires, besides channel morphometric properties, scale-independent but shape-dependent measures to record the sensitive differences in the morphological organization of nonnetwork spaces. These spaces are planar forms of hillslopes or the retained portion after subtracting the channel network from the basin space. The principal aim of this paper is to focus on explaining the importance of alternative scale-independent but shape-dependent measures of nonnetwork spaces of basins. Toward this goal, we explore how mathematical morphology-based decomposition procedures can be used to derive basic measures required to quantify estimates, such as dimensionless power laws, that are useful to express the importance of characteristics of nonnetwork spaces via decomposition rules. We demonstrate our results through characterization of nonnetwork spaces of eight subbasins of the Gunung Ledang region of peninsular Malaysia. We decompose the nonnetwork spaces of eight fourth-order basins in a two-dimensional discrete space into simple nonoverlapping disks (NODs) of various sizes by employing morphological transformations. Furthermore, we show relationships between the dimensions estimated via morphometries of the network and their corresponding nonnetwork spaces. This study can be extended to characterize hillslope morphologies, where decomposition of three-dimensional hillslopes needs to be addressed.

  15. Predicting domain-domain interaction based on domain profiles with feature selection and support vector machines

    PubMed Central

    2010-01-01

    Background Protein-protein interaction (PPI) plays essential roles in cellular functions. The cost, time and other limitations associated with the current experimental methods have motivated the development of computational methods for predicting PPIs. As protein interactions generally occur via domains instead of the whole molecules, predicting domain-domain interaction (DDI) is an important step toward PPI prediction. Computational methods developed so far have utilized information from various sources at different levels, from primary sequences, to molecular structures, to evolutionary profiles. Results In this paper, we propose a computational method to predict DDI using support vector machines (SVMs), based on domains represented as interaction profile hidden Markov models (ipHMM) where interacting residues in domains are explicitly modeled according to the three-dimensional structural information available at the Protein Data Bank (PDB). Features about the domains are extracted first as the Fisher scores derived from the ipHMM and then selected using singular value decomposition (SVD). Domain pairs are represented by concatenating their selected feature vectors, and classified by a support vector machine trained on these feature vectors. The method is tested by leave-one-out cross validation experiments with a set of interacting protein pairs adopted from the 3DID database. The prediction accuracy has shown significant improvement as compared to InterPreTS (Interaction Prediction through Tertiary Structure), an existing method for PPI prediction that also uses the sequences and complexes of known 3D structure. Conclusions We show that domain-domain interaction prediction can be significantly enhanced by exploiting information inherent in the domain profiles via feature selection based on Fisher scores, singular value decomposition and supervised learning based on support vector machines.
Datasets and source code are freely available on the web at http://liao.cis.udel.edu/pub/svdsvm. Implemented in Matlab and supported on Linux and MS Windows. PMID:21034480
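
    The SVD-based feature-reduction step can be sketched in NumPy. The synthetic "Fisher score" vectors stand in for the real ipHMM-derived features, and a nearest-centroid rule stands in for the SVM so the example stays dependency-free; the class sizes, dimensions, and signal strength are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical high-dimensional "Fisher score" feature vectors for two
# classes of domain pairs (interacting vs non-interacting).
n, d = 100, 200
X = rng.normal(size=(n, d))
y = np.repeat([0, 1], n // 2)
X[y == 1, :5] += 3.0            # class signal lives in a few directions

# Feature selection via SVD: keep the leading right singular vectors and
# project every feature vector into that low-dimensional space.
Xc = X - X.mean(axis=0)
_, s, vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
Z = Xc @ vt[:k].T               # reduced feature vectors

# A nearest-centroid rule stands in for the SVM classifier here.
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
acc = (pred == y).mean()
print(acc)
```

    The point of the reduction is that the classifier then works on a handful of informative directions instead of the full feature dimension.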

  16. Effect of bait decomposition on the attractiveness to species of Diptera of veterinary and forensic importance in a rainforest fragment in Brazil.

    PubMed

    Oliveira, Diego L; Soares, Thiago F; Vasconcelos, Simão D

    2016-01-01

    Insects associated with carrion can have parasitological importance as vectors of several pathogens and causal agents of myiasis to men and to domestic and wild animals. We tested the attractiveness of animal baits (chicken liver) at different stages of decomposition to necrophagous species of Diptera (Calliphoridae, Fanniidae, Muscidae, Phoridae and Sarcophagidae) in a rainforest fragment in Brazil. Five types of bait were used: fresh and decomposed at room temperature (26 °C) for 24, 48, 72 and 96 h. A positive correlation was detected between the time of decomposition and the abundance of Calliphoridae and Muscidae, whilst the abundance of adults of Phoridae decreased with the time of decomposition. Ten species of calliphorids were registered, of which Chrysomya albiceps, Chrysomya megacephala and Chloroprocta idioidea showed a positive significant correlation between abundance and decomposition. Specimens of Sarcophagidae and Fanniidae did not discriminate between fresh and highly decomposed baits. A strong female bias was registered for all species of Calliphoridae irrespective of the type of bait. The results reinforce the feasibility of using animal tissues as attractants to a wide diversity of dipterans of medical, parasitological and forensic importance in short-term surveys, especially using baits at intermediate stages of decomposition.

  17. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

  18. Compressed Continuous Computation v. 12/20/2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorodetsky, Alex

    2017-02-17

    A library for performing numerical computation with low-rank functions. The (C3) library enables performing continuous linear and multilinear algebra with multidimensional functions. Common tasks include taking "matrix" decompositions of vector- or matrix-valued functions, approximating multidimensional functions in low-rank format, adding or multiplying functions together, integrating multidimensional functions.

  19. Optimal classification for the diagnosis of Duchenne muscular dystrophy images using support vector machines.

    PubMed

    Zhang, Ming-Huan; Ma, Jun-Shan; Shen, Ying; Chen, Ying

    2016-09-01

    This study aimed to investigate the optimal support vector machine (SVM)-based classifier of Duchenne muscular dystrophy (DMD) magnetic resonance imaging (MRI) images. T1-weighted (T1W) and T2-weighted (T2W) images of 15 boys with DMD and 15 normal controls were obtained. Textural features of the images were extracted and wavelet decomposed, and then principal features were selected. Scale transform was then performed for the MRI images. Afterward, SVM-based classifiers of the MRI images were analyzed based on the radial basis function and decomposition levels. The cost parameter C and kernel parameter [Formula: see text] were used for classification. Then, the optimal SVM-based classifier, expressed as [Formula: see text], was identified by performance evaluation (sensitivity, specificity and accuracy). Eight of 12 textural features were selected as principal features (eigenvalues [Formula: see text]). The 16 SVM-based classifiers were obtained using combinations of (C, [Formula: see text]), and those with lower C and [Formula: see text] values showed higher performance, especially the classifier of [Formula: see text]. The SVM-based classifiers of T1W images showed higher performance than those of T2W images at the same decomposition level. The T1W images in the classifier of [Formula: see text] at level 2 decomposition showed the highest performance of all, and its overall sensitivity, specificity, and accuracy reached 96.9, 97.3, and 97.1 %, respectively. The T1W images in the SVM-based classifier [Formula: see text] at level 2 decomposition showed the highest performance of all, demonstrating that it was the optimal classification for the diagnosis of DMD.

  20. Domain decomposition: A bridge between nature and parallel computers

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1992-01-01

    Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.

  1. 3D tensor-based blind multispectral image decomposition for tumor demarcation

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun

    2010-03-01

    Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution of the paper is a clustering-based estimation of the number of materials present in the image as well as of the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of a skin tumor (basal cell carcinoma).
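
    The 3-mode multiplication used to recover the spatial distributions can be sketched with `numpy.einsum`; the grid size, band count, and random "materials" below are illustrative stand-ins for the fluorescent-image data:

```python
import numpy as np

def mode3_mult(t, m):
    """3-mode product: mix the spectral (third) axis of tensor t by matrix m."""
    return np.einsum('ijk,lk->ijl', t, m)

rng = np.random.default_rng(8)

# Hypothetical ground truth: spatial distributions of 3 materials on a
# 16x16 grid, and their spectral profiles in 5 bands.
S_true = rng.uniform(size=(16, 16, 3))
A = rng.uniform(size=(5, 3))       # spectral responses (bands x materials)

# Forward model: multi-spectral image tensor = 3-mode product of the
# spatial-distribution tensor with the spectral-profile matrix.
X = mode3_mult(S_true, A)

# Given (an estimate of) A, recover the distributions via its pseudoinverse.
S_rec = mode3_mult(X, np.linalg.pinv(A))
rec_err = np.max(np.abs(S_rec - S_true))
print(rec_err)
```

    In the paper the matrix A itself is estimated blindly (by clustering); here it is assumed known so that only the 3-mode inversion step is exercised.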

  2. Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models

    USGS Publications Warehouse

    Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.

    2011-01-01

    We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) Simulation and (b) lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than the second-order (covariance) structure.
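
    The emulation idea, predicting right singular vectors of the model output from the input parameter, can be sketched as follows. The sine/cosine "mechanistic model", the single parameter, and the low-order polynomial fit are illustrative assumptions (the paper fits a nonlinear statistical model; a polynomial stands in here):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical mechanistic model: an output curve depending on one parameter.
x = np.linspace(0, 1, 50)
def model(theta):
    return np.sin(2 * np.pi * x) * theta + np.cos(2 * np.pi * x) * theta**2

thetas = np.linspace(0.1, 1.0, 20)                 # experimental design
Y = np.stack([model(t) for t in thetas], axis=1)   # space x runs

# Decompose the computer-model output; rows of vt (right singular
# vectors) are indexed by the experimental runs.
u, s, vt = np.linalg.svd(Y, full_matrices=False)
k = 2

# Emulator: fit each right-singular-vector coefficient as a low-order
# polynomial function of the input parameter.
fits = [np.polyfit(thetas, vt[j], 2) for j in range(k)]

# Predict the right singular vectors at a new parameter value, then
# reconstruct the model output without re-running the model.
theta_new = 0.63
v_new = np.array([np.polyval(f, theta_new) for f in fits])
y_emulated = u[:, :k] @ (s[:k] * v_new)

rel_err = np.linalg.norm(y_emulated - model(theta_new)) / np.linalg.norm(model(theta_new))
print(rel_err)
```

    Because the emulator is cheap to evaluate, it can replace the mechanistic model inside a parameter-estimation loop.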

  3. Experimental Investigation for Fault Diagnosis Based on a Hybrid Approach Using Wavelet Packet and Support Vector Classification

    PubMed Central

    Li, Pengfei; Jiang, Yongying; Xiang, Jiawei

    2014-01-01

    To deal with the difficulty of obtaining a large number of fault samples under practical conditions for mechanical fault diagnosis, a hybrid method that combines wavelet packet decomposition and support vector classification (SVC) is proposed. The wavelet packet is employed to decompose the vibration signal to obtain the energy ratio in each frequency band. Taking the energy ratios as feature vectors, the pattern recognition results are obtained by the SVC. The rolling bearing and gear fault diagnostic results of a typical experimental platform show that the present approach is robust to noise, has higher classification accuracy and, thus, provides a better way to diagnose mechanical faults under the condition of small fault samples. PMID:24688361
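
    The energy-ratio feature extraction can be sketched with a hand-rolled Haar wavelet packet (the paper does not specify the wavelet; Haar, depth 2, and the synthetic "healthy"/"faulty" signals are assumptions for illustration):

```python
import numpy as np

def haar_step(sig):
    """One Haar analysis step: (approximation, detail) at half length."""
    a = (sig[0::2] + sig[1::2]) / np.sqrt(2.0)
    d = (sig[0::2] - sig[1::2]) / np.sqrt(2.0)
    return a, d

def haar_packet_energies(sig, depth=2):
    """Full wavelet packet tree: split every band at every level, then
    return the energy ratio of each terminal band."""
    bands = [np.asarray(sig, dtype=float)]
    for _ in range(depth):
        bands = [half for b in bands for half in haar_step(b)]
    e = np.array([np.sum(b**2) for b in bands])
    return e / e.sum()

# Hypothetical vibration signals: a "healthy" low-frequency tone and a
# "faulty" one with added high-frequency content.
t = np.arange(256)
healthy = np.sin(2 * np.pi * t / 64)
faulty = healthy + 0.8 * np.sin(2 * np.pi * t / 4)

r_h = haar_packet_energies(healthy)
r_f = haar_packet_energies(faulty)
print(np.round(r_h, 3), np.round(r_f, 3))
```

    The resulting ratio vectors are exactly the kind of feature vectors that would then be fed to the SVC.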

  4. The Vector Space as a Unifying Concept in School Mathematics.

    ERIC Educational Resources Information Center

    Riggle, Timothy Andrew

    The purpose of this study was to show how the concept of vector space can serve as a unifying thread for mathematics programs--elementary school to pre-calculus college level mathematics. Indicated are a number of opportunities to demonstrate how emphasis upon the vector space structure can enhance the organization of the mathematics curriculum.…

  5. HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images.

    PubMed

    Wang, Qingzhu; Kang, Wanjun; Hu, Haihui; Wang, Bin

    2016-07-01

    An Active Appearance Model (AAM) is a computer vision model which can be used to effectively segment lung fields in CT images. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based Three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, it has to be vectorized first into a vector pattern by some technique like concatenation. However, some implicit structural or local contextual information may be lost in this transformation. According to the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only directly operate on the original 3D tensor patterns, but also efficiently reduce computer memory usage. The evaluation resulted in an average Dice coefficient of 97.0 % ± 0.59 %, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixel, and a Hausdorff distance of 20.4064 ± 4.3855, respectively. Experimental results showed that our method delivered significant and better segmentation results compared with three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM and 3D AAM.
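
    The HOSVD itself can be sketched compactly in NumPy: one SVD per mode-unfolding gives the factor matrices, and multiplying the tensor by their transposes gives the core. The tiny random tensor stands in for a 3D lung volume:

```python
import numpy as np

def unfold(x, mode):
    """Mode-n unfolding: put the given axis first, flatten the rest."""
    return np.moveaxis(x, mode, 0).reshape(x.shape[mode], -1)

def mode_mult(x, m, mode):
    """Mode-n product: apply matrix m along the given axis of tensor x."""
    y = m @ unfold(x, mode)
    new_shape = [m.shape[0]] + [s for i, s in enumerate(x.shape) if i != mode]
    return np.moveaxis(y.reshape(new_shape), 0, mode)

rng = np.random.default_rng(5)
X = rng.normal(size=(6, 5, 4))     # stand-in for a small 3D volume

# HOSVD: factor matrices are the left singular vectors of each unfolding;
# the core tensor is X multiplied by their transposes along each mode.
U = [np.linalg.svd(unfold(X, k))[0] for k in range(3)]
S = X
for k in range(3):
    S = mode_mult(S, U[k].T, k)

# Reconstruction from core and factors recovers X (full, untruncated HOSVD).
R = S
for k in range(3):
    R = mode_mult(R, U[k], k)
err = np.max(np.abs(R - X))
print(err)
```

    Truncating the factor matrices (keeping only leading singular vectors per mode) is what yields the memory reduction mentioned in the abstract.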

  6. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Anthony B., E-mail: acosta@northwestern.edu; Green, Jason R., E-mail: jason.green@umb.edu; Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
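
    The QR-based Gram–Schmidt iteration at the heart of such calculations can be shown on a deliberately trivial system; the 2x2 linear map below is an assumption chosen so the exponents are known exactly (ln 2 and -ln 2):

```python
import numpy as np

# Benettin-style QR iteration: evolve a set of tangent vectors, periodically
# re-orthonormalize with QR (Gram-Schmidt), and accumulate log |R_ii|.
J = np.diag([2.0, 0.5])          # known linear map: exponents ln 2, ln 0.5
Q = np.linalg.qr(np.array([[1.0, 1.0], [1.0, -1.0]]))[0]

steps = 200
acc = np.zeros(2)
for _ in range(steps):
    Q, R = np.linalg.qr(J @ Q)
    acc += np.log(np.abs(np.diag(R)))

lyap = acc / steps
print(lyap)       # approaches [ln 2, -ln 2]
```

    For an N-particle fluid the matrices in the QR step are the ones whose N^2 growth motivates the Scalapack and MAGMA implementations.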

  7. Surface fuel litterfall and decomposition in the northern Rocky Mountains, U.S.A.

    Treesearch

    Robert E. Keane

    2008-01-01

    Surface fuel deposition and decomposition rates are important to fire management and research because they can define the longevity of fuel treatments in time and space and they can be used to design, build, test, and validate complex fire and ecosystem models useful in evaluating management alternatives. We determined rates of surface fuel litterfall and decomposition...

  8. A Thin Codimension-One Decomposition of the Hilbert Cube

    ERIC Educational Resources Information Center

    Phon-On, Aniruth

    2010-01-01

    For cell-like upper semicontinuous (usc) decompositions "G" of finite dimensional manifolds "M", the decomposition space "M/G" turns out to be an ANR provided "M/G" is finite dimensional ([Dav07], page 129). Furthermore, if "M/G" is finite dimensional and has the Disjoint Disks Property (DDP), then "M/G" is homeomorphic to "M" ([Dav07], page 181).…

  9. A multilevel preconditioner for domain decomposition boundary systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1991-12-11

    In this note, we consider multilevel preconditioning of the reduced boundary systems which arise in non-overlapping domain decomposition methods. It will be shown that the resulting preconditioned systems have condition numbers which are bounded in the case of multilevel spaces on the whole domain and grow at most in proportion to the number of levels in the case of multilevel boundary spaces without multilevel extensions into the interior.

  10. Rhotrix Vector Spaces

    ERIC Educational Resources Information Center

    Aminu, Abdulhadi

    2010-01-01

    By rhotrix we understand an object that lies in some way between (n x n)-dimensional matrices and (2n - 1) x (2n - 1)-dimensional matrices. Representation of vectors in rhotrices is different from the representation of vectors in matrices. A number of vector spaces in matrices and their properties are known. On the other hand, little seems to be…

  11. Hybrid diversity method utilizing adaptive diversity function for recovering unknown aberrations in an optical system

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H. (Inventor)

    2009-01-01

    A method of recovering unknown aberrations in an optical system includes collecting intensity data produced by the optical system, generating an initial estimate of a phase of the optical system, iteratively performing a phase retrieval on the intensity data to generate a phase estimate using an initial diversity function corresponding to the intensity data, generating a phase map from the phase retrieval phase estimate, decomposing the phase map to generate a decomposition vector, generating an updated diversity function by combining the initial diversity function with the decomposition vector, and generating an updated estimate of the phase of the optical system by removing the initial diversity function from the phase map. The method may further include repeating the process beginning with iteratively performing a phase retrieval on the intensity data using the updated estimate of the phase of the optical system in place of the initial estimate of the phase of the optical system, and using the updated diversity function in place of the initial diversity function, until a predetermined convergence is achieved.

  12. A truncated generalized singular value decomposition algorithm for moving force identification with ill-posed problems

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.

    2017-08-01

    This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem eventually reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement error; this vector e is often called ''noise''. Because of the ill-posedness inherent in the inverse problem, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to perturbations despite the ill-posedness. The illustrated results show that the TGSVD has many advantages, such as higher precision, better adaptability and noise immunity, compared with the TDM. In addition, choosing a proper regularization matrix L and a truncation parameter k is very useful for improving the identification accuracy and solving ill-posed problems when the method is used to identify moving forces on a bridge.
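
    The effect of truncation on an ill-posed Ax = b can be shown with plain truncated SVD, the L = I special case of TGSVD; the synthetic matrix with decaying singular values, the noise level, and the truncation level k are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic ill-posed problem: A has rapidly decaying singular values,
# standing in for the bridge-response matrix in the MFI problem.
n = 12
u, _ = np.linalg.qr(rng.normal(size=(n, n)))
v, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = 10.0 ** -np.arange(n)                 # singular values 1, 0.1, ..., 1e-11
A = u @ np.diag(s) @ v.T

x_true = rng.normal(size=n)
b = A @ x_true + 1e-6 * rng.normal(size=n)    # noisy "measurements"

# A naive solve amplifies the noise through the tiny singular values.
x_naive = np.linalg.solve(A, b)

# Truncated-SVD solution: keep only the k dominant singular triplets and
# discard the noise-dominated ones.
k = 4
U, S, Vt = np.linalg.svd(A)
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / S[:k])

err_naive = np.linalg.norm(x_naive - x_true)
err_tsvd = np.linalg.norm(x_tsvd - x_true)
print(err_naive, err_tsvd)
```

    The full TGSVD replaces the SVD of A with the generalized SVD of the pair (A, L), which is what lets a nontrivial regularization matrix L shape the solution.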

  13. (2,2) and (0,4) supersymmetric boundary conditions in 3d N=4 theories and type IIB branes

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Okazaki, Tadashi

    2017-10-01

    The half-BPS boundary conditions preserving N=(2,2) and N=(0,4) supersymmetry in 3d N=4 supersymmetric gauge theories are examined. The BPS equations admit decomposition of the bulk supermultiplets into specific boundary supermultiplets of the preserved supersymmetry. Nahm-like equations arise in the vector multiplet BPS boundary condition preserving N=(0,4) supersymmetry, and Robin-type boundary conditions appear for the hypermultiplet coupled to the vector multiplet when N=(2,2) supersymmetry is preserved. The half-BPS boundary conditions are realized in the brane configurations of type IIB string theory.

  14. The inner topological structure and defect control of magnetic skyrmions

    NASA Astrophysics Data System (ADS)

    Ren, Ji-Rong; Yu, Zhong-Xi

    2017-10-01

We prove that the integrand of the magnetic skyrmion number can be expressed as the curvature tensor of the Wu-Yang potential. Taking the projection of the normalized magnetization vector onto the 2-dim material surface, and following Duan's decomposition theory of gauge potential, we reveal that every single skyrmion is characterized by the Hopf index and Brouwer degree at the zero point of this vector field. Our theory agrees with the results that experimental physicists have achieved using many techniques. The inner topological structure of the skyrmion, expressed through the Hopf index and Brouwer degree, will be an indispensable mathematical basis for skyrmion logic gates.

  15. Separation of spin and orbital angular coherence momenta in the second-order coherence theory of vector electromagnetic fields.

    PubMed

    Wang, Wei; Takeda, Mitsuo

    2007-09-15

    In analogy with the separation of the total optical angular momentum into a spin and an orbital part in electrodynamics, we introduce a new concept of spin and orbital angular coherence momenta into the general coherence theory of vector electromagnetic fields. The properties of the newly introduced spin and orbital angular coherence momenta are investigated through the decomposition of the total coherence angular momentum into the sum of these two components, and their separate conservations have been derived for what is believed to be the first time.

  16. On a concurrent element-by-element preconditioned conjugate gradient algorithm for multiple load cases

    NASA Technical Reports Server (NTRS)

    Watson, Brian; Kamat, M. P.

    1990-01-01

    Element-by-element preconditioned conjugate gradient (EBE-PCG) algorithms have been advocated for use in parallel/vector processing environments as being superior to the conventional LDL(exp T) decomposition algorithm for single load cases. Although there may be some advantages in using such algorithms for a single load case, when it comes to situations involving multiple load cases, the LDL(exp T) decomposition algorithm would appear to be decidedly more cost-effective. The authors have outlined an EBE-PCG algorithm suitable for multiple load cases and compared its effectiveness to the highly efficient LDL(exp T) decomposition scheme. The proposed algorithm offers almost no advantages over the LDL(exp T) algorithm for the linear problems investigated on the Alliant FX/8. However, there may be some merit in the algorithm in solving nonlinear problems with load incrementation, but that remains to be investigated.
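The PCG iteration the abstract compares against LDL^T can be sketched as follows; a simple Jacobi (diagonal) preconditioner stands in for the element-by-element one, and the test matrix is a hypothetical SPD stand-in rather than an assembled finite element system:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradient for a symmetric positive definite A.

    M_inv is an (approximate) inverse of the preconditioner M ~ A.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD stand-in system with a Jacobi preconditioner
n = 50
A = np.diag(np.arange(1.0, n + 1)) + 0.1 * np.ones((n, n))
b = np.ones(n)
x = pcg(A, b, np.diag(1.0 / np.diag(A)))
```

The cost comparison in the abstract hinges on the fact that this whole iteration must be rerun for every right-hand side, whereas an LDL^T factorization is computed once and reused across load cases via cheap triangular solves.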

  17. Offset-sparsity decomposition for enhancement of color microscopic image of stained specimen in histopathology: further results

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Popović Hadžija, Marijana; Hadžija, Mirko; Aralica, Gorana

    2016-03-01

Recently, a novel data-driven offset-sparsity decomposition (OSD) method was proposed by us to increase the colorimetric difference between tissue structures present in color microscopic images of stained specimens in histopathology. The OSD method performs additive decomposition of vectorized spectral images into an image-adapted offset term and a sparse term, whereby the sparse term represents the enhanced image. The method was tested on images of histological slides of human liver stained with hematoxylin and eosin, anti-CD34 monoclonal antibody and Sudan III. Herein, we present further results related to the increase of colorimetric difference between tissue structures present in images of human liver specimens with pancreatic carcinoma metastasis stained with Gomori, CK7, CDX2 and LCA, and with colon carcinoma metastasis stained with Gomori, CK20 and PAN CK. The obtained relative increase of colorimetric difference is in the range [19.36%, 103.94%].

  18. Image characterization by fractal descriptors in variational mode decomposition domain: Application to brain magnetic resonance

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-08-01

The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. A support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal brain magnetic resonance images (MRI) and abnormal ones affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, as well as those estimated directly from the original image.

  19. Thyra Abstract Interface Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe A.

    2005-09-01

Thyra primarily defines a set of abstract C++ class interfaces needed for the development of abstract numerical algorithms (ANAs) such as iterative linear solvers and transient solvers, all the way up to optimization. At the foundation of these interfaces are abstract C++ classes for vectors, vector spaces, linear operators and multi-vectors. Also included in the Thyra package is C++ code for creating concrete vector, vector space, linear operator, and multi-vector subclasses, as well as other utilities to aid in the development of ANAs. Currently, very general and efficient concrete subclass implementations exist for serial and SPMD in-core vectors and multi-vectors. Code also currently exists for testing objects and providing composite objects such as product vectors.

  20. New Term Weighting Formulas for the Vector Space Method in Information Retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chisholm, E.; Kolda, T.G.

The goal in information retrieval is to enable users to automatically and accurately find data relevant to their queries. One possible approach to this problem is to use the vector space model, which models documents and queries as vectors in the term space. The components of the vectors are determined by the term weighting scheme, a function of the frequencies of the terms in the document or query as well as throughout the collection. We discuss popular term weighting schemes and present several new schemes that offer improved performance.
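One classical term weighting scheme of the kind the abstract surveys (a log-damped term frequency times inverse document frequency; the paper's new schemes differ) can be sketched on a toy collection:

```python
import math
from collections import Counter

# Hypothetical toy collection
docs = [
    "vector space model for retrieval",
    "term weighting in the vector space model",
    "information retrieval with queries",
]

tfs = [Counter(d.split()) for d in docs]          # term frequency per document
df = Counter(t for tf in tfs for t in tf)         # document frequency per term
N = len(docs)

def tfidf(term, tf):
    # log-tf damps repeated terms; idf down-weights collection-wide terms
    return (1 + math.log(tf[term])) * math.log(N / df[term]) if term in tf else 0.0

# Each document becomes a vector in the term space
vocab = sorted(df)
vectors = [[tfidf(t, tf) for t in vocab] for tf in tfs]
```

Queries are vectorized the same way, and relevance is scored by a similarity measure (typically the cosine) between query and document vectors.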

  1. An investigation of the use of temporal decomposition in space mission scheduling

    NASA Technical Reports Server (NTRS)

    Bullington, Stanley E.; Narayanan, Venkat

    1994-01-01

    This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.

  2. Parallelization of the Physical-Space Statistical Analysis System (PSAS)

    NASA Technical Reports Server (NTRS)

    Larson, J. W.; Guo, J.; Lyster, P. M.

    1999-01-01

    Atmospheric data assimilation is a method of combining observations with model forecasts to produce a more accurate description of the atmosphere than the observations or forecast alone can provide. Data assimilation plays an increasingly important role in the study of climate and atmospheric chemistry. The NASA Data Assimilation Office (DAO) has developed the Goddard Earth Observing System Data Assimilation System (GEOS DAS) to create assimilated datasets. The core computational components of the GEOS DAS include the GEOS General Circulation Model (GCM) and the Physical-space Statistical Analysis System (PSAS). The need for timely validation of scientific enhancements to the data assimilation system poses computational demands that are best met by distributed parallel software. PSAS is implemented in Fortran 90 using object-based design principles. The analysis portions of the code solve two equations. The first of these is the "innovation" equation, which is solved on the unstructured observation grid using a preconditioned conjugate gradient (CG) method. The "analysis" equation is a transformation from the observation grid back to a structured grid, and is solved by a direct matrix-vector multiplication. Use of a factored-operator formulation reduces the computational complexity of both the CG solver and the matrix-vector multiplication, rendering the matrix-vector multiplications as a successive product of operators on a vector. Sparsity is introduced to these operators by partitioning the observations using an icosahedral decomposition scheme. PSAS builds a large (approx. 128MB) run-time database of parameters used in the calculation of these operators. Implementing a message passing parallel computing paradigm into an existing yet developing computational system as complex as PSAS is nontrivial. One of the technical challenges is balancing the requirements for computational reproducibility with the need for high performance. 
The problem of computational reproducibility is well known in the parallel computing community: the parallel code must perform calculations in a fashion that yields identical results on different configurations of processing elements on the same platform. In some cases this problem can be solved by sacrificing performance; meeting the requirement while still achieving high performance is very difficult. Topics to be discussed include the current PSAS design and parallelization strategy, reproducibility issues, load balance versus database memory demands, and possible solutions to these problems.

  3. Detecting objects in radiographs for homeland security

    NASA Astrophysics Data System (ADS)

    Prasad, Lakshman; Snyder, Hans

    2005-05-01

    We present a general scheme for segmenting a radiographic image into polygons that correspond to visual features. This decomposition provides a vectorized representation that is a high-level description of the image. The polygons correspond to objects or object parts present in the image. This characterization of radiographs allows the direct application of several shape recognition algorithms to identify objects. In this paper we describe the use of constrained Delaunay triangulations as a uniform foundational tool to achieve multiple visual tasks, namely image segmentation, shape decomposition, and parts-based shape matching. Shape decomposition yields parts that serve as tokens representing local shape characteristics. Parts-based shape matching enables the recognition of objects in the presence of occlusions, which commonly occur in radiographs. The polygonal representation of image features affords the efficient design and application of sophisticated geometric filtering methods to detect large-scale structural properties of objects in images. Finally, the representation of radiographs via polygons results in significant reduction of image file sizes and permits the scalable graphical representation of images, along with annotations of detected objects, in the SVG (scalable vector graphics) format that is proposed by the world wide web consortium (W3C). This is a textual representation that can be compressed and encrypted for efficient and secure transmission of information over wireless channels and on the Internet. In particular, our methods described here provide an algorithmic framework for developing image analysis tools for screening cargo at ports of entry for homeland security.

  4. Some Remarks on Space-Time Decompositions, and Degenerate Metrics, in General Relativity

    NASA Astrophysics Data System (ADS)

    Bengtsson, Ingemar

    Space-time decomposition of the Hilbert-Palatini action, written in a form which admits degenerate metrics, is considered. Simple numerology shows why D = 3 and 4 are singled out as admitting a simple phase space. The canonical structure of the degenerate sector turns out to be awkward. However, the real degenerate metrics obtained as solutions are the same as those that occur in Ashtekar's formulation of complex general relativity. An exact solution of Ashtekar's equations, with degenerate metric, shows that the manifestly four-dimensional form of the action, and its 3 + 1 form, are not quite equivalent.

  5. SU-F-J-138: An Extension of PCA-Based Respiratory Deformation Modeling Via Multi-Linear Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Pitsianis, N

Purpose: To address and lift the limited degree of freedom (DoF) of globally bilinear motion components such as those based on principal components analysis (PCA), for encoding and modeling volumetric deformation motion. Methods: We provide a systematic approach to obtaining a multi-linear decomposition (MLD) and associated motion model from deformation vector field (DVF) data. We had previously introduced MLD for capturing multi-way relationships between DVF variables, without being restricted by the bilinear component format of PCA-based models. PCA-based modeling is commonly used for encoding patient-specific deformation as per planning 4D-CT images, and aiding on-board motion estimation during radiotherapy. However, the bilinear space-time decomposition inherently limits the DoF of such models by the small number of respiratory phases. While this limit is not reached in model studies using analytical or digital phantoms with low-rank motion, it compromises modeling power in the presence of relative motion, asymmetries and hysteresis, etc., which are often observed in patient data. Specifically, a low-DoF model will spuriously couple incoherent motion components, compromising its adaptability to on-board deformation changes. By the multi-linear format of extracted motion components, MLD-based models can encode higher-DoF deformation structure. Results: We conduct mathematical and experimental comparisons between PCA- and MLD-based models. A set of temporally-sampled analytical trajectories provides a synthetic, high-rank DVF; trajectories correspond to respiratory and cardiac motion factors, including different relative frequencies and spatial variations. Additionally, a digital XCAT phantom is used to simulate a lung lesion deforming incoherently with respect to the body, which adheres to a simple respiratory trend. In both cases, coupling of incoherent motion components due to a low model DoF is clearly demonstrated.
Conclusion: Multi-linear decomposition can enable decoupling of distinct motion factors in high-rank DVF measurements. This may improve motion model expressiveness and adaptability to on-board deformation, aiding model-based image reconstruction for target verification. NIH Grant No. R01-184173.
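The bilinear PCA encoding that the abstract contrasts with MLD can be sketched on synthetic snapshot data (all array sizes and the rank-2 motion are hypothetical); note that the number of retained components, and hence the model DoF, is capped by the number of respiratory phases:

```python
import numpy as np

# Synthetic DVF snapshots: each column is a vectorized deformation
# field at one respiratory phase (hypothetical sizes).
rng = np.random.default_rng(1)
n_voxels, n_phases = 3000, 10
basis_true = rng.standard_normal((n_voxels, 2))
weights = rng.standard_normal((2, n_phases))
dvf = basis_true @ weights            # low-rank (rank-2) motion

# PCA via SVD of the mean-centered snapshot matrix
mean = dvf.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(dvf - mean, full_matrices=False)

# Bilinear model: DVF(t) ~ mean + U[:, :k] @ w(t), with k <= n_phases.
k = 2
w = U[:, :k].T @ (dvf - mean)         # per-phase coefficient vectors
recon = mean + U[:, :k] @ w
```

For this low-rank phantom-like case the k=2 model reconstructs the snapshots exactly, which mirrors the abstract's point: the DoF cap only becomes a problem when the true motion (hysteresis, relative motion) exceeds the rank the phases can express.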

  6. Velocity boundary conditions for vorticity formulations of the incompressible Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kempka, S.N.; Strickland, J.H.; Glass, M.W.

    1995-04-01

A formulation to satisfy velocity boundary conditions for the vorticity form of the incompressible, viscous fluid momentum equations is presented. The tangential and normal components of the velocity boundary condition are satisfied simultaneously by creating vorticity adjacent to boundaries. The newly created vorticity is determined using a kinematical formulation which is a generalization of Helmholtz's decomposition of a vector field. Though it has not been generally recognized, these formulations resolve the over-specification issue associated with creating vorticity to satisfy velocity boundary conditions. The generalized decomposition has not been widely used, apparently due to a lack of a useful physical interpretation. An analysis is presented which shows that the generalized decomposition has a relatively simple physical interpretation which facilitates its numerical implementation. The implementation of the generalized decomposition is discussed in detail. As an example, the flow in a two-dimensional lid-driven cavity is simulated. The solution technique is based on a Lagrangian transport algorithm in the hydrocode ALEGRA. ALEGRA's Lagrangian transport algorithm has been modified to solve the vorticity transport equation and the generalized decomposition, thus providing a new, accurate method to simulate incompressible flows. This numerical implementation and the new boundary condition formulation allow vorticity-based formulations to be used in a wider range of engineering problems.

  7. Domain decomposition methods for nonconforming finite element spaces of Lagrange-type

    NASA Technical Reports Server (NTRS)

    Cowsar, Lawrence C.

    1993-01-01

    In this article, we consider the application of three popular domain decomposition methods to Lagrange-type nonconforming finite element discretizations of scalar, self-adjoint, second order elliptic equations. The additive Schwarz method of Dryja and Widlund, the vertex space method of Smith, and the balancing method of Mandel applied to nonconforming elements are shown to converge at a rate no worse than their applications to the standard conforming piecewise linear Galerkin discretization. Essentially, the theory for the nonconforming elements is inherited from the existing theory for the conforming elements with only modest modification by constructing an isomorphism between the nonconforming finite element space and a space of continuous piecewise linear functions.

  8. Evaluation of a Nonlinear Finite Element Program - ABAQUS.

    DTIC Science & Technology

    1983-03-15

anisotropic properties. * MATEXP - Linearly elastic thermal expansions with isotropic, orthotropic and anisotropic properties. * MATELG - Linearly elastic materials for general sections (options available for beam and shell elements). * MATEXG - Linearly elastic thermal expansions for general sections. ... decomposition of a matrix. * Q-R algorithm. * Vector normalization, etc. Obviously, by consolidating all the utility subroutines in a library, ABAQUS has

  9. System for thermal energy storage, space heating and cooling and power conversion

    DOEpatents

    Gruen, Dieter M.; Fields, Paul R.

    1981-04-21

An integrated system for storing thermal energy, for space heating and cooling and for power conversion is described which utilizes the reversible thermal decomposition characteristics of two hydrides having different decomposition pressures at the same temperature for energy storage and space conditioning and the expansion of high-pressure hydrogen for power conversion. The system consists of a plurality of reaction vessels, at least one containing each of the different hydrides, three loops of circulating heat transfer fluid which can be selectively coupled to the vessels for supplying the heat of decomposition from any appropriate source of thermal energy from the outside ambient environment or from the spaces to be cooled and for removing the heat of reaction to the outside ambient environment or to the spaces to be heated, and a hydrogen loop for directing the flow of hydrogen gas between the vessels. When used for power conversion, at least two vessels contain the same hydride and the hydrogen loop contains an expansion engine. The system is particularly suitable for the utilization of thermal energy supplied by solar collectors and concentrators, but may be used with any source of heat, including a source of low-grade heat.

  10. Statistical analysis of geomagnetic field intensity differences between ASM and VFM instruments onboard Swarm constellation

    NASA Astrophysics Data System (ADS)

    De Michelis, Paola; Tozzi, Roberta; Consolini, Giuseppe

    2017-02-01

From the very first measurements made by the magnetometers onboard the Swarm satellites, launched by the European Space Agency (ESA) in late 2013, a discrepancy emerged between scalar and vector measurements. An accurate analysis of this phenomenon led to an empirical model of the disturbance, which is highly correlated with the Sun incidence angle, and to a corresponding correction of the vector data. The empirical model adopted by ESA results in a significant decrease in the amplitude of the disturbance affecting VFM measurements, thus greatly improving the vector magnetic data quality. This study is focused on the characterization of the difference between the magnetic field intensity measured by the absolute scalar magnetometer (ASM) and that reconstructed using the vector field magnetometer (VFM) installed on the Swarm constellation. Applying the empirical mode decomposition method, we find the intrinsic mode functions (IMFs) associated with ASM-VFM total intensity differences obtained with data both uncorrected and corrected for the disturbance correlated with the Sun incidence angle. Surprisingly, no differences are found in the nature of the IMFs embedded in the analyzed signals: these IMFs are characterized by the same dominant periodicities before and after correction. The effect of the correction manifests as a decrease in the energy associated with some IMFs contributing to the corrected data. Some IMFs identified by analyzing the ASM-VFM intensity discrepancy are characterized by the same dominant periodicities as those obtained by analyzing the temperature fluctuations of the VFM electronic unit. Thus, the disturbance correlated with the Sun incidence angle could still be present in the corrected magnetic data. Furthermore, the ASM-VFM total intensity difference and the VFM electronic unit temperature display maximal shared information with a time delay that depends on local time.
Taken together, these findings may help to relate the features of the observed VFM-ASM total intensity difference to the physical characteristics of the real disturbance, thus contributing to improving the empirical model proposed for the correction of the data.

  11. Hydrologic Process Regularization for Improved Geoelectrical Monitoring of a Lab-Scale Saline Tracer Experiment

    NASA Astrophysics Data System (ADS)

    Oware, E. K.; Moysey, S. M.

    2016-12-01

    Regularization stabilizes the geophysical imaging problem resulting from sparse and noisy measurements that render solutions unstable and non-unique. Conventional regularization constraints are, however, independent of the physics of the underlying process and often produce smoothed-out tomograms with mass underestimation. Cascaded time-lapse (CTL) is a widely used reconstruction technique for monitoring wherein a tomogram obtained from the background dataset is employed as starting model for the inversion of subsequent time-lapse datasets. In contrast, a proper orthogonal decomposition (POD)-constrained inversion framework enforces physics-based regularization based upon prior understanding of the expected evolution of state variables. The physics-based constraints are represented in the form of POD basis vectors. The basis vectors are constructed from numerically generated training images (TIs) that mimic the desired process. The target can be reconstructed from a small number of selected basis vectors, hence, there is a reduction in the number of inversion parameters compared to the full dimensional space. The inversion involves finding the optimal combination of the selected basis vectors conditioned on the geophysical measurements. We apply the algorithm to 2-D lab-scale saline transport experiments with electrical resistivity (ER) monitoring. We consider two transport scenarios with one and two mass injection points evolving into unimodal and bimodal plume morphologies, respectively. The unimodal plume is consistent with the assumptions underlying the generation of the TIs, whereas bimodality in plume morphology was not conceptualized. We compare difference tomograms retrieved from POD with those obtained from CTL. Qualitative comparisons of the difference tomograms with images of their corresponding dye plumes suggest that POD recovered more compact plumes in contrast to those of CTL. 
While mass recovery generally deteriorated with an increasing number of time-steps, POD outperformed CTL in terms of mass recovery accuracy. POD is also computationally superior, requiring only 2.5 minutes to complete each inversion compared to 3 hours for CTL.
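The POD-constrained inversion idea can be reduced to a linearized toy: build a basis from training images, then solve for a handful of basis coefficients instead of one value per cell. The measurement operator G below is a hypothetical linear stand-in for the actual (nonlinear) ER forward physics, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_ti = 500, 40

# Training images (TIs): hypothetical simulated plume snapshots
# that share a common low-dimensional structure.
tis = rng.standard_normal((n_cells, 5)) @ rng.standard_normal((5, n_ti))

# POD basis vectors from the SVD of the TI ensemble
U, s, _ = np.linalg.svd(tis, full_matrices=False)
basis = U[:, :5]                          # retained basis vectors

truth = tis[:, 0]                         # a plume consistent with the TIs
G = rng.standard_normal((60, n_cells))    # linearized measurement operator
d = G @ truth + 0.01 * rng.standard_normal(60)   # noisy data

# Invert for 5 coefficients rather than 500 cell values
coef, *_ = np.linalg.lstsq(G @ basis, d, rcond=None)
estimate = basis @ coef
```

The dimension reduction is what makes the inversion both stable and fast, but it also explains the bimodal-plume result in the abstract: a target outside the span of the TI-derived basis cannot be fully recovered.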

  12. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.

    1990-01-01

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
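The threshold-decomposition route to ranked-order filtering can be sketched in 1-D: each binary threshold slice is processed by a linear space-invariant step (a moving-window sum, the part done optically in the patent) followed by a point threshold (done electronically), and the filtered slices are stacked back up. A majority-vote threshold yields the median; changing the threshold value gives minimum or maximum filtering instead:

```python
import numpy as np

def median_by_threshold_decomposition(x, w=3):
    """1-D median filter of width w computed via threshold decomposition."""
    x = np.asarray(x)
    out = np.zeros_like(x)
    half = w // 2
    for t in range(1, int(x.max()) + 1):
        slice_t = (x >= t).astype(int)          # binary threshold slice
        padded = np.pad(slice_t, half, mode="edge")
        # Linear space-invariant step: moving-window sum
        window_sum = np.convolve(padded, np.ones(w, dtype=int), mode="valid")
        # Point threshold: majority vote within the window
        out += (window_sum > w // 2).astype(int)
    return out

sig = np.array([1, 5, 2, 2, 9, 2, 3])
filtered = median_by_threshold_decomposition(sig)
```

The stacking property guarantees the summed slices reproduce the ranked-order result exactly; replacing `> w // 2` with `>= 1` (maximum) or `>= w` (minimum) covers the other ranked-order operations the same architecture performs.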

  13. On orthogonal expansions of the space of vector functions which are square-summable over a given domain and the vector analysis operators

    NASA Technical Reports Server (NTRS)

    Bykhovskiy, E. B.; Smirnov, N. V.

    1983-01-01

    The Hilbert space L2(omega) of vector functions is studied. A breakdown of L2(omega) into orthogonal subspaces is discussed and the properties of the operators for projection onto these subspaces are investigated from the standpoint of preserving the differential properties of the vectors being projected. Finally, the properties of the operators are examined.

  14. Bundles over nearly-Kahler homogeneous spaces in heterotic string theory

    NASA Astrophysics Data System (ADS)

    Klaput, Michael; Lukas, Andre; Matti, Cyril

    2011-09-01

We construct heterotic vacua based on six-dimensional nearly-Kahler homogeneous manifolds and non-trivial vector bundles thereon. Our examples are based on three specific group coset spaces. It is shown how to construct line bundles over these spaces, compute their properties and build up vector bundles consistent with supersymmetry and anomaly cancellation. It turns out that the most interesting coset is SU(3)/U(1)^2. This space supports a large number of vector bundles which lead to consistent heterotic vacua, some of them with three chiral families.

  15. Performance impact of stop lists and morphological decomposition on word-word corpus-based semantic space models.

    PubMed

    Keith, Jeff; Westbury, Chris; Goldman, James

    2015-09-01

    Corpus-based semantic space models, which primarily rely on lexical co-occurrence statistics, have proven effective in modeling and predicting human behavior in a number of experimental paradigms that explore semantic memory representation. The most widely studied extant models, however, are strongly influenced by orthographic word frequency (e.g., Shaoul & Westbury, Behavior Research Methods, 38, 190-195, 2006). This has the implication that high-frequency closed-class words can potentially bias co-occurrence statistics. Because these closed-class words are purported to carry primarily syntactic, rather than semantic, information, the performance of corpus-based semantic space models may be improved by excluding closed-class words (using stop lists) from co-occurrence statistics, while retaining their syntactic information through other means (e.g., part-of-speech tagging and/or affixes from inflected word forms). Additionally, very little work has been done to explore the effect of employing morphological decomposition on the inflected forms of words in corpora prior to compiling co-occurrence statistics, despite (controversial) evidence that humans perform early morphological decomposition in semantic processing. In this study, we explored the impact of these factors on corpus-based semantic space models. From this study, morphological decomposition appears to significantly improve performance in word-word co-occurrence semantic space models, providing some support for the claim that sublexical information-specifically, word morphology-plays a role in lexical semantic processing. An overall decrease in performance was observed in models employing stop lists (e.g., excluding closed-class words). Furthermore, we found some evidence that weakens the claim that closed-class words supply primarily syntactic information in word-word co-occurrence semantic space models.
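A minimal word-word co-occurrence model with a stop list can be sketched as follows; the toy corpus, window size, and stop list are hypothetical, and real models add the part-of-speech tagging and morphological decomposition the study investigates:

```python
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2
stop = {"the", "on"}          # tiny stop list of closed-class words

# Word-word co-occurrence counts within +/- window, after stop-listing
tokens = [w for w in corpus if w not in stop]
cooc = defaultdict(Counter)
for i, w in enumerate(tokens):
    for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
        if i != j:
            cooc[w][tokens[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in set(u) | set(v))
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv)

sim = cosine(cooc["cat"], cooc["dog"])   # high: shared contexts "sat", "mat"
```

Without the stop list, "the" would dominate every word's context vector, which is exactly the frequency bias the abstract describes; the study's (surprising) finding is that removing those words nonetheless tended to hurt overall performance.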

  16. Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1995-01-01

    The overarching theme was the domain decomposition, which intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, or the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrical complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives have gravitated about the extensions and implementations of either the previously developed or concurrently being developed methodologies: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.

  17. Dual Vector Spaces and Physical Singularities

    NASA Astrophysics Data System (ADS)

    Rowlands, Peter

    Though we often refer to 3-D vector space as constructed from points, there is no mechanism from within its definition for doing this. In particular, space, on its own, cannot accommodate the singularities that we call fundamental particles. This requires a commutative combination of space as we know it with another 3-D vector space, which is dual to the first (in a physical sense). The combination of the two spaces generates a nilpotent quantum mechanics/quantum field theory, which incorporates exact supersymmetry and ultimately removes the anomalies due to self-interaction. Among the many natural consequences of the dual space formalism are half-integral spin for fermions, zitterbewegung, Berry phase and a zero norm Berwald-Moor metric for fermionic states.

  18. Application of Lanczos vectors to control design of flexible structures

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.; Su, Tzu-Jeng

    1990-01-01

    This report covers research conducted during the first year of the two-year grant. The research, entitled 'Application of Lanczos Vectors to Control Design of Flexible Structures' concerns various ways to obtain reduced-order mathematical models for use in dynamic response analyses and in control design studies. This report summarizes research described in several reports and papers that were written under this contract. Extended abstracts are presented for technical papers covering the following topics: controller reduction by preserving impulse response energy; substructuring decomposition and controller synthesis; model reduction methods for structural control design; and recent literature on structural modeling, identification, and analysis.

  19. Objective research of auscultation signals in Traditional Chinese Medicine based on wavelet packet energy and support vector machine.

    PubMed

    Yan, Jianjun; Shen, Xiaojing; Wang, Yiqin; Li, Fufeng; Xia, Chunming; Guo, Rui; Chen, Chunfeng; Shen, Qingwei

    2010-01-01

This study utilises the Wavelet Packet Transform (WPT) and the Support Vector Machine (SVM) algorithm for objective, quantitative analysis of auscultation signals in Traditional Chinese Medicine (TCM) diagnosis. First, Wavelet Packet Decomposition (WPD) at level 6 was employed to split the auscultation signals into finer frequency bands. Statistical analysis was then performed on the Wavelet Packet Energy (WPE) features extracted from the WPD coefficients. Furthermore, SVM-based pattern recognition was used to distinguish the statistical feature values of the mixed sample groups. Finally, the experimental results showed that the classification accuracies were at a high level.
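The band-energy feature extraction can be sketched with a minimal filter bank. This is an illustrative stand-in, not the study's code: it uses a 3-level orthonormal Haar wavelet packet in NumPy (the study decomposes to level 6 with a richer wavelet and feeds the energies to an SVM, which is omitted here):

```python
import numpy as np

def haar_wpd_energies(signal, level):
    """Split a signal into wavelet packet frequency bands with the orthonormal
    Haar filter pair and return the energy of each terminal band."""
    bands = [np.asarray(signal, dtype=float)]
    for _ in range(level):
        split = []
        for b in bands:
            split.append((b[0::2] + b[1::2]) / np.sqrt(2))  # low-pass half
            split.append((b[0::2] - b[1::2]) / np.sqrt(2))  # high-pass half
        bands = split
    return np.array([np.sum(b ** 2) for b in bands])

# A level-3 decomposition of a length-64 signal yields 2**3 = 8 band energies;
# because the Haar transform is orthogonal, they sum to the signal's energy.
x = np.random.default_rng(0).standard_normal(64)
energies = haar_wpd_energies(x, level=3)
```

The resulting energy vector is the kind of fixed-length feature that can be passed directly to a classifier.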

  20. Analysis of recoverable current from one component of magnetic flux density in MREIT and MRCDI.

    PubMed

    Park, Chunjae; Lee, Byung Il; Kwon, Oh In

    2007-06-07

Magnetic resonance current density imaging (MRCDI) provides a current density image by measuring the induced magnetic flux density within the subject with a magnetic resonance imaging (MRI) scanner. Magnetic resonance electrical impedance tomography (MREIT) focuses on extracting useful information about the current density and conductivity distribution in the subject Omega using measured B(z), one component of the magnetic flux density B. In this paper, we analyze the map Tau from the current density vector field J to one component of the magnetic flux density, B(z), without any assumption on the conductivity. The map Tau provides an orthogonal decomposition J = J(P) + J(N) of the current J, where J(N) belongs to the null space of the map Tau. We explicitly describe the projected current density J(P) from measured B(z). Based on the decomposition, we prove that B(z) data due to one injection current guarantee a unique determination of the isotropic conductivity under the assumptions that the current is two-dimensional and the conductivity value on the surface is known. For a two-dimensional dominating current case, the projected current density J(P) provides a good approximation of the true current J without accumulating noise effects. Numerical simulations show that J(P) from measured B(z) is quite similar to the target J. Biological tissue phantom experiments compare J(P) with the reconstructed J via the reconstructed isotropic conductivity using the harmonic B(z) algorithm.
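The orthogonal split J = J(P) + J(N) induced by a linear map can be illustrated generically. A small random matrix stands in for the map Tau here, purely as an assumption for illustration (not the paper's actual Biot-Savart-type operator); the pseudoinverse projects onto the orthogonal complement of the null space:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 5))   # stand-in linear map: 5-dim "current" -> 3-dim data
J = rng.standard_normal(5)        # a "current" vector

# J_P is the part of J recoverable from the measurement T @ J;
# J_N lies in the null space of T and is invisible to the measurement.
J_P = np.linalg.pinv(T) @ (T @ J)
J_N = J - J_P
```

By construction T annihilates J_N, J_P and J_N are orthogonal, and T sees J_P and J identically.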

  1. Enhancement of nitric oxide decomposition efficiency achieved with lanthanum-based perovskite-type catalyst.

    PubMed

    Pan, Kuan Lun; Chen, Mei Chung; Yu, Sheng Jen; Yan, Shaw Yi; Chang, Moo Been

    2016-06-01

Direct decompositions of nitric oxide (NO) by La0.7Ce0.3SrNiO4, La0.4Ba0.4Ce0.2SrNiO4, and Pr0.4Ba0.4Ce0.2SrNiO4 are experimentally investigated, and the catalysts are tested with different operating parameters to evaluate their activities. Experimental results indicate that the physical and chemical properties of La0.7Ce0.3SrNiO4 are significantly improved by doping with Ba and partial substitution with Pr. NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4 are 32% and 68%, respectively, at 400 °C with He as carrier gas. As the temperature is increased to 600 °C, NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4, respectively, reach 100% with an inlet NO concentration of 1000 ppm while the space velocity is fixed at 8000 hr(-1). Effects of O2, H2O(g), and CO2 contents and space velocity on NO decomposition are also explored. The results indicate that NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4, respectively, are slightly reduced as space velocity is increased from 8000 to 20,000 hr(-1) at 500 °C. In addition, the activities of both catalysts (La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4) for NO decomposition are slightly reduced in the presence of 5% O2, 5% CO2, or 5% H2O(g). In the durability test, with a space velocity of 8000 hr(-1) and an operating temperature of 600 °C, a high N2 yield is maintained throughout the 60-hr test, revealing the long-term stability of Pr0.4Ba0.4Ce0.2SrNiO4 for NO decomposition. Overall, Pr0.4Ba0.4Ce0.2SrNiO4 shows good catalytic activity for NO decomposition. Nitric oxide (NO) not only causes adverse environmental effects such as acid rain, photochemical smog, and deterioration of visibility and water quality, but also harms the human lungs and respiratory system. 
Perovskite-type catalysts, including La0.7Ce0.3SrNiO4, La0.4Ba0.4Ce0.2SrNiO4, and Pr0.4Ba0.4Ce0.2SrNiO4, are applied for direct NO decomposition. The results show that NO decomposition can be enhanced as La0.7Ce0.3SrNiO4 is substituted with Ba and/or Pr. At 600 °C, NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4 reach 100%, demonstrating high activity and good potential for direct NO decomposition. Effects of O2, H2O(g), and CO2 contents on catalytic activities are also evaluated and discussed.

  2. Rapid surface defect detection based on singular value decomposition using steel strips as an example

    NASA Astrophysics Data System (ADS)

    Sun, Qianlai; Wang, Yin; Sun, Zhiyi

    2018-05-01

For most surface defect detection methods based on image processing, image segmentation is a prerequisite for determining and locating the defect. In our previous work, a method based on singular value decomposition (SVD) was used to determine and approximately locate surface defects on steel strips without image segmentation. For the SVD-based method, the image to be inspected was projected onto its first left and right singular vectors respectively. If there were defects in the image, there would be sharp changes in the projections. The defects could then be determined and located according to sharp changes in the projections of each image to be inspected. This method was simple and practical, but SVD had to be performed for each image to be inspected. Owing to the high time complexity of SVD itself, it did not have a significant advantage in terms of time consumption over image segmentation-based methods. Here, we present an improved SVD-based method. In the improved method, a defect-free image is taken as the reference image, acquired under the same environment as the images to be inspected. The singular vectors of each image to be inspected are replaced by the singular vectors of the reference image, and SVD is performed only once for the reference image, off-line, before defect detection, thus greatly reducing the time required. The improved method is more conducive to real-time defect detection. Experimental results confirm its validity.
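A minimal sketch of the improved scheme, with a synthetic "strip" image standing in for real inspection data (the image size, noise level, and outlier threshold are illustrative assumptions): SVD is run once on the defect-free reference, and each inspected image is only projected onto the stored singular vector:

```python
import numpy as np

rng = np.random.default_rng(0)
ref = 1.0 + 0.01 * rng.standard_normal((64, 64))   # defect-free reference strip
U, s, Vt = np.linalg.svd(ref)                      # performed once, off-line
v1 = Vt[0]                                         # first right singular vector

img = ref.copy()
img[30:34, :] += 1.0                               # synthetic horizontal defect

# Project each row of the inspected image onto the reference's singular vector;
# rows crossing the defect stand out as sharp changes in the projection.
row_proj = img @ v1
dev = np.abs(row_proj - np.median(row_proj))
defect_rows = np.where(dev > 50 * np.median(dev))[0]
```

Only the matrix-vector products are computed per inspected image, which is the source of the speed-up over running SVD each time.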

  3. Remarks on the regularity criteria of three-dimensional magnetohydrodynamics system in terms of two velocity field components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamazaki, Kazuo

    2014-03-15

We study the three-dimensional magnetohydrodynamics system and obtain its regularity criteria in terms of only two velocity vector field components, eliminating the condition on the third component completely. The proof consists of a new decomposition of the four nonlinear terms of the system and estimating a component of the magnetic vector field in terms of the same component of the velocity vector field. This result may be seen as a component reduction result of many previous works [C. He and Z. Xin, “On the regularity of weak solutions to the magnetohydrodynamic equations,” J. Differ. Equ. 213(2), 234–254 (2005); Y. Zhou, “Remarks on regularities for the 3D MHD equations,” Discrete Contin. Dyn. Syst. 12(5), 881–886 (2005)].

  4. A third-generation density-functional-theory-based method for calculating canonical molecular orbitals of large molecules.

    PubMed

    Hirano, Toshiyuki; Sato, Fumitoshi

    2014-07-28

    We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
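The low-rank pivoted Cholesky step can be sketched as follows. This is the textbook pivoted CD on a small dense matrix, an assumption for illustration; it does not reproduce the paper's grid-free or adaptive-metric (CDAM) variants:

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-10):
    """Low-rank pivoted Cholesky decomposition of a symmetric PSD matrix:
    returns L with A ~= L @ L.T, stopping once the residual diagonal is small."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.diag(A).copy()          # residual diagonal
    L = np.zeros((n, n))
    for k in range(n):
        p = int(np.argmax(d))      # pivot: largest remaining diagonal entry
        if d[p] <= tol:
            return L[:, :k]        # numerical rank reached
        L[:, k] = (A[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(d[p])
        d -= L[:, k] ** 2
    return L

# A rank-3 Gram matrix is exactly recovered by three Cholesky vectors.
rng = np.random.default_rng(2)
B = rng.standard_normal((6, 3))
A = B @ B.T
L = pivoted_cholesky(A)
```

The number of retained Cholesky vectors (columns of L) is what gets distributed across compute nodes in place of the full two-electron integrals.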

  5. Effects of OCR Errors on Ranking and Feedback Using the Vector Space Model.

    ERIC Educational Resources Information Center

    Taghva, Kazem; And Others

    1996-01-01

    Reports on the performance of the vector space model in the presence of OCR (optical character recognition) errors in information retrieval. Highlights include precision and recall, a full-text test collection, smart vector representation, impact of weighting parameters, ranking variability, and the effect of relevance feedback. (Author/LRW)

  6. Acid-Labile Poly(glycidyl methacrylate)-Based Star Gene Vectors.

    PubMed

    Yang, Yan-Yu; Hu, Hao; Wang, Xing; Yang, Fei; Shen, Hong; Xu, Fu-Jian; Wu, De-Cheng

    2015-06-10

    It was recently reported that ethanolamine-functionalized poly(glycidyl methacrylate) (PGEA) possesses great potential applications in gene therapy due to its good biocompatibility and high transfection efficiency. Importing responsivity into PGEA vectors would further improve their performances. Herein, a series of responsive star-shaped vectors, acetaled β-cyclodextrin-PGEAs (A-CD-PGEAs) consisting of a β-CD core and five PGEA arms linked by acid-labile acetal groups, were proposed and characterized as therapeutic pDNA vectors. The A-CD-PGEAs owned abundant hydroxyl groups to shield extra positive charges of A-CD-PGEAs/pDNA complexes, and the star structure could decrease charge density. The incorporation of acetal linkers endowed A-CD-PGEAs with pH responsivity and degradation. In weakly acidic endosome, the broken acetal linkers resulted in decomposition of A-CD-PGEAs and morphological transformation of A-CD-PGEAs/pDNA complexes, lowering cytotoxicity and accelerating release of pDNA. In comparison with control CD-PGEAs without acetal linkers, A-CD-PGEAs exhibited significantly better transfection performances.

  7. A vector space model approach to identify genetically related diseases.

    PubMed

    Sarkar, Indra Neil

    2012-01-01

The relationship between diseases and their causative genes can be complex, especially in the case of polygenic diseases. Further exacerbating the challenges in their study is that many genes may be causally related to multiple diseases. This study explored the relationship between diseases through the adaptation of an approach pioneered in the context of information retrieval: vector space models. A vector space model approach was developed that bridges gene-disease knowledge inferred across three knowledge bases: Online Mendelian Inheritance in Man, GenBank, and Medline. The approach was then used to identify potentially related diseases for two target diseases: Alzheimer disease and Prader-Willi syndrome. In both cases, a set of plausible diseases was identified that may warrant further exploration. This study furthers seminal work by Swanson et al. that demonstrated the potential for mining literature for putative correlations. Using a vector space modeling approach, information from both the biomedical literature and genomic resources (like GenBank) can be combined towards the identification of putative correlations of interest. To this end, the relevance of the diseases predicted in this study using the vector space modeling approach was validated based on supporting literature. The results of this study suggest that a vector space model approach may be a useful means to identify potential relationships between complex diseases, and thereby enable the coordination of gene-based findings across multiple complex diseases.
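A toy version of the vector space model ranking might look like this. All gene symbols and weights below are invented for illustration and are not the study's data; the cosine score is the standard ranking function in this family of models:

```python
import numpy as np

# Toy disease-gene profiles; columns correspond to illustrative gene symbols.
genes = ["APP", "PSEN1", "APOE", "SNRPN", "NDN"]
profiles = {
    "Alzheimer disease":     np.array([1.0, 1.0, 1.0, 0.0, 0.0]),
    "Prader-Willi syndrome": np.array([0.0, 0.0, 0.0, 1.0, 1.0]),
    "candidate disease X":   np.array([1.0, 0.5, 1.0, 0.0, 0.0]),
}

def cosine(a, b):
    """Cosine similarity between two term (gene) weight vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank all other diseases by similarity to the target's gene profile.
target = profiles["Alzheimer disease"]
ranked = sorted((d for d in profiles if d != "Alzheimer disease"),
                key=lambda d: cosine(target, profiles[d]), reverse=True)
```

Diseases sharing weighted gene evidence rank high; diseases with disjoint gene sets score zero.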

  8. Decomposability and scalability in space-based observatory scheduling

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Stephen F.

    1992-01-01

    In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.

  9. A technique for plasma velocity-space cross-correlation

    NASA Astrophysics Data System (ADS)

    Mattingly, Sean; Skiff, Fred

    2018-05-01

    An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.
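The empirical decomposition step can be sketched on synthetic data: for a Hermitian positive semidefinite correlation matrix, SVD coincides with the eigendecomposition, so the leading singular vectors recover the dominant mode structures. The two Gaussian-shaped modes below are illustrative assumptions, not measured distribution-function data:

```python
import numpy as np

rng = np.random.default_rng(3)
v = np.linspace(-3, 3, 40)                 # velocity-space grid
mode1 = np.exp(-v ** 2)                    # two synthetic fluctuation eigenmodes
mode2 = v * np.exp(-v ** 2)

# Hermitian (here real symmetric) velocity-space cross-correlation matrix:
# a strong mode, a weaker one, and a little measurement noise.
C = 4.0 * np.outer(mode1, mode1) + 1.0 * np.outer(mode2, mode2)
C += 1e-6 * rng.standard_normal((40, 40))
C = (C + C.T) / 2

# SVD of the Hermitian PSD matrix; leading columns of U are the mode shapes.
U, s, _ = np.linalg.svd(C)
leading = U[:, 0]
```

The leading singular vector aligns (up to sign) with the strongest fluctuation mode on the velocity grid.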

  10. All ASD complex and real 4-dimensional Einstein spaces with Λ≠0 admitting a nonnull Killing vector

    NASA Astrophysics Data System (ADS)

    Chudecki, Adam

    2016-12-01

    Anti-self-dual (ASD) 4-dimensional complex Einstein spaces with nonzero cosmological constant Λ equipped with a nonnull Killing vector are considered. It is shown that any conformally nonflat metric of such spaces can be always brought to a special form and the Einstein field equations can be reduced to the Boyer-Finley-Plebański equation (Toda field equation). Some alternative forms of the metric are discussed. All possible real slices (neutral, Euclidean and Lorentzian) of ASD complex Einstein spaces with Λ≠0 admitting a nonnull Killing vector are found.

  11. Pressure-induced metallization of condensed phase β-HMX under shock loadings via molecular dynamics simulations in conjunction with multi-scale shock technique.

    PubMed

    Ge, Ni-Na; Wei, Yong-Kai; Zhao, Feng; Chen, Xiang-Rong; Ji, Guang-Fu

    2014-07-01

The electronic structure and initial decomposition of the high explosive HMX under conditions of shock loading are examined. The simulation is performed using quantum molecular dynamics in conjunction with the multi-scale shock technique (MSST). A self-consistent charge density-functional tight-binding (SCC-DFTB) method is adopted. The results show that the N-N-C angle changes drastically under shock wave compression along lattice vector b at a shock velocity of 11 km/s, which is the main reason for an insulator-to-metal transition in the HMX system. The metallization pressure (about 130 GPa) of condensed-phase HMX is predicted for the first time. We also detect the formation of several key products of condensed-phase HMX decomposition, such as NO2, NO, N2, N2O, H2O, CO, and CO2, all of which have been observed in previous experimental studies. Moreover, the initial decomposition products include H2, due to C-H bond breaking as a primary reaction pathway at extreme conditions, which presents a new insight into the initial decomposition mechanism of HMX under shock loading at the atomistic level.

  12. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
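The SVD-based reduction can be sketched as follows, with a random matrix standing in for the engine's health-parameter influence matrix (an illustrative assumption, not the report's engine model). By the Eckart-Young theorem, keeping the leading right singular vectors gives the best least-squares representation at a given tuner dimension:

```python
import numpy as np

rng = np.random.default_rng(4)
G = rng.standard_normal((6, 10))   # hypothetical sensitivity of 6 outputs
                                   # to 10 unmeasurable health parameters

# Keep the k right singular vectors that best capture the health parameters'
# overall effect; k is chosen small enough for a Kalman filter to estimate.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
k = 3
Vk = Vt[:k, :].T                   # maps the low-dim tuning vector to health space

G_k = G @ Vk @ Vk.T                # influence representable through the tuners
residual = np.linalg.norm(G - G_k) # least-squares error of the reduction
```

The residual equals the root-sum-square of the discarded singular values, which quantifies what the low-dimensional tuning vector cannot represent.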

  14. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2005-01-01

A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  15. Fault detection, isolation, and diagnosis of self-validating multifunctional sensors.

    PubMed

    Yang, Jing-Li; Chen, Yin-Sheng; Zhang, Li-Li; Sun, Zhen

    2016-06-01

A novel fault detection, isolation, and diagnosis (FDID) strategy for self-validating multifunctional sensors is presented in this paper. The sparse non-negative matrix factorization-based method can effectively detect faults by using the squared prediction error (SPE) statistic, and variable contribution plots based on the SPE statistic help to locate and isolate the faulty sensitive units. Complete ensemble empirical mode decomposition is employed to decompose the fault signals into a series of intrinsic mode functions (IMFs) and a residual. The sample entropy (SampEn)-weighted energy values of each IMF and of the residual are estimated to represent the characteristics of the fault signals. A multi-class support vector machine is introduced to identify the fault mode, with the purpose of diagnosing the status of the faulty sensitive units. The performance of the proposed strategy is compared with other fault detection strategies, such as principal component analysis and independent component analysis, and with fault diagnosis strategies such as empirical mode decomposition coupled with a support vector machine. The proposed strategy is fully evaluated in a real self-validating multifunctional sensor experimental system, and the experimental results demonstrate that it provides an excellent solution to the FDID research topic of self-validating multifunctional sensors.
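The SPE-based detection and contribution-plot isolation can be sketched on synthetic data. Note that the normal subspace below is fitted with plain PCA loadings rather than the paper's sparse non-negative matrix factorization, and all data, channel counts, and the control limit are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
# Training data from normal operation: 200 samples of 8 correlated channels;
# channel 2 is deliberately left without systematic variation (synthetic choice).
M = rng.standard_normal((2, 8))
M[:, 2] = 0.0
X = rng.standard_normal((200, 2)) @ M + 0.05 * rng.standard_normal((200, 8))
Xc = X - X.mean(axis=0)

# Fit the normal-behaviour subspace (PCA loadings stand in for sparse NMF).
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:2].T                                      # 8 x 2 loading matrix

def spe(x):
    """Squared prediction error: energy left after projecting onto the model."""
    r = x - P @ (P.T @ x)
    return float(r @ r)

limit = np.quantile([spe(row) for row in Xc], 0.99)  # empirical control limit

fault = Xc[0].copy()
fault[2] += 3.0                                   # bias fault on channel 2
r = fault - P @ (P.T @ fault)
faulty_channel = int(np.argmax(r ** 2))           # contribution-plot isolation
```

The SPE of the faulty sample exceeds the control limit, and the largest residual contribution points at the injected channel.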

  16. Arrowheaded enhanced multivariance products representation for matrices (AEMPRM): Specifically focusing on infinite matrices and converting arrowheadedness to tridiagonality

    NASA Astrophysics Data System (ADS)

    Özdemir, Gizem; Demiralp, Metin

    2015-12-01

In this work, the Enhanced Multivariance Products Representation (EMPR) approach, an extension by Demiralp and his group of Sobol's High Dimensional Model Representation (HDMR), is used as the basic tool. Its discrete forms have also been developed and used in practice by Demiralp and his group, as well as by other authors, for the decomposition of arrays such as vectors, matrices, and multiway arrays. This work focuses specifically on the decomposition of infinite matrices with denumerably many rows and columns. To this end, the target matrix is first decomposed into a sum of certain outer products, and each outer product is then treated by Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR), which was developed by Demiralp and his group. The result is a three-factor matrix product whose kernel (the middle factor) is an arrowheaded matrix, while the pre- and post-factors are invertible matrices composed of the support vectors of TMEMPR. This new method is called Arrowheaded Enhanced Multivariance Products Representation for Matrices. The general purpose is the approximation of denumerably infinite matrices with the new method.

  17. NASREN: Standard reference model for telerobot control

    NASA Technical Reports Server (NTRS)

    Albus, J. S.; Lumia, R.; Mccain, H.

    1987-01-01

A hierarchical architecture is described which supports space station telerobots in a variety of modes. The system is divided into three hierarchies: task decomposition, world model, and sensory processing. Goals at each level of the task decomposition hierarchy are divided both spatially and temporally into simpler commands for the next lower level. This decomposition is repeated until, at the lowest level, the drive signals to the robot actuators are generated. To accomplish its goals, task decomposition modules must often use information stored in the world model. The purpose of the sensory system is to update the world model as rapidly as possible to keep the model in registration with the physical world. The architecture of the entire control system hierarchy is described, along with how it can be applied to space telerobot applications.

  18. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1987-10-09

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.
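The threshold-decomposition idea can be sketched digitally in one dimension (the patent's system operates optically on images; this NumPy analogue is an illustrative assumption): each binary slice is passed through a linear space-invariant box filter, re-thresholded point by point, and the slices are stacked back up, which reproduces the nonlinear median:

```python
import numpy as np

def median_by_threshold_decomposition(x, width=3):
    """1-D median filter computed via threshold decomposition: binary slices,
    a linear box-filter (space-invariant) step, then a point threshold."""
    x = np.asarray(x, dtype=int)
    k = width // 2
    xp = np.pad(x, k, mode='edge')
    out = np.zeros_like(x)
    for t in range(1, int(x.max()) + 1):
        b = (xp >= t).astype(int)                           # binary slice at level t
        win = np.convolve(b, np.ones(width, dtype=int), mode='valid')
        out += (win > k).astype(int)                        # majority vote
    return out

noisy = np.array([10, 10, 90, 10, 10, 50, 50, 50])
filtered = median_by_threshold_decomposition(noisy)
```

The isolated 90 spike is removed while the step edge between the 10s and 50s is preserved, which is the characteristic behaviour of median filtering.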

  19. Scare Tactics: Evaluating Problem Decompositions Using Failure Scenarios

    NASA Technical Reports Server (NTRS)

    Helm, B. Robert; Fickas, Stephen

    1992-01-01

    Our interest is in the design of multi-agent problem-solving systems, which we refer to as composite systems. We have proposed an approach to composite system design by decomposition of problem statements. An automated assistant called Critter provides a library of reusable design transformations which allow a human analyst to search the space of decompositions for a problem. In this paper we describe a method for evaluating and critiquing problem decompositions generated by this search process. The method uses knowledge stored in the form of failure decompositions attached to design transformations. We suggest the benefits of our critiquing method by showing how it could re-derive steps of a published development example. We then identify several open issues for the method.

  20. Learning with LOGO: Logo and Vectors.

    ERIC Educational Resources Information Center

    Lough, Tom; Tipps, Steve

    1986-01-01

    This is the first of a two-part series on the general concept of vector space. Provides tool procedures to allow investigation of vector properties, vector addition and subtraction, and X and Y components. Lists several sources of additional vector ideas. (JM)

  1. Principal fiber bundle description of number scaling for scalars and vectors: application to gauge theory

    NASA Astrophysics Data System (ADS)

    Benioff, Paul

    2015-05-01

    The purpose of this paper is to put the description of number scaling and its effects on physics and geometry on a firmer foundation, and to make it more understandable. A main point is that two different concepts, number and number value are combined in the usual representations of number structures. This is valid as long as just one structure of each number type is being considered. It is not valid when different structures of each number type are being considered. Elements of base sets of number structures, considered by themselves, have no meaning. They acquire meaning or value as elements of a number structure. Fiber bundles over a space or space time manifold, M, are described. The fiber consists of a collection of many real or complex number structures and vector space structures. The structures are parameterized by a real or complex scaling factor, s. A vector space at a fiber level, s, has, as scalars, real or complex number structures at the same level. Connections are described that relate scalar and vector space structures at both neighbor M locations and at neighbor scaling levels. Scalar and vector structure valued fields are described and covariant derivatives of these fields are obtained. Two complex vector fields, each with one real and one imaginary field, appear, with one complex field associated with positions in M and the other with position dependent scaling factors. A derivation of the covariant derivative for scalar and vector valued fields gives the same vector fields. The derivation shows that the complex vector field associated with scaling fiber levels is the gradient of a complex scalar field. Use of these results in gauge theory shows that the imaginary part of the vector field associated with M positions acts like the electromagnetic field. The physical relevance of the other three fields, if any, is not known.

  2. Managing the resilience space of the German energy system - A vector analysis.

    PubMed

    Schlör, Holger; Venghaus, Sandra; Märker, Carolin; Hake, Jürgen-Friedrich

    2018-07-15

    The UN Sustainable Development Goals formulated in 2016 confirmed the sustainability concept of the Earth Summit of 1992 and supported UNEP's green economy transition concept. The transformation of the energy system (Energiewende) is the keystone of Germany's sustainability strategy and of the German green economy concept. We use ten updated energy-related indicators of the German sustainability strategy to analyse the German energy system. The development of the sustainable indicators is examined in the monitoring process by a vector analysis performed in two-dimensional Euclidean space (Euclidean plane). The aim of the novel vector analysis is to measure the current status of the Energiewende in Germany and thereby provide decision makers with information about the strains for the specific remaining pathway of the single indicators and of the total system in order to meet the sustainability targets of the Energiewende. Within this vector model, three vectors (the normative sustainable development vector, the real development vector, and the green economy vector) define the resilience space of our analysis. The resilience space encloses a number of vectors representing different pathways with different technological and socio-economic strains to achieve a sustainable development of the green economy. In this space, the decision will be made as to whether the government measures will lead to a resilient energy system or whether a readjustment of indicator targets or political measures is necessary. 
The vector analysis enables us to analyse both the government's ambitiousness, which is expressed in the sustainability target for the indicators at the start of the sustainability strategy representing the starting preference order of the German government (SPO) and, secondly, the current preference order of German society in order to bridge the remaining distance to reach the specific sustainability goals of the strategy summarized in the current preference order (CPO). Copyright © 2018 Elsevier Ltd. All rights reserved.
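The vector construction described above can be sketched numerically. Below is a minimal illustration in the Euclidean plane with two hypothetical normalized indicators and made-up start, current and target states (none of these numbers come from the strategy itself); it computes the real and normative development vectors, the remaining pathway, and the angle between actual and planned development:

```python
import numpy as np

# Hypothetical two-indicator example in the Euclidean plane:
# axis 0 = renewable share, axis 1 = energy productivity (both normalized).
start = np.array([0.0, 0.0])    # indicator state at the strategy's base year
target = np.array([1.0, 1.0])   # normative sustainability target
current = np.array([0.4, 0.7])  # currently observed indicator state

normative = target - start      # normative sustainable development vector
real = current - start          # real development vector
remaining = target - current    # remaining pathway to the sustainability target

# Length of the remaining pathway: the "strain" left to meet the target.
remaining_len = np.linalg.norm(remaining)

# Angle between real and normative development: alignment of the actual
# development with the planned pathway (0 means exactly on course).
cos_a = real @ normative / (np.linalg.norm(real) * np.linalg.norm(normative))
angle_deg = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

print(remaining_len, angle_deg)
```

The same arithmetic extends unchanged to all ten indicators at once by using length-10 vectors instead of the plane.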

  3. On-line range images registration with GPGPU

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Naruniec, J.

    2013-03-01

    This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first topic is based on 3D space decomposition, while the latter is based on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (NVIDIA CUDA) parallel computation. The result of the segmentation is a coloured map where different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotated head, to be used in mobile robot applications. The data registration algorithm is aimed at on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the time of the matching is deterministic. The first data segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation, and an image processing method for defining prerequisites of the known categories. The second technique uses the adapted nearest neighbour search procedure for obtaining normal vectors for each range point.
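The cubic-bucket idea that makes the nearest-neighbour time deterministic can be sketched as follows. This is a hypothetical, serial illustration (bucket size and points are made up; the paper's version runs in parallel on CUDA): each point is hashed into a cubic cell, and a query inspects only its own cell and the 26 neighbouring cells, bounding the work per point.

```python
import numpy as np

BUCKET = 0.5  # cell edge length (assumed)

def build_buckets(points):
    # Hash every point index into its integer cell coordinates.
    cells = {}
    for idx, p in enumerate(points):
        key = tuple((p // BUCKET).astype(int))
        cells.setdefault(key, []).append(idx)
    return cells

def nearest(query, points, cells):
    # Inspect only the query's cell and its 26 neighbours.
    cx, cy, cz = (query // BUCKET).astype(int)
    best, best_d = None, float("inf")
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for idx in cells.get((cx + dx, cy + dy, cz + dz), ()):
                    d = np.sum((points[idx] - query) ** 2)
                    if d < best_d:
                        best, best_d = idx, d
    return best

pts = np.array([[0.1, 0.1, 0.1], [0.9, 0.9, 0.9], [0.45, 0.4, 0.5]])
cells = build_buckets(pts)
print(nearest(np.array([0.5, 0.5, 0.5]), pts, cells))  # point 2 is nearest
```

Because each cell holds a bounded number of points for suitably chosen bucket size, every ICP matching step costs a fixed amount of work, which is what makes the matching time deterministic.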

  4. Radiative albedo from a linearly fibered half-space

    NASA Astrophysics Data System (ADS)

    Grzesik, J. A.

    2018-05-01

    A growing acceptance of fiber-reinforced composite materials imparts some relevance to exploring the effects which a predominantly linear scattering lattice may have upon interior radiative transport. Indeed, a central feature of electromagnetic wave propagation within such a lattice, if sufficiently dilute, is ray confinement to cones whose half-angles are set by that between lattice and the incident ray. When such propagation is subordinated to a viewpoint of an unpolarized intensity transport, one arrives at a somewhat simplified variant of the Boltzmann equation with spherical scattering demoted to its cylindrical counterpart. With a view to initiating a hopefully wider discussion of such phenomena, we follow through in detail the half-space albedo problem. This is done first along canonical lines that harness the Wiener-Hopf technique, and then once more in a discrete ordinates setting via flux decomposition along the eigenbasis of the underlying attenuation/scattering matrix. Good agreement is seen to prevail. We further suggest that the Case singular eigenfunction apparatus could likewise be evolved here in close analogy to its original, spherical scattering model. A cursory contact with related problems in the astrophysical literature suggests, in addition, that the basic physical fidelity of our scalar radiative transfer equation (RTE) remains open to improvement by passage to a (4×1) Stokes vector, (4×4) matricial setting.

  5. Preprocessed cumulative reconstructor with domain decomposition: a fast wavefront reconstruction method for pyramid wavefront sensor.

    PubMed

    Shatokhina, Iuliia; Obereder, Andreas; Rosensteiner, Matthias; Ramlau, Ronny

    2013-04-20

    We present a fast method for wavefront reconstruction from pyramid wavefront sensor (P-WFS) measurements. The method is based on an analytical relation between pyramid and Shack-Hartmann sensor (SH-WFS) data. The algorithm consists of two steps: a transformation of the P-WFS data to SH data, followed by the application of the cumulative reconstructor with domain decomposition, a wavefront reconstructor from SH-WFS measurements. Closed-loop simulations confirm that our method provides the same quality as the standard matrix-vector multiplication method. A complexity analysis as well as speed tests confirm that the method is very fast. Thus, the method can be used on extremely large telescopes, e.g., for eXtreme adaptive optics systems.
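The cumulative idea behind such a reconstructor can be illustrated in one dimension. The toy sketch below is not the actual cumulative reconstructor with domain decomposition; it only shows, for an assumed sinusoidal wavefront, how slope measurements integrate to the wavefront up to an unknown piston term:

```python
import numpy as np

# Toy 1-D illustration (not the actual CuReD algorithm): given slope
# measurements s ~ dphi/dx at spacing h, a cumulative (trapezoidal) sum
# recovers the wavefront phi up to an unknown piston, which is removed
# afterwards by mean subtraction. The sinusoidal wavefront is an assumption.
h = 0.02
x = np.arange(0.0, 1.0, h)
phi_true = np.sin(2 * np.pi * x)      # assumed wavefront
slopes = np.gradient(phi_true, h)     # stand-in for SH-WFS slope data

phi = np.concatenate(([0.0], np.cumsum(0.5 * (slopes[:-1] + slopes[1:]) * h)))
phi -= phi.mean()                     # remove the unobservable piston mode
phi_ref = phi_true - phi_true.mean()

err = float(np.max(np.abs(phi - phi_ref)))
print(err < 0.02)  # reconstruction matches up to discretization error
```

The linear cost of the cumulative sum, as opposed to the quadratic cost of a dense matrix-vector multiplication, is the source of the speed advantage claimed above.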

  6. Isospin amplitudes of B → Kππ decays

    NASA Astrophysics Data System (ADS)

    Calderón, G.; García-Duque, Cristian H.

    2015-10-01

    We obtain two isospin amplitude decompositions of the B → Kππ decays. We use the method of addition of three isospin vectors, and prove the equivalence of the two isospin amplitude decompositions obtained. We have only considered contributions of the Hamiltonian to transitions with isospin change ΔI = 0 and ΔI = 1. Isospin symmetry allows us to relate the charged B+ →K+π+π-, B+ →K+π0π0, and B+ →K0π0π+ channels with the neutral B0 →K0π+π-, B0 →K0π0π0, and B0 →K+π0π- channels. Additionally, we obtain equivalent triangular relations in the charged and neutral channels.

  7. On Certain Theoretical Developments Underlying the Hilbert-Huang Transform

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Blank, Karin; Flatley, Thomas; Huang, Norden E.; Petrick, David; Hestness, Phyllis

    2006-01-01

    One of the main traditional tools used in scientific and engineering data spectral analysis is the Fourier Integral Transform and its high-performance digital equivalent - the Fast Fourier Transform (FFT). Both carry strong a-priori assumptions about the source data, such as being linear and stationary, and of satisfying the Dirichlet conditions. A recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution for the nonlinear class of spectral analysis problems. Using a-posteriori data processing based on the Empirical Mode Decomposition (EMD) sifting process (algorithm), followed by the normalized Hilbert Transform of the decomposed data, the HHT allows spectral analysis of nonlinear and nonstationary data. The EMD sifting process results in a non-constrained decomposition of a source real-value data vector into a finite set of Intrinsic Mode Functions (IMF). These functions form a nearly orthogonal, adaptive basis derived from the data. The IMFs can be further analyzed for spectrum content by using the classical Hilbert Transform. A new engineering spectral analysis tool using HHT has been developed at NASA GSFC, the HHT Data Processing System (HHT-DPS). As the HHT-DPS has been successfully used and commercialized, new applications pose additional questions about the theoretical basis behind the HHT and EMD algorithms. Why is the fastest changing component of a composite signal sifted out first in the EMD sifting process? Why does the EMD sifting process seemingly converge, and why does it converge rapidly? Does an IMF have a distinctive structure? Why are the IMFs nearly orthogonal? We address these questions and develop the initial theoretical background for the HHT.
This will contribute to the development of new HHT processing options, such as real-time and 2-D processing using Field Programmable Gate Array (FPGA) computational resources.
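A single sifting step can be sketched as follows (a schematic only: proper EMD uses cubic-spline envelopes and a stopping criterion, whereas this sketch uses linear interpolation and one iteration). It illustrates why the fastest-changing component is extracted first: the mean of the extrema envelopes tracks the slow component, so subtracting it leaves the fast one.

```python
import numpy as np

# One schematic sifting step: subtract the mean of the upper and lower
# extrema envelopes (linear interpolation here instead of cubic splines).
def sift_once(t, x):
    up = np.flatnonzero((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])) + 1
    lo = np.flatnonzero((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:])) + 1
    env_up = np.interp(t, t[up], x[up])   # upper envelope through the maxima
    env_lo = np.interp(t, t[lo], x[lo])   # lower envelope through the minima
    return x - 0.5 * (env_up + env_lo)

t = np.linspace(0.0, 1.0, 400)
fast = np.sin(2 * np.pi * 30 * t)         # fast oscillation (made-up signal)
slow = np.sin(2 * np.pi * 3 * t)          # slow oscillation
imf = sift_once(t, fast + 0.5 * slow)

# The candidate IMF correlates far better with the fast tone than with the
# slow one: the envelope mean tracks the slow component, which is removed.
print(abs(np.corrcoef(imf, fast)[0, 1]) > abs(np.corrcoef(imf, slow)[0, 1]))
```

Iterating this step on the residual extracts successively slower IMFs, which is the decomposition order the questions above are about.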

  8. Enhanced Thermal Decomposition Properties of CL-20 through Space-Confining in Three-Dimensional Hierarchically Ordered Porous Carbon.

    PubMed

    Chen, Jin; He, Simin; Huang, Bing; Wu, Peng; Qiao, Zhiqiang; Wang, Jun; Zhang, Liyuan; Yang, Guangcheng; Huang, Hui

    2017-03-29

    High energy and low signature properties are the future trend of solid propellant development. As a new and promising oxidizer, hexanitrohexaazaisowurtzitane (CL-20) is expected to replace the conventional oxidizer ammonium perchlorate to reach the above goals. However, the high pressure exponent of CL-20 hinders its application in solid propellants, so the development of effective catalysts to improve the thermal decomposition properties of CL-20 still remains challenging. Here, 3D hierarchically ordered porous carbon (3D HOPC) is presented as a catalyst for the thermal decomposition of CL-20 via synthesizing a series of nanostructured CL-20/HOPC composites. In these nanocomposites, CL-20 is homogeneously space-confined into the 3D HOPC scaffold as nanocrystals 9.2-26.5 nm in diameter. The effect of the pore textural parameters and surface modification of 3D HOPC, as well as the CL-20 loading amount, on the thermal decomposition of CL-20 is discussed. A significant improvement of the thermal decomposition properties of CL-20 is achieved, with a remarkable decrease in decomposition peak temperature (from 247.0 to 174.8 °C) and activation energy (from 165.5 to 115.3 kJ/mol). The exceptional performance of 3D HOPC could be attributed to its well-connected 3D hierarchically ordered porous structure, high surface area, and the confined CL-20 nanocrystals. This work clearly demonstrates that 3D HOPC is a superior catalyst for CL-20 thermal decomposition and opens new potential for further applications of CL-20 in solid propellants.

  9. The decomposition of deformation: New metrics to enhance shape analysis in medical imaging.

    PubMed

    Varano, Valerio; Piras, Paolo; Gabriele, Stefano; Teresi, Luciano; Nardinocchi, Paola; Dryden, Ian L; Torromeo, Concetta; Puddu, Paolo E

    2018-05-01

    In landmark-based Shape Analysis, size is measured, in most cases, with Centroid Size. Changes in shape are decomposed into affine and non-affine components. Furthermore, the non-affine component can in turn be decomposed into a series of local deformations (partial warps). If the extent of deformation between two shapes is small, the difference between Centroid Size and m-Volume increment is barely appreciable. In medical imaging applied to soft tissues, bodies can undergo very large deformations, involving large changes in size. The cardiac example, analyzed in the present paper, shows changes in m-Volume that can reach 60%. We show here that standard Geometric Morphometrics tools (landmarks, Thin Plate Spline, and the related decomposition of the deformation) can be generalized to better describe the very large deformations of biological tissues, without losing a synthetic description. In particular, the classical decomposition of the space tangent to the shape space into affine and non-affine components is enriched to also include the change in size, in order to give a complete description of the tangent space to the size-and-shape space. The proposed generalization is formulated by means of a new Riemannian metric describing the change in size as change in m-Volume rather than change in Centroid Size. This leads to a redefinition of some aspects of Kendall's size-and-shape space without losing Kendall's original formulation. This new formulation is discussed by means of simulated examples using 2D and 3D platonic shapes as well as a real example from clinical 3D echocardiographic data. We demonstrate that our decomposition-based approaches discriminate very effectively healthy subjects from patients affected by Hypertrophic Cardiomyopathy. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Fast flux module detection using matroid theory.

    PubMed

    Reimers, Arne C; Bruggeman, Frank J; Olivier, Brett G; Stougie, Leen

    2015-05-01

    Flux balance analysis (FBA) is one of the most often applied methods on genome-scale metabolic networks. Although FBA uniquely determines the optimal yield, the pathway that achieves this is usually not unique. The analysis of the optimal-yield flux space has been an open challenge. Flux variability analysis captures only some properties of the flux space, while elementary mode analysis is intractable due to the enormous number of elementary modes. However, it has been found by Kelk et al. (2012) that the space of optimal-yield fluxes decomposes into flux modules. These decompositions allow a much easier but still comprehensive analysis of the optimal-yield flux space. Using the mathematical definition of module introduced by Müller and Bockmayr (2013b), we discovered useful connections to matroid theory, through which efficient algorithms enable us to compute the decomposition into modules in a few seconds for genome-scale networks. Using the fact that every module can be represented by one reaction that represents its function, we also present in this article a method that uses this decomposition to visualize the interplay of modules. We expect the new method to replace flux variability analysis in the pipelines for metabolic networks.

  11. Families of vector-like deformations of relativistic quantum phase spaces, twists and symmetries

    NASA Astrophysics Data System (ADS)

    Meljanac, Daniel; Meljanac, Stjepan; Pikutić, Danijel

    2017-12-01

    Families of vector-like deformed relativistic quantum phase spaces and corresponding realizations are analyzed. A method for a general construction of the star product is presented. The corresponding twist, expressed in terms of phase space coordinates, in the Hopf algebroid sense is presented. General linear realizations are considered and corresponding twists, in terms of momenta and Poincaré-Weyl generators or gl(n) generators, are constructed, and the R-matrix is discussed. A classification of linear realizations leading to vector-like deformed phase spaces is given. There are three types of spaces: (i) commutative spaces, (ii) κ-Minkowski spaces and (iii) κ-Snyder spaces. The corresponding star products are (i) associative and commutative (but non-local), (ii) associative and non-commutative and (iii) non-associative and non-commutative, respectively. Twisted symmetry algebras are considered. Transposed twists and left-right dual algebras are presented. Finally, some physical applications are discussed.

  12. MGRA: Motion Gesture Recognition via Accelerometer.

    PubMed

    Hong, Feng; You, Shujuan; Wei, Meiyu; Zhang, Yongtuo; Guo, Zhongwen

    2016-04-13

    Accelerometers have been widely embedded in most current mobile devices, enabling easy and intuitive operations. This paper proposes a Motion Gesture Recognition system (MGRA) based on accelerometer data only, which is entirely implemented on mobile devices and can provide users with real-time interactions. A robust and unique feature set is enumerated through the time domain, the frequency domain and singular value decomposition analysis using our motion gesture set containing 11,110 traces. The best feature vector for classification is selected, taking both static and mobile scenarios into consideration. MGRA exploits support vector machine as the classifier with the best feature vector. Evaluations confirm that MGRA can accommodate a broad set of gesture variations within each class, including execution time, amplitude and non-gestural movement. Extensive evaluations confirm that MGRA achieves higher accuracy under both static and mobile scenarios and costs less computation time and energy on an LG Nexus 5 than previous methods.
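A hedged sketch of the three feature families named above (time domain, frequency domain, and singular value decomposition) for one 3-axis trace; the concrete choices below are illustrative and not MGRA's exact feature set:

```python
import numpy as np

# trace: (n_samples, 3) accelerometer recording; fs: sampling rate in Hz.
def features(trace, fs=50.0):
    f = []
    f += list(trace.mean(axis=0))   # time domain: per-axis mean
    f += list(trace.std(axis=0))    # time domain: per-axis std
    # frequency domain: dominant frequency of the (detrended) x axis
    x0 = trace[:, 0] - trace[:, 0].mean()
    spec = np.abs(np.fft.rfft(x0))
    f.append(np.fft.rfftfreq(trace.shape[0], 1.0 / fs)[int(spec.argmax())])
    # SVD: singular values of the centred trace capture its spatial shape
    f += list(np.linalg.svd(trace - trace.mean(axis=0), compute_uv=False))
    return np.array(f)

t = np.arange(0.0, 2.0, 1.0 / 50.0)
rng = np.random.default_rng(0)
trace = np.stack([np.sin(2 * np.pi * 2.0 * t),          # 2 Hz shake on x
                  0.1 * rng.standard_normal(t.size),    # noise on y
                  np.full(t.size, 9.8)], axis=1)        # gravity on z
v = features(trace)
print(v.shape, v[6])  # 10 features; the dominant x-axis frequency is 2.0 Hz
```

A feature vector of this kind would then be fed to the SVM classifier; the selection step described above picks whichever of these components separate the gesture classes best.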

  13. Improved wavelet packet classification algorithm for vibrational intrusions in distributed fiber-optic monitoring systems

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo

    2015-05-01

    An improved classification algorithm that considers multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is passed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals are recorded based on a modified Sagnac interferometer. The performance of the improved classification algorithm has been evaluated by classification experiments via the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, which is higher than that of the other common algorithms. The classification results show that this improved classification algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.
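The entropy feature can be sketched as follows, assuming the standard definition of Shannon entropy over normalized coefficient energies; a one-level Haar split stands in for the full wavelet packet decomposition used in the paper:

```python
import numpy as np

# Shannon entropy of the normalized energy distribution of coefficients c.
def shannon_entropy(c):
    e = c.astype(float) ** 2
    p = e / e.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# One-level Haar analysis step (stand-in for a full wavelet packet tree).
def haar_split(x):
    x = x[: len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

t = np.linspace(0.0, 1.0, 512, endpoint=False)
smooth = np.sin(2 * np.pi * 2 * t)                     # background-like signal
spiky = np.random.default_rng(1).standard_normal(512)  # broadband intrusion

for name, sig in (("smooth", smooth), ("spiky", spiky)):
    a, d = haar_split(sig)
    print(name, shannon_entropy(a), shannon_entropy(d))
```

In the full algorithm one such entropy is computed per wavelet packet node at every level, the background entropy map is subtracted, and the most discriminative components form the vector passed to the RBF network.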

  14. FAST TRACK COMMUNICATION: PT-symmetry, Cartan decompositions, Lie triple systems and Krein space-related Clifford algebras

    NASA Astrophysics Data System (ADS)

    Günther, Uwe; Kuzhel, Sergii

    2010-10-01

    Gauged PT quantum mechanics (PTQM) and corresponding Krein space setups are studied. For models with constant non-Abelian gauge potentials and extended parity inversions, compact and noncompact Lie group components are analyzed via Cartan decompositions. A Lie-triple structure is found and an interpretation as a PT-symmetrically generalized Jaynes-Cummings model is possible, with close relation to recently studied cavity QED setups with transmon states in multilevel artificial atoms. For models with Abelian gauge potentials a hidden Clifford algebra structure is found and used to obtain the fundamental symmetry of Krein space-related J-self-adjoint extensions for PTQM setups with ultra-localized potentials.

  15. The Local Stellar Velocity Field via Vector Spherical Harmonics

    NASA Technical Reports Server (NTRS)

    Makarov, V. V.; Murphy, D. W.

    2007-01-01

    We analyze the local field of stellar tangential velocities for a sample of 42,339 nonbinary Hipparcos stars with accurate parallaxes, using a vector spherical harmonic formalism. We derive simple relations between the parameters of the classical linear model (Ogorodnikov-Milne) of the local systemic field and low-degree terms of the general vector harmonic decomposition. Taking advantage of these relationships, we determine the solar velocity with respect to the local stars of (V(sub X), V(sub Y), V(sub Z)) = (10.5, 18.5, 7.3) +/- 0.1 km s(exp -1) not corrected for the asymmetric drift with respect to the local standard of rest. If only stars more distant than 100 pc are considered, the peculiar solar motion is (V(sub X), V(sub Y), V(sub Z)) = (9.9, 15.6, 6.9) +/- 0.2 km s(exp -1). The adverse effects of harmonic leakage, which occurs between the reflex solar motion represented by the three electric vector harmonics in the velocity space and higher degree harmonics in the proper-motion space, are eliminated in our analysis by direct subtraction of the reflex solar velocity in its tangential components for each star. The Oort parameters determined by a straightforward least-squares adjustment in vector spherical harmonics are A=14.0 +/- 1.4, B=13.1 +/- 1.2, K=1.1 +/- 1.8, and C=2.9 +/- 1.4 km s(exp -1) kpc(exp -1). The physical meaning and the implications of these parameters are discussed in the framework of a general linear model of the velocity field. We find a few statistically significant higher degree harmonic terms that do not correspond to any parameters in the classical linear model. One of them, a third-degree electric harmonic, is tentatively explained as the response to a negative linear gradient of rotation velocity with distance from the Galactic plane, which we estimate at approximately -20 km s(exp -1) kpc(exp -1).
A similar vertical gradient of rotation velocity has been detected for more distant stars representing the thick disk (z greater than 1 kpc), but here we surmise its existence in the thin disk at z less than 200 pc. The most unexpected and unexplained term within the Ogorodnikov-Milne model is the first-degree magnetic harmonic, representing a rigid rotation of the stellar field about the axis -Y pointing opposite to the direction of rotation. This harmonic comes out with a statistically robust coefficient of 6.2 +/- 0.9 km s(exp -1) kpc(exp -1) and is also present in the velocity field of more distant stars. The ensuing upward vertical motion of stars in the general direction of the Galactic center and the downward motion in the anticenter direction are opposite to the vector field expected from the stationary Galactic warp model.

  16. A hybrid algorithm for parallel molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Mangiardi, Chris M.; Meyer, R.

    2017-10-01

    This article describes algorithms for the hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-range forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with Sandy Bridge and Haswell processors as well as systems with Xeon Phi many-core processors.

  17. Low bit rate coding of Earth science images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1993-01-01

    In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.

  18. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without need for video decompression. Experimental results are reported for a database of news video clips.
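The least-squares fitting step can be sketched with a simple (assumed) three-parameter pan/tilt/zoom model for the block motion vectors, fitted over a synthetic 352x288 MPEG-1 macroblock grid:

```python
import numpy as np

# Synthetic macroblock grid of a 352x288 MPEG-1 frame (16x16 blocks),
# coordinates taken relative to the image centre.
xs, ys = np.meshgrid(np.arange(0, 352, 16), np.arange(0, 288, 16))
x = xs.ravel() - 176.0
y = ys.ravel() - 144.0

# Assumed camera model: u = pan + zoom*x, v = tilt + zoom*y, plus noise
# standing in for imperfect MPEG motion vectors.
rng = np.random.default_rng(2)
true_pan, true_tilt, true_zoom = 3.0, -1.0, 0.02
u = true_pan + true_zoom * x + 0.1 * rng.standard_normal(x.size)
v = true_tilt + true_zoom * y + 0.1 * rng.standard_normal(y.size)

# Stack both vector components into one linear system A @ [pan, tilt, zoom] = b.
A = np.vstack([np.column_stack([np.ones_like(x), np.zeros_like(x), x]),
               np.column_stack([np.zeros_like(y), np.ones_like(y), y])])
b = np.concatenate([u, v])
pan, tilt, zoom = np.linalg.lstsq(A, b, rcond=None)[0]
print(pan, tilt, zoom)  # recovered parameters, close to 3.0, -1.0, 0.02
```

Because the motion vectors are read straight from the compressed stream, this fit runs per frame without decoding any pixels.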

  19. Projective mappings and dimensions of vector spaces of three types of Killing-Yano tensors on pseudo Riemannian manifolds of constant curvature

    NASA Astrophysics Data System (ADS)

    Mikeš, Josef; Stepanov, Sergey; Hinterleitner, Irena

    2012-07-01

    In our paper we have determined the dimension of the space of conformal Killing-Yano tensors and the dimensions of its two subspaces of closed conformal Killing-Yano and Killing-Yano tensors on pseudo-Riemannian manifolds of constant curvature. This result is a generalization of well-known results on sharp upper bounds of the dimensions of the vector spaces of conformal Killing-Yano, Killing-Yano and concircular vector fields on pseudo-Riemannian manifolds of constant curvature.

  20. Energy Dissipation of Rayleigh Waves due to Absorption Along the Path by the Use of Finite Element Method

    DTIC Science & Technology

    1979-07-31

    ε strain vector (3 x 3); σij,j space derivative of the stress tensor; Fi force vector per unit volume; ρ density. CHAPTER III: F total force; K stiffness matrix; δ vector of displacements; M mass matrix; B space operating matrix; D0 matrix of moduli; DZ operating matrix in the Z direction; N matrix of shape functions. In a dissipating medium the deformation of a solid is a function of time, temperature and space. Creep is a deformation process in which there is

  1. The Sequential Implementation of Array Processors when there is Directional Uncertainty

    DTIC Science & Technology

    1975-08-01

    University of Washington kindly supplied office space and computing facilities. The author has benefited greatly from discussions with several others. Glossary of Symbols: Q-1 inverse of Q; L general observation space; R general vector of observations; R_K general observation vector of dimension K; E[·] expectation. Glossary of Symbols (continued): Ri ith observation; Rm real vector space of dimension m; R(T) autocorrelation

  2. Tuition and Fees and Tax Revolt Provisions: Exploring State Fiscal Policy Impacts Using Fixed-Effects Vector Decomposition

    ERIC Educational Resources Information Center

    Serna, Gabriel Ramon

    2012-01-01

    It is arguably the case that one of the most pressing issues in higher education finance is the increasing price of obtaining a college education, and, more specifically, rising tuition and fees. Because state support to public higher education and tuition and fees at publicly supported colleges and universities have been shown to share an inverse…

  3. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1986-01-01

    Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian elimination on parallel computers; (3) three-dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete Cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.

  4. A Higher-Order Generalized Singular Value Decomposition for Comparison of Global mRNA Expression from Multiple Organisms

    PubMed Central

    Ponnapalli, Sri Priya; Saunders, Michael A.; Van Loan, Charles F.; Alter, Orly

    2011-01-01

    The number of high-dimensional datasets recording multiple aspects of a single phenomenon is increasing in many areas of science, accompanied by a need for mathematical frameworks that can compare multiple large-scale matrices with different row dimensions. The only such framework to date, the generalized singular value decomposition (GSVD), is limited to two matrices. We mathematically define a higher-order GSVD (HO GSVD) for N ≥ 2 matrices, each with full column rank. Each matrix is exactly factored as Di = Ui Σi V^T, where V, identical in all factorizations, is obtained from the eigensystem SV = VΛ of the arithmetic mean S of all pairwise quotients of the matrices, i ≠ j. We prove that this decomposition extends to higher orders almost all of the mathematical properties of the GSVD. The matrix S is nondefective with V and Λ real. Its eigenvalues satisfy λk ≥ 1. Equality holds if and only if the corresponding eigenvector vk is a right basis vector of equal significance in all matrices Di and Dj, that is σi,k/σj,k = 1 for all i and j, and the corresponding left basis vector ui,k is orthogonal to all other vectors in Ui for all i. The eigenvalues λk = 1, therefore, define the “common HO GSVD subspace.” We illustrate the HO GSVD with a comparison of genome-scale cell-cycle mRNA expression from S. pombe, S. cerevisiae and human. Unlike existing algorithms, a mapping among the genes of these disparate organisms is not required. We find that the approximately common HO GSVD subspace represents the cell-cycle mRNA expression oscillations, which are similar among the datasets. Simultaneous reconstruction in the common subspace, therefore, removes the experimental artifacts, which are dissimilar, from the datasets. In the simultaneous sequence-independent classification of the genes of the three organisms in this common subspace, genes of highly conserved sequences but significantly different cell-cycle peak times are correctly classified. PMID:22216090
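The construction described in the abstract can be sketched numerically. In the sketch below the "pairwise quotients" are taken as A_i A_j^{-1} with A_i = Di^T Di (an assumption about the abstract's shorthand), and random full-column-rank matrices stand in for the datasets:

```python
import numpy as np

# Random full-column-rank stand-ins for N = 3 datasets with different row
# dimensions but the same column dimension.
rng = np.random.default_rng(0)
Ds = [rng.standard_normal((m, 4)) for m in (10, 12, 15)]
As = [D.T @ D for D in Ds]          # assumed reading of "the matrices"
N = len(As)

# Arithmetic mean of all pairwise quotients A_i A_j^{-1}, i != j.
S = sum(As[i] @ np.linalg.inv(As[j])
        for i in range(N) for j in range(N) if i != j) / (N * (N - 1))

lam, V = np.linalg.eig(S)           # eigensystem S V = V Lambda gives V

# Check the abstract's claims numerically: Lambda real with eigenvalues >= 1,
# and each D_i exactly factored as D_i = (U_i Sigma_i) V^T.
B0 = Ds[0] @ np.linalg.inv(V.T)     # B_0 = U_0 Sigma_0
print(np.max(np.abs(lam.imag)) < 1e-8,
      bool(np.all(lam.real >= 1.0 - 1e-8)),
      np.allclose(B0 @ V.T, Ds[0]))
```

Eigenvalues numerically equal to 1 would flag directions of the common HO GSVD subspace described above.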

  5. Electromagnetic and axial-vector form factors of the quarks and nucleon

    NASA Astrophysics Data System (ADS)

    Dahiya, Harleen; Randhawa, Monika

    2017-11-01

    In light of the improved precision of the experimental measurements and enormous theoretical progress, the nucleon form factors have been evaluated with an aim to understand how the static properties and dynamical behavior of nucleons emerge from the theory of strong interactions between quarks. We have analyzed the vector and axial-vector nucleon form factors (G_{E,M}^{p,n}(Q^2) and G_A^{p,n}(Q^2)) using the spin observables in the chiral constituent quark model (χCQM), which has made a significant contribution to the unraveling of the internal structure of the nucleon in the nonperturbative regime. We have also presented a comprehensive analysis of the flavor decomposition of the form factors (G_E^q(Q^2), G_M^q(Q^2) and G_A^q(Q^2) for q = u, d, s) within the framework of χCQM, with emphasis on the extraction of the strangeness form factors, which are fundamental to determine the spin structure and test the chiral symmetry breaking effects in the nucleon. The Q^2 dependence of the vector and axial-vector form factors of the nucleon has been studied using the conventional dipole form of parametrization. The results are in agreement with the available experimental data.

  6. Parallel processing methods for space based power systems

    NASA Technical Reports Server (NTRS)

    Berry, F. C.

    1993-01-01

    This report presents a method for doing load-flow analysis of a power system by using a decomposition approach. The power system for the Space Shuttle is used as a basis to build a model for the load-flow analysis. To test the decomposition method for doing load-flow analysis, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each of the power systems was divided into subsystems and simulated under steady-state conditions. The results from these tests have been found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into different subsystems was done by assigning a processor to each area. There were 13 transputers available; therefore, up to 13 different subsystems could be simulated at the same time. This report gives preliminary results for a load-flow analysis using a decomposition principle. The report shows that the decomposition algorithm for load-flow analysis is well suited for parallel processing and provides increases in the speed of execution.

  7. A machine learning approach to galaxy-LSS classification - I. Imprints on halo merger trees

    NASA Astrophysics Data System (ADS)

    Hui, Jianan; Aragon, Miguel; Cui, Xinping; Flegal, James M.

    2018-04-01

    The cosmic web plays a major role in the formation and evolution of galaxies and defines, to a large extent, their properties. However, the relation between galaxies and environment is still not well understood. Here, we present a machine learning approach to study imprints of environmental effects on the mass assembly of haloes. We present a galaxy-LSS machine learning classifier based on galaxy properties sensitive to the environment. We then use the classifier to assess the relevance of each property. Correlations between galaxy properties and their cosmic environment can be used to predict galaxy membership to void/wall or filament/cluster with an accuracy of 93 per cent. Our study unveils environmental information encoded in properties of haloes not normally considered directly dependent on the cosmic environment, such as merger history and complexity. Understanding the physical mechanism by which the cosmic web is imprinted in a halo can lead to significant improvements in galaxy formation models. This is accomplished by extracting features from galaxy properties and merger trees, computing feature scores for each feature, and then applying a support vector machine (SVM) to different feature sets. To this end, we have discovered that the shape and depth of the merger tree, formation time, and density of the galaxy are strongly associated with the cosmic environment. We describe a significant improvement in the original classification algorithm obtained by performing LU decomposition of the distance matrix computed from the feature vectors and then using the output of the decomposition as input vectors for the SVM.
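The last step can be sketched as follows: build the pairwise distance matrix of some hypothetical, random halo feature vectors and LU-decompose it, the factors then serving as transformed input vectors for the SVM. A small Doolittle LU with partial pivoting is written out here to keep the sketch dependency-free; the paper's exact decomposition variant is not specified in the abstract.

```python
import numpy as np

# Doolittle LU with partial pivoting: returns the row permutation P and
# factors L (unit lower triangular), U (upper triangular) with A[P] = L @ U.
def plu(A):
    U = A.astype(float).copy()
    n = U.shape[0]
    P = np.arange(n)
    L = np.zeros((n, n))
    for k in range(n):
        p = k + int(np.argmax(np.abs(U[k:, k])))   # choose the pivot row
        U[[k, p]] = U[[p, k]]                      # swap rows of U ...
        L[[k, p], :k] = L[[p, k], :k]              # ... and of finished L cols
        P[[k, p]] = P[[p, k]]
        L[k, k] = 1.0
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]      # elimination multipliers
        U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
    return P, L, np.triu(U)

rng = np.random.default_rng(4)
X = rng.standard_normal((6, 3))                    # 6 haloes, 3 features each
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # distance matrix
P, L, U = plu(D)
print(np.allclose(L @ U, D[P]))  # rows of L and U would feed the SVM
```

In practice `scipy.linalg.lu` performs the same factorization; the handwritten version only makes the decomposition step of the pipeline explicit.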

  8. Antepartum fetal heart rate feature extraction and classification using empirical mode decomposition and support vector machine

    PubMed Central

    2011-01-01

    Background Cardiotocography (CTG) is the most widely used tool for fetal surveillance. The visual analysis of fetal heart rate (FHR) traces largely depends on the expertise and experience of the clinician involved. Several approaches have been proposed for the effective interpretation of FHR. In this paper, a new approach for FHR feature extraction based on empirical mode decomposition (EMD) is proposed, which was used along with a support vector machine (SVM) for the classification of FHR recordings as 'normal' or 'at risk'. Methods The FHR signals were recorded from 15 subjects at a sampling rate of 4 Hz, and a dataset of 90 randomly selected records of 20 minutes' duration was formed from these. All records were labelled as 'normal' or 'at risk' by two experienced obstetricians. A training set was formed from 60 records, with the remaining 30 forming the testing set. The standard deviations of the EMD components were used as input features to a support vector machine (SVM) to classify the FHR samples. Results For the training set, a five-fold cross-validation test resulted in an accuracy of 86%, whereas the overall geometric mean of sensitivity and specificity was 94.8%. The Kappa value for the training set was 0.923. Application of the proposed method to the testing set (30 records) resulted in a geometric mean of 81.5%. The Kappa value for the testing set was 0.684. Conclusions Based on the overall performance of the system, the proposed methodology is a promising new approach for the feature extraction and classification of FHR signals. PMID:21244712
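
    The feature step described above (one standard deviation per EMD component) can be sketched as follows. Computing the intrinsic mode functions themselves requires a sifting routine (e.g., a third-party package such as PyEMD); here the decomposed components are assumed given, and the toy values are invented, not taken from the paper.

```python
import math

def component_std_features(imfs):
    """Per-component standard deviation, one SVM feature per IMF."""
    feats = []
    for comp in imfs:
        mean = sum(comp) / len(comp)
        var = sum((x - mean) ** 2 for x in comp) / len(comp)
        feats.append(math.sqrt(var))
    return feats

# Two toy "IMFs" standing in for decomposed FHR components.
imfs = [[1.0, -1.0, 1.0, -1.0],   # fast oscillation
        [0.5, 0.5, -0.5, -0.5]]   # slower oscillation
print(component_std_features(imfs))  # -> [1.0, 0.5]
```

The resulting feature vector (one value per component) is what would then be fed to the SVM classifier.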

  9. Capabilities of the software "Vector-M" for diagnostics of the ionosphere state from auroral emission images and plasma characteristics from different orbits, as part of a space weather monitoring system

    NASA Astrophysics Data System (ADS)

    Avdyushev, V.; Banshchikova, M.; Chuvashov, I.; Kuzmin, A.

    2017-09-01

    This paper presents the capabilities of the software "Vector-M" for diagnostics of the ionosphere state from auroral emission images and plasma characteristics from different orbits, as part of a space weather monitoring system. The software "Vector-M" was developed by the celestial mechanics and astrometry department of Tomsk State University in collaboration with the Space Research Institute (Moscow) and the Central Aerological Observatory of the Russian Federal Service for Hydrometeorology and Environmental Monitoring. It is intended for the calculation of attendant geophysical and astronomical information for the centre of mass of the spacecraft and for the space of observations in the experiment with the auroral imager Aurovisor-VIS/MP in the orbit of the prospective Meteor-MP spacecraft.

  10. NUDTSNA at TREC 2015 Microblog Track: A Live Retrieval System Framework for Social Network based on Semantic Expansion and Quality Model

    DTIC Science & Technology

    2015-11-20

    between tweets and profiles as follows: TFIDF Score, which calculates the cosine similarity between a tweet and a profile in a vector space model with … TFIDF weights of terms. The vector space model represents a document as a vector; tweets and profiles can be expressed as vectors, T = (t … gain(Tr_i) (13), where Tr is the set of returned tweets and gain() is the score function for a tweet. Not-interesting and spam/junk tweets receive a gain of 0
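
    The TF-IDF cosine-similarity scoring mentioned in the excerpt above can be illustrated as a generic sketch. This is not the NUDTSNA system's code; the tokenization, the log-IDF weighting, and the example documents are all assumptions for illustration.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Map each tokenised document to a {term: tf*idf} weight dict."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))     # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

tweet   = ["space", "weather", "alert"]       # invented examples
profile = ["space", "weather", "forecast"]
other   = ["cat", "videos"]
vecs = tfidf_vectors([tweet, profile, other])
print(round(cosine(vecs[0], vecs[1]), 3))     # tweet vs. matching profile
```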

  11. Trends in space activities in 2014: The significance of the space activities of governments

    NASA Astrophysics Data System (ADS)

    Paikowsky, Deganit; Baram, Gil; Ben-Israel, Isaac

    2016-01-01

    This article addresses the principal events of 2014 in the field of space activities and extrapolates from them the primary trends that can be identified in governmental space activities. In 2014, global space activities centered on two vectors: the first was geopolitical, and the second related to the interplay between increasing commercial space activities and traditional governmental space activities. In light of these two vectors, the article outlines and analyzes trends in space exploration, human spaceflight, industry and technology, cooperation versus self-reliance, and space security and sustainability. It also reviews the space activities of the leading space-faring nations.

  12. A comparison of breeding and ensemble transform vectors for global ensemble generation

    NASA Astrophysics Data System (ADS)

    Deng, Guo; Tian, Hua; Li, Xiaoli; Chen, Jing; Gong, Jiandong; Jiao, Meiyan

    2012-02-01

    To compare the initial perturbation techniques using breeding vectors and ensemble transform vectors, three ensemble prediction systems using both initial perturbation methods but with different ensemble sizes, based on the spectral model T213/L31, were constructed at the National Meteorological Center, China Meteorological Administration (NMC/CMA). A series of ensemble verification scores, such as forecast skill of the ensemble mean, ensemble resolution, and ensemble reliability, are introduced to identify the most important attributes of ensemble forecast systems. The results indicate that the ensemble transform technique is superior to the breeding vector method in terms of the anomaly correlation coefficient (ACC), a deterministic measure of the ensemble mean; the root-mean-square error (RMSE) and spread, which are probabilistic attributes; and the continuous ranked probability score (CRPS) and its decomposition. The advantage of the ensemble transform approach is attributed to the orthogonality among its ensemble perturbations as well as its consistency with the data assimilation system. This study may therefore serve as a reference for configuring the best ensemble prediction system for operational use.

  13. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    NASA Astrophysics Data System (ADS)

    Hartmann, L.

    2002-01-01

    As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on-board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here, goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order, and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing I/O. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment.
That is, the system is able to react to state and environment and in general can terminate the execution of a decomposition and attempt a new decomposition at any level in the hierarchy. This goal decomposition system is suitable for workstation, microprocessor, and FPGA implementation and thus is able to support the full range of prototyping activities, from mission design in the laboratory to development of the FPGA firmware for the flight system. This approach is based on previous artificial intelligence work including (1) Brooks' subsumption architecture for robot control, (2) Firby's Reactive Action Package System (RAPS) for mediating between high-level automated planning and low-level execution, and (3) hierarchical task networks for automated planning. Reactive goal decomposition hierarchies can be used for a wide variety of on-board autonomy applications including automating low-level operation sequences (such as scheduling prerequisite operations, e.g., heaters, warm-up periods, monitoring power constraints), coordinating multiple spacecraft as in formation flying and constellations, robot manipulator operations, rendezvous, docking, servicing, assembly, on-orbit maintenance, planetary rover operations, solar system and interstellar probes, intelligent science data gathering, and disaster early warning. Goal decomposition hierarchies can support high-level fault tolerance. Given models of on-board resources and goals to accomplish, the decomposition hierarchy could allocate resources to goals taking into account existing faults, reallocating resources in real time as new faults arise. Resources to be modeled include memory (e.g., ROM, FPGA configuration memory, processor memory, payload instrument memory), processors, on-board and interspacecraft network nodes and links, sensors, actuators (e.g., attitude determination and control, guidance and navigation), and payload instruments.
A goal decomposition hierarchy could be defined to map mission goals and tasks to available on-board resources. As faults occur and are detected, the resource allocation is modified to avoid using the faulty resource. Goal decomposition hierarchies can implement variable autonomy (in which the operator chooses to command the system at a high or low level), mixed-initiative planning (in which the system is able to interact with the operator, e.g., to request operator intervention when a working envelope is exceeded), and distributed control (in which, for example, multiple spacecraft cooperate to accomplish a task without a fixed master). The full paper will describe in greater detail how goal decompositions work, how they can be implemented, techniques for implementing a candidate application, and the current state of the FPGA implementation.
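
    The goal/decomposition scheme described above (an activation condition per goal, ordered decompositions each with a gating condition, and primitive actions at the leaves) can be sketched in a few lines. This is an illustrative toy, not the paper's system; all goal names, state fields, and actions are invented.

```python
# Each goal has an activation condition and an ordered list of
# decompositions; each decomposition has a gating condition and a
# sequence of steps (subgoal dicts or primitive callables).  The
# first decomposition whose gate is true is executed; success or
# failure propagates back up the hierarchy.

def execute(goal, state):
    """Try to achieve `goal`; return True on success, False on failure."""
    if not goal["activation"](state):
        return False
    for gate, steps in goal["decompositions"]:
        if gate(state):                          # first true gate wins
            return all(
                step(state) if callable(step) else execute(step, state)
                for step in steps)
    return False                                 # no decomposition applies

# Primitive actions (lowest-level "decompositions").
def turn_on_heater(s): s["heater"] = True; return True
def take_image(s): s["images"] = s.get("images", 0) + 1; return True

warm_up = {
    "activation": lambda s: s["power"] > 10,
    "decompositions": [(lambda s: not s.get("heater"), [turn_on_heater]),
                       (lambda s: True, [])],    # already warm: no-op
}
observe = {
    "activation": lambda s: True,
    "decompositions": [(lambda s: s["power"] > 10, [warm_up, take_image])],
}

state = {"power": 50}
print(execute(observe, state), state)
```

A real system would add parallel step execution, termination signals, and real-time re-evaluation, as the abstract describes; the sketch shows only the gating and recursion skeleton.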

  14. Analysis of nucleon electromagnetic form factors from light-front holographic QCD: The spacelike region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sufian, Raza Sabbir; de Teramond, Guy F.; Brodsky, Stanley J.

    We present a comprehensive analysis of the space-like nucleon electromagnetic form factors and their flavor decomposition within the framework of light-front holographic QCD. We show that the inclusion of the higher Fock components $|qqqq\bar{q}\rangle$ has a significant effect on the spin-flip elastic Pauli form factor and almost zero effect on the spin-conserving Dirac form factor. We present light-front holographic QCD results for the proton and neutron form factors at any momentum transfer range, including asymptotic predictions, and show that our results agree with the available experimental data with high accuracy. In order to correctly describe the Pauli form factor we need an admixture of a five-quark state of about 30% in the proton and about 40% in the neutron. We also extract the nucleon charge and magnetic radii and perform a flavor decomposition of the nucleon electromagnetic form factors. The free parameters needed to describe the experimental nucleon form factors are very few: two parameters for the probabilities of higher Fock states for the spin-flip form factor and a phenomenological parameter $r$, required to account for possible SU(6) spin-flavor symmetry breaking effects in the neutron, whereas the Pauli form factors are normalized to the experimental values of the anomalous magnetic moments. As a result, the covariant spin structure for the Dirac and Pauli nucleon form factors prescribed by AdS$_5$ semiclassical gravity incorporates the correct twist scaling behavior from hard scattering and also leads to vector dominance at low energy.

  15. Analysis of nucleon electromagnetic form factors from light-front holographic QCD: The spacelike region

    DOE PAGES

    Sufian, Raza Sabbir; de Teramond, Guy F.; Brodsky, Stanley J.; ...

    2017-01-10

    We present a comprehensive analysis of the space-like nucleon electromagnetic form factors and their flavor decomposition within the framework of light-front holographic QCD. We show that the inclusion of the higher Fock components $|qqqq\bar{q}\rangle$ has a significant effect on the spin-flip elastic Pauli form factor and almost zero effect on the spin-conserving Dirac form factor. We present light-front holographic QCD results for the proton and neutron form factors at any momentum transfer range, including asymptotic predictions, and show that our results agree with the available experimental data with high accuracy. In order to correctly describe the Pauli form factor we need an admixture of a five-quark state of about 30% in the proton and about 40% in the neutron. We also extract the nucleon charge and magnetic radii and perform a flavor decomposition of the nucleon electromagnetic form factors. The free parameters needed to describe the experimental nucleon form factors are very few: two parameters for the probabilities of higher Fock states for the spin-flip form factor and a phenomenological parameter $r$, required to account for possible SU(6) spin-flavor symmetry breaking effects in the neutron, whereas the Pauli form factors are normalized to the experimental values of the anomalous magnetic moments. As a result, the covariant spin structure for the Dirac and Pauli nucleon form factors prescribed by AdS$_5$ semiclassical gravity incorporates the correct twist scaling behavior from hard scattering and also leads to vector dominance at low energy.

  16. An exact solution for axial flow in cylindrically symmetric, steady-state detonation in polytropic explosive with an arbitrary rate of decomposition

    NASA Astrophysics Data System (ADS)

    Cowperthwaite, M.

    1994-03-01

    Methods of differential geometry and Bernoulli's equation, written as B=0, are used to develop a new approach for constructing an exact solution for axial flow in a classical, two-dimensional, ZND detonation wave in a polytropic explosive with an arbitrary rate of decomposition. This geometric approach is fundamentally different from the traditional approaches to this axial flow problem formulated by Wood and Kirkwood (WK) and Fickett and Davis (FD), and gives equations for the axial particle velocity (u), the sound speed (c), the pressure (p), and the density (ρ) that are expressed in terms of the detonation velocity (D), the extent of decomposition (λ), the polytropic index (K), and two nonideal parameters ɛ3 and ɛ1, and that reduce to the equations for steady-state, one-dimensional detonation as ɛ3 and ɛ1 approach zero. In contrast to the FD approach, the equations for u and c are obtained from first integrals of a tangent vector Ã on (u,c,λ) space, and the invariant condition Ã(B) = a·∇B = 0 bypasses the FD eigenvalue problem by defining ɛ3 in terms of the detonation velocity deficit D/D∞ and K. In contrast to the WK approach, the equations for p and ρ are obtained from equations expressing the conservation of axial momentum and energy. Because the equations for these flow variables are derived without using the conservation of mass, the axial value of the radial particle-velocity gradient, (w_r)_a, associated with the flow can be obtained from the continuity equation without making approximations. The relationship between ɛ1 and ɛ3 that closes the solution is obtained from equations expressing constraints imposed on the axial flow at the shock front by the axial and radial momentum equations, the curved shock, and the decomposition rate law, and a particular solution is constructed from the ɛ1-ɛ3 relationship determined by a prescribed rate law and value of K. 
Properties of particular solutions are presented to provide a better understanding of two-dimensional detonation, and a new axial condition for detonation failure is used to show that detonation failure can occur before the curve relating D/D∞ to the axial radius of curvature of the shock (Sa) becomes infinite.

  17. Non invasive transcostal focusing based on the decomposition of the time reversal operator: in vitro validation

    NASA Astrophysics Data System (ADS)

    Cochard, Étienne; Prada, Claire; Aubry, Jean-François; Fink, Mathias

    2010-03-01

    Thermal ablation induced by high-intensity focused ultrasound has produced promising clinical results for treating hepatocarcinoma and other liver tumors. However, skin burns have been reported due to the high absorption of ultrasonic energy by the ribs. This study proposes a method to produce an acoustic field focusing on a chosen target while sparing the ribs, using the decomposition of the time-reversal operator (DORT method). The idea is to apply to the transducer array an excitation weight vector that is orthogonal to the subspace of emissions focusing on the ribs. The ratio of the energies absorbed at the focal point and on the ribs has been enhanced up to 100-fold, as demonstrated by the measured specific absorption rates.
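
    The orthogonality idea above (removing from the excitation weight vector its components along emissions that focus on the ribs) amounts to a projection onto the orthogonal complement of a subspace. The following is a real-valued toy illustration using Gram-Schmidt, not the DORT implementation; real transducer weights are complex-valued, and the array vectors here are invented.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_out(w, subspace):
    """Remove from w its components along an orthonormalised basis of
    `subspace` (Gram-Schmidt), leaving a vector orthogonal to it."""
    basis = []
    for v in subspace:
        v = list(v)
        for b in basis:                      # orthogonalise v
            c = dot(v, b)
            v = [x - c * y for x, y in zip(v, b)]
        norm = dot(v, v) ** 0.5
        if norm > 1e-12:
            basis.append([x / norm for x in v])
    out = list(w)
    for b in basis:                          # project w out of the span
        c = dot(out, b)
        out = [x - c * y for x, y in zip(out, b)]
    return out

# Toy 4-element array: two emission patterns that focus on the ribs.
ribs = [[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]]
w = [1.0, 2.0, 3.0, 4.0]        # desired excitation weights
w_safe = project_out(w, ribs)   # orthogonal to both rib emissions
```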

  18. Polynomial Chaos decomposition applied to stochastic dosimetry: study of the influence of the magnetic field orientation on the pregnant woman exposure at 50 Hz.

    PubMed

    Liorni, I; Parazzini, M; Fiocchi, S; Guadagnin, V; Ravazzani, P

    2014-01-01

    Polynomial Chaos (PC) is a decomposition method used to build a meta-model that approximates the unknown response of a model. In this paper the PC method is applied to stochastic dosimetry to assess the variability of human exposure due to changes in the orientation of the B-field vector with respect to the human body. In detail, an analysis of pregnant woman exposure at 7 months of gestational age is carried out to build a statistical meta-model of the induced electric field for each fetal tissue and in the fetal whole body, by means of the PC expansion as a function of the B-field orientation, considering uniform exposure at 50 Hz.
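
    A minimal sketch of a PC meta-model for a scalar response of one uniformly distributed input uses Legendre polynomials, which are orthogonal under the uniform density. The response function, basis order, and quadrature below are illustrative assumptions, not the paper's dosimetry model.

```python
import math

# Legendre basis on [-1, 1] (orthogonal w.r.t. the uniform density).
P = [lambda x: 1.0,
     lambda x: x,
     lambda x: 0.5 * (3.0 * x * x - 1.0)]
norm2 = [1.0, 1.0 / 3.0, 1.0 / 5.0]   # E[P_k(X)^2] for X ~ U(-1, 1)

def pc_fit(f, order=2, m=2000):
    """Projection coefficients c_k = E[f(X) P_k(X)] / E[P_k(X)^2],
    estimated with a midpoint rule over [-1, 1]."""
    xs = [-1.0 + (2.0 * (i + 0.5)) / m for i in range(m)]
    return [sum(f(x) * P[k](x) for x in xs) / m / norm2[k]
            for k in range(order + 1)]

def pc_eval(coeffs, x):
    """Evaluate the PC surrogate at x."""
    return sum(c * P[k](x) for k, c in enumerate(coeffs))

# Made-up smooth response standing in for induced field vs. orientation.
f = lambda x: math.cos(1.5 * x)
c = pc_fit(f)
```

Once fitted, the cheap surrogate `pc_eval` can be sampled many times to get the response statistics, which is the point of stochastic dosimetry via PC.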

  19. The SAMEX Vector Magnetograph: A Design Study for a Space-Based Solar Vector Magnetograph

    NASA Technical Reports Server (NTRS)

    Hagyard, M. J.; Gary, G. A.; West, E. A.

    1988-01-01

    This report presents the results of a pre-phase A study performed by the Marshall Space Flight Center (MSFC) for the Air Force Geophysics Laboratory (AFGL) to develop a design concept for a space-based solar vector magnetograph and hydrogen-alpha telescope. These are two of the core instruments for a proposed Air Force mission, the Solar Activities Measurement Experiments (SAMEX). This mission is designed to study the processes which give rise to activity in the solar atmosphere and to develop techniques for predicting solar activity and its effects on the terrestrial environment.

  20. Vectoring of parallel synthetic jets: A parametric study

    NASA Astrophysics Data System (ADS)

    Berk, Tim; Gomit, Guillaume; Ganapathisubramani, Bharathram

    2016-11-01

    The vectoring of a pair of parallel synthetic jets can be described using five dimensionless parameters: the aspect ratio of the slots, the Strouhal number, the Reynolds number, the phase difference between the jets and the spacing between the slots. In the present study, the influence of the latter four on the vectoring behaviour of the jets is examined experimentally using particle image velocimetry. Time-averaged velocity maps are used to study the variations in vectoring behaviour for a parametric sweep of each of the four parameters independently. A topological map is constructed for the full four-dimensional parameter space. The vectoring behaviour is described both qualitatively and quantitatively. A vectoring mechanism is proposed, based on measured vortex positions. We acknowledge the financial support from the European Research Council (ERC Grant Agreement No. 277472).

  1. An operational modal analysis method in frequency and spatial domain

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Zhang, Lingmi; Tamura, Yukio

    2005-12-01

    A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, as an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. An enhanced power spectral density (PSD) is then proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Finally, a simulation case and an application case are used to validate this method.

  2. Structural system identification based on variational mode decomposition

    NASA Astrophysics Data System (ADS)

    Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.

    2018-03-01

    In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that has been developed as a means to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that when combined their cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on the modal response data. Finally, after extracting modal responses from the available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies were evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were computed using an analytical approach for comparison with the experimental modal frequencies. Results from these case studies demonstrated that the proposed method is efficient and accurate in identifying modal data of the structures.
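
    One common way to turn the decaying amplitude of an extracted modal response into a damping ratio is the logarithmic decrement, which is equivalent to the linear fit on the log-amplitude that the abstract mentions. This is a textbook sketch with synthetic peak data, not the paper's implementation.

```python
import math

def damping_from_peaks(peaks):
    """Modal damping ratio from successive positive peak amplitudes
    of a decaying modal response, via the logarithmic decrement:
    delta = ln(x_i / x_{i+1}),  zeta = delta / sqrt(4*pi^2 + delta^2)."""
    decs = [math.log(peaks[i] / peaks[i + 1]) for i in range(len(peaks) - 1)]
    delta = sum(decs) / len(decs)           # average decrement
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)

# Synthetic peaks of a decaying modal response with zeta = 0.05.
zeta = 0.05
delta = 2.0 * math.pi * zeta / math.sqrt(1.0 - zeta ** 2)
peaks = [math.exp(-k * delta) for k in range(5)]
print(round(damping_from_peaks(peaks), 4))  # -> 0.05
```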

  3. Ultra-Dense Quantum Communication Using Integrated Photonic Architecture

    DTIC Science & Technology

    2012-02-03

    … and t_AE have the same right singular vectors, and their singular-value decompositions can be written as t_AB = u_AB s_AB v† (30) and t_AE = u_AE s_AE v† (31) … freedom such as polarization or spatial modes), making its implementation ideal for fiber-optic networks. (iii) The protocol promises unprecedented … as well as temporal correlations. In particular, using 8 wavelength channels for an additional 3 bpp and two polarization states for one additional bpp

  4. A parabolic velocity-decomposition method for wind turbines

    NASA Astrophysics Data System (ADS)

    Mittal, Anshul; Briley, W. Roger; Sreenivas, Kidambi; Taylor, Lafayette K.

    2017-02-01

    An economical parabolized Navier-Stokes approximation for steady incompressible flow is combined with a compatible wind turbine model to simulate wind turbine flows, both upstream of the turbine and in downstream wake regions. The inviscid parabolizing approximation is based on a Helmholtz decomposition of the secondary velocity vector and physical order-of-magnitude estimates, rather than an axial pressure gradient approximation. The wind turbine is modeled by distributed source-term forces incorporating time-averaged aerodynamic forces generated by a blade-element momentum turbine model. A solution algorithm is given whose dependent variables are streamwise velocity, streamwise vorticity, and pressure, with secondary velocity determined by two-dimensional scalar and vector potentials. In addition to laminar and turbulent boundary-layer test cases, solutions for a streamwise vortex-convection test problem are assessed by mesh refinement and comparison with Navier-Stokes solutions using the same grid. Computed results for a single turbine and a three-turbine array are presented using the NREL offshore 5-MW baseline wind turbine. These are also compared with an unsteady Reynolds-averaged Navier-Stokes solution computed with full rotor resolution. On balance, the agreement in turbine wake predictions for these test cases is very encouraging given the substantial differences in physical modeling fidelity and computer resources required.

  5. An integrated analysis-synthesis array system for spatial sound fields.

    PubMed

    Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao

    2015-03-01

    An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. Direction of arrival of plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction that suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs the best in overall preference. In addition, there is a trade-off between reproduction performance and the external radiation.

  6. Anisotropic fractal media by vector calculus in non-integer dimensional space

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2014-08-01

    A review of different approaches to describing anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of fractal media in the framework of continuum models. The integration over non-integer dimensional spaces is considered. Differential operators of first and second order for fractional space and non-integer dimensional space are suggested, defined as inverse operations to integration in spaces with non-integer dimensions. Non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic media. The Poisson equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus to anisotropic fractal materials and media.

  7. Placement-aware decomposition of a digital standard cells library for double patterning lithography

    NASA Astrophysics Data System (ADS)

    Wassal, Amr G.; Sharaf, Heba; Hammouda, Sherif

    2012-11-01

    To continue scaling the circuit features down, Double Patterning (DP) technology is needed in 22nm technologies and lower. DP requires decomposing the layout features into two masks for pitch relaxation, such that the spacing between any two features on each mask is greater than the minimum allowed mask spacing. The relaxed pitches of each mask are then processed on two separate exposure steps. In many cases, post-layout decomposition fails to decompose the layout into two masks due to the presence of conflicts. Post-layout decomposition of a standard cells block can result in native conflicts inside the cells (internal conflict), or native conflicts on the boundary between two cells (boundary conflict). Resolving native conflicts requires a redesign and/or multiple iterations for the placement and routing phases to get a clean decomposition. Therefore, DP compliance must be considered in earlier phases, before getting the final placed cell block. The main focus of this paper is generating a library of decomposed standard cells to be used in a DP-aware placer. This library should contain all possible decompositions for each standard cell, i.e., these decompositions consider all possible combinations of boundary conditions. However, the large number of combinations of boundary conditions for each standard cell will significantly increase the processing time and effort required to obtain all possible decompositions. Therefore, an efficient methodology is required to reduce this large number of combinations. In this paper, three different reduction methodologies are proposed to reduce the number of different combinations processed to get the decomposed library. Experimental results show a significant reduction in the number of combinations and decompositions needed for the library processing. To generate and verify the proposed flow and methodologies, a prototype for a placement-aware DP-ready cell-library is developed with an optimized number of cell views.

  8. Combined-probability space and certainty or uncertainty relations for a finite-level quantum system

    NASA Astrophysics Data System (ADS)

    Sehrawat, Arun

    2017-08-01

    The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d-level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.

  9. An NN-Based SRD Decomposition Algorithm and Its Application in Nonlinear Compensation

    PubMed Central

    Yan, Honghang; Deng, Fang; Sun, Jian; Chen, Jie

    2014-01-01

    In this study, a neural network-based square root of descending (SRD) order decomposition algorithm for compensating for nonlinear data generated by sensors is presented. The study aims at exploring the optimized decomposition of data 1.00,0.00,0.00 and minimizing the computational complexity and memory space of the training process. A linear decomposition algorithm, which automatically finds the optimal decomposition into N subparts and reduces the training time to 1/N and the memory cost to 1/N, has been implemented on nonlinear data obtained from an encoder. Particular focus is given to the theoretical access of estimating the numbers of hidden nodes and the precision of varying the decomposition method. Numerical experiments are designed to evaluate the effect of this algorithm. Moreover, a designed device for angular sensor calibration is presented. We conduct an experiment that samples the data of an encoder and compensates for the nonlinearity of the encoder to test this novel algorithm. PMID:25232912

  10. Iterative variational mode decomposition based automated detection of glaucoma using fundus images.

    PubMed

    Maheshwari, Shishir; Pachori, Ram Bilas; Kanhangad, Vivek; Bhandary, Sulatha V; Acharya, U Rajendra

    2017-09-01

    Glaucoma is one of the leading causes of permanent vision loss. It is an ocular disorder caused by increased fluid pressure within the eye. The clinical methods available for the diagnosis of glaucoma require skilled supervision. They are manual, time consuming, and out of reach of common people. Hence, there is a need for an automated glaucoma diagnosis system for mass screening. In this paper, we present a novel method for an automated diagnosis of glaucoma using digital fundus images. The variational mode decomposition (VMD) method is used in an iterative manner for image decomposition. Various features, namely Kapur entropy, Renyi entropy, Yager entropy, and fractal dimensions, are extracted from the VMD components. The ReliefF algorithm is used to select the discriminatory features, and these features are then fed to the least squares support vector machine (LS-SVM) for classification. Our proposed method achieved classification accuracies of 95.19% and 94.79% using three-fold and ten-fold cross-validation strategies, respectively. This system can aid ophthalmologists in confirming their manual reading of classes (glaucoma or normal) using fundus images. Copyright © 2017 Elsevier Ltd. All rights reserved.
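
    The classification stage can be sketched in a hedged way: LS-SVM training reduces to solving a single linear system in the dual variables, rather than a quadratic program. The RBF kernel, toy data, and parameter values below are illustrative assumptions, not those of the paper:

```python
import numpy as np

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    """Least squares SVM (binary, labels +/-1): solve the dual KKT
    system [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # RBF kernel matrix
    n = len(y)
    Omega = np.outer(y, y) * K + np.eye(n) / gamma
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                      # alpha, b

def lssvm_predict(X, y, alpha, b, Xq, sigma=1.0):
    """Classify query points by the sign of the kernel expansion."""
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Kq = np.exp(-d2 / (2 * sigma ** 2))
    return np.sign(Kq @ (alpha * y) + b)
```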

  11. Coherence and dimensionality of intense spatiospectral twin beams

    NASA Astrophysics Data System (ADS)

    Peřina, Jan

    2015-07-01

    Spatiospectral properties of twin beams at their transition from low to high intensities are analyzed in parametric and paraxial approximations using decomposition into paired spatial and spectral modes. Intensity auto- and cross-correlation functions are determined and compared in the spectral and temporal domains as well as the transverse wave-vector and crystal output planes. Whereas the spectral, temporal, and transverse wave-vector coherence increases with the increasing pump intensity, coherence in the crystal output plane is almost independent of the pump intensity owing to the mode structure in this plane. The corresponding auto- and cross-correlation functions approach each other for larger pump intensities. The entanglement dimensionality of a twin beam is determined by comparing several approaches.

  12. Human action classification using procrustes shape theory

    NASA Astrophysics Data System (ADS)

    Cho, Wanhyun; Kim, Sangkyoon; Park, Soonyoung; Lee, Myungeun

    2015-02-01

    In this paper, we propose a new method that can classify a human action using Procrustes shape theory. First, we extract a pre-shape configuration vector of landmarks from each frame of an image sequence representing an arbitrary human action, and then derive the Procrustes fit vector for the pre-shape configuration vector. Second, we extract a set of pre-shape vectors from training samples stored in a database, and compute a Procrustes mean shape vector for these pre-shape vectors. Third, we extract a sequence of pre-shape vectors from the input video and project this sequence onto the tangent space with respect to the pole, taken as the sequence of mean shape vectors corresponding to a target video. We then calculate the Procrustes distance between the two sequences: the projected pre-shape vectors on the tangent space and the mean shape vectors. Finally, we classify the input video into the human action class with the minimum Procrustes distance. We assess the performance of the proposed method using one public dataset, namely the Weizmann human action dataset. Experimental results reveal that the proposed method performs very well on this dataset.
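
    The core computation, the full Procrustes distance between two landmark configurations, can be sketched generically (a numpy version written for this summary, not the authors' code; the optimal orthogonal alignment here may include a reflection):

```python
import numpy as np

def preshape(X):
    """Center a k x 2 landmark configuration and scale it to unit size,
    removing translation and scale."""
    Z = X - X.mean(axis=0)
    return Z / np.linalg.norm(Z)

def procrustes_distance(X, Y):
    """Full Procrustes distance: optimally align preshape(Y) onto
    preshape(X) with an orthogonal matrix and measure the residual."""
    Zx, Zy = preshape(X), preshape(Y)
    # Orthogonal Procrustes: minimize ||Zx - Zy R|| over orthogonal R.
    U, s, Vt = np.linalg.svd(Zy.T @ Zx)
    R = U @ Vt
    return np.linalg.norm(Zx - Zy @ R)
```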

  13. A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations

    PubMed Central

    Bollegala, Danushka; Kontonatsios, Georgios; Ananiadou, Sophia

    2015-01-01

    Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later it is manually translated to other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated to other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting the most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP), a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target language using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as the singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task.
Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks. PMID:26030738
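
    A minimal sketch of the mapping idea, with ordinary least squares standing in for PLSR and cosine similarity ranking translation candidates (all data shapes and names below are illustrative assumptions):

```python
import numpy as np

def learn_mapping(S, T):
    """Least-squares map W with S @ W ~= T, learned from paired
    source/target term vectors (a simple stand-in for PLSR)."""
    W, *_ = np.linalg.lstsq(S, T, rcond=None)
    return W

def translate(s_vec, W, T_vocab):
    """Map a source-term vector into the target space and return the
    index of the most cosine-similar target term."""
    q = s_vec @ W
    T_norm = T_vocab / np.linalg.norm(T_vocab, axis=1, keepdims=True)
    sims = T_norm @ (q / np.linalg.norm(q))
    return int(np.argmax(sims))
```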

  14. Scalar and vector form factors of D → π(K)ℓν decays with N_f = 2+1+1 twisted fermions

    NASA Astrophysics Data System (ADS)

    Lubicz, V.; Riggio, L.; Salerno, G.; Simula, S.; Tarantino, C.; ETM Collaboration

    2017-09-01

    We present a lattice determination of the vector and scalar form factors of the D → πℓν and D → Kℓν semileptonic decays, which are relevant for the extraction of the CKM matrix elements |V_cd| and |V_cs| from experimental data. Our analysis is based on the gauge configurations produced by the European Twisted Mass Collaboration with N_f = 2+1+1 flavors of dynamical quarks, at three different values of the lattice spacing (a ≃ 0.062, 0.082, 0.089 fm) and with pion masses as small as 210 MeV. Quark momenta are injected on the lattice using nonperiodic boundary conditions. The matrix elements of both vector and scalar currents are determined for a wide range of kinematical conditions in which parent and child mesons are either moving or at rest. Lorentz symmetry breaking due to hypercubic effects is clearly observed in the data and included in the decomposition of the current matrix elements in terms of additional form factors. After the extrapolations to the physical pion mass and to the continuum limit, we determine the vector and scalar form factors in the whole kinematical region from q^2 = 0 up to q^2_max = (M_D - M_{π(K)})^2 accessible in the experiments, obtaining a good overall agreement with experiments, except in the region at high values of q^2 where some deviations are visible. A set of synthetic data points, representing our results for f_+^{Dπ(K)}(q^2) and f_0^{Dπ(K)}(q^2) for several selected values of q^2, is provided, and the corresponding covariance matrix is also available. At zero four-momentum transfer, we get f_+^{D→π}(0) = 0.612(35) and f_+^{D→K}(0) = 0.765(31). Using the experimental averages for |V_cd| f_+^{D→π}(0) and |V_cs| f_+^{D→K}(0), we extract |V_cd| = 0.2330(137) and |V_cs| = 0.945(38), respectively. The second row of the CKM matrix is found to be in agreement with unitarity within the current uncertainties: |V_cd|^2 + |V_cs|^2 + |V_cb|^2 = 0.949(78).

  15. Interoperability Policy Roadmap

    DTIC Science & Technology

    2010-01-01

    Retrieval – SMART The technique developed by Dr. Gerard Salton for automated information retrieval and text analysis is called the vector-space... Salton, G., Wong, A., Yang, C.S., "A Vector Space Model for Automatic Indexing", Communications of the ACM, 18, 613-620. [10] Salton, G., McGill
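
    The vector space model cited here is easy to sketch: documents become tf-idf vectors and similarity is the cosine of the angle between them. A minimal illustration (not the SMART system itself, and using one common tf-idf weighting among several):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build Salton-style tf-idf vectors from a list of token lists."""
    N = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    vocab = sorted(df)
    idf = {t: math.log(N / df[t]) for t in vocab}
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append([tf[t] * idf[t] for t in vocab])
    return vocab, vecs

def cosine(u, v):
    """Cosine similarity between two document vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```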

  16. The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware

    NASA Astrophysics Data System (ADS)

    Kathiara, Jainik

    There has been an increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating point operations. Due to the complexity and expense of floating point hardware, these algorithms are usually converted to fixed point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed function embedded blocks are added to FPGAs, and hence implementation of floating point hardware becomes a feasible option. In this research we have implemented a high performance, autonomous floating point vector Coprocessor (FPVC) that works independently within an embedded processor system. We have presented a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of the FPVC results in greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also begun designing a software library of computational kernels, each of which adapts the FPVC's configuration to provide maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of the FPVC on a Xilinx Virtex 5 using the embedded PowerPC.

  17. Orthonormal vector general polynomials derived from the Cartesian gradient of the orthonormal Zernike-based polynomials.

    PubMed

    Mafusire, Cosmas; Krüger, Tjaart P J

    2018-06-01

    The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.
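
    The Gram-Schmidt step, with numerically dependent vectors discarded (playing the role of the rank deficiency that the paper handles via Gauss-Jordan reduction of the singular matrix), can be sketched generically:

```python
import numpy as np

def mgs_orthonormalize(V, tol=1e-10):
    """Modified Gram-Schmidt: orthonormalize the rows of V, discarding
    rows that become numerically dependent on the ones already kept."""
    basis = []
    for v in V.astype(float):
        w = v.copy()
        for q in basis:
            w -= (q @ w) * q        # remove component along q
        n = np.linalg.norm(w)
        if n > tol:                  # keep only independent directions
            basis.append(w / n)
    return np.array(basis)
```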

  18. A Fantastic Decomposition: Unsettling the Fury of Having to Wait

    ERIC Educational Resources Information Center

    Holmes, Rachel

    2012-01-01

    This article draws on data from a single element of a larger project, which focused on the issue of "how children develop a reputation as "naughty" in the early years classroom." The author draws attention to the (in)corporeal (re)formation of the line in school, undertaking a decomposition of the topological spaces of research/art/education. She…

  19. Application of Hyperspectal Techniques to Monitoring & Management of Invasive Plant Species Infestation

    DTIC Science & Technology

    2008-01-09

    The image data as acquired from the sensor is a data cloud in multi-dimensional space with each band generating an axis of dimension. When the data... The color of a material is defined by the direction of its unit vector in n-dimensional spectral space. The length of the vector relates only to how...to n-dimensional space. SAM determines the similarity

  20. Development of a NEW Vector Magnetograph at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    West, Edward; Hagyard, Mona; Gary, Allen; Smith, James; Adams, Mitzi; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    This paper will describe the Experimental Vector Magnetograph that has been developed at the Marshall Space Flight Center (MSFC). This instrument was designed to improve linear polarization measurements by replacing electro-optic and rotating waveplate modulators with a rotating linear analyzer. Our paper will describe the motivation for developing this magnetograph, compare this instrument with traditional magnetograph designs, and present a comparison of the data acquired by this instrument and the original MSFC vector magnetograph.

  1. A Subspace Approach to the Structural Decomposition and Identification of Ankle Joint Dynamic Stiffness.

    PubMed

    Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E

    2017-06-01

    The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque to intrinsic, reflexive, and voluntary torques and identification of joint dynamic stiffness. First, it formulates a novel state-space representation for the joint dynamic stiffness modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space representation matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) the decomposition of the intrinsic and reflex pathways and 2) the identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over some other methods. Thus, SDSS was more robust under high noise conditions, converging where others failed; it was more accurate, giving estimates with lower bias and random errors. The method also worked well in practice and yielded high-quality estimates of intrinsic and reflex stiffnesses when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions. It has important clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that change the muscle tone.

  2. Space-by-Time Modular Decomposition Effectively Describes Whole-Body Muscle Activity During Upright Reaching in Various Directions

    PubMed Central

    Hilt, Pauline M.; Delis, Ioannis; Pozzo, Thierry; Berret, Bastien

    2018-01-01

    The modular control hypothesis suggests that motor commands are built from precoded modules whose specific combined recruitment can allow the performance of virtually any motor task. Despite considerable experimental support, this hypothesis remains tentative as classical findings of reduced dimensionality in muscle activity may also result from other constraints (biomechanical couplings, data averaging or low dimensionality of motor tasks). Here we assessed the effectiveness of modularity in describing muscle activity in a comprehensive experiment comprising 72 distinct point-to-point whole-body movements during which the activity of 30 muscles was recorded. To identify invariant modules of a temporal and spatial nature, we used a space-by-time decomposition of muscle activity that has been shown to encompass classical modularity models. To examine the decompositions, we focused not only on the amount of variance they explained but also on whether the task performed on each trial could be decoded from the single-trial activations of modules. For the sake of comparison, we confronted these scores to the scores obtained from alternative non-modular descriptions of the muscle data. We found that the space-by-time decomposition was effective in terms of data approximation and task discrimination at comparable reduction of dimensionality. These findings show that few spatial and temporal modules give a compact yet approximate representation of muscle patterns carrying nearly all task-relevant information for a variety of whole-body reaching movements. PMID:29666576

  3. Task-discriminative space-by-time factorization of muscle activity

    PubMed Central

    Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien

    2015-01-01

    Movement generation has been hypothesized to rely on a modular organization of muscle activity. Crucial to this hypothesis is the ability to perform reliably a variety of motor tasks by recruiting a limited set of modules and combining them in a task-dependent manner. Thus far, existing algorithms that extract putative modules of muscle activations, such as Non-negative Matrix Factorization (NMF), identify modular decompositions that maximize the reconstruction of the recorded EMG data. Typically, the functional role of the decompositions, i.e., task accomplishment, is only assessed a posteriori. However, as motor actions are defined in task space, we suggest that motor modules should be computed in task space too. In this study, we propose a new module extraction algorithm, named DsNM3F, that uses task information during the module identification process. DsNM3F extends our previous space-by-time decomposition method (the so-called sNM3F algorithm, which could assess task performance only after having computed modules) to identify modules gauging between two complementary objectives: reconstruction of the original data and reliable discrimination of the performed tasks. We show that DsNM3F recovers the task dependence of module activations more accurately than sNM3F. We also apply it to electromyographic signals recorded during performance of a variety of arm pointing tasks and identify spatial and temporal modules of muscle activity that are highly consistent with previous studies. DsNM3F achieves perfect task categorization without significant loss in data approximation when task information is available and generalizes as well as sNM3F when applied to new data. These findings suggest that the space-by-time decomposition of muscle activity finds robust task-discriminating modular representations of muscle activity and that the insertion of task discrimination objectives is useful for describing the task modulation of module recruitment. PMID:26217213
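
    As a hedged sketch of the module-extraction machinery, plain NMF with Lee-Seung multiplicative updates (a simplified stand-in for the sNM3F space-by-time decomposition; the rank and iteration count below are illustrative) looks like:

```python
import numpy as np

def nmf(X, k, iters=200, seed=0):
    """Factor a nonnegative matrix X ~= W @ H with nonnegative factors
    using multiplicative updates; columns of W play the role of spatial
    modules and rows of H the role of temporal activation profiles."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)   # update activations
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)   # update modules
    return W, H
```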

  4. Task-discriminative space-by-time factorization of muscle activity.

    PubMed

    Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien

    2015-01-01

    Movement generation has been hypothesized to rely on a modular organization of muscle activity. Crucial to this hypothesis is the ability to perform reliably a variety of motor tasks by recruiting a limited set of modules and combining them in a task-dependent manner. Thus far, existing algorithms that extract putative modules of muscle activations, such as Non-negative Matrix Factorization (NMF), identify modular decompositions that maximize the reconstruction of the recorded EMG data. Typically, the functional role of the decompositions, i.e., task accomplishment, is only assessed a posteriori. However, as motor actions are defined in task space, we suggest that motor modules should be computed in task space too. In this study, we propose a new module extraction algorithm, named DsNM3F, that uses task information during the module identification process. DsNM3F extends our previous space-by-time decomposition method (the so-called sNM3F algorithm, which could assess task performance only after having computed modules) to identify modules gauging between two complementary objectives: reconstruction of the original data and reliable discrimination of the performed tasks. We show that DsNM3F recovers the task dependence of module activations more accurately than sNM3F. We also apply it to electromyographic signals recorded during performance of a variety of arm pointing tasks and identify spatial and temporal modules of muscle activity that are highly consistent with previous studies. DsNM3F achieves perfect task categorization without significant loss in data approximation when task information is available and generalizes as well as sNM3F when applied to new data. These findings suggest that the space-by-time decomposition of muscle activity finds robust task-discriminating modular representations of muscle activity and that the insertion of task discrimination objectives is useful for describing the task modulation of module recruitment.

  5. Mechanical Fault Diagnosis of High Voltage Circuit Breakers Based on Variational Mode Decomposition and Multi-Layer Classifier.

    PubMed

    Huang, Nantian; Chen, Huaijin; Cai, Guowei; Fang, Lihua; Wang, Yuqiang

    2016-11-10

    Mechanical fault diagnosis of high-voltage circuit breakers (HVCBs) based on vibration signal analysis is one of the most significant issues in improving the reliability and reducing the outage cost for power systems. Because training samples and machine fault types in HVCBs are limited, existing mechanical fault diagnostic methods easily misclassify new fault types, for which no training samples exist, as either a normal condition or a wrong fault type. A new mechanical fault diagnosis method for HVCBs based on variational mode decomposition (VMD) and multi-layer classifier (MLC) is proposed to improve the accuracy of fault diagnosis. First, HVCB vibration signals during operation are measured using an acceleration sensor. Second, a VMD algorithm is used to decompose the vibration signals into several intrinsic mode functions (IMFs). The IMF matrix is divided into submatrices to compute the local singular values (LSV). The maximum singular values of each submatrix are selected as the feature vectors for fault diagnosis. Finally, a MLC composed of two one-class support vector machines (OCSVMs) and a support vector machine (SVM) is constructed to identify the fault type. Two layers of independent OCSVMs are adopted to distinguish normal or fault conditions with known or unknown fault types, respectively. On this basis, the SVM recognizes the specific fault type. Real diagnostic experiments are conducted with a real SF₆ HVCB in normal and fault states. Three different faults (i.e., jam fault of the iron core, looseness of the base screw, and poor lubrication of the connecting lever) are simulated in a field experiment on a real HVCB to test the feasibility of the proposed method. Results show that the classification accuracy of the new method is superior to that of other traditional methods.
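
    The layered decision logic can be illustrated with a toy stand-in: a distance threshold plays the role of the one-class SVM novelty gates, and a nearest-centroid rule plays the role of the final SVM (the data, labels, and radius below are hypothetical):

```python
import numpy as np

class SimpleMLC:
    """Toy multi-layer classifier in the spirit of the paper's MLC:
    a novelty gate (distance threshold, standing in for the OCSVMs)
    followed by a nearest-centroid fault classifier."""

    def fit(self, X, y, radius=3.0):
        self.centroids = {c: X[np.array(y) == c].mean(axis=0)
                          for c in sorted(set(y))}
        self.radius = radius
        return self

    def predict(self, x):
        d = {c: np.linalg.norm(x - mu) for c, mu in self.centroids.items()}
        best = min(d, key=d.get)
        # Gate: too far from every known condition -> unknown fault type.
        return best if d[best] <= self.radius else "unknown"
```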

  6. Mechanical Fault Diagnosis of High Voltage Circuit Breakers Based on Variational Mode Decomposition and Multi-Layer Classifier

    PubMed Central

    Huang, Nantian; Chen, Huaijin; Cai, Guowei; Fang, Lihua; Wang, Yuqiang

    2016-01-01

    Mechanical fault diagnosis of high-voltage circuit breakers (HVCBs) based on vibration signal analysis is one of the most significant issues in improving the reliability and reducing the outage cost for power systems. Because training samples and machine fault types in HVCBs are limited, existing mechanical fault diagnostic methods easily misclassify new fault types, for which no training samples exist, as either a normal condition or a wrong fault type. A new mechanical fault diagnosis method for HVCBs based on variational mode decomposition (VMD) and multi-layer classifier (MLC) is proposed to improve the accuracy of fault diagnosis. First, HVCB vibration signals during operation are measured using an acceleration sensor. Second, a VMD algorithm is used to decompose the vibration signals into several intrinsic mode functions (IMFs). The IMF matrix is divided into submatrices to compute the local singular values (LSV). The maximum singular values of each submatrix are selected as the feature vectors for fault diagnosis. Finally, a MLC composed of two one-class support vector machines (OCSVMs) and a support vector machine (SVM) is constructed to identify the fault type. Two layers of independent OCSVMs are adopted to distinguish normal or fault conditions with known or unknown fault types, respectively. On this basis, the SVM recognizes the specific fault type. Real diagnostic experiments are conducted with a real SF6 HVCB in normal and fault states. Three different faults (i.e., jam fault of the iron core, looseness of the base screw, and poor lubrication of the connecting lever) are simulated in a field experiment on a real HVCB to test the feasibility of the proposed method. Results show that the classification accuracy of the new method is superior to that of other traditional methods. PMID:27834902

  7. Volumetric image classification using homogeneous decomposition and dictionary learning: A study using retinal optical coherence tomography for detecting age-related macular degeneration.

    PubMed

    Albarrak, Abdulrahman; Coenen, Frans; Zheng, Yalin

    2017-01-01

    Three-dimensional (3D) (volumetric) diagnostic imaging techniques are indispensable with respect to the diagnosis and management of many medical conditions. However, there is a lack of automated diagnosis techniques to facilitate such 3D image analysis (although some support tools do exist). This paper proposes a novel framework for volumetric medical image classification founded on homogeneous decomposition and dictionary learning. In the proposed framework each image (volume) is recursively decomposed until homogeneous regions are arrived at. Each region is represented using a Histogram of Oriented Gradients (HOG) which is transformed into a set of feature vectors. The Gaussian Mixture Model (GMM) is then used to generate a "dictionary" and the Improved Fisher Kernel (IFK) approach is used to encode feature vectors so as to generate a single feature vector for each volume, which can then be fed into a classifier generator. The principal advantage offered by the framework is that it does not require the detection (segmentation) of specific objects within the input data. The nature of the framework is fully described. A wide range of experiments was conducted to analyse the operation of the proposed framework, and these are also reported fully in the paper. Although the proposed approach is generally applicable to 3D volumetric images, the focus for the work is 3D retinal Optical Coherence Tomography (OCT) images in the context of the diagnosis of Age-related Macular Degeneration (AMD). The results indicate that excellent diagnostic predictions can be produced using the proposed framework. Copyright © 2016 Elsevier Ltd. All rights reserved.
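
    The recursive homogeneous decomposition step can be sketched generically, here in 2-D, as a quadtree split driven by an intensity-variance test (the threshold and stopping size below are illustrative, not the paper's settings):

```python
import numpy as np

def homogeneous_regions(img, thresh=10.0, min_size=2):
    """Recursively quadtree-split a 2-D image until each region is
    homogeneous (intensity variance below `thresh`) or minimal in size;
    returns a list of (row, col, height, width) boxes."""
    regions = []

    def split(r, c, h, w):
        block = img[r:r + h, c:c + w]
        if block.var() <= thresh or h <= min_size or w <= min_size:
            regions.append((r, c, h, w))     # homogeneous enough: keep
            return
        h2, w2 = h // 2, w // 2              # otherwise split in four
        split(r, c, h2, w2)
        split(r, c + w2, h2, w - w2)
        split(r + h2, c, h - h2, w2)
        split(r + h2, c + w2, h - h2, w - w2)

    split(0, 0, *img.shape)
    return regions
```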

  8. Statistical Analysis of the Ionosphere based on Singular Value Decomposition

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Arikan, Feza; Necat Deviren, M.; Toker, Cenk

    2016-07-01

    Ionosphere is made up of a spatio-temporally varying trend structure and secondary variations due to solar, geomagnetic, gravitational and seismic activities. Hence, it is important to monitor the ionosphere and acquire up-to-date information about its state in order both to better understand the physical phenomena that cause the variability and also to predict the effect of the ionosphere on HF and satellite communications, and satellite-based positioning systems. To characterise the behaviour of the ionosphere, we propose to apply Singular Value Decomposition (SVD) to Total Electron Content (TEC) maps obtained from the TNPGN-Active (Turkish National Permanent GPS Network) CORS network. The TNPGN-Active network consists of 146 GNSS receivers spread over Turkey. IONOLAB-TEC values estimated from each station are spatio-temporally interpolated using a Universal Kriging based algorithm with linear trend, namely IONOLAB-MAP, with very high spatial resolution. It is observed that the dominant singular value of TEC maps is an indicator of the trend structure of the ionosphere. The diurnal, seasonal and annual variability of the most dominant value represents the solar effect on the ionosphere in the midlatitude range. Secondary and smaller singular values are indicators of secondary variations, which can be significant especially during geomagnetic storms or seismic disturbances. The dominant singular values are related to the physical basis vectors, and the ionosphere can be fully reconstructed using these vectors. Therefore, the proposed method can be used both for monitoring the current state of a region and for the prediction and tracking of future states of the ionosphere using singular values and singular basis vectors. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR14/001 projects.
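
    The core of the proposed analysis, extracting the dominant singular mode from a stack of TEC maps, can be sketched as follows (generic numpy with toy data; the actual IONOLAB-MAP inputs are not reproduced here):

```python
import numpy as np

def dominant_mode(maps):
    """Stack flattened TEC maps as rows, take the SVD, and return the
    dominant singular value together with the rank-1 trend component
    it reconstructs."""
    A = np.array([m.ravel() for m in maps])
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    trend = s[0] * np.outer(U[:, 0], Vt[0])   # rank-1 approximation
    return s[0], trend
```

    Smaller singular values then capture the secondary variations left after subtracting this trend.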

  9. Analysis of wave propagation in a two-dimensional photonic crystal with negative index of refraction: plane wave decomposition of the Bloch modes.

    PubMed

    Martínez, Alejandro; Míguez, Hernán; Sánchez-Dehesa, José; Martí, Javier

    2005-05-30

    This work presents a comprehensive analysis of electromagnetic wave propagation inside a two-dimensional photonic crystal in a spectral region in which the crystal behaves as an effective medium to which a negative effective index of refraction can be associated. It is found that the main plane-wave component of the Bloch mode that propagates inside the photonic crystal has its wave vector k' out of the first Brillouin zone and parallel to the Poynting vector (S'·k' > 0), so light propagation in these composites is different from that reported for left-handed materials, despite the fact that negative refraction can take place at the interface between air and both kinds of composites. However, wave coupling at the interfaces is well explained using the reduced wave vector (k') in the first Brillouin zone, which is opposed to the energy flow, and agrees well with previous works dealing with negative refraction in photonic crystals.

  10. Modeling diffusion control on organic matter decomposition in unsaturated soil pore space

    NASA Astrophysics Data System (ADS)

    Vogel, Laure; Pot, Valérie; Garnier, Patricia; Vieublé-Gonod, Laure; Nunan, Naoise; Raynaud, Xavier; Chenu, Claire

    2014-05-01

    Soil organic matter decomposition is affected by soil structure and water content, but field and laboratory studies of this issue report highly variable outcomes. The variability could be explained by the discrepancy between the scale at which key processes occur and the scale of measurement. We think that the physical and biological interactions driving carbon transformation dynamics can be best understood at the pore scale. Because of the spatial disconnection between carbon sources and decomposers, the latter rely on nutrient transport unless they can actively move. In the hydrostatic case, diffusion in the soil pore space is thus thought to regulate biological activity. In unsaturated conditions, the heterogeneous distribution of water modifies diffusion pathways and rates, and thus affects diffusion control on decomposition. Innovative imaging and modeling tools offer new means to address these effects. We have developed a new model based on the association of a 3D Lattice-Boltzmann Model (LBM) with an adimensional decomposition module. We designed scenarios to study the impact of physical (geometry, saturation, decomposer position) and biological properties on decomposition. The model was applied to porous media with various morphologies. We selected three cubic images of 100 voxels per side from µCT-scanned images of an undisturbed soil sample at 68 µm resolution. We used the LBM to perform phase separation and obtained water phase distributions at equilibrium for different saturation indices. We then simulated the diffusion of a simple soluble substrate (glucose) and its consumption by bacteria. The same mass of glucose was added as a pulse at the beginning of all simulations. Bacteria were placed in a few voxels, either regularly spaced or concentrated close to or far from the glucose source. We modulated physiological features of the decomposers in order to weight them against abiotic conditions. We identified several effects creating unequal substrate access conditions for decomposers, hence inducing contrasting decomposition kinetics: the position of bacteria relative to the substrate diffusion pathways, the diffusion rate and hydraulic connectivity between bacteria and the substrate source, and local substrate enrichment due to restricted mass transfer. Physiological characteristics had a strong impact on decomposition only when glucose diffused easily, not when diffusion limitation prevailed. This suggests that carbon dynamics should not be considered to derive from decomposers' physiology alone but rather from the interactions of biological and physical processes at the microscale.
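
    The diffusion-consumption coupling described above can be caricatured on a 2-D grid. The sketch below uses explicit finite differences rather than the authors' Lattice-Boltzmann model, and the grid size, rates and colony placements are all invented.

```python
import numpy as np

# Minimal caricature of diffusion-limited decomposition on a 2-D water-filled
# grid: a glucose pulse diffuses (explicit 5-point finite differences) and is
# consumed by first-order uptake at fixed bacterial voxels.
n, steps = 32, 500
D, dt, uptake = 0.2, 1.0, 0.5          # diffusivity, time step, uptake rate

glucose = np.zeros((n, n))
glucose[n // 2, n // 2] = 100.0        # pulse of substrate at the centre
bacteria = np.zeros((n, n), bool)
bacteria[4, 4] = bacteria[4, n - 5] = bacteria[n - 5, 4] = True  # far colonies

consumed = 0.0
for _ in range(steps):
    # 5-point Laplacian with no-flux (reflecting) boundaries via edge padding
    padded = np.pad(glucose, 1, mode="edge")
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * glucose)
    glucose += D * dt * lap
    eaten = uptake * dt * glucose * bacteria   # uptake only at bacterial voxels
    glucose -= eaten
    consumed += eaten.sum()

print(f"fraction of pulse decomposed: {consumed / 100.0:.3f}")
```

    Moving the colonies closer to the pulse, or raising the diffusivity, increases the decomposed fraction: in this regime the kinetics are transport-limited rather than physiology-limited.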

  11. Representation of magnetic fields in space

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1975-01-01

    Several methods by which a magnetic field in space can be represented are reviewed with particular attention to problems of the observed geomagnetic field. Time dependence is assumed to be negligible, and five main classes of representation are described by vector potential, scalar potential, orthogonal vectors, Euler potentials, and expanded magnetic field.

  12. Knowledge Space: A Conceptual Basis for the Organization of Knowledge

    ERIC Educational Resources Information Center

    Meincke, Peter P. M.; Atherton, Pauline

    1976-01-01

    Proposes a new conceptual basis for visualizing the organization of information, or knowledge, which differentiates between the concept "vectors" for a field of knowledge represented in a multidimensional space, and the state "vectors" for a person based on his understanding of these concepts, and the representational…

  13. Adaptive multigrid domain decomposition solutions for viscous interacting flows

    NASA Technical Reports Server (NTRS)

    Rubin, Stanley G.; Srinivasan, Kumar

    1992-01-01

    Several viscous incompressible flows with strong pressure interaction and/or axial flow reversal are considered with an adaptive multigrid domain decomposition procedure. Specific examples include the triple deck structure surrounding the trailing edge of a flat plate, the flow recirculation in a trough geometry, and the flow in a rearward facing step channel. For the latter case, there are multiple recirculation zones, of different character, for laminar and turbulent flow conditions. A pressure-based form of flux-vector splitting is applied to the Navier-Stokes equations, which are represented by an implicit lowest-order reduced Navier-Stokes (RNS) system and a purely diffusive, higher-order, deferred-corrector. A trapezoidal or box-like form of discretization insures that all mass conservation properties are satisfied at interfacial and outflow boundaries, even for this primitive-variable, non-staggered grid computation.

  14. Anisotropic fractal media by vector calculus in non-integer dimensional space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarasov, Vasily E., E-mail: tarasov@theory.sinp.msu.ru

    2014-08-15

    A review of different approaches to describing anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. Differential operators of first and second order for fractional space and non-integer dimensional space are suggested; they are defined as inverse operations to integration in spaces with non-integer dimensions. A non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic media. The Poisson equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus to anisotropic fractal materials and media.

  15. Color TV: total variation methods for restoration of vector-valued images.

    PubMed

    Blomgren, P; Chan, T F

    1998-01-01

    We propose a new definition of the total variation (TV) norm for vector-valued functions that can be applied to restore color and other vector-valued images. The new TV norm has the desirable properties of 1) not penalizing discontinuities (edges) in the image, 2) being rotationally invariant in the image space, and 3) reducing to the usual TV norm in the scalar case. Some numerical experiments on denoising simple color images in red-green-blue (RGB) color space are presented.
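
    The vector TV norm defined above reduces to the scalar TV per channel and combines channels as a Euclidean norm; a minimal numpy sketch of that definition, using forward differences on a synthetic RGB edge image:

```python
import numpy as np

def tv_scalar(channel):
    """Isotropic total variation of a single channel (forward differences)."""
    dx = np.diff(channel, axis=1, append=channel[:, -1:])
    dy = np.diff(channel, axis=0, append=channel[-1:, :])
    return np.sqrt(dx**2 + dy**2).sum()

def tv_color(image):
    """Blomgren-Chan vector TV: Euclidean norm of the per-channel TV values."""
    return np.sqrt(sum(tv_scalar(image[..., c])**2
                       for c in range(image.shape[-1])))

# A sharp RGB edge: the discontinuity contributes only its total jump to the
# TV norm (edges are not over-penalized), while added noise raises it.
rng = np.random.default_rng(1)
edge = np.zeros((32, 32, 3))
edge[:, 16:, :] = 1.0
noisy = edge + 0.1 * rng.standard_normal(edge.shape)
print(tv_color(edge), tv_color(noisy))
```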

  16. A Vector Approach to Euclidean Geometry: Inner Product Spaces, Euclidean Geometry and Trigonometry, Volume 2. Teacher's Edition.

    ERIC Educational Resources Information Center

    Vaughan, Herbert E.; Szabo, Steven

    This is the teacher's edition of a text for the second year of a two-year high school geometry course. The course bases plane and solid geometry and trigonometry on the fact that the translations of a Euclidean space constitute a vector space which has an inner product. Congruence is a geometric topic reserved for Volume 2. Volume 2 opens with an…

  17. Vectors and Rotations in 3-Dimensions: Vector Algebra for the C++ Programmer

    DTIC Science & Technology

    2016-12-01

    This report describes 2 C++ classes: a Vector class for performing vector algebra in 3-dimensional space (3D) and a Rotation class for performing rotations of vectors in 3D. Each class is self-contained in a single header file (Vector.h and Rotation.h) so that a C… Keywords: vector, rotation, 3D, quaternion, C++ tools, rotation sequence, Euler angles, yaw, pitch, roll, orientation.

  18. Pushing Memory Bandwidth Limitations Through Efficient Implementations of Block-Krylov Space Solvers on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, M. A.; Strelchenko, Alexei; Vaquero, Alejandro

    Lattice quantum chromodynamics simulations in nuclear physics have benefited from a tremendous number of algorithmic advances such as multigrid and eigenvector deflation. These improve the time to solution but do not alleviate the intrinsic memory-bandwidth constraints of the matrix-vector operation dominating iterative solvers. Batching this operation for multiple vectors and exploiting cache and register blocking can yield a super-linear speed up. Block-Krylov solvers can naturally take advantage of such batched matrix-vector operations, further reducing the iterations to solution by sharing the Krylov space between solves. However, practical implementations typically suffer from the quadratic scaling in the number of vector-vector operations. Using the QUDA library, we present an implementation of a block-CG solver on NVIDIA GPUs which reduces the memory-bandwidth complexity of vector-vector operations from quadratic to linear. We present results for the HISQ discretization, showing a 5x speedup compared to highly-optimized independent Krylov solves on NVIDIA's SaturnV cluster.
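
    The batched matrix-vector idea behind block-Krylov solvers can be illustrated with a small dense block-CG in numpy. This is a sketch, not QUDA code: the SPD test matrix, sizes and tolerances are invented, and the bandwidth saving only materializes on hardware when `A @ P` is a genuinely batched kernel.

```python
import numpy as np

# O'Leary-style block conjugate gradient for an SPD system with several
# right-hand sides. Each iteration performs ONE batched product A @ P for
# all vectors at once, which is where block solvers save memory bandwidth.
def block_cg(A, B, tol=1e-10, max_iter=200):
    X = np.zeros_like(B)
    R = B - A @ X                               # block residual
    P = R.copy()                                # block search directions
    for _ in range(max_iter):
        AP = A @ P                              # single batched mat-mat product
        G = P.T @ AP                            # small k x k Gram matrix
        alpha = np.linalg.solve(G, P.T @ R)
        X += P @ alpha
        R = R - AP @ alpha
        if np.linalg.norm(R) < tol:
            break
        beta = -np.linalg.solve(G, AP.T @ R)    # A-conjugate the new block
        P = R + P @ beta
    return X

rng = np.random.default_rng(2)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100 * np.eye(100)                 # well-conditioned SPD matrix
B = rng.standard_normal((100, 4))               # four right-hand sides together
X = block_cg(A, B)
print("residual norm:", np.linalg.norm(A @ X - B))
```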

  19. Observation of Polarization Vortices in Momentum Space

    NASA Astrophysics Data System (ADS)

    Zhang, Yiwen; Chen, Ang; Liu, Wenzhe; Hsu, Chia Wei; Wang, Bo; Guan, Fang; Liu, Xiaohan; Shi, Lei; Lu, Ling; Zi, Jian

    2018-05-01

    The vortex, a fundamental topological excitation featuring the in-plane winding of a vector field, is important in various areas such as fluid dynamics, liquid crystals, and superconductors. Although commonly existing in nature, vortices were observed exclusively in real space. Here, we experimentally observed momentum-space vortices as the winding of far-field polarization vectors in the first Brillouin zone of periodic plasmonic structures. Using homemade polarization-resolved momentum-space imaging spectroscopy, we mapped out the dispersion, lifetime, and polarization of all radiative states at the visible wavelengths. The momentum-space vortices were experimentally identified by their winding patterns in the polarization-resolved isofrequency contours and their diverging radiative quality factors. Such polarization vortices can exist robustly on any periodic systems of vectorial fields, while they are not captured by the existing topological band theory developed for scalar fields. Our work provides a new way for designing high-Q plasmonic resonances, generating vector beams, and studying topological photonics in the momentum space.

  20. Observation of Polarization Vortices in Momentum Space.

    PubMed

    Zhang, Yiwen; Chen, Ang; Liu, Wenzhe; Hsu, Chia Wei; Wang, Bo; Guan, Fang; Liu, Xiaohan; Shi, Lei; Lu, Ling; Zi, Jian

    2018-05-04

    The vortex, a fundamental topological excitation featuring the in-plane winding of a vector field, is important in various areas such as fluid dynamics, liquid crystals, and superconductors. Although commonly existing in nature, vortices were observed exclusively in real space. Here, we experimentally observed momentum-space vortices as the winding of far-field polarization vectors in the first Brillouin zone of periodic plasmonic structures. Using homemade polarization-resolved momentum-space imaging spectroscopy, we mapped out the dispersion, lifetime, and polarization of all radiative states at the visible wavelengths. The momentum-space vortices were experimentally identified by their winding patterns in the polarization-resolved isofrequency contours and their diverging radiative quality factors. Such polarization vortices can exist robustly on any periodic systems of vectorial fields, while they are not captured by the existing topological band theory developed for scalar fields. Our work provides a new way for designing high-Q plasmonic resonances, generating vector beams, and studying topological photonics in the momentum space.

  1. Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)

    NASA Astrophysics Data System (ADS)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  2. Parallel CE/SE Computations via Domain Decomposition

    NASA Technical Reports Server (NTRS)

    Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung

    2000-01-01

    This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.

  3. A POD reduced order model for resolving angular direction in neutron/photon transport problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buchan, A.G., E-mail: andrew.buchan@imperial.ac.uk; Calloo, A.A.; Goffin, M.G.

    2015-09-01

    This article presents the first Reduced Order Model (ROM) that efficiently resolves the angular dimension of the time independent, mono-energetic Boltzmann Transport Equation (BTE). It is based on Proper Orthogonal Decomposition (POD) and uses the method of snapshots to form optimal basis functions for resolving the direction of particle travel in neutron/photon transport problems. A unique element of this work is that the snapshots are formed from the vector of angular coefficients relating to a high resolution expansion of the BTE's angular dimension. In addition, the individual snapshots are not recorded through time, as in standard POD, but instead they are recorded through space. In essence this work swaps the roles of the dimensions space and time in standard POD methods with angle and space, respectively. It is shown here how the POD model can be formed from the POD basis functions in a highly efficient manner. The model is then applied to two radiation problems; one involving the transport of radiation through a shield and the other through an infinite array of pins. Both problems are selected for their complex angular flux solutions in order to provide an appropriate demonstration of the model's capabilities. It is shown that the POD model can resolve these fluxes efficiently and accurately. In comparison to high resolution models, this POD model can reduce the size of a problem by up to two orders of magnitude without compromising accuracy. Solving times are also reduced by similar factors.
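
    The method of snapshots itself is a few lines of linear algebra; the sketch below builds a POD basis from synthetic snapshot vectors. The snapshots here are noisy sinusoids chosen for illustration; the paper's snapshots are angular-coefficient vectors recorded through space, which this toy does not reproduce.

```python
import numpy as np

# Method of snapshots: stack solution vectors as columns, SVD them, and keep
# the leading left singular vectors as the POD basis.
rng = np.random.default_rng(3)
x = np.linspace(0, np.pi, 200)
snapshots = np.column_stack([np.sin(k * x) + 0.01 * rng.standard_normal(200)
                             for k in (1, 2, 3) for _ in range(10)])

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1    # modes keeping 99.9% energy
basis = U[:, :r]

# Reduced-order use: project a new field onto the basis and reconstruct it.
field = np.sin(2 * x)
recon = basis @ (basis.T @ field)
print(r, "POD modes, relative reconstruction error",
      np.linalg.norm(field - recon) / np.linalg.norm(field))
```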

  4. Adaptive fault feature extraction from wayside acoustic signals from train bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Dingcheng; Entezami, Mani; Stewart, Edward; Roberts, Clive; Yu, Dejie

    2018-07-01

    Wayside acoustic detection of train bearing faults plays a significant role in maintaining safety in the railway transport system. However, the bearing fault information is normally masked by strong background noises and harmonic interferences generated by other components (e.g. axles and gears). In order to extract the bearing fault feature information effectively, a novel method called improved singular value decomposition (ISVD) with resonance-based signal sparse decomposition (RSSD), namely the ISVD-RSSD method, is proposed in this paper. A Savitzky-Golay (S-G) smoothing filter is used to filter singular vectors (SVs) in the ISVD method as an extension of the singular value decomposition (SVD) theorem. Hilbert spectrum entropy and a stepwise optimisation strategy are used to optimize the S-G filter's parameters. The RSSD method is able to nonlinearly decompose the wayside acoustic signal of a faulty train bearing into high and low resonance components, the latter of which contains bearing fault information. However, the high level of noise usually results in poor decomposition results from the RSSD method. Hence, the collected wayside acoustic signal must first be de-noised using the ISVD component of the ISVD-RSSD method. Next, the de-noised signal is decomposed by using the RSSD method. The obtained low resonance component is then demodulated with a Hilbert transform such that the bearing fault can be detected by observing Hilbert envelope spectra. The effectiveness of the ISVD-RSSD method is verified through both laboratory and field-based experiments, as described in the paper. The results indicate that the proposed method is superior to conventional spectrum analysis and ensemble empirical mode decomposition methods.
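
    The core ISVD idea above (smoothing singular vectors with an S-G filter inside an SVD-based denoiser) can be sketched on a 1-D signal via a Hankel embedding. The window length, polynomial order and truncation rank below are fixed by hand rather than optimised by Hilbert spectrum entropy as in the paper.

```python
import numpy as np
from scipy.linalg import hankel
from scipy.signal import savgol_filter

# Embed a noisy sinusoid in a Hankel matrix, SVD it, smooth the leading
# singular vectors with a Savitzky-Golay filter, and rebuild the signal.
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)

m = 50                                     # Hankel embedding dimension
H = hankel(noisy[:m], noisy[m - 1:])       # H[i, j] = noisy[i + j]
U, s, Vt = np.linalg.svd(H, full_matrices=False)

k = 2                                      # rank of a single sinusoid's subspace
U_s = savgol_filter(U[:, :k], 11, 3, axis=0)   # smooth the singular vectors
V_s = savgol_filter(Vt[:k], 11, 3, axis=1)
H_d = (U_s * s[:k]) @ V_s

# Average the anti-diagonals back into a 1-D signal.
denoised = np.array([np.mean(H_d[::-1].diagonal(i))
                     for i in range(-m + 1, H_d.shape[1])])
print("error before/after:",
      np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```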

  5. Analysis of structural response data using discrete modal filters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.

    1991-01-01

    The application of reciprocal modal vectors to the analysis of structural response data is described. Reciprocal modal vectors are constructed using an existing experimental modal model and an existing frequency response matrix of a structure, and can be assembled into a matrix that effectively transforms the data from the physical space to a modal space within a particular frequency range. In other words, the weighting matrix necessary for modal vector orthogonality (typically the mass matrix) is contained within the reciprocal modal matrix. The underlying goal of this work is mostly directed toward observing the modal state responses in the presence of unknown, possibly closed-loop forcing functions, thus having an impact on both operating data analysis techniques and independent modal space control techniques. This study investigates the behavior of reciprocal modal vectors as modal filters with respect to certain calculation parameters and their performance with perturbed system frequency response data.
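
    The defining property of the reciprocal modal matrix (it carries the mass weighting, so its transpose filters physical responses into modal coordinates) can be demonstrated on a small analytical system; the 3-DOF mass and stiffness matrices below are invented for illustration.

```python
import numpy as np
from scipy.linalg import eigh

# 3-DOF mass-spring chain: generalized eigenproblem K Phi = M Phi diag(w2),
# with scipy normalizing the modes so that Phi.T @ M @ Phi = I.
M = np.diag([2.0, 1.0, 3.0])
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

w2, Phi = eigh(K, M)
Psi = M @ Phi            # reciprocal modal matrix: Psi.T @ Phi = I

# Modal filtering: map a physical response back to modal coordinates.
response = Phi @ np.array([1.0, 0.0, 0.5])   # known modal mixture
modal = Psi.T @ response
print(np.round(modal, 6))                    # recovers the mixture [1, 0, 0.5]
```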

  6. Modeling Musical Context With Word2Vec

    NASA Astrophysics Data System (ADS)

    Herremans, Dorien; Chuan, Ching-Hua

    2017-05-01

    We present a semantic vector space model for capturing complex polyphonic musical context. A word2vec model based on a skip-gram representation with negative sampling was used to model slices of music from a dataset of Beethoven's piano sonatas. A visualization of the reduced vector space using t-distributed stochastic neighbor embedding shows that the resulting embedded vector space captures tonal relationships, even without any explicit information about the musical contents of the slices. Secondly, an excerpt of the Moonlight Sonata from Beethoven was altered by replacing slices based on context similarity. The resulting music shows that the selected slice based on similar word2vec context also has a relatively short tonal distance from the original slice.
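
    A skip-gram model with negative sampling reduces to a pair of embedding matrices trained with logistic losses. The numpy toy below treats integer token IDs as stand-ins for musical slices; the corpus and all hyperparameters are invented, and a real study would use a full word2vec implementation rather than this sketch.

```python
import numpy as np

# Toy skip-gram with negative sampling (SGNS) over "slices" reduced to IDs.
rng = np.random.default_rng(5)
corpus = rng.integers(0, 8, 500)              # pretend each ID is a chord slice
V, dim, window, negatives, lr = 8, 16, 2, 3, 0.05

W_in = 0.1 * rng.standard_normal((V, dim))    # "input" slice embeddings
W_out = 0.1 * rng.standard_normal((V, dim))   # "output" context embeddings

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(3):                            # a few epochs over the corpus
    for i, center in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j == i:
                continue
            # one positive context token plus a few random negatives
            targets = [corpus[j]] + list(rng.integers(0, V, negatives))
            labels = [1.0] + [0.0] * negatives
            for tgt, lab in zip(targets, labels):
                grad = sigmoid(W_in[center] @ W_out[tgt]) - lab
                g_in = grad * W_out[tgt]      # cache before updating W_out
                W_out[tgt] -= lr * grad * W_in[center]
                W_in[center] -= lr * g_in

print("slice embedding matrix:", W_in.shape)
```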

  7. PGA/MOEAD: a preference-guided evolutionary algorithm for multi-objective decision-making problems with interval-valued fuzzy preferences

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Lin, Lin; Zhong, ShiSheng

    2018-02-01

    In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. First, the interval-valued fuzzy preferences are decomposed into a series of precise and evenly distributed preference vectors (reference directions) regarding the objectives to be optimised, on the basis of a uniform design strategy. Then the preference information is further incorporated into the preference vectors based on the boundary intersection approach; meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponds to a decomposed preference vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference vectors within the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.

  8. Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.

    PubMed

    Liu, Jing; Zhou, Weidong; Juwono, Filbert H

    2017-05-08

    Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l₀-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate the white or colored Gaussian noises, the new method first obtains a low-complexity high-order cumulants based data matrix. Then, the proposed algorithm designs a joint smoothed function tailored for the MMV case, based on which a joint smoothed l₀-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, and thus the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm, which can solve the MMV problem and performs well for both white and colored Gaussian noises. The proposed joint algorithm is about two orders of magnitude faster than the l₁-norm minimization based methods, such as l₁-SVD (singular value decomposition), RV (real-valued) l₁-SVD and RV l₁-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.
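
    The single-measurement-vector smoothed-l₀ iteration that the joint algorithm generalises can be sketched in a few lines. All parameters below are illustrative; the paper's method additionally builds a cumulant-based data matrix and couples the smoothed function across measurement vectors, which this sketch does not.

```python
import numpy as np

# Smoothed-l0 sparse recovery: maximize a Gaussian surrogate of the l0
# "norm" by gradient steps, projecting back onto {x : A x = y}, while the
# smoothing width sigma is gradually decreased.
rng = np.random.default_rng(6)
m, n, k = 20, 50, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

A_pinv = np.linalg.pinv(A)
x = A_pinv @ y                       # minimum-l2 feasible starting point
sigma = 2 * np.abs(x).max()
mu = 1.0                             # step size for the surrogate ascent
while sigma > 1e-4:
    for _ in range(15):
        delta = x * np.exp(-x**2 / (2 * sigma**2))   # surrogate gradient
        x = x - mu * delta
        x = x - A_pinv @ (A @ x - y)                 # project onto A x = y
    sigma *= 0.7

print("recovery error:", np.linalg.norm(x - x_true))
```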

  9. A vector scanning processing technique for pulsed laser velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Edwards, Robert V.

    1989-01-01

    Pulsed laser sheet velocimetry yields nonintrusive measurements of two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high precision (1 pct) velocity estimates, but can require several hours of processing time on specialized array processors. Under some circumstances, a simple, fast, less accurate (approx. 5 pct) data reduction technique which also gives unambiguous velocity vector information is acceptable. A direct space domain processing technique was examined and found to be far superior to any other known technique in achieving the objectives listed above. It employs a new data coding and reduction technique, where the particle time history information is used directly. Further, it has no 180 deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 minutes on an 80386 based PC, producing a 2-D velocity vector map of the flow field. Hence, using this new space domain vector scanning (VS) technique, pulsed laser velocimetry data can be reduced quickly and reasonably accurately, without specialized array processing hardware.

  10. Online Artifact Removal for Brain-Computer Interfaces Using Support Vector Machines and Blind Source Separation

    PubMed Central

    Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang

    2007-01-01

    We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic (EOG) artifacts into individual components. An implementation of the selected BSS/ICA method with SVMs trained to classify EMG and EOG artifacts, which enables the usage of the method as a filter in measurements with online feedback, is described. This filter is evaluated on three BCI datasets as a proof-of-concept of the method. PMID:18288259

  11. Online artifact removal for brain-computer interfaces using support vector machines and blind source separation.

    PubMed

    Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang

    2007-01-01

    We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic (EOG) artifacts into individual components. An implementation of the selected BSS/ICA method with SVMs trained to classify EMG and EOG artifacts, which enables the usage of the method as a filter in measurements with online feedback, is described. This filter is evaluated on three BCI datasets as a proof-of-concept of the method.

  12. Geometric Representations of Condition Queries on Three-Dimensional Vector Fields

    NASA Technical Reports Server (NTRS)

    Henze, Chris

    1999-01-01

    Condition queries on distributed data ask where particular conditions are satisfied. It is possible to represent condition queries as geometric objects by plotting field data in various spaces derived from the data, and by selecting loci within these derived spaces which signify the desired conditions. Rather simple geometric partitions of derived spaces can represent complex condition queries because much complexity can be encapsulated in the derived space mapping itself. A geometric view of condition queries provides a useful conceptual unification, allowing one to intuitively understand many existing vector field feature detection algorithms -- and to design new ones -- as variations on a common theme. A geometric representation of condition queries also provides a simple and coherent basis for computer implementation, reducing a wide variety of existing and potential vector field feature detection techniques to a few simple geometric operations.
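
    As a concrete instance of a condition query through a derived space, one can map a 2-D vector field to a derived scalar (vorticity) and select a half-space of that derived space. The field, grid and threshold below are invented for illustration.

```python
import numpy as np

# Derived-space condition query: "where does |vorticity| exceed 1.0?"
# The geometric partition lives in the derived (vorticity) space, not in
# the raw (u, v) vector data.
n = 64
ys, xs = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = xs**2 + ys**2
u = -ys * np.exp(-5 * r2)          # vortex-like flow decaying outward
v = xs * np.exp(-5 * r2)

dx = 2 / (n - 1)
vorticity = np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)

query = np.abs(vorticity) > 1.0    # the condition query, as a region
iy, ix = np.nonzero(query)
print("cells satisfying query:", query.sum(),
      "centroid:", xs[iy, ix].mean(), ys[iy, ix].mean())
```

    The query region localizes the vortex core near the origin; a different feature detector is obtained simply by swapping the derived scalar or the selected locus.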

  13. Variational Koopman models: Slow collective variables and molecular kinetics from short off-equilibrium simulations

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Nüske, Feliks; Paul, Fabian; Klus, Stefan; Koltai, Péter; Noé, Frank

    2017-04-01

    Markov state models (MSMs) and master equation models are popular approaches to approximate molecular kinetics, equilibria, metastable states, and reaction coordinates in terms of a state space discretization usually obtained by clustering. Recently, a powerful generalization of MSMs has been introduced, the variational approach to conformation dynamics/molecular kinetics (VAC) and its special case the time-lagged independent component analysis (TICA), which allow us to approximate slow collective variables and molecular kinetics by linear combinations of smooth basis functions or order parameters. While it is known how to estimate MSMs from trajectories whose starting points are not sampled from an equilibrium ensemble, this has not yet been the case for TICA and the VAC. Previous estimates from short trajectories have been strongly biased and thus not variationally optimal. Here, we employ the Koopman operator theory and the ideas from dynamic mode decomposition to extend the VAC and TICA to non-equilibrium data. The main insight is that the VAC and TICA provide a coefficient matrix that we call Koopman model, as it approximates the underlying dynamical (Koopman) operator in conjunction with the basis set used. This Koopman model can be used to compute a stationary vector to reweight the data to equilibrium. From such a Koopman-reweighted sample, equilibrium expectation values and variationally optimal reversible Koopman models can be constructed even with short simulations. The Koopman model can be used to propagate densities, and its eigenvalue decomposition provides estimates of relaxation time scales and slow collective variables for dimension reduction. Koopman models are generalizations of Markov state models, TICA, and the linear VAC and allow molecular kinetics to be described without a cluster discretization.
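
    The plain (non-reweighted) Koopman/TICA estimate that this line of work starts from amounts to two covariance matrices and an eigenvalue problem. The sketch below uses a synthetic AR(1) trajectory with known relaxation factors; the paper's symmetrised, Koopman-reweighted estimators are not reproduced here.

```python
import numpy as np

# Koopman matrix estimate from a trajectory: K = C0^{-1} C_tau, whose
# eigenvalues approximate Koopman eigenvalues and give implied timescales.
rng = np.random.default_rng(7)
T, tau = 20000, 5
Amat = np.diag([0.99, 0.80])           # known per-mode relaxation factors
traj = np.zeros((T, 2))
for t in range(1, T):
    traj[t] = Amat @ traj[t - 1] + 0.1 * rng.standard_normal(2)

X, Y = traj[:-tau], traj[tau:]
X = X - X.mean(0)
Y = Y - Y.mean(0)
C0 = X.T @ X / len(X)                  # instantaneous covariance
Ct = X.T @ Y / len(X)                  # time-lagged covariance
K = np.linalg.solve(C0, Ct)            # Koopman model (regression estimate)

eigvals = np.sort(np.abs(np.linalg.eigvals(K)))[::-1]
timescales = -tau / np.log(eigvals)    # implied relaxation timescales
print("estimated eigenvalues:", eigvals)   # expect roughly 0.99^5 and 0.80^5
```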

  14. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1985-01-01

    Synopses are given for NASA supported work in computer science at the University of Virginia. Some areas of research include: error seeding as a testing method; knowledge representation for engineering design; analysis of faults in a multi-version software experiment; implementation of a parallel programming environment; two computer graphics systems for visualization of pressure distribution and convective density particles; task decomposition for multiple robot arms; vectorized incomplete conjugate gradient; and iterative methods for solving linear equations on the Flex/32.

  15. A unifying model of concurrent spatial and temporal modularity in muscle activity.

    PubMed

    Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien

    2014-02-01

    Modularity in the central nervous system (CNS), i.e., the brain capability to generate a wide repertoire of movements by combining a small number of building blocks ("modules"), is thought to underlie the control of movement. Numerous studies reported evidence for such a modular organization by identifying invariant muscle activation patterns across various tasks. However, previous studies relied on decompositions differing in both the nature and dimensionality of the identified modules. Here, we derive a single framework that encompasses all influential models of muscle activation modularity. We introduce a new model (named space-by-time decomposition) that factorizes muscle activations into concurrent spatial and temporal modules. To infer these modules, we develop an algorithm, referred to as sample-based nonnegative matrix trifactorization (sNM3F). We test the space-by-time decomposition on a comprehensive electromyographic dataset recorded during execution of arm pointing movements and show that it provides a low-dimensional yet accurate, highly flexible and task-relevant representation of muscle patterns. The extracted modules have a well characterized functional meaning and implement an efficient trade-off between replication of the original muscle patterns and task discriminability. Furthermore, they are compatible with the modules extracted from existing models, such as synchronous synergies and temporal primitives, and generalize time-varying synergies. Our results indicate the effectiveness of a simultaneous but separate condensation of spatial and temporal dimensions of muscle patterns. The space-by-time decomposition accommodates a unified view of the hierarchical mapping from task parameters to coordinated muscle activations, which could be employed as a reference framework for studying compositional motor control.
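
    As a simplified stand-in for the modular-decomposition idea above, one can factorize a synthetic trials-by-signals activity matrix with plain two-factor NMF using Lee-Seung multiplicative updates. This is not the paper's sNM3F trifactorization (which separates concurrent spatial and temporal modules); the data and module counts are invented.

```python
import numpy as np

# Plain NMF: V ~ W @ H with W, H >= 0, fit by multiplicative updates.
# Rows of V are "trials", columns flatten muscles x time; the k columns of W
# play the role of module activation levels, the rows of H of module profiles.
rng = np.random.default_rng(8)
true_W = rng.random((30, 3))           # 30 trials, 3 underlying modules
true_H = rng.random((3, 80))           # module profiles over muscles x time
V = true_W @ true_H

k = 3
W = rng.random((30, k)) + 1e-3
H = rng.random((k, 80)) + 1e-3
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # multiplicative updates keep
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # both factors nonnegative

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.3e}")
```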

  16. A note on φ-analytic conformal vector fields

    NASA Astrophysics Data System (ADS)

    Deshmukh, Sharief; Bin Turki, Nasser

    2017-09-01

    Taking clue from the analytic vector fields on a complex manifold, φ-analytic conformal vector fields are defined on a Riemannian manifold (Deshmukh and Al-Solamy in Colloq. Math. 112(1):157-161, 2008). In this paper, we use φ-analytic conformal vector fields to find new characterizations of the n-sphere Sn(c) and the Euclidean space (Rn,<,> ).

  17. High-temperature catalyst for catalytic combustion and decomposition

    NASA Technical Reports Server (NTRS)

    Mays, Jeffrey A. (Inventor); Lohner, Kevin A. (Inventor); Sevener, Kathleen M. (Inventor); Jensen, Jeff J. (Inventor)

    2005-01-01

    A robust, high-temperature mixed metal oxide catalyst for propellant decomposition, including high concentration hydrogen peroxide, and catalytic combustion, including methane-air mixtures. The uses include target, space, and on-orbit propulsion systems and low-emission terrestrial power and gas generation. The catalyst system requires no special preheat apparatus or special sequencing to meet start-up requirements, enabling a fast overall response time. Start-up transients of less than 1 second have been demonstrated with catalyst bed and propellant temperatures as low as 50 degrees Fahrenheit. The catalyst system has consistently demonstrated high decomposition efficiency, extremely low decomposition roughness, and long operating life on multiple test articles.

  18. Diffraction Theory and Almost Periodic Distributions

    NASA Astrophysics Data System (ADS)

    Strungaru, Nicolae; Terauds, Venta

    2016-09-01

    We introduce and study the notions of translation bounded tempered distributions, and autocorrelation for a tempered distribution. We further introduce the spaces of weakly, strongly and null weakly almost periodic tempered distributions and show that for weakly almost periodic tempered distributions the Eberlein decomposition holds. For translation bounded measures all these notions coincide with the classical ones. We show that tempered distributions with measure Fourier transform are weakly almost periodic and that for this class, the Eberlein decomposition is exactly the Fourier dual of the Lebesgue decomposition, with the Fourier-Bohr coefficients specifying the pure point part of the Fourier transform. We complete the project by looking at a few interesting examples.

  19. Structure and decomposition of the silver formate Ag(HCO{sub 2})

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puzan, Anna N., E-mail: anna_puzan@mail.ru; Baumer, Vyacheslav N.; Mateychenko, Pavel V.

    Crystal structure of the silver formate Ag(HCO{sub 2}) has been determined (orthorhombic, sp.gr. Pccn, a=7.1199(5), b=10.3737(4), c=6.4701(3) Å, V=477.88(4) Å{sup 3}, Z=8). The structure contains isolated formate ions and Ag{sub 2}{sup 2+} pairs which form layers in the (001) planes (the shortest Ag–Ag distance is 2.919 Å within a pair, and 3.421 and 3.716 Å between the nearest Ag atoms of adjacent pairs). Silver formate is an unstable compound which decomposes spontaneously over time. The decomposition was studied using Rietveld analysis of the powder diffraction patterns. It was concluded that the diffusion of Ag atoms leads to the formation of plate-like metal particles as nuclei in the (100) planes which settle parallel to the (001) planes of the silver formate matrix. - Highlights: • Silver formate Ag(HCO{sub 2}) was synthesized and characterized. • Layered packing of Ag–Ag pairs in the structure was found. • Decomposition of Ag(HCO{sub 2}) and formation of the metal phase were studied. • Rietveld-refined micro-structural characteristics during decomposition reveal the space relationship between the matrix structure and the forming Ag phase.

  20. Analyzing Transient Turbulence in a Stenosed Carotid Artery by Proper Orthogonal Decomposition

    NASA Astrophysics Data System (ADS)

    Grinberg, Leopold; Yakhot, Alexander; Karniadakis, George

    2009-11-01

    A high-resolution 3D simulation (involving 100M degrees of freedom) was employed to study transient turbulent flow in a carotid arterial bifurcation with a stenosed internal carotid artery (ICA). In the performed simulation an intermittent (in space and time) laminar-turbulent-laminar regime was observed. The simulation reveals the mechanism of the onset of turbulent flow in the stenosed ICA, where the narrowing in the artery generates a strong jet flow. Time- and space-window Proper Orthogonal Decomposition (POD) was applied to quantify the different flow regimes in the occluded artery. A simplified version of the POD analysis that utilizes 2D slices only - more appropriate in the clinical setting - was also investigated.
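
    Per window, snapshot POD reduces to an SVD of the mean-subtracted snapshot matrix; the singular values rank the modes by the energy they capture. A minimal numpy sketch on synthetic 1-D "flow" data, not the arterial fields of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshot matrix: each column is the field sampled at one instant.
# Synthetic data: two dominant spatial structures plus small noise.
n_points, n_snap = 200, 40
x = np.linspace(0.0, 1.0, n_points)
structures = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x)], axis=1)
coeffs = rng.normal(size=(2, n_snap)) * np.array([[5.0], [2.0]])
snapshots = structures @ coeffs + 0.01 * rng.normal(size=(n_points, n_snap))

# POD = SVD of the mean-subtracted snapshot matrix; columns of U are
# the POD modes, s**2 their energies.
mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

energy = s**2 / np.sum(s**2)
print("energy captured by first two POD modes:", energy[:2].sum())
```

    Restricting the snapshots to a time window or a spatial subregion before the SVD gives the time- and space-window variants used to separate laminar and turbulent regimes.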

  1. Augmenting the decomposition of EMG signals using supervised feature extraction techniques.

    PubMed

    Parsaei, Hossein; Gangeh, Mehrdad J; Stashuk, Daniel W; Kamel, Mohamed S

    2012-01-01

    Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its constituent motor unit potential trains (MUPTs). In this work, the possibility of improving the decomposition results using two supervised feature extraction methods, i.e., Fisher discriminant analysis (FDA) and supervised principal component analysis (SPCA), is explored. Using the MUP labels provided by a decomposition-based quantitative EMG system as training data for FDA and SPCA, the MUPs are transformed into a new feature space such that the MUPs of a single MU become as close as possible to each other while those created by different MUs become as far apart as possible. The MUPs are then reclassified using a certainty-based classification algorithm. Evaluation results using 10 simulated EMG signals comprised of 3-11 MUPTs demonstrate that FDA and SPCA on average improve the decomposition accuracy by 6%. The improvement for the most difficult-to-decompose signal is about 12%, which shows the proposed approach is most beneficial in the decomposition of more complex signals.
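
    The two-class core of the FDA step can be sketched in a few lines of numpy; the Gaussian clouds below are synthetic stand-ins for MUP feature vectors, not the simulated EMG data of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two simulated motor units: MUP feature vectors drawn from two
# overlapping Gaussian clouds in a 4-D feature space.
n = 200
mu1 = np.array([0.0, 0.0, 0.0, 0.0])
mu2 = np.array([1.5, 1.0, 0.5, 0.0])
X1 = rng.normal(size=(n, 4)) + mu1
X2 = rng.normal(size=(n, 4)) + mu2

# Two-class Fisher discriminant direction: w = Sw^{-1} (m2 - m1),
# where Sw is the pooled within-class scatter matrix.
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = np.cov(X1.T) * (n - 1) + np.cov(X2.T) * (n - 1)
w = np.linalg.solve(Sw, m2 - m1)

# Class separation (distance of means over within-class spread)
# before and after projecting onto the discriminant direction.
def separation(a, b):
    return abs(a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var()))

raw = separation(X1[:, 0], X2[:, 0])   # best single raw feature
fda = separation(X1 @ w, X2 @ w)       # projected onto FDA direction
print(f"separation, raw feature 0: {raw:.2f}; FDA projection: {fda:.2f}")
```

    The projection pools the discriminative information spread across all features into one axis, which is exactly what makes the subsequent certainty-based reclassification easier.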

  2. Microscopic observations of X-ray and gamma-ray induced decomposition of ammonium perchlorate crystals

    NASA Technical Reports Server (NTRS)

    Herley, P. J.; Levy, P. W.

    1972-01-01

    The X-ray and gamma-ray induced decomposition of ammonium perchlorate was studied by optical, transmission, and scanning electron microscopy. This material is a commonly used oxidizer in solid propellents which could be employed in deep-space probes, and where they will be subjected to a variety of radiations for as long as ten years. In some respects the radiation-induced damage closely resembles the effects produced by thermal decomposition, but in other respects the results differ markedly. Similar radiation and thermal effects include the following: (1) irregular or ill-defined circular etch pits are formed in both cases; (2) approximately the same size pits are produced; (3) the pit density is similar; (4) the c face is considerably more reactive than the m face; and (5) most importantly, many of the etch pits are aligned in crystallographic directions which are the same for thermal or radiolytic decomposition. Thus, dislocations play an important role in the radiolytic decomposition process.

  3. Exploratory Model Analysis of the Space Based Infrared System (SBIRS) Low Global Scheduler Problem

    DTIC Science & Technology

    1999-12-01

    NAVAL POSTGRADUATE SCHOOL, Monterey, California. Master's Thesis: Exploratory Model Analysis of the Space Based Infrared System (SBIRS) Low Global Scheduler Problem, December 1999. The non-linear least squares model is defined as Y = f(θ, t), where θ is the M-element parameter vector, Y is the N-element vector of all data, and t ...

  4. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
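
    The pattern of composing independent subdomain solves into a global iteration can be illustrated, in drastically simplified deterministic form, by a block-Jacobi (non-overlapping domain decomposition) preconditioner inside conjugate gradients for a 1-D Poisson problem. This is an illustrative sketch, not the intrusive polynomial chaos solver of the paper:

```python
import numpy as np

# 1-D Poisson: -u'' = f on (0,1), u(0)=u(1)=0, 3-point stencil.
n = 64
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)

# Two non-overlapping subdomains: the preconditioner solves each
# diagonal block independently, the simplest domain-decomposition
# preconditioner (in practice each block solve runs in parallel).
half = n // 2
blocks = [np.linalg.inv(A[:half, :half]), np.linalg.inv(A[half:, half:])]

def precond(r):
    return np.concatenate([blocks[0] @ r[:half], blocks[1] @ r[half:]])

def pcg(A, b, M, tol=1e-8, maxit=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it + 1
        z = M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

x, iters = pcg(A, f, precond)
print("iterations:", iters, "residual:", np.linalg.norm(f - A @ x))
```

    The stochastic solvers of the record follow the same template, with subdomain blocks that additionally carry the polynomial chaos coordinates and far more sophisticated coarse corrections.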

  5. The computational complexity of elliptic curve integer sub-decomposition (ISD) method

    NASA Astrophysics Data System (ADS)

    Ajeena, Ruma Kareem K.; Kamarulhaili, Hailiza

    2014-07-01

    The idea of the GLV method of Gallant, Lambert and Vanstone (Crypto 2001) is considered a foundation stone for building a new procedure to compute the elliptic curve scalar multiplication. This procedure, integer sub-decomposition (ISD), computes any multiple kP of an elliptic curve point P of large prime order n using two low-degree endomorphisms ψ1 and ψ2 of the elliptic curve E over the prime field Fp. The sub-decomposition of the values k1 and k2, not bounded by ±C√n, gives new integers k11, k12, k21 and k22 which are bounded by ±C√n and can be computed by solving the closest vector problem in a lattice. The percentage of successful computations of the scalar multiplication increases with the ISD method, which improves the computational efficiency in comparison with the general method for computing scalar multiplication on elliptic curves over prime fields. This paper presents the mechanism of the ISD method and sheds light mainly on the computational complexity of the ISD approach, determined by computing the cost of operations, which include elliptic curve operations and finite field operations.

  6. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  7. Fault identification of rotor-bearing system based on ensemble empirical mode decomposition and self-zero space projection analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Fan; Zhu, Zhencai; Li, Wei; Zhou, Gongbo; Chen, Guoan

    2014-07-01

    Accurately identifying faults in rotor-bearing systems by analyzing vibration signals, which are nonlinear and nonstationary, is challenging. To address this issue, a new approach based on ensemble empirical mode decomposition (EEMD) and self-zero space projection analysis is proposed in this paper. This method seeks to identify faults appearing in a rotor-bearing system using simple algebraic calculations and projection analyses. First, EEMD is applied to decompose the collected vibration signals into a set of intrinsic mode functions (IMFs) for features. Second, these extracted features under various mechanical health conditions are used to design a self-zero space matrix according to space projection analysis. Finally, the so-called projection indicators are calculated to identify the rotor-bearing system's faults with simple decision logic. Experiments are implemented to test the reliability and effectiveness of the proposed approach. The results show that this approach can accurately identify faults in rotor-bearing systems.
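
    The projection-indicator idea, classifying a test feature vector by the residual left after projecting it onto each condition's subspace, can be sketched with plain linear algebra. The features below are synthetic stand-ins, not EEMD-derived IMF features:

```python
import numpy as np

rng = np.random.default_rng(3)

# Three simulated machine conditions, each generating feature vectors
# that lie near a condition-specific 2-D subspace of an 8-D space.
d, k = 8, 2
bases = [np.linalg.qr(rng.normal(size=(d, k)))[0] for _ in range(3)]

def sample(cond, noise=0.02):
    # One feature vector from the given condition, plus sensor noise.
    return bases[cond] @ rng.normal(size=k) + noise * rng.normal(size=d)

# Projection indicator: norm of the residual after projecting the
# test feature onto each condition's subspace; smallest residual wins.
def classify(x):
    residuals = [np.linalg.norm(x - B @ (B.T @ x)) for B in bases]
    return int(np.argmin(residuals))

correct = sum(classify(sample(c)) == c for c in range(3) for _ in range(30))
print("accuracy:", correct / 90)
```

    A feature vector from its own condition projects almost entirely into that condition's subspace (near-zero residual, the "self-zero" behavior), while other subspaces leave a large residual, so the decision logic stays simple.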

  8. Multi-label learning with fuzzy hypergraph regularization for protein subcellular location prediction.

    PubMed

    Chen, Jing; Tang, Yuan Yan; Chen, C L Philip; Fang, Bin; Lin, Yuewei; Shang, Zhaowei

    2014-12-01

    Protein subcellular location prediction aims to predict the location where a protein resides within a cell using computational methods. Considering the main limitations of the existing methods, we propose a hierarchical multi-label learning model FHML for both single-location proteins and multi-location proteins. The latent concepts are extracted through feature space decomposition and label space decomposition under the nonnegative data factorization framework. The extracted latent concepts are used as the codebook to indirectly connect the protein features to their annotations. We construct dual fuzzy hypergraphs to capture the intrinsic high-order relations embedded in not only feature space, but also label space. Finally, the subcellular location annotation information is propagated from the labeled proteins to the unlabeled proteins by performing dual fuzzy hypergraph Laplacian regularization. The experimental results on the six protein benchmark datasets demonstrate the superiority of our proposed method by comparing it with the state-of-the-art methods, and illustrate the benefit of exploiting both feature correlations and label correlations.

  9. An Elementary Treatment of General Inner Products

    ERIC Educational Resources Information Center

    Graver, Jack E.

    2011-01-01

    A typical first course on linear algebra is usually restricted to vector spaces over the real numbers and the usual positive-definite inner product. Hence, the proof that dim(S)+ dim(S[perpendicular]) = dim("V") is not presented in a way that is generalizable to non-positive-definite inner products or to vector spaces over other fields. In this…

  10. Characterising laser beams with liquid crystal displays

    NASA Astrophysics Data System (ADS)

    Dudley, Angela; Naidoo, Darryl; Forbes, Andrew

    2016-02-01

    We show how one can determine the various properties of light, from the modal content of laser beams to decoding the information stored in optical fields carrying orbital angular momentum, by performing a modal decomposition. Although the modal decomposition of light has been known for a long time, applied mostly to pattern recognition, we illustrate how this technique can be implemented with the use of liquid-crystal displays. We show experimentally how liquid crystal displays can be used to infer the intensity, phase, wavefront, Poynting vector, and orbital angular momentum density of unknown optical fields. This measurement technique makes use of a single spatial light modulator (liquid crystal display), a Fourier transforming lens and detector (CCD or photo-diode). Such a diagnostic tool is extremely relevant to the real-time analysis of solid-state and fibre laser systems as well as mode division multiplexing as an emerging technology in optical communication.
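
    At its core a modal decomposition is an inner product of the unknown field with each basis mode, which is what the SLM-plus-lens arrangement evaluates optically. A numerical analogue with a simple orthonormal 1-D sine basis (illustrative only, not the Laguerre- or Hermite-Gaussian modes of a real beam):

```python
import numpy as np

# Decompose a 1-D "beam" profile into an orthonormal mode basis by
# overlap integrals, the discrete analogue of the optical inner product.
n_pts, n_modes = 1000, 6
x = np.linspace(0.0, 1.0, n_pts)
dx = x[1] - x[0]
basis = np.stack([np.sqrt(2.0) * np.sin((m + 1) * np.pi * x)
                  for m in range(n_modes)])

# "Unknown" field: a specific superposition of the first three modes.
c_true = np.array([0.8, 0.0, 0.6, 0.0, 0.0, 0.0])
field = c_true @ basis

# Overlap integrals <mode_m | field> recover the modal content.
c_est = basis @ field * dx
print("recovered coefficients:", np.round(c_est, 3))
```

    With the modal amplitudes (and, for complex fields, their phases) in hand, derived quantities such as wavefront, Poynting vector, and orbital angular momentum density follow from the known analytic form of the basis modes.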

  11. Telephone-quality pathological speech classification using empirical mode decomposition.

    PubMed

    Kaleem, M F; Ghoraani, B; Guergachi, A; Krishnan, S

    2011-01-01

    This paper presents a computationally simple and effective methodology based on empirical mode decomposition (EMD) for classification of telephone quality normal and pathological speech signals. EMD is used to decompose continuous normal and pathological speech signals into intrinsic mode functions, which are analyzed to extract physically meaningful and unique temporal and spectral features. Using continuous speech samples from a database of 51 normal and 161 pathological speakers, which has been modified to simulate telephone quality speech under different levels of noise, a linear classifier is used with the feature vector thus obtained to obtain a high classification accuracy, thereby demonstrating the effectiveness of the methodology. The classification accuracy reported in this paper (89.7% for signal-to-noise ratio 30 dB) is a significant improvement over previously reported results for the same task, and demonstrates the utility of our methodology for cost-effective remote voice pathology assessment over telephone channels.

  12. Hierarchical Diagnosis of Vocal Fold Disorders

    NASA Astrophysics Data System (ADS)

    Nikkhah-Bahrami, Mansour; Ahmadi-Noubari, Hossein; Seyed Aghazadeh, Babak; Khadivi Heris, Hossein

    This paper explores the use of hierarchical structure for diagnosis of vocal fold disorders. The hierarchical structure is initially used to train different second-level classifiers. At the first level normal and pathological signals have been distinguished. Next, pathological signals have been classified into neurogenic and organic vocal fold disorders. At the final level, vocal fold nodules have been distinguished from polyps in the organic disorders category. For feature selection at each level of the hierarchy, the reconstructed signal at each wavelet packet decomposition sub-band in 5 levels of decomposition with mother wavelet db10 is used to extract the nonlinear features of self-similarity and approximate entropy. Also, wavelet packet coefficients are used to measure energy and Shannon entropy features at different spectral sub-bands. The Davies-Bouldin criterion has been employed to find the most discriminant features. Finally, support vector machines have been adopted as classifiers at each level of the hierarchy, resulting in a diagnosis accuracy of 92%.
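
    Of the nonlinear features used here, approximate entropy has a compact definition: the log-likelihood that signal templates of length m which match within tolerance r still match at length m+1. A minimal numpy implementation (the parameter choices m=2, r=0.2·std are conventional defaults, not necessarily those of the paper):

```python
import numpy as np

def approximate_entropy(u, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D signal (r scaled by std)."""
    u = np.asarray(u, dtype=float)
    r = r * u.std()
    def phi(m):
        # All length-m template vectors of the signal.
        emb = np.lib.stride_tricks.sliding_window_view(u, m)
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        C = np.mean(dist <= r, axis=1)   # fraction of matching templates
        return np.mean(np.log(C))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(4)
t = np.arange(400)
regular = np.sin(2 * np.pi * t / 25)   # highly self-similar signal
irregular = rng.normal(size=400)       # no repeating structure
print("ApEn sine :", approximate_entropy(regular))
print("ApEn noise:", approximate_entropy(irregular))
```

    Regular signals score low and irregular ones high, which is why ApEn computed per wavelet packet sub-band separates normal from disordered phonation.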

  13. Glove-based approach to online signature verification.

    PubMed

    Kamel, Nidal S; Sayeed, Shohel; Ellis, Grant A

    2008-06-01

    Utilizing the multiple degrees of freedom offered by the data glove for each finger and the hand, a novel on-line signature verification system using the Singular Value Decomposition (SVD) numerical tool for signature classification and verification is presented. The proposed technique uses the SVD to find the r singular vectors sensing the maximal energy of the glove data matrix A, called the principal subspace, so that the effective dimensionality of A can be reduced. Having modeled the data glove signature through its r-principal subspace, signature authentication is performed by finding the angles between the different subspaces. A demonstration of the data glove is presented as an effective high-bandwidth data entry device for signature verification. This SVD-based signature verification technique is tested and its performance is shown to be able to recognize forged signatures with a false acceptance rate of less than 1.2%.
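
    The subspace comparison can be sketched directly: take the top-r right singular vectors of each glove-data matrix, then read the principal angles off the SVD of the product of the two orthonormal bases (the singular values are the cosines of the angles). The data below are synthetic stand-ins for glove recordings:

```python
import numpy as np

rng = np.random.default_rng(5)

# Each "signature" is a glove-data matrix (samples x sensors); its
# r-principal subspace is spanned by the top-r right singular vectors.
def principal_subspace(A, r=3):
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:r].T                       # orthonormal columns

def principal_angles(Q1, Q2):
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

def glove_matrix(Vt, noise=0.05):
    # 100 samples x 10 sensors with energy concentrated in span(Vt).
    C = rng.normal(size=(100, 3)) * np.array([10.0, 5.0, 3.0])
    return C @ Vt + noise * rng.normal(size=(100, 10))

V_signer = np.linalg.qr(rng.normal(size=(10, 3)))[0].T
V_forger = np.linalg.qr(rng.normal(size=(10, 3)))[0].T
genuine = glove_matrix(V_signer)
retry = glove_matrix(V_signer)     # same signer, fresh attempt
forgery = glove_matrix(V_forger)   # unrelated hand dynamics

Q = principal_subspace(genuine)
ang_retry = principal_angles(Q, principal_subspace(retry))
ang_forgery = principal_angles(Q, principal_subspace(forgery))
print("max angle, genuine vs retry  :", ang_retry.max())
print("max angle, genuine vs forgery:", ang_forgery.max())
```

    A repeat attempt by the same signer yields small angles to the enrolled subspace, while a forger's dynamics produce large ones, so a simple angle threshold implements the accept/reject decision.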

  14. Wigner functions on non-standard symplectic vector spaces

    NASA Astrophysics Data System (ADS)

    Dias, Nuno Costa; Prata, João Nuno

    2018-01-01

    We consider the Weyl quantization on a flat non-standard symplectic vector space. We focus mainly on the properties of the Wigner functions defined therein. In particular we show that the sets of Wigner functions on distinct symplectic spaces are different but have non-empty intersections. This extends previous results to arbitrary dimension and arbitrary (constant) symplectic structure. As a by-product we introduce and prove several concepts and results on non-standard symplectic spaces which generalize those on the standard symplectic space, namely, the symplectic spectrum, Williamson's theorem, and Narcowich-Wigner spectra. We also show how Wigner functions on non-standard symplectic spaces behave under the action of an arbitrary linear coordinate transformation.

  15. Curvilinear component analysis: a self-organizing neural network for nonlinear mapping of data sets.

    PubMed

    Demartines, P; Herault, J

    1997-01-01

    We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.

  16. Effective Numerical Methods for Solving Elliptical Problems in Strengthened Sobolev Spaces

    NASA Technical Reports Server (NTRS)

    D'yakonov, Eugene G.

    1996-01-01

    Fourth-order elliptic boundary value problems in the plane can be reduced to operator equations in Hilbert spaces G that are certain subspaces of the Sobolev space W_2^2(Ω) ≡ G^(2). Appearance of asymptotically optimal algorithms for Stokes-type problems made it natural to focus on an approach that considers rot w ≡ (D_2 w, −D_1 w) ≡ u as a new unknown vector-function, which automatically satisfies the condition div u = 0. In this work, we show that this approach can also be developed for an important class of problems from the theory of plates and shells with stiffeners. The main mathematical problem was to show that the well-known inf-sup condition (normal solvability of the divergence operator) holds for special Hilbert spaces. This result is also essential for certain hydrodynamics problems.

  17. Thrust vector control using electric actuation

    NASA Astrophysics Data System (ADS)

    Bechtel, Robert T.; Hall, David K.

    1995-01-01

    Presently, gimbaling of launch vehicle engines for thrust vector control is generally accomplished using a hydraulic system. In the case of the space shuttle solid rocket boosters and main engines, these systems are powered by hydrazine auxiliary power units. Use of electromechanical actuators would provide significant advantages in cost and maintenance. However, present energy source technologies such as batteries are heavy to the point of causing significant weight penalties. Utilizing capacitor technology developed by the Auburn University Space Power Institute in collaboration with the Auburn CCDS, Marshall Space Flight Center (MSFC) and Auburn are developing EMA system components with emphasis on high discharge rate energy sources compatible with space shuttle type thrust vector control requirements. Testing has been done at MSFC as part of EMA system tests with loads up to 66000 newtons for pulse times of several seconds. Results show such an approach to be feasible providing a potential for reduced weight and operations costs for new launch vehicles.

  18. Unconditionally energy stable time stepping scheme for Cahn–Morral equation: Application to multi-component spinodal decomposition and optimal space tiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir

    An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and the Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of the least perimeter periodic space partitioning problem. •Developing a penalization strategy to avoid trivial solutions. •Presentation of the MATLAB implementation of the introduced algorithm.
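
    The flavor of an energy-stable splitting can be shown on the two-component limit (the Cahn-Hilliard equation) with a linearly stabilized semi-implicit step: stiff linear terms implicit, the nonlinearity explicit, and a stabilization constant s ≥ 2 for gradient stability. This is a simplified 1-D periodic spectral sketch in the spirit of Eyre's splitting, not the Cahn-Morral scheme of the paper:

```python
import numpy as np

# 1-D Cahn-Hilliard u_t = (u^3 - u)_xx - eps^2 u_xxxx, periodic domain.
n, L = 256, 2 * np.pi
dt, eps2, s = 0.1, 1e-2, 2.0
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
k2 = k**2

rng = np.random.default_rng(6)
u = 0.05 * rng.normal(size=n)   # small random initial mixture

def energy(u):
    # Ginzburg-Landau free energy with spectral gradient.
    ux = np.fft.ifft(1j * k * np.fft.fft(u)).real
    return np.sum(0.25 * (u**2 - 1)**2 + 0.5 * eps2 * ux**2) * (L / n)

energies = [energy(u)]
for _ in range(200):
    u_hat = np.fft.fft(u)
    # Stabilized splitting: s*laplacian and the biharmonic term are
    # implicit, the nonlinear flux (u^3 - u) is explicit.
    rhs = u_hat * (1 + dt * s * k2) - dt * k2 * np.fft.fft(u**3 - u)
    u = np.fft.ifft(rhs / (1 + dt * s * k2 + dt * eps2 * k2**2)).real
    energies.append(energy(u))

print("initial energy:", energies[0], "final energy:", energies[-1])
```

    Each step costs two FFTs and a pointwise solve, yet the free energy decays even at time steps far beyond the explicit stability limit, which is the practical payoff of this class of schemes.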

  19. Dynamic analysis of suspension cable based on vector form intrinsic finite element method

    NASA Astrophysics Data System (ADS)

    Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun

    2017-10-01

    A vector finite element method is presented for the dynamic analysis of cable structures based on the vector form intrinsic finite element (VFIFE) method and the mechanical properties of suspension cables. Firstly, the suspension cable is discretized into elements by space points, and the mass and external forces of the cable are lumped at these space points. The structural form of the cable is described by the space points at different times. The equations of motion for the space points are established according to Newton's second law. Then, the element internal forces between the space points are derived from the flexible truss structure. Finally, the motion equations of the space points are solved by the central difference method with a reasonable time integration step. The tangential tension of the bearing rope in a test ropeway with moving concentrated loads is calculated and compared with experimental data. The results show that the tangential tension of the suspension cable with moving loads is consistent with the experimental data. The method has high computational precision and meets the requirements of engineering applications.
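
    The procedure described (mass lumped at space points, internal forces from elastic truss segments, central-difference integration) can be sketched for a cable sagging under gravity between two supports. All parameters below are illustrative, not those of the test ropeway:

```python
import numpy as np

# Lumped-mass cable: points connected by axial springs (truss elements),
# pinned at both ends, relaxed to its sag shape by damped central
# differences.
n_pts = 21                        # includes the two fixed ends
span, mass, g = 10.0, 0.5, 9.81   # m, kg per point, m/s^2
EA = 5e4                          # axial stiffness of each segment, N
L0 = 1.05 * span / (n_pts - 1)    # slack natural length -> the cable sags
dt, steps, damp = 1e-3, 10000, 2.0

x = np.stack([np.linspace(0.0, span, n_pts), np.zeros(n_pts)], axis=1)
x_prev = x.copy()                 # zero initial velocity

def internal_forces(x):
    d = x[1:] - x[:-1]                              # segment vectors
    ln = np.linalg.norm(d, axis=1, keepdims=True)
    f = EA * (ln - L0) / L0 * d / ln                # axial spring forces
    F = np.zeros_like(x)
    F[:-1] += f                                     # pull node i toward i+1
    F[1:] -= f                                      # and node i+1 toward i
    return F

for _ in range(steps):
    F = internal_forces(x)
    F[:, 1] -= mass * g                             # gravity on each mass
    v = (x - x_prev) / dt
    a = (F - damp * v) / mass
    x_new = 2 * x - x_prev + dt**2 * a              # central difference
    x_new[0], x_new[-1] = x[0], x[-1]               # pinned supports
    x_prev, x = x, x_new

sag = -x[n_pts // 2, 1]
print("midpoint sag:", sag)
```

    Because each point is advanced independently from purely local forces, the method needs no global stiffness matrix, which is the main appeal of the VFIFE formulation for strongly geometrically nonlinear cables.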

  20. Vector magnetic fields in sunspots. I - Stokes profile analysis using the Marshall Space Flight Center magnetograph

    NASA Technical Reports Server (NTRS)

    Balasubramaniam, K. S.; West, E. A.

    1991-01-01

    The Marshall Space Flight Center (MSFC) vector magnetograph is a tunable filter magnetograph with a bandpass of 125 mA. Results are presented of the inversion of Stokes polarization profiles observed with the MSFC vector magnetograph centered on a sunspot to recover the vector magnetic field parameters and thermodynamic parameters of the spectral line forming region from the Fe I 5250.2 A spectral line, using a nonlinear least-squares fitting technique. As a preliminary investigation, it is also shown that the recovered thermodynamic parameters could be better understood if fitted parameters like the Doppler width, opacity ratio, and damping constant were broken down into more basic quantities like temperature, microturbulent velocity, or a density parameter.

  1. Human visual system-based color image steganography using the contourlet transform

    NASA Astrophysics Data System (ADS)

    Abdul, W.; Carré, P.; Gaborit, P.

    2010-01-01

    We present a steganographic scheme based on the contourlet transform which uses the contrast sensitivity function (CSF) to control the strength of insertion of the hidden information in a perceptually uniform color space. The CIELAB color space is used as it is well suited for steganographic applications: any change in the CIELAB color space has a corresponding effect on the human visual system, which is important because steganographic schemes must be undetectable by the human visual system (HVS). The perceptual decomposition of the contourlet transform gives it a natural advantage over other decompositions, as it can be molded with respect to the human perception of different frequencies in an image. The evaluation of the imperceptibility of the steganographic scheme with respect to the color perception of the HVS is done using standard methods such as the structural similarity (SSIM) and CIEDE2000. The robustness of the inserted watermark is tested against JPEG compression.

  2. Characteristic classes of gauge systems

    NASA Astrophysics Data System (ADS)

    Lyakhovich, S. L.; Sharapov, A. A.

    2004-12-01

    We define and study invariants which can be uniformly constructed for any gauge system. By a gauge system we understand an (anti-)Poisson supermanifold provided with an odd Hamiltonian self-commuting vector field called a homological vector field. This definition encompasses all the cases usually included into the notion of a gauge theory in physics as well as some other similar (but different) structures like Lie or Courant algebroids. For Lagrangian gauge theories or Hamiltonian first class constrained systems, the homological vector field is identified with the classical BRST transformation operator. We define characteristic classes of a gauge system as universal cohomology classes of the homological vector field, which are uniformly constructed in terms of this vector field itself. Not striving to exhaustively classify all the characteristic classes in this work, we compute those invariants which are built up in terms of the first derivatives of the homological vector field. We also consider the cohomological operations in the space of all the characteristic classes. In particular, we show that the (anti-)Poisson bracket becomes trivial when applied to the space of all the characteristic classes; instead, the latter space can be endowed with another Lie bracket operation. Making use of this Lie bracket one can generate new characteristic classes involving higher derivatives of the homological vector field. The simplest characteristic classes are illustrated by the examples relating them to anomalies in the traditional BV or BFV-BRST theory and to characteristic classes of (singular) foliations.

  3. Unidirectional Wave Vector Manipulation in Two-Dimensional Space with an All Passive Acoustic Parity-Time-Symmetric Metamaterials Crystal

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Zhu, Xuefeng; Chen, Fei; Liang, Shanjun; Zhu, Jie

    2018-03-01

    Exploring the concept of non-Hermitian Hamiltonians respecting parity-time symmetry with classical wave systems is of great interest, as it enables the experimental investigation of parity-time-symmetric systems through the quantum-classical analogue. Here, we demonstrate unidirectional wave vector manipulation in two-dimensional space with an all-passive acoustic parity-time-symmetric metamaterials crystal. The metamaterials crystal is constructed by interleaving groove- and holey-structured acoustic metamaterials to provide an intrinsic parity-time-symmetric potential that is two-dimensionally extended and curved, which allows the flexible manipulation of unpaired wave vectors. At the transition point from the unbroken to the broken parity-time symmetry phase, the unidirectional sound focusing effect (along with reflectionless acoustic transparency in the opposite direction) is experimentally realized over the spectrum. This demonstration confirms the capability of passive acoustic systems to carry out experimental studies of general parity-time symmetry physics and further reveals the unique functionalities enabled by judiciously tailored unidirectional wave vectors in space.

  4. On Anholonomic Deformation, Geometry, and Differentiation

    DTIC Science & Technology

    2013-02-01

    … Γ^α_βχ are not necessarily Levi-Civita connection coefficients). The vector cross product × obeys, for two vectors V and W and two covectors α and β, … three-dimensional space. 2.2.5. Euclidean space. Let G_AB(X) = G_A · G_B be the metric tensor of the space. The Levi-Civita connection coefficients of G_AB … The curvature tensor of the Levi-Civita connection vanishes identically: ^G R^A_{BCD} = 2( ∂_{[B} ^G Γ^A_{C]D} + ^G Γ^A_{[B|E|} ^G Γ^E_{C]D} ) = 0. (43) In n …

  5. Differential Calculus on h-Deformed Spaces

    NASA Astrophysics Data System (ADS)

    Herlemont, Basile; Ogievetsky, Oleg

    2017-10-01

    We construct the rings of generalized differential operators on the h-deformed vector space of gl-type. In contrast to the q-deformed vector space, where the ring of differential operators is unique up to an isomorphism, the general ring of h-deformed differential operators Diff_{h,σ}(n) is labeled by a rational function σ in n variables, satisfying an over-determined system of finite-difference equations. We obtain the general solution of the system and describe some properties of the rings Diff_{h,σ}(n).

  6. Support vector machine based decision for mechanical fault condition monitoring in induction motor using an advanced Hilbert-Park transform.

    PubMed

    Ben Salem, Samira; Bacha, Khmais; Chaari, Abdelkader

    2012-09-01

    In this work we suggest an original fault signature based on an improved combination of the Hilbert and Park transforms. From this combination we create two fault signatures: the Hilbert modulus current space vector (HMCSV) and the Hilbert phase current space vector (HPCSV). These two fault signatures are subsequently analysed using the classical fast Fourier transform (FFT). The effects of mechanical faults on the HMCSV and HPCSV spectra are described, and the related frequencies are determined. The magnitudes of the spectral components relative to the studied faults (air-gap eccentricity and outer raceway ball bearing defect) are extracted in order to build the input vector necessary for training and testing the support vector machine, with the aim of automatically classifying the various states of the induction motor. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
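
    As an illustrative sketch of the envelope-based signature, the snippet below builds a Park (Concordia) current space vector from synthetic three-phase currents, extracts its envelope with the Hilbert transform, and FFTs the envelope; the sampling rate, fault modulation frequency (25 Hz), and modulation depth are assumptions for the demo, not the authors' experimental values:

```python
import numpy as np
from scipy.signal import hilbert

fs = 10_000                          # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
# Balanced three-phase currents with a 25 Hz amplitude modulation
# standing in for an air-gap eccentricity fault
am = 1 + 0.1 * np.cos(2 * np.pi * 25 * t)
ia = am * np.cos(2 * np.pi * 50 * t)
ib = am * np.cos(2 * np.pi * 50 * t - 2 * np.pi / 3)
ic = am * np.cos(2 * np.pi * 50 * t + 2 * np.pi / 3)

# Concordia (Park) transform onto the alpha-beta plane
i_alpha = np.sqrt(2 / 3) * (ia - 0.5 * ib - 0.5 * ic)
i_beta = np.sqrt(2 / 3) * (np.sqrt(3) / 2) * (ib - ic)

# Hilbert transform gives the analytic signal; its modulus is the
# envelope of the current component (an HMCSV-like signature)
envelope = np.abs(hilbert(i_alpha))

# Classical FFT of the zero-mean envelope exposes the fault frequency
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
fault_freq = freqs[np.argmax(spectrum)]
```

For a healthy (unmodulated) machine the envelope is flat and the spectrum shows no dominant line; here the modulation reappears as a 25 Hz peak.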

  7. Evolution of Lamb Vector as a Vortex Breaking into Turbulence.

    NASA Astrophysics Data System (ADS)

    Wu, J. Z.; Lu, X. Y.

    1996-11-01

    In an incompressible flow, either laminar or turbulent, the Lamb vector is solely responsible for nonlinear interactions. While its longitudinal part is balanced by the stagnation enthalpy, its transverse part is the unique source (as an external forcing in spectral space) that causes the flow to evolve. Moreover, in Reynolds-averaged flows the turbulent force can be derived exclusively from the Lamb vector instead of the full Reynolds stress tensor. Therefore, studying the evolution of the Lamb vector itself (both longitudinal and transverse parts) is of great interest. We have numerically examined this problem, taking the nonlinear destabilization of a viscous vortex as an example. In the later stage of this evolution we introduced a forcing to maintain a statistically steady state, and observed the Lamb vector behavior in the resulting fine-scale turbulence. The result is presented in both physical and spectral spaces.

  8. Optoelectronic Inner-Product Neural Associative Memory

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1993-01-01

    Optoelectronic apparatus acts as artificial neural network performing associative recall of binary images. Recall process is iterative one involving optical computation of inner products between binary input vector and one or more reference binary vectors in memory. Inner-product method requires far less memory space than matrix-vector method.
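
    A toy numpy sketch of inner-product associative recall (the apparatus performs these steps optically; the pattern dimensions, memory count, and bipolar thresholding here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Four stored reference patterns, one bipolar (+/-1) vector per row
M = rng.choice([-1, 1], size=(4, 64))

def recall(x, M, iters=5):
    """Iterative inner-product recall: correlate the input with every
    stored vector, re-weight the memories, and threshold."""
    for _ in range(iters):
        weights = M @ x                        # inner products (optical step)
        x = np.where(M.T @ weights >= 0, 1, -1)
    return x

# Corrupt a stored pattern in 4 of its 64 positions, then recall
probe = M[0].copy()
probe[:4] *= -1
out = recall(probe, M)
errors = int(np.sum(out != M[0]))
```

Note that only the stored vectors themselves are kept, not an outer-product weight matrix, which is the memory saving mentioned in the abstract.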

  9. [Surface electromyography signal classification using gray system theory].

    PubMed

    Xie, Hongbo; Ma, Congbin; Wang, Zhizhong; Huang, Hai

    2004-12-01

    A new method based on gray correlation was introduced to improve the identification rate in artificial limb control. The electromyography (EMG) signal was first transformed into the time-frequency domain by wavelet transform. Singular value decomposition (SVD) was then used to extract a feature vector from the wavelet coefficients for pattern recognition. The decision was made according to the maximum gray correlation coefficient. Compared with neural network recognition, this robust method has an almost equivalent recognition rate but much lower computational cost and requires fewer training samples.
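
    A hedged sketch of the pipeline, with a short-window FFT magnitude matrix standing in for the wavelet transform (the signals, window size, and Deng-style gray relational grade are illustrative assumptions):

```python
import numpy as np

def tf_matrix(sig, win=32):
    """Crude magnitude time-frequency matrix (short-window FFT), standing
    in here for the wavelet decomposition used in the paper."""
    frames = sig[: sig.size // win * win].reshape(-1, win)
    return np.abs(np.fft.rfft(frames, axis=1))

def svd_features(sig, k=5):
    """Feature vector: top-k singular values of the time-frequency matrix."""
    return np.linalg.svd(tf_matrix(sig), compute_uv=False)[:k]

def gray_grades(feat, refs, rho=0.5):
    """Deng-style gray relational grade of feat against each reference,
    with min/max taken over all comparisons."""
    D = {k: np.abs(feat - r) for k, r in refs.items()}
    gmin = min(d.min() for d in D.values())
    gmax = max(d.max() for d in D.values())
    return {k: float(np.mean((gmin + rho * gmax) / (d + rho * gmax)))
            for k, d in D.items()}

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024, endpoint=False)
templates = {
    "rest": np.sin(2 * np.pi * 20 * t),
    "grip": np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 120 * t),
}
refs = {k: svd_features(v) for k, v in templates.items()}

# Classify a noisy unknown signal by maximum gray relational grade
test_sig = templates["grip"] + 0.1 * rng.standard_normal(t.size)
grades = gray_grades(svd_features(test_sig), refs)
decision = max(grades, key=grades.get)
```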

  10. Note on the helicity decomposition of spin and orbital optical currents

    NASA Astrophysics Data System (ADS)

    Aiello, Andrea; Berry, M. V.

    2015-06-01

    In the helicity representation, the Poynting vector (current) for a monochromatic optical field, when calculated using either the electric or the magnetic field, separates into right-handed and left-handed contributions, with no cross-helicity contributions. Cross-helicity terms do appear in the orbital and spin contributions to the current. But when the electric and magnetic formulas are averaged (‘electric-magnetic democracy’), these terms cancel, restoring the separation into right-handed and left-handed currents for orbital and spin separately.

  11. Characteristic-based algorithms for flows in thermo-chemical nonequilibrium

    NASA Technical Reports Server (NTRS)

    Walters, Robert W.; Cinnella, Pasquale; Slack, David C.; Halt, David

    1990-01-01

    A generalized finite-rate chemistry algorithm with Steger-Warming, Van Leer, and Roe characteristic-based flux splittings is presented in three-dimensional generalized coordinates for the Navier-Stokes equations. Attention is placed on convergence to steady-state solutions with fully coupled chemistry. Time integration schemes including explicit m-stage Runge-Kutta, implicit approximate-factorization, relaxation and LU decomposition are investigated and compared in terms of residual reduction per unit of CPU time. Practical issues such as code vectorization and memory usage on modern supercomputers are discussed.

  12. Innovative PCDD/F-containing gas stream generating system applied in catalytic decomposition of gaseous dioxins over V2O5-WO3/TiO2-based catalysts.

    PubMed

    Yang, Chia Cheng; Chang, Shu Hao; Hong, Bao Zhen; Chi, Kai Hsien; Chang, Moo Been

    2008-10-01

    Development of effective PCDD/F (polychlorinated dibenzo-p-dioxin and dibenzofuran) control technologies is essential for environmental engineers and researchers. In this study, a PCDD/F-containing gas stream generating system was developed to investigate the efficiency and effectiveness of innovative PCDD/F control technologies. The system designed and constructed can stably generate a gas stream with PCDD/F concentrations ranging from 1.0 to 100 ng TEQ Nm⁻³, while reproducibility tests indicate that the PCDD/F recovery efficiencies are between 93% and 112%. This new PCDD/F-containing gas stream generating device is first applied in the investigation of catalytic PCDD/F control technology. The catalytic decomposition of PCDD/Fs was evaluated with two types of commercial V₂O₅-WO₃/TiO₂-based catalysts (catalyst A and catalyst B) at controlled temperature, water vapor content, and space velocity. 84% and 91% PCDD/F destruction efficiencies are achieved with catalysts A and B, respectively, at 280 °C with a space velocity of 5000 h⁻¹. The results also indicate that the presence of water vapor inhibits PCDD/F decomposition due to its competition with PCDD/F molecules for adsorption on the active vanadia sites for both catalysts. In addition, this study combined the integral reaction approach and the Mars-Van Krevelen model to calculate the activation energies of OCDD and OCDF decomposition. The activation energies of OCDD and OCDF decomposition via catalysis are calculated as 24.8 kJ mol⁻¹ and 25.2 kJ mol⁻¹, respectively.

  13. Community ecology in 3D: Tensor decomposition reveals spatio-temporal dynamics of large ecological communities.

    PubMed

    Frelat, Romain; Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A; Möllmann, Christian

    2017-01-01

    Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs.
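
    The PCA-to-tensor extension can be illustrated with a minimal higher-order SVD (HOSVD) in plain numpy; the space × time × species tensor below is synthetic, and this is not the specific tensor decomposition software used in the paper:

```python
import numpy as np

def unfold(T, mode):
    """Matricize a tensor along one mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Higher-order SVD: leading left singular vectors of each unfolding,
    plus the core tensor obtained by projecting T onto them."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, factors

# Synthetic space x time x species tensor: one dominant pattern plus noise
rng = np.random.default_rng(2)
a, b, c = rng.standard_normal(30), rng.standard_normal(20), rng.standard_normal(10)
T = np.einsum("i,j,k->ijk", a, b, c) + 0.01 * rng.standard_normal((30, 20, 10))

# A rank-(1,1,1) truncation recovers the dominant spatio-temporal pattern
core, factors = hosvd(T, ranks=(1, 1, 1))
recon = np.einsum("ia,jb,kc,abc->ijk", *factors, core)
rel_err = float(np.linalg.norm(T - recon) / np.linalg.norm(T))
```

The three factor matrices play the role of spatial patterns, temporal trends, and species loadings, in direct analogy with PCA scores and loadings.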

  14. Human pose tracking from monocular video by traversing an image motion mapped body pose manifold

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2010-01-01

    Tracking human pose from monocular video sequences is a challenging problem due to the large number of independent parameters affecting image appearance and nonlinear relationships between generating parameters and the resultant images. Unlike the current practice of fitting interpolation functions to point correspondences between underlying pose parameters and image appearance, we exploit the relationship between pose parameters and image motion flow vectors in a physically meaningful way. Change in image appearance due to pose change is realized as navigating a low dimensional submanifold of the infinite dimensional Lie group of diffeomorphisms of the two-dimensional sphere S². For small changes in pose, image motion flow vectors lie on the tangent space of the submanifold. Any observed image motion flow vector field is decomposed into the basis motion vector flow fields on the tangent space, and combination weights are used to update corresponding pose changes in the different dimensions of the pose parameter space. Image motion flow vectors are largely invariant to style changes in experiments with synthetic and real data where the subjects exhibit variation in appearance and clothing. The experiments demonstrate the robustness of our method (within ±4° of ground truth) to style variance.
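
    The decomposition of an observed flow field into basis motion fields on the tangent space reduces to a least-squares projection; a sketch with synthetic basis fields (dimensions and pose parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_px = 500     # pixels; stacking u and v flow components -> 1000 dims
n_pose = 4     # pose-parameter dimensions (one basis field each)

# Basis motion vector fields spanning the tangent space at the current pose
B = rng.standard_normal((2 * n_px, n_pose))

true_dpose = np.array([0.05, -0.02, 0.0, 0.01])               # pose change
flow = B @ true_dpose + 1e-3 * rng.standard_normal(2 * n_px)  # observed flow

# Least-squares combination weights of the basis fields give the pose update
dpose, *_ = np.linalg.lstsq(B, flow, rcond=None)
```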

  15. A unified development of several techniques for the representation of random vectors and data sets

    NASA Technical Reports Server (NTRS)

    Bundick, W. T.

    1973-01-01

    Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
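
    A small numpy sketch of the construction: the orthonormal vectors are the eigenvectors of the sample second-moment (correlation) operator, and the residual MSE of a truncated expansion equals the sum of the discarded eigenvalues (the data and dimensions below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
# Data vectors drawn from an anisotropic distribution (rows = samples)
X = rng.standard_normal((1000, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])

# Orthonormal basis = eigenvectors of the sample second-moment operator
R = X.T @ X / len(X)
evals, evecs = np.linalg.eigh(R)
order = np.argsort(evals)[::-1]          # sort descending
evals, evecs = evals[order], evecs[:, order]

# Truncated expansion with k terms; the mean squared error equals the
# sum of the discarded eigenvalues (the optimality property)
k = 1
coeffs = X @ evecs[:, :k]
X_hat = coeffs @ evecs[:, :k].T
mse = np.mean(np.sum((X - X_hat) ** 2, axis=1))
```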

  16. Operator pencil passing through a given operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biggs, A., E-mail: adam.biggs@student.manchester.ac.uk; Khudaverdian, H. M., E-mail: khudian@manchester.ac.uk

    Let Δ be a linear differential operator acting on the space of densities of a given weight λ₀ on a manifold M. One can consider a pencil of operators Π̂(Δ) = (Δ_λ) passing through the operator Δ such that any Δ_λ is a linear differential operator acting on densities of weight λ. This pencil can be identified with a linear differential operator Δ̂ acting on the algebra of densities of all weights. The existence of an invariant scalar product in the algebra of densities implies a natural decomposition of operators, i.e., pencils of self-adjoint and anti-self-adjoint operators. We study lifting maps that are, on one hand, equivariant with respect to divergenceless vector fields and, on the other hand, take values in self-adjoint or anti-self-adjoint operators. In particular, we analyze the relation between these two concepts, and apply it to the study of diff(M)-equivariant liftings. Finally, we briefly consider the case of liftings equivariant with respect to the algebra of projective transformations and describe all regular self-adjoint and anti-self-adjoint liftings. Our constructions can be considered as a generalisation of equivariant quantisation.

  17. Complex mode indication function and its applications to spatial domain parameter estimation

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The concept of CMIF is developed by performing singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line. The CMIF is defined as the eigenvalues, which are the squares of the singular values, solved from the normal matrix formed from the FRF matrix, [H(jω)]^H [H(jω)], at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of a complex system. The CMIF identifies modes by showing the physical magnitude of each mode and the damped natural frequency for each root. Since multiple-reference data are used in CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes and modal participation vectors. Since CMIF works in the spatial domain, unevenly spaced frequency data, such as data from spatial sine testing, can be used. A second-stage procedure for accurate damped natural frequency and damping estimation, as well as mode shape scaling, is also discussed in this paper.
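
    A sketch of the CMIF computation on a synthetic two-reference, three-output FRF matrix (the modal model, mode shapes, and damping values below are illustrative assumptions):

```python
import numpy as np

# Synthetic 2-reference (input), 3-output FRF matrix of a 2-mode system
freqs = np.linspace(0.1, 20.0, 400)                     # Hz
f_nat = np.array([5.0, 12.0])                           # damped natural freqs
zeta = 0.02                                             # modal damping ratio
phi = np.array([[1.0, 1.0], [0.8, -0.5], [0.4, -1.0]])  # mode shapes (3 outputs)
part = np.array([[1.0, 0.3], [0.2, 1.0]])               # participation (2 refs)

omega = 2 * np.pi * freqs
wn = 2 * np.pi * f_nat
H = np.zeros((freqs.size, 3, 2), dtype=complex)
for r in range(2):
    denom = wn[r] ** 2 - omega ** 2 + 2j * zeta * wn[r] * omega
    H += np.einsum("i,j,w->wij", phi[:, r], part[:, r], 1.0 / denom)

# CMIF: squared singular values of [H(jw)] at each spectral line
cmif = np.array([np.linalg.svd(Hw, compute_uv=False) ** 2 for Hw in H])

# Peaks of the first CMIF curve indicate the damped natural frequencies
peak_idx = [i for i in range(1, freqs.size - 1)
            if cmif[i, 0] > cmif[i - 1, 0] and cmif[i, 0] > cmif[i + 1, 0]]
peak_freqs = freqs[peak_idx]
```

With two references the second CMIF curve would also peak at a repeated root, which is how repeated modes are detected.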

  18. Peripheral transverse densities of the baryon octet from chiral effective field theory and dispersion analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alarcón, J. M.; Hiller Blin, A. N.; Vicente Vacas, M. J.

    2017-05-08

    The baryon electromagnetic form factors are expressed in terms of two-dimensional densities describing the distribution of charge and magnetization in transverse space at fixed light-front time. In this paper, we calculate the transverse densities of the spin-1/2 flavor-octet baryons at peripheral distances b = O(M_π⁻¹) using methods of relativistic chiral effective field theory (χEFT) and dispersion analysis. The densities are represented as dispersive integrals over the imaginary parts of the form factors in the timelike region (spectral functions). The isovector spectral functions on the two-pion cut t > 4M_π² are calculated using relativistic χEFT including octet and decuplet baryons. The χEFT calculations are extended into the ρ meson mass region using an N/D method that incorporates the pion electromagnetic form factor data. The isoscalar spectral functions are modeled by vector meson poles. We compute the peripheral charge and magnetization densities in the octet baryon states, estimate the uncertainties, and determine the quark flavor decomposition. Finally, the approach can be extended to baryon form factors of other operators and the moments of generalized parton distributions.

  19. Intelligent diagnosis of short hydraulic signal based on improved EEMD and SVM with few low-dimensional training samples

    NASA Astrophysics Data System (ADS)

    Zhang, Meijun; Tang, Jian; Zhang, Xiaoming; Zhang, Jiaojiao

    2016-03-01

    The high classification accuracy of an intelligent diagnosis method often requires a large number of training samples with high-dimensional eigenvectors, and the characteristics of the signal must be extracted accurately. Although the existing EMD (empirical mode decomposition) and EEMD (ensemble empirical mode decomposition) are suitable for processing non-stationary and non-linear signals, their decomposition accuracy becomes very poor for short signals such as hydraulic impact signals. An improved EEMD is proposed specifically for short hydraulic impact signals. The improvements of this new EEMD are mainly reflected in four aspects: self-adaptive de-noising based on EEMD, signal extension based on SVM (support vector machine), extreme center fitting based on cubic spline interpolation, and pseudo component exclusion based on cross-correlation analysis. After the energy eigenvector is extracted from the result of the improved EEMD, fault pattern recognition based on SVM with a small number of low-dimensional training samples is studied. Finally, the diagnostic ability of the improved EEMD+SVM method is compared with the EEMD+SVM and EMD+SVM methods; its accuracy is distinctly higher than that of the other two methods whether the dimension of the eigenvectors is low or high. The improved EEMD is well suited to the decomposition of short signals, such as hydraulic impact signals, and its combination with SVM is highly effective for the diagnosis of hydraulic impact faults.

  20. Solid-state reaction kinetics of neodymium doped magnesium hydrogen phosphate system

    NASA Astrophysics Data System (ADS)

    Gupta, Rashmi; Slathia, Goldy; Bamzai, K. K.

    2018-05-01

    Neodymium doped magnesium hydrogen phosphate (NdMHP) crystals were grown by using the gel encapsulation technique. Structural characterization of the grown crystals has been carried out by single crystal X-ray diffraction (XRD), which revealed that NdMHP crystals crystallize in the orthorhombic crystal system with space group Pbca. The kinetics of decomposition of the grown crystals have been studied by non-isothermal analysis. The decomposition temperatures and weight losses have been estimated from thermogravimetric/differential thermal analysis (TG/DTA) in conjunction with DSC studies. The various steps involved in the thermal decomposition of the material have been analysed using the Horowitz-Metzger, Coats-Redfern and Piloyan-Novikova equations to evaluate the various kinetic parameters.

  1. Thermodynamic Changes in the Coal Matrix - Gas - Moisture System Under Pressure Release and Phase Transformations of Gas Hydrates

    NASA Astrophysics Data System (ADS)

    Dyrdin, V. V.; Smirnov, V. G.; Kim, T. L.; Manakov, A. Yu.; Fofanov, A. A.; Kartopolova, I. S.

    2017-06-01

    The physical processes occurring in the coal - natural gas system under gas pressure release were studied experimentally. The possible presence of gas hydrates in the inner space of natural coal was shown; their decomposition leads to an increase in the amount of gas passing into the free state. The decomposition of gas hydrates can be caused either by an increase in seam temperature or by a pressure decrease below the gas hydrate equilibrium curve. The contribution of methane released during gas hydrate decomposition should be taken into account in the design of safe mining technologies for coal seams prone to gas-dynamic phenomena.

  2. A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging

    PubMed Central

    Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.

    2012-01-01

    Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, "CS+GRAPPA," to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets, reconstructed each subset using CS, and averaged the results to get a final CS k-space reconstruction. We used both a standard CS reconstruction and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS with two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA, using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065

  3. An Environmental Data Set for Vector-Borne Disease Modeling and Epidemiology

    PubMed Central

    Chabot-Couture, Guillaume; Nigmatulina, Karima; Eckhoff, Philip

    2014-01-01

    Understanding the environmental conditions of disease transmission is important in the study of vector-borne diseases. Low- and middle-income countries bear a significant portion of the disease burden; but data about weather conditions in those countries can be sparse and difficult to reconstruct. Here, we describe methods to assemble high-resolution gridded time series data sets of air temperature, relative humidity, land temperature, and rainfall for such areas; and we test these methods on the island of Madagascar. Air temperature and relative humidity were constructed using statistical interpolation of weather station measurements; the resulting median 95th percentile absolute errors were 2.75°C and 16.6%. Missing pixels from the MODIS11 remote sensing land temperature product were estimated using Fourier decomposition and time-series analysis; thus providing an alternative to the 8-day and 30-day aggregated products. The RFE 2.0 remote sensing rainfall estimator was characterized by comparing it with multiple interpolated rainfall products, and we observed significant differences in temporal and spatial heterogeneity relevant to vector-borne disease modeling. PMID:24755954
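
    The Fourier-decomposition gap-filling idea can be sketched as a harmonic regression fitted only to the cloud-free samples; the daily series below is synthetic and the harmonic count is an assumption, with all MODIS-specific handling omitted:

```python
import numpy as np

rng = np.random.default_rng(7)
days = np.arange(365.0)
# Synthetic daily land-temperature series with an annual harmonic
truth = 25 + 8 * np.sin(2 * np.pi * days / 365 + 0.7)
obs = truth + 0.3 * rng.standard_normal(days.size)
mask = rng.random(days.size) > 0.4        # ~60% of days observed (cloud gaps)

def design(t, n_harm=2, period=365.0):
    """Harmonic (Fourier) regression design matrix."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harm + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    return np.column_stack(cols)

# Fit the harmonic model to observed samples only, then evaluate everywhere
coef, *_ = np.linalg.lstsq(design(days[mask]), obs[mask], rcond=None)
filled = design(days) @ coef
rmse_gaps = float(np.sqrt(np.mean((filled[~mask] - truth[~mask]) ** 2)))
```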

  4. Analysis of programming properties and the row-column generation method for 1-norm support vector machines.

    PubMed

    Zhang, Li; Zhou, WeiDa

    2013-12-01

    This paper deals with fast methods for training a 1-norm support vector machine (SVM). First, we define a specific class of linear programming with many sparse constraints, i.e., row-column sparse constraint linear programming (RCSC-LP). In nature, the 1-norm SVM is a sort of RCSC-LP. In order to construct subproblems for RCSC-LP and solve them, a family of row-column generation (RCG) methods is introduced. RCG methods belong to a category of decomposition techniques, and perform row and column generations in a parallel fashion. Specifically, for the 1-norm SVM, the maximum size of the subproblems of RCG is identical to the number of support vectors (SVs). We also introduce a semi-deleting rule for RCG methods and prove the convergence of RCG methods when using the semi-deleting rule. Experimental results on toy data and real-world datasets illustrate that it is efficient to use RCG to train the 1-norm SVM, especially when the number of SVs is small. Copyright © 2013 Elsevier Ltd. All rights reserved.
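
    The 1-norm SVM itself is a linear program: minimize ‖w‖₁ + C·Σξᵢ subject to yᵢ(w·xᵢ + b) ≥ 1 - ξᵢ, ξᵢ ≥ 0. The sketch below solves the full LP with scipy rather than the paper's RCG method (which would generate rows and columns incrementally); the toy data and the value of C are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
# Linearly separable 2-D toy data
X = np.vstack([rng.standard_normal((20, 2)) + 2.0,
               rng.standard_normal((20, 2)) - 2.0])
y = np.array([1.0] * 20 + [-1.0] * 20)
n, d = X.shape
C = 1.0

# Variables z = [u (d), v (d), b (1), xi (n)] with w = u - v; u, v, xi >= 0
c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
# y_i (w.x_i + b) + xi_i >= 1  rewritten as  -y_i(w.x_i + b) - xi_i <= -1
A_ub = np.hstack([-(y[:, None] * X), y[:, None] * X, -y[:, None], -np.eye(n)])
b_ub = -np.ones(n)
bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

w = res.x[:d] - res.x[d:2 * d]
b = res.x[2 * d]
acc = float(np.mean(np.sign(X @ w + b) == y))
```

The u - v split is the standard trick for expressing the absolute values in ‖w‖₁ linearly; the 1-norm objective is what drives many components of w exactly to zero.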

  5. Catalytic Decomposition of Hydroxylammonium Nitrate Ionic Liquid: Enhancement of NO Formation.

    PubMed

    Chambreau, Steven D; Popolan-Vaida, Denisia M; Vaghjiani, Ghanshyam L; Leone, Stephen R

    2017-05-18

    Hydroxylammonium nitrate (HAN) is a promising candidate to replace highly toxic hydrazine in monopropellant thruster space applications. The reactivity of HAN aerosols on heated copper and iridium targets was investigated using tunable vacuum ultraviolet photoionization time-of-flight aerosol mass spectrometry. The reaction products were identified by their mass-to-charge ratios and their ionization energies. Products include NH₃, H₂O, NO, hydroxylamine (HA), HNO₃, and a small amount of NO₂ at high temperature. No N₂O was detected under these experimental conditions, despite the fact that N₂O is one of the expected products according to the generally accepted thermal decomposition mechanism of HAN. Upon introduction of iridium catalyst, a significant enhancement of the NO/HA ratio was observed. This observation indicates that the formation of NO via decomposition of HA is an important pathway in the catalytic decomposition of HAN.

  6. Signal decomposition for surrogate modeling of a constrained ultrasonic design space

    NASA Astrophysics Data System (ADS)

    Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.

    2018-04-01

    The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures are managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model using a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data, and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
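
    The chirplet reduction can be sketched as matched projection onto a small dictionary of Gaussian-envelope chirplet atoms (the atom parameterization and search grid are illustrative assumptions, not the authors' dictionary):

```python
import numpy as np

def chirplet(t, tc, fc, c, s):
    """Gaussian-envelope chirplet: centre tc, start frequency fc,
    chirp rate c, envelope width s."""
    tau = t - tc
    return np.exp(-0.5 * (tau / s) ** 2) * np.cos(
        2 * np.pi * (fc * tau + 0.5 * c * tau ** 2))

t = np.linspace(0, 1, 2000, endpoint=False)
# "Measured" waveform: a single chirplet, standing in for one B-scan trace
sig = chirplet(t, tc=0.5, fc=40.0, c=20.0, s=0.1)

# Matched projection over a small (fc, c) grid; the best atom is the one
# with the largest normalized correlation against the signal
best, best_corr = None, -np.inf
for fc in np.arange(20.0, 61.0, 5.0):
    for c in np.arange(0.0, 41.0, 10.0):
        atom = chirplet(t, 0.5, fc, c, 0.1)
        corr = abs(np.dot(sig, atom)) / (np.linalg.norm(sig) * np.linalg.norm(atom))
        if corr > best_corr:
            best, best_corr = (fc, c), corr

fc_hat, c_hat = best
```

The recovered (fc, c) pair is the low-dimensional coordinate in which a surrogate model can interpolate between simulated scans.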

  7. Computational model of a vector-mediated epidemic

    NASA Astrophysics Data System (ADS)

    Dickman, Adriana Gomes; Dickman, Ronald

    2015-05-01

    We discuss a lattice model of vector-mediated transmission of a disease to illustrate how simulations can be applied in epidemiology. The population consists of two species, human hosts and vectors, which contract the disease from one another. Hosts are sedentary, while vectors (mosquitoes) diffuse in space. Examples of such diseases are malaria, dengue fever, and Pierce's disease in vineyards. The model exhibits a phase transition between an absorbing (infection free) phase and an active one as parameters such as infection rates and vector density are varied.
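
    A minimal numpy sketch of such a host-vector lattice model (the lattice size, rates, and the simplification that vectors remain infected are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
L_side, n_vec, steps = 20, 400, 200
lam = 0.5   # transmission probability per encounter (both directions)
rec = 0.05  # host recovery probability per time step

hosts = np.zeros((L_side, L_side), dtype=bool)   # True = infected host
hosts[L_side // 2, L_side // 2] = True           # single initial infection
vec_pos = rng.integers(0, L_side, size=(n_vec, 2))
vec_inf = np.zeros(n_vec, dtype=bool)

history = []
for _ in range(steps):
    # Vectors diffuse: random nearest-neighbour hops, periodic boundaries
    hops = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])[rng.integers(0, 4, n_vec)]
    vec_pos = (vec_pos + hops) % L_side
    r, c = vec_pos[:, 0], vec_pos[:, 1]
    # Host -> vector transmission at shared sites
    vec_inf |= hosts[r, c] & (rng.random(n_vec) < lam)
    # Vector -> host transmission
    bite = vec_inf & (rng.random(n_vec) < lam)
    hosts[r[bite], c[bite]] = True
    # Sedentary hosts recover; vectors stay infected (a simplification)
    hosts &= rng.random((L_side, L_side)) >= rec
    history.append(int(hosts.sum()))
```

Sweeping lam or n_vec and checking whether `history` decays to zero or fluctuates around a positive level locates the absorbing-to-active phase transition.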

  8. Modeling Interferometric Structures with Birefringent Elements: A Linear Vector-Space Formalism

    DTIC Science & Technology

    2013-11-12

    Frigo, Nicholas J.; Urick, Vincent J.; Bucholtz, Frank. Naval Research Laboratory, Code 5650, 4555 Overlook Avenue, SW; Photonics Technology Branch, Optical Sciences Division; Annapolis, MD.

  9. On the n-symplectic structure of faithful irreducible representations

    NASA Astrophysics Data System (ADS)

    Norris, L. K.

    2017-04-01

    Each faithful irreducible representation of an N-dimensional vector space V₁ on an n-dimensional vector space V₂ is shown to define a unique irreducible n-symplectic structure on the product manifold V₁ × V₂. The basic details of the associated Poisson algebra are developed for the special case N = n², and 2n-dimensional symplectic submanifolds are shown to exist.

  10. A phenomenological calculus of Wiener description space.

    PubMed

    Richardson, I W; Louie, A H

    2007-10-01

    The phenomenological calculus is a categorical example of Robert Rosen's modeling relation. This paper is an alligation of the phenomenological calculus and generalized harmonic analysis, another categorical example. Our epistemological exploration continues into the realm of Wiener description space, in which constitutive parameters are extended from vectors to vector-valued functions of a real variable. Inherent in the phenomenology are fundamental representations of time and nearness to equilibrium.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine

    This slide presentation describes basic topological concepts, including topological spaces, homeomorphisms, homotopy, and Betti numbers. Scalar field topology covers finding topological features and scalar field visualization; vector field topology covers finding topological features and vector field visualization.

  12. Vectors in Use in a 3D Juggling Game Simulation

    ERIC Educational Resources Information Center

    Kynigos, Chronis; Latsi, Maria

    2006-01-01

    The new representations enabled by the educational computer game the "Juggler" can place vectors in a central role both for controlling and measuring the behaviours of objects in a virtual environment simulating motion in three-dimensional spaces. The mathematical meanings constructed by 13 year-old students in relation to vectors as…

  13. 4 × 20 Gbit/s mode division multiplexing over free space using vector modes and a q-plate mode (de)multiplexer

    NASA Astrophysics Data System (ADS)

    Milione, Giovanni; Lavery, Martin P. J.; Huang, Hao; Ren, Yongxiong; Xie, Guodong; Nguyen, Thien An; Karimi, Ebrahim; Marrucci, Lorenzo; Nolan, Daniel A.; Alfano, Robert R.; Willner, Alan E.

    2015-05-01

    Vector modes are spatial modes that have spatially inhomogeneous states of polarization, such as radial and azimuthal polarization. They can produce smaller spot sizes and stronger longitudinal polarization components upon focusing. As a result, they are used for many applications, including optical trapping and nanoscale imaging. In this work, vector modes are used to increase the information capacity of free space optical communication via the method of optical communication referred to as mode division multiplexing. A mode (de)multiplexer for vector modes based on a liquid crystal technology referred to as a q-plate is introduced. As a proof of principle, using the mode (de)multiplexer, four vector modes each carrying a 20 Gbit/s quadrature phase shift keying signal on a single wavelength channel (~1550 nm), comprising an aggregate 80 Gbit/s, were transmitted ~1 m over the lab table with <-16.4 dB (<2%) mode crosstalk. Bit error rates for all vector modes were measured at the forward error correction threshold with power penalties <3.41 dB.

  14. Theory of bright-field scanning transmission electron microscopy for tomography

    NASA Astrophysics Data System (ADS)

    Levine, Zachary H.

    2005-02-01

    Radiation transport theory is applied to electron microscopy of samples composed of one or more materials. The theory, originally due to Goudsmit and Saunderson, assumes only elastic scattering and an amorphous medium dominated by atomic interactions. For samples composed of a single material, the theory yields reasonable parameter-free agreement with experimental data taken from the literature for the multiple scattering of 300-keV electrons through aluminum foils up to 25 μm thick. For thin films, the theory gives a validity condition for Beer's law. For thick films, a variant of Molière's theory [V. G. Molière, Z. Naturforschg. 3a, 78 (1948)] of multiple scattering leads to a form for the bright-field signal for foils in the multiple-scattering regime. The signal varies as [t ln(e^{1-2γ} t/τ)]^{-1}, where t is the path length of the beam, τ is the mean free path for elastic scattering, and γ is Euler's constant. The Goudsmit-Saunderson solution interpolates numerically between these two limits. For samples with multiple materials, elemental sensitivity is developed through the angular dependence of the scattering. From the elastic scattering cross sections of the first 92 elements, a singular-value decomposition of a vector space spanned by the elastic scattering cross sections minus a delta function shows that there is a dominant common mode, with composition-dependent corrections of about 2%. A mathematically correct reconstruction procedure beyond 2% accuracy requires the acquisition of the bright-field signal as a function of the scattering angle. Tomographic reconstructions are carried out for three singular vectors of a sample problem with four elements: Cr, Cu, Zr, and Te. The three reconstructions are presented jointly as a color image; all four elements are clearly identifiable throughout the image.
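    The SVD step described above can be illustrated with a toy numerical sketch. The data here are synthetic (a shared angular profile plus simulated ~2% composition-dependent corrections), not the actual tabulated cross sections:

```python
import numpy as np

# Toy sketch of the SVD step (synthetic curves, not the tabulated elastic
# cross sections): each "element" shares one angular profile (the common
# mode) plus a ~2% composition-dependent correction, as in the abstract.
rng = np.random.default_rng(0)
angles = np.linspace(0.01, 0.2, 50)        # assumed angular grid (rad)
common = 1.0 / (angles**2 + 0.01)          # shared screened-Rutherford-like shape
cross_sections = np.array([
    common * (1.0 + 0.02 * rng.standard_normal(angles.size))
    for _ in range(4)                      # four hypothetical elements
])

# SVD of the span of the cross sections: the leading singular vector
# captures the dominant common mode shared by all elements.
U, s, Vt = np.linalg.svd(cross_sections, full_matrices=False)
dominance = s[0] / s.sum()
print(f"singular-value weight of the common mode: {dominance:.3f}")
```

    With corrections this small, almost all of the singular-value weight sits in the first mode, which is what makes angular-resolved measurements necessary for element discrimination beyond the ~2% level.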

  15. The kinematic component of the cosmological redshift

    NASA Astrophysics Data System (ADS)

    Chodorowski, Michał J.

    2011-05-01

    It is widely believed that the cosmological redshift is not a Doppler shift. However, Bunn & Hogg have recently pointed out that to solve this problem properly, one has to parallel-transport the velocity four-vector of a distant galaxy to the observer's position. Performing such a transport along the null geodesic of photons arriving from the galaxy, they found that the cosmological redshift is purely kinematic. Here we argue that one should rather transport the velocity four-vector along the geodesic connecting the points of intersection of the world-lines of the galaxy and the observer with the hypersurface of constant cosmic time. We find that the resulting relation between the transported velocity and the redshift of arriving photons is not given by a relativistic Doppler formula. Instead, for small redshifts it coincides with the well-known non-relativistic decomposition of the redshift into a Doppler (kinematic) component and a gravitational one. We perform such a decomposition for arbitrarily large redshifts and derive a formula for the kinematic component of the cosmological redshift, valid for any Friedmann-Lemaître-Robertson-Walker (FLRW) cosmology. In particular, in a universe with Ωm = 0.24 and ΩΛ = 0.76, a quasar at a redshift of 6 had, at the time of emission of the photons reaching us today, a recession velocity v = 0.997c. This can be contrasted with v = 0.96c, had the redshift been entirely kinematic. Thus, for recession velocities of such high-redshift sources, the effect of deceleration of the early Universe clearly prevails over the effect of its relatively recent acceleration. Last but not least, we show that the so-called proper recession velocities of galaxies, commonly used in cosmology, are in fact radial components of the galaxies' four-velocity vectors. As such, they can indeed attain superluminal values, but should not be regarded as real velocities.
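    The quoted benchmark of v = 0.96c for a purely kinematic redshift of z = 6 follows directly from the special-relativistic Doppler relation; a quick check (standard textbook formula, not code from the paper):

```python
# Inverting the special-relativistic Doppler relation
# 1 + z = sqrt((1 + beta) / (1 - beta)) gives
# beta = ((1 + z)^2 - 1) / ((1 + z)^2 + 1).
def doppler_velocity(z: float) -> float:
    """Recession velocity in units of c if redshift z were entirely kinematic."""
    r = (1.0 + z) ** 2
    return (r - 1.0) / (r + 1.0)

print(f"v/c for z = 6: {doppler_velocity(6.0):.2f}")  # → 0.96
```

    For z = 6 this is exactly 48/50 = 0.96, matching the abstract's comparison value; the paper's v = 0.997c comes instead from its parallel-transport decomposition.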

  16. Dengue Fever Occurrence and Vector Detection by Larval Survey, Ovitrap and MosquiTRAP: A Space-Time Clusters Analysis

    PubMed Central

    de Melo, Diogo Portella Ornelas; Scherrer, Luciano Rios; Eiras, Álvaro Eduardo

    2012-01-01

    The use of vector surveillance tools for preventing dengue disease requires fine assessment of risk in order to improve vector control activities. Nevertheless, the thresholds between vector detection and dengue fever occurrence are currently not well established. In Belo Horizonte (Minas Gerais, Brazil), dengue has been endemic for several years. From January 2007 to June 2008, the dengue vector Aedes (Stegomyia) aegypti was monitored by ovitrap, the sticky trap MosquiTRAP™ and larval surveys in a study area in Belo Horizonte. Using a space-time scan for cluster detection implemented in the SaTScan software, the vector presence recorded by the different monitoring methods was evaluated. Clusters of vectors and dengue fever were detected. It was verified that the ovitrap and MosquiTRAP vector detection methods predicted dengue occurrence better than larval survey, both spatially and temporally. MosquiTRAP and ovitrap presented similar numbers of space-time intersections with dengue fever clusters. Nevertheless, ovitrap clusters presented longer duration periods than MosquiTRAP ones, signaling the dengue risk areas less accurately, since the detection of vector clusters during most of the study period was not necessarily correlated to dengue fever occurrence. It was verified that ovitrap clusters occurred more than 200 days (values ranged from 97.0±35.35 to 283.0±168.4 days) before dengue fever clusters, whereas MosquiTRAP clusters preceded dengue fever clusters by approximately 80 days (values ranged from 65.5±58.7 to 94.0±14.3 days), the latter thus being more temporally precise. Thus, in the present cluster analysis study MosquiTRAP presented superior results for signaling dengue transmission risks both geographically and temporally. Since early detection is crucial for planning and deploying effective prevention, MosquiTRAP proved to be a reliable tool, and this method provides groundwork for the development of even more precise tools. PMID:22848729

  17. Community ecology in 3D: Tensor decomposition reveals spatio-temporal dynamics of large ecological communities

    PubMed Central

    Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O.; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A.; Möllmann, Christian

    2017-01-01

    Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs. PMID:29136658
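    As a minimal illustration of the idea, a higher-order SVD (one classical flavor of tensor decomposition) can be sketched with NumPy alone on a synthetic species × station × year abundance tensor. The data and axis names are illustrative, not the North Sea survey data:

```python
import numpy as np

def unfold(tensor, mode):
    """Matricize a 3-way tensor along the given mode."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Hypothetical species x station x year abundance tensor (synthetic data).
rng = np.random.default_rng(1)
X = rng.random((6, 4, 5))

# Higher-order SVD sketch: per-mode singular vectors generalize PCA loadings
# to the spatial and temporal dimensions simultaneously.
factors = [np.linalg.svd(unfold(X, m), full_matrices=False)[0] for m in range(3)]
core = X
for m, U in enumerate(factors):
    core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)

# The core together with the factors reconstructs X exactly (full-rank case).
X_hat = core
for m, U in enumerate(factors):
    X_hat = np.moveaxis(np.tensordot(U, np.moveaxis(X_hat, m, 0), axes=1), 0, m)
print(np.allclose(X, X_hat))  # → True
```

    Truncating each factor matrix to a few leading columns gives the kind of low-rank spatio-temporal summary the abstract describes; dedicated libraries also provide CP and Tucker fits.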

  18. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.

    PubMed

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L^2-metric. However, in many applications, features and data originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.
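    The Euclidean special case that the paper generalizes can be sketched in a few lines: sparse coding of a vector in ℝ^d over a fixed dictionary, here via orthogonal matching pursuit (a standard greedy algorithm, not necessarily the authors' choice; dictionary and signal are synthetic, and no dictionary learning is performed):

```python
import numpy as np

# Euclidean sparse coding over a fixed dictionary via orthogonal matching
# pursuit; the dictionary and the 2-sparse test signal are synthetic.
def omp(D, x, n_nonzero):
    """Greedy sparse code of x over dictionary D (columns are unit-norm atoms)."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    code = np.zeros(D.shape[1])
    code[support] = coeffs
    return code

rng = np.random.default_rng(2)
D = rng.standard_normal((50, 100))
D /= np.linalg.norm(D, axis=0)          # normalize atoms to unit length
x = 2.0 * D[:, 3] - 1.5 * D[:, 17]      # a 2-sparse synthetic signal
code = omp(D, x, n_nonzero=2)
print(sorted(np.nonzero(code)[0].tolist()))   # typically recovers atoms 3 and 17
```

    The manifold-valued generalization replaces the residual `x - D c` and the L^2 norm with intrinsic geodesic quantities, which is precisely where the Euclidean machinery above stops applying.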

  19. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning

    PubMed Central

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L^2-metric. However, in many applications, features and data originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis. PMID:24129583

  20. Adaptive Hybrid Picture Coding. Volume 2.

    DTIC Science & Technology

    1985-02-01

    [Extraction residue from the report's table of contents; recoverable headings: V.a Measurement Vector; V.b Size Variable and Centroid Vector; V.c Shape Vector; Appendix A, Details of the Program for the Adaptive Line of Sight Method; Appendix B, Details of the Feature Vector Formation Program.] Shape recognition is analogous to recognition of curves in space; therefore, well-known concepts and theorems from differential geometry can be applied.

  1. Vehicle Based Vector Sensor

    DTIC Science & Technology

    2015-09-28

    A buoyant unmanned underwater vehicle with an interior space, the length of the vehicle being equal to one tenth of the acoustic wavelength, that can function as an acoustic vector sensor. [Patent-document residue; the description of the prior art is truncated: "It is known that a propagating …"]

  2. Design and assembly of a catalyst bed gas generator for the catalytic decomposition of high concentration hydrogen peroxide propellants and the catalytic combustion of hydrocarbon/air mixtures

    NASA Technical Reports Server (NTRS)

    Lohner, Kevin A. (Inventor); Mays, Jeffrey A. (Inventor); Sevener, Kathleen M. (Inventor)

    2004-01-01

    A method for designing and assembling a high performance catalyst bed gas generator for use in decomposing propellants, particularly hydrogen peroxide propellants, for use in target, space, and on-orbit propulsion systems and low-emission terrestrial power and gas generation. The gas generator utilizes a sectioned catalyst bed system, and incorporates a robust, high temperature mixed metal oxide catalyst. The gas generator requires no special preheat apparatus or special sequencing to meet start-up requirements, enabling a fast overall response time. The high performance catalyst bed gas generator system has consistently demonstrated high decomposition efficiency, extremely low decomposition roughness, and long operating life on multiple test articles.

  3. Task decomposition for a multilimbed robot to work in reachable but unorientable space

    NASA Technical Reports Server (NTRS)

    Su, Chau; Zheng, Yuan F.

    1991-01-01

    Robot manipulators installed on legged mobile platforms are suggested for enlarging robot workspace. To plan the motion of such a system, the arm-platform motion coordination problem is raised, and a task decomposition is proposed to solve the problem. A given task described by the destination position and orientation of the end effector is decomposed into subtasks for arm manipulation and for platform configuration, respectively. The former is defined as the end-effector position and orientation with respect to the platform, and the latter as the platform position and orientation in the base coordinates. Three approaches are proposed for the task decomposition. The approaches are also evaluated in terms of the displacements, from which an optimal approach can be selected.
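    The decomposition into a platform subtask and an arm subtask can be sketched with homogeneous transforms. The poses and names below are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Sketch of the task decomposition: a desired end-effector pose in base
# coordinates factors into a platform pose (base -> platform) and an arm
# pose (platform -> end effector), composed as 4x4 homogeneous transforms.
def hom(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

T_base_platform = hom(rot_z(0.3), np.array([1.0, 2.0, 0.0]))   # platform subtask
T_platform_ee   = hom(rot_z(-0.1), np.array([0.2, 0.0, 0.5]))  # arm subtask
T_base_ee = T_base_platform @ T_platform_ee                    # composed task

# Given the task and a chosen platform configuration, the arm subtask is
# recovered by left-multiplying with the inverse platform transform.
T_arm = np.linalg.inv(T_base_platform) @ T_base_ee
print(np.allclose(T_arm, T_platform_ee))  # → True
```

    The three approaches in the abstract amount to different rules for picking `T_base_platform`, after which the arm subtask follows by this factorization.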

  4. Nutrient and pollutant metals within earthworm residues are immobilized in soil during decomposition

    PubMed Central

    Richardson, J. B.; Renock, D. J.; Görres, J. H.; Jackson, B. P.; Webb, S. M.; Friedland, A. J.

    2016-01-01

    Earthworms are known to bioaccumulate metals, making them a potential vector for metal transport in soils. However, the fate of metals within soil upon the death of earthworms has not been characterized. We compared the fate of nutrient (Ca, Mg, Mn) and potentially toxic (Cu, Zn, Pb) metals during decomposition of Amynthas agrestis and Lumbricus rubellus in soil columns. Cumulative leachate pools, exchangeable pools (0.1 M KCl + 0.01 M acetic acid extracted), and stable pools (16 M HNO3 + 12 M HCl extracted) were quantified in the soil columns after 7, 21, and 60 days of decomposition. Soil columns containing A. agrestis and L. rubellus had significantly higher cumulative leachate pools of Ca, Mn, Cu, and Pb than Control soil columns. Exchangeable and stable pools of Cu, Pb, and Zn were greater for A. agrestis and L. rubellus soil columns than for Control soil columns. However, using a mass balance approach we estimated that >98% of the metals from earthworm residues were immobilized in the soil in an exchangeable or stable form over the 60 days. Micro-XRF images of longitudinal thin sections of soil columns containing A. agrestis after 60 days confirm metal immobilization within earthworm residues. Our research demonstrates that nutrient and toxic metals are stabilized in soil within earthworm residues. PMID:28163331

  5. Decomposition of sea lamprey Petromyzon marinus carcasses: temperature effects, nutrient dynamics, and implications for stream food webs

    USGS Publications Warehouse

    Weaver, Daniel M.; Coghlan, Stephen M.; Zydlewski, Joseph D.; Hogg, Robert S.; Canton, Michael

    2015-01-01

    Anadromous fishes serve as vectors of marine-derived nutrients into freshwaters, where those nutrients are incorporated into aquatic and terrestrial food webs. Pacific salmonines Oncorhynchus spp. exemplify the importance of migratory fish as links between marine and freshwater systems; however, little attention has been given to sea lamprey (Petromyzon marinus Linnaeus, 1758) in Atlantic coastal systems. A first step toward understanding the role of sea lamprey in freshwater food webs is to characterize the composition and rate of nutrient inputs. We conducted laboratory and field studies characterizing the elemental composition of sea lamprey carcasses, their decay rates, and the subsequent water-enriching effects. Proximate tissue analysis demonstrated lamprey carcass nitrogen:phosphorus ratios of 20.2:1 (±1.18 SE). In the laboratory, carcass decay resulted in liberation of phosphorus within 1 week and nitrogen within 3 weeks. Nutrient liberation was accelerated at higher temperatures. In a natural stream, carcass decomposition resulted in an exponential decline in biomass, and after 24 days the proportion of initial biomass remaining was 27% (±3.0% SE). We provide quantitative results on the temporal dynamics of sea lamprey carcass decomposition and subsequent nutrient liberation. These nutrient subsidies may arrive at a critical time to maximize enrichment of stream food webs.
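    A back-of-the-envelope sketch of the reported exponential decline, assuming simple first-order decay (which the abstract's numbers support):

```python
import numpy as np

# First-order decay fit to the reported field numbers: 27% of initial
# biomass remaining after 24 days implies B(t) = B0 * exp(-k t) with
# k = ln(1/0.27) / 24.
remaining_fraction, days = 0.27, 24.0
k = np.log(1.0 / remaining_fraction) / days
half_life = np.log(2.0) / k
print(f"decay constant k ≈ {k:.3f} per day, half-life ≈ {half_life:.1f} days")
```

    This gives k ≈ 0.055 per day, i.e., roughly half of a carcass's biomass is lost every two weeks under the observed stream conditions.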

  6. A geometric approach to problems in birational geometry.

    PubMed

    Chi, Chen-Yu; Yau, Shing-Tung

    2008-12-02

    A classical set of birational invariants of a variety are its spaces of pluricanonical forms and some of their canonically defined subspaces. Each of these vector spaces admits a typical metric structure which is also birationally invariant. These vector spaces so metrized will be referred to as the pseudonormed spaces of the original varieties. A fundamental question is the following: Given two mildly singular projective varieties with some of the first variety's pseudonormed spaces being isometric to the corresponding ones of the second variety's, can one construct a birational map between them that induces these isometries? In this work, a positive answer to this question is given for varieties of general type. This can be thought of as a theorem of Torelli type for birational equivalence.

  7. The next 25 years: Industrialization of space - Rationale for planning

    NASA Technical Reports Server (NTRS)

    Von Puttkamer, J.

    1977-01-01

    A methodology for planning the industrialization of space is discussed. The suggested approach combines the extrapolative ('push') view, in which alternative futures are projected on the basis of past and current trends and tendencies, with the normative ('pull') view, in which an ideal state in the far future is postulated and policies and decisions are directed toward its attainment. Time-reversed vectors of the future are tied to extrapolated, trend-oriented vectors of the quasi-present to identify common plateaus or stepping stones in technological development. Important steps in the industrialization of space to attain the short-range goals of production of space-derived energy, goods, and services and the long-range goal of space colonization are discussed.

  8. Anisotropic Hardy Spaces of Musielak-Orlicz Type with Applications to Boundedness of Sublinear Operators

    PubMed Central

    Li, Baode; Yang, Dachun; Yuan, Wen

    2014-01-01

    Let φ : ℝ^n × [0, ∞) → [0, ∞) be a Musielak-Orlicz function and A an expansive dilation. In this paper, the authors introduce the anisotropic Hardy space of Musielak-Orlicz type, H^φ_A(ℝ^n), via the grand maximal function. The authors then obtain some real-variable characterizations of H^φ_A(ℝ^n) in terms of the radial, the nontangential, and the tangential maximal functions, which generalize the known results on the anisotropic Hardy space H^p_A(ℝ^n) with p ∈ (0,1] and are new even for its weighted variant. Finally, the authors characterize these spaces by anisotropic atomic decompositions. The authors also obtain the finite atomic decomposition characterization of H^φ_A(ℝ^n), and, as an application, the authors prove that, for a given admissible triplet (φ, q, s), if T is a sublinear operator and maps all (φ, q, s)-atoms with q < ∞ (or all continuous (φ, q, s)-atoms with q = ∞) into uniformly bounded elements of some quasi-Banach space ℬ, then T uniquely extends to a bounded sublinear operator from H^φ_A(ℝ^n) to ℬ. These results are new even for anisotropic Orlicz-Hardy spaces on ℝ^n. PMID:24757418

  9. Anisotropic hardy spaces of Musielak-Orlicz type with applications to boundedness of sublinear operators.

    PubMed

    Li, Baode; Yang, Dachun; Yuan, Wen

    2014-01-01

    Let φ : ℝ(n) × [0, ∞)→[0, ∞) be a Musielak-Orlicz function and A an expansive dilation. In this paper, the authors introduce the anisotropic Hardy space of Musielak-Orlicz type, H(A)(φ)(ℝ(n)), via the grand maximal function. The authors then obtain some real-variable characterizations of H(A)(φ)(ℝ(n)) in terms of the radial, the nontangential, and the tangential maximal functions, which generalize the known results on the anisotropic Hardy space H(A)(p) (ℝ(n)) with p ∈ (0,1] and are new even for its weighted variant. Finally, the authors characterize these spaces by anisotropic atomic decompositions. The authors also obtain the finite atomic decomposition characterization of H(A)(φ)(ℝ(n)), and, as an application, the authors prove that, for a given admissible triplet (φ, q, s), if T is a sublinear operator and maps all (φ, q, s)-atoms with q < ∞ (or all continuous (φ, q, s)-atoms with q = ∞) into uniformly bounded elements of some quasi-Banach spaces ℬ, then T uniquely extends to a bounded sublinear operator from H(A)(φ)(ℝ(n)) to ℬ. These results are new even for anisotropic Orlicz-Hardy spaces on ℝ(n).

  10. (p,q) deformations and (p,q)-vector coherent states of the Jaynes-Cummings model in the rotating wave approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ben Geloun, Joseph; Govaerts, Jan; Hounkonnou, M. Norbert

    2007-03-15

    Classes of (p,q) deformations of the Jaynes-Cummings model in the rotating wave approximation are considered. Diagonalization of the Hamiltonian is performed exactly, leading to useful spectral decompositions of a series of relevant operators. The latter include ladder operators acting between adjacent energy eigenstates within two separate infinite discrete towers, except for a singleton state. These ladder operators allow for the construction of (p,q)-deformed vector coherent states. Using (p,q) arithmetics, explicit and exact solutions to the associated moment problem are displayed, providing new classes of coherent states for such models. Finally, in the limit of decoupled spin sectors, our analysis translates into (p,q) deformations of the supersymmetric harmonic oscillator, such that the two supersymmetric sectors get intertwined through the action of the ladder operators as well as in the associated coherent states.
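    For background, the (p,q) arithmetics mentioned above is conventionally built on the twin-parameter deformed integer (a textbook definition, restated here rather than quoted from the paper):

```latex
[n]_{p,q} \;=\; \frac{p^{\,n} - q^{\,n}}{p - q},
\qquad
\lim_{p \to 1} [n]_{p,q} \;=\; \frac{1 - q^{\,n}}{1 - q} \;=\; [n]_q,
\qquad
\lim_{p,\,q \to 1} [n]_{p,q} \;=\; n.
```

    The (p,q)-factorials built from these numbers are the standard ingredients of moment problems for deformed coherent states of this kind.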

  11. A new feature constituting approach to detection of vocal fold pathology

    NASA Astrophysics Data System (ADS)

    Hariharan, M.; Polat, Kemal; Yaacob, Sazali

    2014-08-01

    In the last two decades, non-invasive methods based on acoustic analysis of the voice signal have proved to be an excellent and reliable tool for diagnosing vocal fold pathologies. This paper proposes a new feature vector based on the wavelet packet transform and singular value decomposition for the detection of vocal fold pathology. k-means clustering based feature weighting is proposed to increase the distinguishing performance of the proposed features. In this work, two databases are used: the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database and the MAPACI speech pathology database. Four different supervised classifiers, namely k-nearest neighbour (k-NN), least-square support vector machine, probabilistic neural network and general regression neural network, are employed for testing the proposed features. The experimental results show that the proposed features give a very promising classification accuracy of 100% for both the MEEI database and the MAPACI speech pathology database.
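    A minimal sketch of an SVD-derived feature vector, with the wavelet-packet front end and the k-means feature weighting omitted; the signals and parameters are synthetic stand-ins, not the MEEI/MAPACI data:

```python
import numpy as np

# Hedged sketch of an SVD-based feature vector: the singular values of a
# trajectory (Hankel-style) matrix built from a signal segment form a
# compact, energy-ordered feature set; a regular (quasi-periodic) signal
# concentrates its weight in few singular values, an irregular one does not.
def svd_features(signal, window=8, n_features=4):
    hankel = np.lib.stride_tricks.sliding_window_view(signal, window)
    s = np.linalg.svd(hankel, compute_uv=False)
    s = s / s.sum()                      # normalize, as a crude scale invariance
    return s[:n_features]

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 12 * t)                   # quasi-periodic "normal" proxy
noisy = clean + 0.8 * rng.standard_normal(t.size)    # irregular "pathological" proxy
print(svd_features(clean), svd_features(noisy))
```

    Feature vectors of this kind would then be weighted and fed to the classifiers named above.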

  12. Embedding of multidimensional time-dependent observations.

    PubMed

    Barnard, J P; Aldrich, C; Gerber, M

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
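    The delay-embedding step underlying the Takens construction can be sketched as follows; the independent component analysis stage is omitted, and the logistic map stands in for the simulated chaotic processes:

```python
import numpy as np

# Minimal delay-embedding sketch in the spirit of Takens' theorem: stack
# lagged copies of a scalar series into embedding (phase-space) vectors.
def delay_embed(x, dim, lag):
    """Return a (len(x) - (dim-1)*lag, dim) matrix of delay vectors."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

# Logistic map in its chaotic regime as a test series.
x = np.empty(500)
x[0] = 0.2
for i in range(499):
    x[i + 1] = 3.9 * x[i] * (1.0 - x[i])

emb = delay_embed(x, dim=3, lag=1)
print(emb.shape)  # → (498, 3)
```

    In the cited method, a linear transform (from ICA) is then applied to these delay vectors to obtain linearly independent phase variables.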

  13. Embedding of multidimensional time-dependent observations

    NASA Astrophysics Data System (ADS)

    Barnard, Jakobus P.; Aldrich, Chris; Gerber, Marius

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.

  14. Foundation Mathematics for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-03-01

    1. Arithmetic and geometry; 2. Preliminary algebra; 3. Differential calculus; 4. Integral calculus; 5. Complex numbers and hyperbolic functions; 6. Series and limits; 7. Partial differentiation; 8. Multiple integrals; 9. Vector algebra; 10. Matrices and vector spaces; 11. Vector calculus; 12. Line, surface and volume integrals; 13. Laplace transforms; 14. Ordinary differential equations; 15. Elementary probability; Appendices; Index.

  15. Student Solution Manual for Foundation Mathematics for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-03-01

    1. Arithmetic and geometry; 2. Preliminary algebra; 3. Differential calculus; 4. Integral calculus; 5. Complex numbers and hyperbolic functions; 6. Series and limits; 7. Partial differentiation; 8. Multiple integrals; 9. Vector algebra; 10. Matrices and vector spaces; 11. Vector calculus; 12. Line, surface and volume integrals; 13. Laplace transforms; 14. Ordinary differential equations; 15. Elementary probability; Appendix.

  16. Lorentz symmetric n-particle systems without ``multiple times''

    NASA Astrophysics Data System (ADS)

    Smith, Felix

    2013-05-01

    The need for multiple times in relativistic n-particle dynamics is a consequence of Minkowski's postulated symmetry between space and time coordinates in a space-time s = [x1, …, x4] = [x, y, z, ict], Eq. (1). Poincaré doubted the need for this space-time symmetry, believing Lorentz covariance could also prevail in some geometries with a three-dimensional position space and a quite different time coordinate. The Hubble expansion observed later justifies a specific geometry of this kind, a negatively curved position 3-space expanding with time at the Hubble rate l_H(t) = l_{H,0} + cΔt (F. T. Smith, Ann. Fond. L. de Broglie, 30, 179 (2005) and 35, 395 (2010)). Its position 4-vector is not s but q = [x1, …, x4] = [x, y, z, il_H(t)], and shows no 4-space symmetry. What is observed is always a difference 4-vector Δq = [Δx, Δy, Δz, icΔt], and this displays the structure of Eq. (1) perfectly. Thus we find the standard 4-vector of special relativity in a geometry that does not require a Minkowski space-time at all, but a quite different geometry with an expanding 3-space symmetry and an independent time. The same Lorentz symmetry, with but a single time, extends to two-particle and n-particle systems.

  17. Fast metabolite identification with Input Output Kernel Regression.

    PubMed

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-06-15

    An important problem in metabolomics is the identification of metabolites from tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to a vector output space and can handle structured output spaces such as the space of molecules. We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. The method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, consisting in mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Contact: celine.brouard@aalto.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
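    The first of the two phases can be caricatured in a few lines of NumPy: kernel ridge regression from an input Gaussian kernel to explicit output feature vectors, which stand in for the output-kernel feature map (toy synthetic data; the real IOKR method and its preimage step are more involved):

```python
import numpy as np

# Toy sketch of the regression phase: map inputs ("spectra") to output
# feature vectors via kernel ridge regression. All names and data are
# illustrative assumptions, not the authors' implementation.
def gaussian_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(4)
X_train = rng.standard_normal((30, 5))                     # "spectra" inputs
Y_train = np.tanh(X_train @ rng.standard_normal((5, 3)))   # output feature vectors

lam = 1e-3
K = gaussian_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), Y_train)  # ridge solution

# Predicted output feature vector for a new input; the preimage phase would
# then match it against candidate structures by output-kernel similarity.
x_new = X_train[:1]
y_pred = gaussian_kernel(x_new, X_train) @ alpha
print(y_pred.shape)  # → (1, 3)
```

    The preimage phase, omitted here, scores database candidates by their similarity to `y_pred` in the output feature space.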

  18. Fast metabolite identification with Input Output Kernel Regression

    PubMed Central

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-01-01

    Motivation: An important problem in metabolomics is to identify metabolites from tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the space of molecules. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. The method approximates the spectra-molecule mapping in two phases. The first phase is a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, which maps the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method decreases the running times of both the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628

  19. A link between torse-forming vector fields and rotational hypersurfaces

    NASA Astrophysics Data System (ADS)

    Chen, Bang-Yen; Verstraelen, Leopold

    Torse-forming vector fields, introduced by Yano [On torse forming direction in a Riemannian space, Proc. Imp. Acad. Tokyo 20 (1944) 340-346], are a natural extension of concurrent and concircular vector fields. Such vector fields have many nice applications in geometry and mathematical physics. In this paper, we establish a link between rotational hypersurfaces and torse-forming vector fields. More precisely, our main result states that, for a hypersurface M of 𝔼n+1 with n ≥ 3, the tangential component xT of the position vector field of M is a proper torse-forming vector field on M if and only if M is contained in a rotational hypersurface whose axis of rotation contains the origin.

  20. Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)

    2001-01-01

    Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.
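
    The processing chain in the abstract (time-frequency energy distribution, then SVD, then principal features) can be mimicked with a short sketch. A windowed-FFT energy map stands in for the wavelet multiscale distribution, and the chirp signal and window length are assumptions of the example:

```python
import numpy as np

# toy nonstationary signal: frequency ramps up over time (a chirp)
t = np.linspace(0.0, 1.0, 1024)
sig = np.sin(2 * np.pi * (5 + 20 * t) * t)

# crude discrete time-frequency energy map: windowed FFT magnitudes
# (a stand-in for the wavelet energy densities of the abstract)
win = 128
frames = [sig[i:i + win] * np.hanning(win) for i in range(0, len(sig) - win, win // 2)]
E = np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2   # rows: time, cols: frequency

# SVD of the energy distribution; singular values rank the principal features
U, s, Vt = np.linalg.svd(E, full_matrices=False)
energy_fraction = s[0] ** 2 / (s ** 2).sum()   # dominance of the leading component
```

    Moments of the discrete density functions derived from s (or from a transformed basis, as in the TSVD) would then feed the feature-extraction and modal-estimation steps.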

  1. Solving periodic block tridiagonal systems using the Sherman-Morrison-Woodbury formula

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice

    1989-01-01

    Many algorithms for solving the Navier-Stokes equations require the solution of periodic block tridiagonal systems of equations. By applying a splitting to the matrix representing this system of equations, it may first be reduced to a block tridiagonal matrix plus an outer product of two block vectors. The Sherman-Morrison-Woodbury formula is then applied. The algorithm thus reduces a periodic banded system to a non-periodic banded system with additional right-hand sides, and is more efficient than standard Thomas algorithm/LU decompositions.
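
    A minimal sketch of the splitting for a scalar (block size 1) periodic tridiagonal system: the periodic corner entries are written as a rank-1 outer product u vᵀ and removed, and the Sherman-Morrison formula recovers the periodic solution from two non-periodic solves. Dense np.linalg.solve calls stand in for the banded Thomas solves of a real implementation:

```python
import numpy as np

def solve_periodic_tridiag(sub, diag, sup, alpha, beta, d):
    """Solve M x = d where M is tridiagonal (sub, diag, sup) plus periodic
    corners M[0, -1] = beta and M[-1, 0] = alpha, via Sherman-Morrison."""
    n = len(diag)
    gamma = -diag[0]                      # a common nonzero choice
    u = np.zeros(n); u[0] = gamma; u[-1] = alpha
    v = np.zeros(n); v[0] = 1.0;   v[-1] = beta / gamma
    # A = M - u v^T is strictly tridiagonal (the corners are absorbed)
    A = np.diag(diag.astype(float)) + np.diag(sub[1:], -1) + np.diag(sup[:-1], 1)
    A[0, 0] -= gamma
    A[-1, -1] -= alpha * beta / gamma
    y = np.linalg.solve(A, d)             # non-periodic solve, rhs d
    z = np.linalg.solve(A, u)             # non-periodic solve, extra rhs u
    return y - z * (v @ y) / (1.0 + v @ z)
```

    The same identity generalizes to the block case (Woodbury), where u and v become block vectors and the scalar correction becomes a small capacitance-matrix solve.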

  2. Field by field hybrid upwind splitting methods

    NASA Technical Reports Server (NTRS)

    Coquel, Frederic; Liou, Meng-Sing

    1993-01-01

    A new and general approach to upwind splitting is presented. The design principle combines the robustness of flux vector splitting schemes in the capture of nonlinear waves with the accuracy of some flux difference splitting schemes in the resolution of linear waves. The new schemes are derived following a general hybridization technique performed directly at the basic level of the field-by-field decomposition involved in FDS methods. The scheme does not require a spatial switch tuned according to the local smoothness of the approximate solution.

  3. Empirical Mode Decomposition and k-Nearest Embedding Vectors for Timely Analyses of Antibiotic Resistance Trends

    PubMed Central

    Teodoro, Douglas; Lovis, Christian

    2013-01-01

    Background: Antibiotic resistance is a major worldwide public health concern. In clinical settings, timely antibiotic resistance information is key for care providers as it allows appropriate targeted treatment or improved empirical treatment when the specific results of the patient are not yet available. Objective: To improve antibiotic resistance trend analysis algorithms by building a novel, fully data-driven forecasting method from the combination of trend extraction and machine learning models for enhanced biosurveillance systems. Methods: We investigate a robust model for extraction and forecasting of antibiotic resistance trends using a decade of microbiology data. Our method consists of breaking down the resistance time series into independent oscillatory components via the empirical mode decomposition technique. The resulting waveforms describing intrinsic resistance trends serve as the input for the forecasting algorithm. The algorithm applies the delay coordinate embedding theorem together with the k-nearest neighbor framework to project mappings from past events into the future dimension and estimate the resistance levels. Results: The algorithms that decompose the resistance time series and filter out high frequency components showed statistically significant performance improvements in comparison with a benchmark random walk model. We present further qualitative use-cases of antibiotic resistance trend extraction, where empirical mode decomposition was applied to highlight the specificities of the resistance trends. Conclusion: The decomposition of the raw signal was found not only to yield valuable insight into the resistance evolution, but also to produce novel models of resistance forecasters with boosted prediction performance, which could be utilized as a complementary method in the analysis of antibiotic resistance trends. PMID:23637796
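
    The forecasting step described in Methods (delay-coordinate embedding plus k-nearest neighbours) can be sketched without the EMD stage; the embedding dimension, delay and k below are illustrative choices, not the paper's settings:

```python
import numpy as np

def knn_forecast(series, dim=3, tau=1, k=3):
    """One-step forecast by delay-coordinate embedding + k nearest neighbours.
    Each history vector [x(t-(dim-1)*tau), ..., x(t)] is matched against past
    vectors; the forecast averages the values that followed the k closest ones."""
    x = np.asarray(series, float)
    idx = np.arange((dim - 1) * tau, len(x) - 1)   # embeddable points with a successor
    emb = np.stack([x[i - (dim - 1) * tau : i + 1 : tau] for i in idx])
    query = x[len(x) - 1 - (dim - 1) * tau : : tau]  # embedding of the last point
    dists = np.linalg.norm(emb - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return x[idx[nearest] + 1].mean()

# demo: hold out the last point of a smooth oscillation and forecast it
series = np.sin(np.linspace(0.0, 8 * np.pi, 200))
pred = knn_forecast(series[:-1])
```

    In the paper this step is applied to each intrinsic mode function produced by EMD rather than to the raw series.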

  4. The canonical Lagrangian approach to three-space general relativity

    NASA Astrophysics Data System (ADS)

    Shyam, Vasudev; Venkatesh, Madhavan

    2013-07-01

    We study the action for the three-space formalism of general relativity, better known as the Barbour-Foster-Ó Murchadha action, which is a square-root Baierlein-Sharp-Wheeler action. In particular, we explore the (pre)symplectic structure by pulling it back via a Legendre map to the tangent bundle of the configuration space of this action. With it we attain the canonical Lagrangian vector field which generates the gauge transformations (3-diffeomorphisms) and the true physical evolution of the system. This vector field encapsulates all the dynamics of the system. We also discuss briefly the observables and perennials for this theory. We then present a symplectic reduction of the constrained phase space.

  5. Vector Magnetograph Design

    NASA Technical Reports Server (NTRS)

    Chipman, Russell A.

    1996-01-01

    This report covers work performed during the period of November 1994 through March 1996 on the design of a Space-borne Solar Vector Magnetograph. This work has been performed as part of a design team under the supervision of Dr. Mona Hagyard and Dr. Alan Gary of the Space Science Laboratory. Many tasks were performed and this report documents the results from some of those tasks, each contained in the corresponding appendix. Appendices are organized in chronological order.

  6. The Absolute Vector Magnetometers on Board Swarm, Lessons Learned From Two Years in Space.

    NASA Astrophysics Data System (ADS)

    Hulot, G.; Leger, J. M.; Vigneron, P.; Brocco, L.; Olsen, N.; Jager, T.; Bertrand, F.; Fratter, I.; Sirol, O.; Lalanne, X.

    2015-12-01

    ESA's Swarm satellites carry 4He absolute magnetometers (ASM), designed by CEA-Léti and developed in partnership with CNES. These instruments are the first-ever space-borne magnetometers to use a common sensor to simultaneously deliver 1 Hz independent absolute scalar and vector readings of the magnetic field. They have provided not only the very high accuracy scalar field data nominally required by the mission (for both science and calibration purposes, since each satellite also carries a low-noise, high-frequency fluxgate magnetometer designed by DTU), but also very useful experimental absolute vector data. In this presentation, we will report on the status of the instruments, as well as on the various tests and investigations carried out using these experimental data since launch in November 2013. In particular, we will illustrate the advantages of flying ASM instruments on space-borne magnetic missions for nominal data quality checks, geomagnetic field modeling and science objectives.

  7. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.

  8. Laplace-Runge-Lenz vector in quantum mechanics in noncommutative space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gáliková, Veronika; Kováčik, Samuel; Prešnajder, Peter

    2013-12-15

    The main point of this paper is to examine a “hidden” dynamical symmetry connected with the conservation of the Laplace-Runge-Lenz vector (LRL) in the hydrogen atom problem solved by means of non-commutative quantum mechanics (NCQM). The basic features of NCQM will be introduced to the reader, the key one being the fact that the notion of a point, or of zero distance in the considered configuration space, is abandoned and replaced with a “fuzzy” structure in such a way that rotational invariance is preserved. The main facts about the conservation of the LRL vector in both classical and quantum theory will be reviewed. Finally, we will search for an analogy in NCQM, provide our results and their comparison with the QM predictions. The key notions we are going to deal with are non-commutative space, the Coulomb-Kepler problem, and symmetry.
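
    For reference, the conserved vector in question: in the classical Coulomb-Kepler problem with Hamiltonian H = p²/2m − k/r, the Laplace-Runge-Lenz vector and its Hermitian quantum counterpart (the standard symmetrized form used in ordinary quantum mechanics) are

```latex
\mathbf{A} = \mathbf{p}\times\mathbf{L} - mk\,\frac{\mathbf{r}}{r},
\qquad \frac{d\mathbf{A}}{dt} = 0,
\qquad
\hat{\mathbf{A}} = \tfrac{1}{2}\left(\hat{\mathbf{p}}\times\hat{\mathbf{L}}
  - \hat{\mathbf{L}}\times\hat{\mathbf{p}}\right) - mk\,\frac{\hat{\mathbf{r}}}{r},
\qquad [\hat{H},\hat{\mathbf{A}}] = 0 .
```

    The paper's contribution is the analogue of this conservation law in the non-commutative ("fuzzy") configuration space.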

  9. Distance between RBS and AUG plays an important role in overexpression of recombinant proteins.

    PubMed

    Berwal, Sunil K; Sreejith, R K; Pal, Jayanta K

    2010-10-15

    The spacing between ribosome binding site (RBS) and AUG is crucial for efficient overexpression of genes when cloned in prokaryotic expression vectors. We undertook a brief study on the overexpression of genes cloned in Escherichia coli expression vectors, wherein the spacing between the RBS and the start codon was varied. SDS-PAGE and Western blot analysis indicated a high level of protein expression only in constructs where the spacing between RBS and AUG was approximately 40 nucleotides or more, despite the synthesis of the transcripts in the representative cases investigated. Copyright 2010 Elsevier Inc. All rights reserved.

  10. Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform

    PubMed Central

    Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart

    2014-01-01

    Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and then various matrix/vector transforms are used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications presented for three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploited the correlations within the spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved reconstruction accuracy comparable with the low-rank matrix recovery methods and outperformed the conventional sparse recovery methods. PMID:24901331
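
    The HOSVD used as the sparsifying transform can be sketched directly from its definition: the mode-n factor matrices are the left singular vectors of the mode-n unfoldings, and the core tensor is the data contracted with their transposes. A small random tensor stands in for the dynamic MRI data:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibres become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-order SVD: factor matrices from each unfolding's left
    singular vectors; core = T contracted with their transposes."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    core = T
    for n, Un in enumerate(U):
        core = np.moveaxis(np.tensordot(Un.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, U

rng = np.random.default_rng(1)
T = rng.normal(size=(4, 5, 6))
core, U = hosvd(T)

# reconstruction: T = core x_0 U0 x_1 U1 x_2 U2 (exact with full factors)
R = core
for n, Un in enumerate(U):
    R = np.moveaxis(np.tensordot(Un, np.moveaxis(R, n, 0), axes=1), 0, n)
```

    Truncating the factor matrices (and hence the core) yields the compact representation that sparsifies the spatial and temporal dimensions simultaneously, which is what the CS reconstruction exploits.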

  11. Normal forms of Hopf-zero singularity

    NASA Astrophysics Data System (ADS)

    Gazor, Majid; Mokhtari, Fahimeh

    2015-01-01

    The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties for each subalgebra are described; one is the set of all volume-preserving conservative systems while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative-nonconservative decomposition for the normal form systems. There exists a Lie subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov-Takens singularity. This gives rise to the conclusion that the local dynamics of formal Hopf-zero singularities is well understood through the study of Bogdanov-Takens singularities. Despite this, the normal form computations of Bogdanov-Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative-nonconservative decomposition. Some practical formulas are derived and the results implemented using Maple. The method has been applied to the Rössler and Kuramoto-Sivashinsky equations to demonstrate the applicability of our results.

  12. A fast identification algorithm for Box-Cox transformation based radial basis function neural network.

    PubMed

    Hong, Xia

    2006-07-01

    In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced using the RBF neural network to represent the transformed system output. Initially a fixed and moderately sized RBF model base is derived based on a rank-revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced using the Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
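
    The Box-Cox step can be illustrated compactly. A grid search over the profile log-likelihood stands in for the Gauss-Newton iteration of the letter; the lognormal toy data and the grid are assumptions of this example:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform (requires y > 0); lam near 0 is the log limit."""
    if abs(lam) < 1e-8:
        return np.log(y)
    return (y ** lam - 1.0) / lam

def boxcox_mle(y, grid=np.linspace(-2, 2, 401)):
    """Choose lambda by maximising the profile log-likelihood for
    normality of the transformed data (grid search for simplicity)."""
    n = len(y)
    def llf(lam):
        z = boxcox(y, lam)
        return -0.5 * n * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()
    return max(grid, key=llf)

rng = np.random.default_rng(2)
y = np.exp(rng.normal(size=500))   # lognormal data: log (lambda = 0) is ideal
lam_hat = boxcox_mle(y)
```

    In the letter this estimate is computed by Gauss-Newton steps on the same likelihood, jointly with the RBF model structure.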

  13. Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data

    PubMed Central

    Young, Alistair A.; Li, Xiaosong

    2014-01-01

    Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVM outperforms the ARIMA model and the decomposition methods in most cases. PMID:24505382
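
    The three evaluation metrics are one-liners; the toy vectors below are purely illustrative:

```python
import numpy as np

def mae(y, f):  return np.abs(y - f).mean()                     # mean absolute error
def mape(y, f): return (np.abs((y - f) / y)).mean() * 100.0     # percent; y must be nonzero
def mse(y, f):  return ((y - f) ** 2).mean()                    # mean square error

y = np.array([10.0, 20.0, 30.0])   # observed values
f = np.array([12.0, 18.0, 33.0])   # forecasts
# mae  -> (2 + 2 + 3) / 3        ~ 2.33
# mape -> (0.2 + 0.1 + 0.1)/3*100 ~ 13.33
# mse  -> (4 + 4 + 9) / 3        ~ 5.67
```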

  14. MARS-MD: rejection based image domain material decomposition

    NASA Astrophysics Data System (ADS)

    Bateman, C. J.; Knight, D.; Brandwacht, B.; McMahon, J.; Healy, J.; Panta, R.; Aamir, R.; Rajendran, K.; Moghiseh, M.; Ramyar, M.; Rundle, D.; Bennett, J.; de Ruiter, N.; Smithies, D.; Bell, S. T.; Doesburg, R.; Chernoglazov, A.; Mandalika, V. B. H.; Walsh, M.; Shamshad, M.; Anjomrouz, M.; Atharifard, A.; Vanden Broeke, L.; Bheesette, S.; Kirkbride, T.; Anderson, N. G.; Gieseg, S. P.; Woodfield, T.; Renaud, P. F.; Butler, A. P. H.; Butler, P. H.

    2018-05-01

    This paper outlines image domain material decomposition algorithms that have been routinely used in MARS spectral CT systems. These algorithms (known collectively as MARS-MD) are based on a pragmatic heuristic for solving the under-determined problem where there are more materials than energy bins. This heuristic contains three parts: (1) splitting the problem into a number of possible sub-problems, each containing fewer materials; (2) solving each sub-problem; and (3) applying rejection criteria to eliminate all but one sub-problem's solution. An advantage of this process is that different constraints can be applied to each sub-problem if necessary. In addition, the result of this process is that solutions will be sparse in the material domain, which reduces crossover of signal between material images. Two algorithms based on this process are presented: the Segmentation variant, which uses segmented material classes to define each sub-problem; and the Angular Rejection variant, which defines the rejection criteria using the angle between reconstructed attenuation vectors.
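
    The three-part heuristic can be sketched for a toy system with more materials than energy bins. The least-squares solver and the negativity-based rejection below are illustrative stand-ins for the per-sub-problem constraints and rejection criteria of the actual MARS-MD variants:

```python
import numpy as np
from itertools import combinations

def mars_md_like(A, y, n_pick):
    """(1) Split the under-determined problem into sub-problems over
    column (material) subsets; (2) solve each by least squares;
    (3) reject solutions with negative concentrations and keep the
    smallest residual. Returns (columns, concentrations) or None."""
    best, best_res = None, np.inf
    for cols in combinations(range(A.shape[1]), n_pick):
        sol, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        if (sol < 0).any():                      # rejection criterion
            continue
        res = np.linalg.norm(A[:, cols] @ sol - y)
        if res < best_res:
            best_res, best = res, (cols, sol)
    return best

# 3 energy bins, 4 candidate materials: under-determined overall
A = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 3.0],
              [1.0, 1.0, 0.0, 2.0]])
y = A[:, [0, 2]] @ np.array([1.0, 2.0])   # truth: materials 0 and 2
cols, conc = mars_md_like(A, y, n_pick=2)
```

    By construction the accepted solution is sparse in the material domain, which is the property the abstract highlights for reducing signal crossover between material images.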

  15. Singular value decomposition based feature extraction technique for physiological signal analysis.

    PubMed

    Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C

    2012-06-01

    Multiscale entropy (MSE) is one of the popular techniques to calculate and describe the complexity of a physiological signal. Many studies use this approach to detect changes in the physiological conditions of the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted in place of MSE to extract the features of physiological signals, and the support vector machine (SVM) is adopted to classify the different physiological states. A test data set from the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal attained a classification accuracy of 89.157%, higher than that obtained using the MSE value (71.084%). The results show the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could be used as a reference for doctors in the diagnosis of congestive heart failure (CHF).
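
    A minimal sketch of SVD-based feature extraction: sliding windows of the signal form a trajectory matrix whose leading singular values serve as features. The window length, feature count and toy signals are assumptions of the example; in the paper an SVM is then trained on such features:

```python
import numpy as np

def svd_features(signal, win=32, k=3):
    """Embed a 1-D signal into a trajectory matrix of sliding windows and
    keep the k leading singular values as features."""
    X = np.stack([signal[i:i + win] for i in range(len(signal) - win)])
    return np.linalg.svd(X, compute_uv=False)[:k]

# two toy "physiological states": a clean rhythm vs. the same rhythm + noise
rng = np.random.default_rng(3)
t = np.arange(512)
clean = np.sin(2 * np.pi * t / 40)
noisy = clean + 0.8 * rng.normal(size=t.size)
f_clean, f_noisy = svd_features(clean), svd_features(noisy)
```

    For the pure sinusoid the trajectory matrix has rank 2, so the third singular value is essentially zero, while noise inflates the whole spectrum; that separation is what the classifier exploits.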

  16. Troposphere-stratosphere (surface-55 km) monthly general circulation statistics for the Northern Hemisphere-four year averages

    NASA Technical Reports Server (NTRS)

    Wu, M. F.; Geller, M. A.; Olson, J. G.; Gelman, M. E.

    1984-01-01

    This report presents four-year averages of monthly mean Northern Hemisphere general circulation statistics for the period from 1 December 1978 through 30 November 1982. Computations start with daily maps of temperature for 18 pressure levels between 1000 and 0.4 mb that were supplied by NOAA/NMC. Geopotential height and geostrophic wind are constructed using the hydrostatic and geostrophic formulae. Fields presented in this report are zonally averaged temperature, mean zonal wind, and amplitude and phase of the planetary waves in geopotential height with zonal wavenumbers 1-3. The northward fluxes of heat and eastward momentum by the standing and transient eddies are given, along with their wavenumber decompositions, as are the Eliassen-Palm flux propagation vectors and divergences for the standing and transient eddies, also with their wavenumber decompositions. Large annual and interannual variations are found in each quantity, especially in the stratosphere, in accordance with changes in planetary wave activity. The results are shown in both graphic and tabular form.

  17. Algebraic and radical potential fields. Stability domains in coordinate and parametric space

    NASA Astrophysics Data System (ADS)

    Uteshev, Alexei Yu.

    2018-05-01

    A dynamical system dX/dt = F(X; A) is treated, where F(X; A) is a polynomial (or, more generally, radical-containing) function of the vector of state variables X ∈ ℝn and the vector of parameters A ∈ ℝm. We look for stability domains in both spaces, i.e. (a) a domain ℙ ⊂ ℝm such that for any parameter vector specialization A ∈ ℙ there exists a stable equilibrium of the dynamical system, and (b) a domain 𝕊 ⊂ ℝn such that any point X* ∈ 𝕊 can be made a stable equilibrium by a suitable specialization of the parameter vector A.
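
    For a fixed parameter vector A, membership of a point in the stability domain reduces to the standard linearization test: F vanishes there and the Jacobian is Hurwitz (all eigenvalues have negative real part). A two-dimensional linear toy system illustrates this; the specific F and J are assumptions of the example:

```python
import numpy as np

def is_stable_equilibrium(F, J, X, tol=1e-8):
    """X is an asymptotically stable equilibrium of dX/dt = F(X) when
    F(X) ~ 0 and every eigenvalue of J(X) has negative real part."""
    return np.allclose(F(X), 0.0, atol=tol) and \
        np.all(np.linalg.eigvals(J(X)).real < 0)

# toy system with parameter a: x' = a*x - y, y' = x + a*y
a = -0.5
F = lambda X: np.array([a * X[0] - X[1], X[0] + a * X[1]])
J = lambda X: np.array([[a, -1.0], [1.0, a]])
# eigenvalues are a +/- i, so the origin is stable exactly when a < 0
```

    Sweeping a over a grid with this test traces out the parametric stability domain ℙ for the toy system.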

  18. Enhanced secure 4-D modulation space optical multi-carrier system based on joint constellation and Stokes vector scrambling.

    PubMed

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2018-03-19

    This paper proposes and demonstrates an enhanced secure 4-D modulation optical generalized filter bank multi-carrier (GFBMC) system based on joint constellation and Stokes vector scrambling. The constellation and Stokes vectors are scrambled using different scrambling parameters. A multi-scroll Chua's circuit map is adopted as the chaotic model. A large secure key space can be obtained due to the multi-scroll attractors and the independent operability of subcarriers. A 40.32 Gb/s encrypted optical GFBMC signal with 128 parallel subcarriers is successfully demonstrated in the experiment. The results show good resistance against illegal receivers and indicate a potential way forward for future secure optical multi-carrier systems.

  19. Rotational-path decomposition based recursive planning for spacecraft attitude reorientation

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying

    2018-02-01

    Spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, the constrained spacecraft reorientation planning problem is very difficult to solve. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. A uniform pointing-constraint-ignored attitude rotation planning process is designed to solve all rotations without considering pointing constraints. The whole path is then checked node by node. If any pointing constraint is violated, the nearest-critical-increment approach is used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planned path of each subdivision may still violate pointing constraints, multiple decompositions may be needed, and the reorientation planning is therefore designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method, which has been successfully applied onboard the two SPARK microsatellites, developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016, to solve the constrained attitude reorientation planning problem.

  20. A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, C. Kristopher; Hauck, Cory D.

    In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.
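
    The reduction to a smaller linear system on subdomain-boundary unknowns is a Schur-complement construction. A dense toy version, with direct solves standing in for the sweeps and GMRES of the paper, looks like this:

```python
import numpy as np

def schur_interface_solve(A, interior, boundary, b):
    """Eliminate interior unknowns: solve the Schur complement system
    S x_b = b_b - A_bi A_ii^{-1} b_i, with S = A_bb - A_bi A_ii^{-1} A_ib,
    then back-substitute for the interior unknowns."""
    Aii = A[np.ix_(interior, interior)]
    Aib = A[np.ix_(interior, boundary)]
    Abi = A[np.ix_(boundary, interior)]
    Abb = A[np.ix_(boundary, boundary)]
    Aii_inv_Aib = np.linalg.solve(Aii, Aib)
    Aii_inv_bi = np.linalg.solve(Aii, b[interior])
    S = Abb - Abi @ Aii_inv_Aib
    xb = np.linalg.solve(S, b[boundary] - Abi @ Aii_inv_bi)
    xi = Aii_inv_bi - Aii_inv_Aib @ xb
    x = np.empty(len(b))
    x[interior] = xi
    x[boundary] = xb
    return x

rng = np.random.default_rng(5)
M = rng.normal(size=(8, 8))
A = M @ M.T + 8.0 * np.eye(8)      # a well-conditioned SPD test matrix
b = np.arange(1.0, 9.0)
interior = np.arange(6)            # unknowns interior to the subdomains
boundary = np.array([6, 7])        # unknowns on the subdomain interface
x = schur_interface_solve(A, interior, boundary, b)
```

    In the paper the interior solves are performed matrix-free by transport sweeps, and the interface system S is solved with GMRES rather than a direct factorization.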

  1. A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework

    DOE PAGES

    Garrett, C. Kristopher; Hauck, Cory D.

    2018-04-05

    In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.

  2. Regularized estimation of Euler pole parameters

    NASA Astrophysics Data System (ADS)

    Aktuğ, Bahadir; Yildirim, Ömer

    2013-07-01

    Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, the Euler parameters of several relatively small plates have been determined from velocities derived from space-geodetic observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method consisting of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration is introduced, and a regularized estimation method specifically tailored to estimating the Euler vector is presented. The results show that the proposed method outperforms least squares estimation in terms of mean squared error.
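
    The underlying observation equation is v = ω × r at each site; stacking its matrix form gives the least-squares problem whose poor conditioning under limited geographic coverage motivates the regularization. A toy sketch with synthetic sites and plain (unregularized) least squares:

```python
import numpy as np

def skew(r):
    """Matrix form of the cross product: skew(r) @ w == np.cross(r, w)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def estimate_euler_vector(sites, velocities):
    """Least-squares Euler vector w from site positions and velocities,
    stacking v_i = w x r_i = -skew(r_i) @ w over all sites."""
    G = np.vstack([-skew(r) for r in sites])
    d = np.concatenate(velocities)
    w, *_ = np.linalg.lstsq(G, d, rcond=None)
    return w

# synthetic plate motion: recover a known rotation vector exactly
rng = np.random.default_rng(4)
w_true = np.array([0.1, -0.2, 0.3])
sites = [r / np.linalg.norm(r) for r in rng.normal(size=(5, 3))]
vels = [np.cross(w_true, r) for r in sites]
w_hat = estimate_euler_vector(sites, vels)
```

    With sites clustered in a small region, GᵀG becomes nearly singular, and a regularized solve such as np.linalg.solve(G.T @ G + k * np.eye(3), G.T @ d) is the natural remedy, which is the situation the paper's tailored regularization addresses.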

  3. Terrorism/Criminalogy/Sociology via Magnetism-Hamiltonian ``Models''?!: Black Swans; What Secrets Lie Buried in Magnetism?; ``Magnetism Will Conquer the Universe?''(Charles Middleton, aka ``His Imperial Majesty The Emperior Ming `The Merciless!!!''

    NASA Astrophysics Data System (ADS)

    Carrott, Anthony; Siegel, Edward Carl-Ludwig; Hoover, John-Edgar; Ness, Elliott

    2013-03-01

    Terrorism/Criminology/Sociology: non-Linear applied-mathematician (``nose-to-the grindstone / ``gearheadism'') ''modelers'': Worden, Short, ...criminologists/counter-terrorists/sociologists confront [SIAM Conf. on Nonlinearity, Seattle(12); Canadian Sociology Conf., Burnaby(12)]. ``The `Sins' of the Fathers Visited Upon the Sons'': Zeno vs Ising vs Heisenberg vs Stoner vs Hubbard vs Siegel ''SODHM''(But NO Y!!!) vs ...??? Magnetism in turn is itself confronted BY MAGNETISM, via relatively magnetism/metal-insulator conductivity / percolation-phase-transitions critical-phenomena -illiterate non-linear applied-mathematician (nose-to-the-grindstone/ ``gearheadism'')''modelers''. What Secrets Lie Buried in Magnetism?; ``Magnetism Will Conquer the Universe!!!''[Charles Middleton, aka ``His Imperial Majesty The Emperior Ming `The Merciless!!!']'' magnetism-Hamiltonian phase-transitions percolation-``models''!: Zeno(~2350 BCE) to Peter the Pilgrim(1150) to Gilbert(1600) to Faraday(1815-1820) to Tate (1870-1880) to Ewing(1882) hysteresis to Barkhausen(1885) to Curie(1895)-Weiss(1895) to Ising-Lenz(r-space/Localized-Scalar/ Discrete/1911) to Heisenberg(r-space/localized-vector/discrete/1927) to Priesich(1935) to Stoner (electron/k-space/ itinerant-vector/discrete/39) to Stoner-Wohlfarth (technical-magnetism hysteresis /r-space/ itinerant-vector/ discrete/48) to Hubbard-Longuet-Higgins (k-space versus r-space/

  4. Space Science

    NASA Image and Video Library

    1990-10-01

    Using the Solar Vector Magnetograph, a solar observation facility at NASA's Marshall Space Flight Center (MSFC), scientists from the National Space Science and Technology Center (NSSTC) in Huntsville, Alabama, are monitoring the explosive potential of magnetic areas of the Sun. This effort could someday lead to better prediction of severe space weather, a phenomenon that occurs when blasts of particles and magnetic fields from the Sun impact the magnetosphere, the magnetic bubble around the Earth. When massive solar explosions, known as coronal mass ejections, blast through the Sun's outer atmosphere and plow toward Earth at speeds of thousands of miles per second, the resulting effects can be harmful to communication satellites and astronauts outside the Earth's magnetosphere. Like severe weather on Earth, severe space weather can be costly. On the ground, the magnetic storm wrought by these solar particles can knock out electric power. The researchers from MSFC and NSSTC's solar physics group develop instruments for measuring magnetic fields on the Sun. With these instruments, the group studies the origin, structure, and evolution of the solar magnetic field and the impact it has on Earth's space environment. This photograph shows the Solar Vector Magnetograph and Dr. Mona Hagyard of MSFC, the director of the observatory who leads the development, operation and research program of the Solar Vector Magnetograph.

  5. The organization of conspecific face space in nonhuman primates

    PubMed Central

    Parr, Lisa A.; Taubert, Jessica; Little, Anthony C.; Hancock, Peter J. B.

    2013-01-01

    Humans and chimpanzees demonstrate numerous cognitive specializations for processing faces, but comparative studies with monkeys suggest that these may be the result of recent evolutionary adaptations. The present study utilized the novel approach of face space, a powerful theoretical framework used to understand the representation of face identity in humans, to further explore species differences in face processing. According to the theory, faces are represented by vectors in a multidimensional space, the centre of which is defined by an average face. Each dimension codes features important for describing a face’s identity, and vector length codes the feature’s distinctiveness. Chimpanzees and rhesus monkeys discriminated male and female conspecifics’ faces, rated by humans for their distinctiveness, using a computerized task. Multidimensional scaling analyses showed that the organization of face space was similar between humans and chimpanzees. Distinctive faces had the longest vectors and were the easiest for chimpanzees to discriminate. In contrast, distinctiveness did not correlate with the performance of rhesus monkeys. The feature dimensions for each species’ face space were visualized and described using morphing techniques. These results confirm species differences in the perceptual representation of conspecific faces, which are discussed within an evolutionary framework. PMID:22670823
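    The norm-based face-space model above lends itself to a tiny numeric sketch. The sketch below assumes faces are already encoded as feature vectors (the function name and the toy encoding are illustrative, not from the study): the centre of the space is the average face, each face is a deviation vector from that centre, and distinctiveness is the length of that vector.

```python
import numpy as np

def face_space(faces):
    """Embed encoded faces in a norm-based face space.

    faces: (n_faces, n_features) array of feature vectors.
    Returns the average face (the centre of the space), each face's
    deviation vector from that centre, and each face's distinctiveness
    (the length of its deviation vector).
    """
    faces = np.asarray(faces, dtype=float)
    average_face = faces.mean(axis=0)
    deviations = faces - average_face
    distinctiveness = np.linalg.norm(deviations, axis=1)
    return average_face, deviations, distinctiveness
```

    In this picture, typical faces sit near the centre while distinctive faces carry the longest vectors, which is the property that correlated with chimpanzee (but not rhesus) discrimination performance above.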

  6. Some Applications Of Semigroups And Computer Algebra In Discrete Structures

    NASA Astrophysics Data System (ADS)

    Bijev, G.

    2009-11-01

    An algebraic approach to the pseudoinverse generalization problem in Boolean vector spaces is used. A map (p) is defined, which is similar to an orthogonal projection in linear vector spaces. Some other important maps, with properties similar to those of the generalized inverses (pseudoinverses) of linear transformations and of the matrices corresponding to them, are also defined and investigated. Let Ax = b be an equation with matrix A and vectors x and b Boolean. Stochastic experiments for solving the equation, which involve the maps defined above and use computer algebra methods, have been carried out. As a result, the Hamming distance between the vectors Ax = p(b) and b is equal or close to the least possible. We also share our experience in using computer algebra systems for teaching and research in discrete mathematics and linear algebra. Some examples of computations with binary relations using Maple are given.
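    The least-Hamming-distance criterion can be made concrete with a brute-force sketch over the Boolean semiring, where (Ax)_i = OR_j (A_ij AND x_j). Exhaustive search here stands in for the paper's algebraic maps (the map p and the pseudoinverse-like maps are not reproduced):

```python
import itertools
import numpy as np

def bool_matvec(A, x):
    """Boolean matrix-vector product: (Ax)_i = OR_j (A_ij AND x_j)."""
    return np.any(A & x, axis=1).astype(int)

def best_bool_solution(A, b):
    """Exhaustively find x minimising the Hamming distance between Ax and b."""
    A, b = np.asarray(A), np.asarray(b)
    n = A.shape[1]
    best_x, best_d = None, b.size + 1
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        d = int(np.sum(bool_matvec(A, x) != b))
        if d < best_d:
            best_x, best_d = x, d
    return best_x, best_d
```

    For consistent systems the minimum distance is zero; otherwise the search returns a best approximate solution, the role played by the pseudoinverse-like maps in the paper.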

  7. Method and system for efficient video compression with low-complexity encoder

    NASA Technical Reports Server (NTRS)

    Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)

    2012-01-01

    Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of: converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits, and the encoder statistics.

  8. Short-Circuit Fault Detection and Classification Using Empirical Wavelet Transform and Local Energy for Electric Transmission Line.

    PubMed

    Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin

    2017-09-16

    In order to improve the classification accuracy of recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is used to process the original short-circuit fault signals from photoelectric voltage transformers, and the amplitude modulated-frequency modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Subsequently, the fault occurrence time is detected according to the modulus maxima of the intrinsic mode function (IMF₂) of the three-phase voltage signals processed by EWT. After this, the feature vectors are constructed by calculating the LE of the fundamental frequency based on the three-phase voltage signals of one period after the fault occurred. Finally, the support vector machine (SVM) classifier constructed with the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN methods, the new method using EWT has a better ability to localize frequency content in time. The difference in the characteristics of the energy distribution in the time domain between different types of short-circuit faults can be captured by the LE feature vectors. Together, simulations and experiments on real signals demonstrate the validity and effectiveness of the new approach.
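    As a rough illustration of the feature-construction step, the sketch below computes one local-energy value per phase over a single fundamental period and assigns the resulting feature vector to the nearest class centroid. The centroid classifier is a dependency-free stand-in for the SVM, and the direct one-period energy is a simplification of the EWT-based extraction; the sampling rate, 50 Hz fundamental, and class labels are illustrative assumptions.

```python
import numpy as np

def local_energy_features(v_abc, fs, f0=50.0):
    """Local-energy feature vector from three-phase fault voltages.

    v_abc: (3, n_samples) array, one row per phase, starting at the
    detected fault instant.  Each phase's feature is the energy of the
    samples spanning one fundamental period -- a simplification of the
    paper's EWT-based extraction.
    """
    n_period = int(round(fs / f0))          # samples per fundamental period
    window = v_abc[:, :n_period]
    return np.sum(window ** 2, axis=1)      # one energy value per phase

def nearest_centroid(feature, centroids):
    """Assign a feature vector to the closest class centroid
    (a dependency-free stand-in for the paper's SVM classifier)."""
    return min(centroids, key=lambda k: np.linalg.norm(feature - centroids[k]))
```

    A phase-to-ground fault concentrates energy in the faulted phase, so the three-element feature vector separates fault types by their time-domain energy distribution, the property exploited above.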

  9. Short-Circuit Fault Detection and Classification Using Empirical Wavelet Transform and Local Energy for Electric Transmission Line

    PubMed Central

    Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin

    2017-01-01

    In order to improve the classification accuracy of recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is used to process the original short-circuit fault signals from photoelectric voltage transformers, and the amplitude modulated-frequency modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Subsequently, the fault occurrence time is detected according to the modulus maxima of the intrinsic mode function (IMF2) of the three-phase voltage signals processed by EWT. After this, the feature vectors are constructed by calculating the LE of the fundamental frequency based on the three-phase voltage signals of one period after the fault occurred. Finally, the support vector machine (SVM) classifier constructed with the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN methods, the new method using EWT has a better ability to localize frequency content in time. The difference in the characteristics of the energy distribution in the time domain between different types of short-circuit faults can be captured by the LE feature vectors. Together, simulations and experiments on real signals demonstrate the validity and effectiveness of the new approach. PMID:28926953

  10. Inferring Lower Boundary Driving Conditions Using Vector Magnetic Field Observations

    NASA Technical Reports Server (NTRS)

    Schuck, Peter W.; Linton, Mark; Leake, James; MacNeice, Peter; Allred, Joel

    2012-01-01

    Low-beta coronal MHD simulations of realistic CME events require the detailed specification of the magnetic fields, velocities, densities, temperatures, etc., in the low corona. Presently, the most accurate estimates of solar vector magnetic fields are made in the high-beta photosphere. Several techniques have been developed that provide accurate estimates of the associated photospheric plasma velocities, such as the Differential Affine Velocity Estimator for Vector Magnetograms and the Poloidal/Toroidal Decomposition. Nominally, these velocities are consistent with the evolution of the radial magnetic field. To evolve the tangential magnetic field, radial gradients must be specified. In addition to estimating the photospheric vector magnetic and velocity fields, a further challenge involves incorporating these fields into an MHD simulation. The simulation boundary must be driven, consistent with the numerical boundary equations, with the goal of accurately reproducing the observed magnetic fields and estimated velocities at some height within the simulation. Even if this goal is achieved, many unanswered questions remain. How can the photospheric magnetic fields and velocities be propagated to the low corona through the transition region? At what cadence must we observe the photosphere to realistically simulate the corona? How do we model the magnetic fields and plasma velocities in the quiet Sun? How sensitive are the solutions to other unknowns that must be specified, such as the global solar magnetic field, and the photospheric temperature and density?

  11. Hydrogen Peroxide - Material Compatibility Studied by Microcalorimetry

    NASA Technical Reports Server (NTRS)

    Homung, Steven D.; Davis, Dennis D.; Baker, David; Popp, Christopher G.

    2003-01-01

    Environmental and toxicity concerns with current hypergolic propellants have led to a renewed interest in propellant-grade hydrogen peroxide (HP) for propellant applications. Storability and stability have always been issues with HP. Contamination or contact of HP with metallic surfaces may cause decomposition, which can result in the evolution of heat and gas, leading to increased pressure or thermal hazards. The NASA Johnson Space Center White Sands Test Facility has developed a technique to monitor the decomposition of hydrogen peroxide at temperatures ranging from 25 to 60 C. Using isothermal microcalorimetry, we have measured decomposition rates at the picomole/s/g level, showing the catalytic effects of materials of construction. In this paper we present the results of testing with Class 1 and 2 materials in 90 percent hydrogen peroxide.

  12. A biorthogonal decomposition for the identification and simulation of non-stationary and non-Gaussian random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zentner, I.; Ferré, G., E-mail: gregoire.ferre@ponts.org; Poirion, F.

    2016-06-01

    In this paper, a new method for the identification and simulation of non-Gaussian and non-stationary stochastic fields given a database is proposed. It is based on two successive biorthogonal decompositions aiming at representing spatio-temporal stochastic fields. The proposed double expansion makes it possible to build the model even for large-size problems by separating the time, space and random parts of the field. A Gaussian kernel estimator is used to simulate the high-dimensional set of random variables appearing in the decomposition. The capability of the method to reproduce the non-stationary and non-Gaussian features of random phenomena is illustrated by applications to earthquakes (seismic ground motion) and sea states (wave heights).
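    The separation of space, time and random parts can be illustrated, in a much-simplified form, by a truncated singular value decomposition of a single space-time realisation (a POD-like expansion; the paper's double biorthogonal expansion and kernel-based simulation step are not reproduced here):

```python
import numpy as np

def biorthogonal_modes(field, n_modes):
    """Truncated biorthogonal (POD-like) expansion of a space-time field.

    field: (n_space, n_time) array holding one realisation of the field.
    Returns spatial modes, temporal modes, and the rank-n reconstruction
    field ~ sum_k s_k * u_k(x) * v_k(t).
    """
    U, s, Vt = np.linalg.svd(field, full_matrices=False)
    Uk, sk, Vk = U[:, :n_modes], s[:n_modes], Vt[:n_modes]
    recon = (Uk * sk) @ Vk          # rank-n_modes reconstruction
    return Uk, Vk, recon
```

    The spatial and temporal modes come out mutually orthonormal, which is the "biorthogonal" structure; in the paper the expansion coefficients over many realisations are then treated as the random part of the field.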

  13. Quantitative tissue polarimetry using polar decomposition of 3 x 3 Mueller matrix

    NASA Astrophysics Data System (ADS)

    Swami, M. K.; Manhas, S.; Buddhiwant, P.; Ghosh, N.; Uppal, A.; Gupta, P. K.

    2007-05-01

    Polarization properties of any optical system are completely described by a sixteen-element (4 x 4) matrix called the Mueller matrix, which transforms the Stokes vector describing the polarization of the incident light into the Stokes vector of the scattered light. Measurement of all the elements of the matrix requires a minimum of sixteen measurements involving both linearly and circularly polarized light. However, for many diagnostic applications, it would be useful if all the polarization parameters of the medium (depolarization (Δ), differential attenuation of two orthogonal polarizations, that is, diattenuation (d), and differential phase retardance of two orthogonal polarizations, i.e., retardance (δ)) could be quantified with linear polarization measurements alone. In this paper we show that for a turbid medium like biological tissue, where the depolarization of linearly polarized light arises primarily from the randomization of the field vector's direction by multiple scattering, the polarization parameters of the medium can be obtained from the nine Mueller matrix elements involving linear polarization measurements only. Use of the approach for measurement of the polarization parameters (Δ, d, and δ) of normal and malignant (squamous cell carcinoma) tissues resected from the human oral cavity is presented.

  14. Applications of Aerodynamic Forces for Spacecraft Orbit Maneuverability in Operationally Responsive Space and Space Reconstitution Needs

    DTIC Science & Technology

    2012-03-01

    Fragments only: the recoverable content defines re, the equatorial radius of the Earth, Pn, the Legendre polynomial, and L, the geocentric latitude, and describes MATLAB routines for computing atmospheric density at an altitude above an oblate Earth given the position vector in the geocentric equatorial frame, including geodetic/geocentric latitude conversions.

  15. Pure state consciousness and its local reduction to neuronal space

    NASA Astrophysics Data System (ADS)

    Duggins, A. J.

    2013-01-01

    The single neuronal state can be represented as a vector in a complex space, spanned by an orthonormal basis of integer spike counts. In this model a scalar element of experience is associated with the instantaneous firing rate of a single sensory neuron over repeated stimulus presentations. Here the model is extended to composite neural systems that are tensor products of single neuronal vector spaces. Depiction of the mental state as a vector on this tensor product space is intended to capture the unity of consciousness. The density operator is introduced as its local reduction to the single neuron level, from which the firing rate can again be derived as the objective correlate of a subjective element. However, the relational structure of perceptual experience only emerges when the non-local mental state is considered. A metric of phenomenal proximity between neuronal elements of experience is proposed, based on the cross-correlation function of neurophysiology, but constrained by the association of theoretical extremes of correlation/anticorrelation in inseparable 2-neuron states with identical and opponent elements respectively.
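    The local reduction described above is a partial trace over the other tensor factor. A minimal two-neuron sketch (a spike-count basis of dimension 2 per neuron is an illustrative choice):

```python
import numpy as np

def reduced_density(psi, d1, d2, keep=0):
    """Reduce a composite state on a d1*d2 tensor-product space to the
    density operator of one neuron via a partial trace."""
    psi = np.asarray(psi, dtype=complex).reshape(d1, d2)
    if keep == 0:
        return psi @ psi.conj().T       # trace out neuron 2
    return psi.T @ psi.conj()           # trace out neuron 1

def expected_rate(rho):
    """Objective correlate of the subjective element: the expected
    spike count Tr(rho N), with N = diag(0, 1, ..., d-1)."""
    return float(np.real(np.trace(rho @ np.diag(np.arange(rho.shape[0])))))
```

    For a maximally correlated state such as (|00> + |11>)/sqrt(2), each single-neuron reduction is maximally mixed: the local firing statistics alone carry no trace of the correlation, which only appears in the non-local composite state, as the abstract argues.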

  16. Paving the way to a full chip gate level double patterning application

    NASA Astrophysics Data System (ADS)

    Haffner, Henning; Meiring, Jason; Baum, Zachary; Halle, Scott

    2007-10-01

    Double patterning lithography processes can offer significant yield enhancement for challenging circuit designs. Many decomposition (i.e. the process of dividing the layout design into first and second exposures) techniques are possible, but the focus of this paper is on the use of a secondary "cut" mask to trim away extraneous features left from the first exposure. This approach has the advantage that each exposure only needs to support a subset of critical features (e.g. dense lines with the first exposure, isolated spaces with the second one). The extraneous features ("printing assist features" or PrAFs) are designed to support the process window of critical features much like the role of the subresolution assist features (SRAFs) in conventional processes. However, the printing nature of PrAFs leads to many more design options, and hence a greater process and decomposition parameter exploration space, than are available for SRAFs. A decomposition scheme using PrAFs was developed for a gate level process. A critical driver of the work was to deliver improved across-chip linewidth variation (ACLV) performance versus an optimized single exposure process while providing support for a larger range of critical features. A variety of PrAF techniques were investigated by simulation, with a PrAF scheme similar to standard SRAF rules being chosen as the optimal solution [1]. This paper discusses aspects of the code development for an automated PrAF generation and placement scheme and the subsequent decomposition of a layout into two mask levels. While PrAF placement and decomposition is straightforward for layouts with pitch and orientation restrictions, it becomes rather complex for unrestricted layout styles. Because this higher complexity yields more irregularly shaped PrAFs, mask making becomes another critical driver of the optimum placement and clean-up strategies. Examples are given of how those challenges are met or can be successfully circumvented.
During subsequent decomposition of the PrAF-enhanced layout into two independent mask levels, various geometric decomposition parameters have to be considered. As an example, the removal of PrAFs has to be guaranteed by a minimum required overlap of the cut mask opening past any PrAF edge. It is discussed that process assumptions such as CD tolerances and overlay as well as inter-level relationship ground rules need to be considered to successfully optimize the final decomposition scheme. Furthermore, simulation and experimental results regarding not only ACLV but also across-device linewidth variation (ADLV) are analyzed.

  17. Definition of a parametric form of nonsingular Mueller matrices.

    PubMed

    Devlaminck, Vincent; Terrier, Patrick

    2008-11-01

    The goal of this paper is to propose a mathematical framework to define and analyze a general parametric form of an arbitrary nonsingular Mueller matrix. Starting from previous results about nondepolarizing matrices, we generalize the method to any nonsingular Mueller matrix. We address this problem in a six-dimensional space in order to introduce a transformation group with the same number of degrees of freedom, and we explain why a subset of O(5,1), the orthogonal group associated with six-dimensional Minkowski space, is a physically admissible solution to this question. Generators of this group are used to define possible expressions of an arbitrary nonsingular Mueller matrix. Ultimately, the problem of decomposition of these matrices is addressed, and we point out that the "reverse" and "forward" decomposition concepts recently introduced may be inferred from the formalism we propose.
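    The defining property of such group elements, preservation of the six-dimensional Minkowski metric G = diag(1, -1, -1, -1, -1, -1) in the sense M^T G M = G, is easy to check numerically. A hedged sketch: the closed-form boost below is one exponentiated generator; the paper's full parametrization and the mapping back to Mueller matrices are not reproduced.

```python
import numpy as np

G = np.diag([1.0, -1.0, -1.0, -1.0, -1.0, -1.0])   # 6-D Minkowski metric

def boost(axis, rapidity):
    """Closed-form exponential of the O(5,1) boost generator mixing the
    time-like direction (index 0) with spatial direction `axis` (1-5)."""
    M = np.eye(6)
    c, s = np.cosh(rapidity), np.sinh(rapidity)
    M[0, 0] = M[axis, axis] = c
    M[0, axis] = M[axis, 0] = s
    return M
```

    Products of such boosts with rotations of the five space-like directions again satisfy M^T G M = G, which is how the generators combine to parametrize the group elements in the paper's construction.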

  18. Illustrating dynamical symmetries in classical mechanics: The Laplace-Runge-Lenz vector revisited

    NASA Astrophysics Data System (ADS)

    O'Connell, Ross C.; Jagannathan, Kannan

    2003-03-01

    The inverse square force law admits a conserved vector that lies in the plane of motion. This vector has been associated with the names of Laplace, Runge, and Lenz, among others. Many workers have explored aspects of the symmetry and degeneracy associated with this vector and with analogous dynamical symmetries. We define a conserved dynamical variable α that characterizes the orientation of the orbit in two-dimensional configuration space for the Kepler problem and an analogous variable β for the isotropic harmonic oscillator. This orbit orientation variable is canonically conjugate to the angular momentum component normal to the plane of motion. We explore the canonical one-parameter group of transformations generated by α(β). Because we have an obvious pair of conserved canonically conjugate variables, it is desirable to use them as a coordinate-momentum pair. In terms of these phase space coordinates, the form of the Hamiltonian is nearly trivial because neither member of the pair can occur explicitly in the Hamiltonian. From these considerations we gain a simple picture of dynamics in phase space. The procedure we use is in the spirit of the Hamilton-Jacobi method.
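    Conservation of the Laplace-Runge-Lenz vector for the inverse-square law can be checked numerically. A small sketch with unit mass and force constant k; the leapfrog integrator and initial conditions are illustrative choices, not taken from the paper:

```python
import numpy as np

def lrl_vector(r, v, k=1.0):
    """Laplace-Runge-Lenz vector (unit mass): A = v x L - k r/|r|."""
    L = np.cross(r, v)
    return np.cross(v, L) - k * r / np.linalg.norm(r)

def kepler_step(r, v, dt, k=1.0):
    """One leapfrog (kick-drift-kick) step for the inverse-square force."""
    a = -k * r / np.linalg.norm(r) ** 3
    v = v + 0.5 * dt * a
    r = r + dt * v
    a = -k * r / np.linalg.norm(r) ** 3
    v = v + 0.5 * dt * a
    return r, v
```

    Along the integrated trajectory, A stays fixed in the plane of motion and points along the orbit's major axis, which is exactly the orbit-orientation information carried by the conserved variable α discussed above.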

  19. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not acceptable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has received less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, and it is validated that doing so loses little information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, meaning that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and needs less storage space, especially during testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification.
    The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of EKM-based MKL; (3) this paper adopts Gauss Elimination, an off-the-shelf technique, to generate a basis of the original feature space, which is stable and efficient.
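    The isomorphism property claimed above, that dot products are preserved when feature vectors are re-expressed in the reduced orthonormal subspace, can be sketched with an off-the-shelf orthogonalization (QR factorization here stands in for the paper's Gauss Elimination step):

```python
import numpy as np

def orthonormal_subspace(X):
    """Orthonormal basis Q for the span of the rows of X, plus the
    coordinates of each row in that basis.  QR factorization is used
    here in place of the paper's Gauss Elimination extraction."""
    Q, R = np.linalg.qr(X.T)        # columns of Q span the row space of X
    coords = X @ Q                  # representation in the reduced subspace
    return Q, coords
```

    Because the rows of X lie in the span of Q's columns, Z = XQ satisfies Z Z^T = X X^T: every pairwise dot product survives the reduction, which is the sense in which the subspace is isomorphic to the original feature space.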

  20. Composition, dynamics, and fate of leached dissolved organic matter in terrestrial ecosystems: Results from a decomposition experiment

    USGS Publications Warehouse

    Cleveland, C.C.; Neff, J.C.; Townsend, A.R.; Hood, E.

    2004-01-01

    Fluxes of dissolved organic matter (DOM) are an important vector for the movement of carbon (C) and nutrients both within and between ecosystems. However, although DOM fluxes from throughfall and through litterfall can be large, little is known about the fate of DOM leached from plant canopies, or from the litter layer into the soil horizon. In this study, our objectives were to determine the importance of plant-litter leachate as a vehicle for DOM movement, and to track DOM decomposition [including dissolved organic carbon (DOC) and dissolved organic nitrogen (DON) fractions], as well as DOM chemical and isotopic dynamics, during a long-term laboratory incubation experiment using fresh leaves and litter from several ecosystem types. The water-extractable fraction of organic C was high for all five plant species, as was the biodegradable fraction; in most cases, more than 70% of the initial DOM was decomposed in the first 10 days of the experiment. The chemical composition of the DOM changed as decomposition proceeded, with humic (hydrophobic) fractions becoming relatively more abundant than nonhumic (hydrophilic) fractions over time. However, in spite of proportional changes in humic and nonhumic fractions over time, our data suggest that both fractions are readily decomposed in the absence of physicochemical reactions with soil surfaces. Our data also showed no changes in the δ13C signature of DOM during decomposition, suggesting that isotopic fractionation during DOM uptake is not a significant process. These results suggest that soil microorganisms preferentially decompose more labile organic molecules in the DOM pool, which also tend to be isotopically heavier than more recalcitrant DOM fractions. We believe that the interaction between DOM decomposition dynamics and soil sorption processes contributes to the δ13C enrichment of soil organic matter commonly observed with depth in soil profiles.

  1. Bimolecular Coupling as a Vector for Decomposition of Fast-Initiating Olefin Metathesis Catalysts.

    PubMed

    Bailey, Gwendolyn A; Foscato, Marco; Higman, Carolyn S; Day, Craig S; Jensen, Vidar R; Fogg, Deryn E

    2018-06-06

    The correlation between rapid initiation and rapid decomposition in olefin metathesis is probed for a series of fast-initiating, phosphine-free Ru catalysts: the Hoveyda catalyst HII, RuCl2(L)(=CHC6H4-o-OiPr); the Grela catalyst nG (a derivative of HII with a nitro group para to OiPr); the Piers catalyst PII, [RuCl2(L)(=CHPCy3)]OTf; the third-generation Grubbs catalyst GIII, RuCl2(L)(py)2(=CHPh); and the dianiline catalyst DA, RuCl2(L)(o-dianiline)(=CHPh), in all of which L = H2IMes = N,N'-bis(mesityl)imidazolin-2-ylidene. Prior studies of ethylene metathesis have established that various Ru metathesis catalysts can decompose by β-elimination of propene from the metallacyclobutane intermediate RuCl2(H2IMes)(κ2-C3H6), Ru-2. The present work demonstrates that in metathesis of terminal olefins, β-elimination yields only ca. 25-40% propenes for HII, nG, PII, or DA, and none for GIII. The discrepancy is attributed to competing decomposition via bimolecular coupling of the methylidene intermediate RuCl2(H2IMes)(=CH2), Ru-1. Direct evidence for methylidene coupling is presented via the controlled decomposition of transiently stabilized adducts of Ru-1, RuCl2(H2IMes)Ln(=CH2) (Ln = pyn'; n' = 1, 2, or o-dianiline). These adducts were synthesized by treating in situ-generated metallacyclobutane Ru-2 with pyridine or o-dianiline, and were isolated by precipitation at low temperature (-116 or -78 °C, respectively). On warming, both undergo methylidene coupling, liberating ethylene and forming RuCl2(H2IMes)Ln. A mechanism is proposed based on kinetic studies and molecular-level computational analysis. Bimolecular coupling emerges as an important contributor to the instability of Ru-1, and a potentially major pathway for decomposition of fast-initiating, phosphine-free metathesis catalysts.

  2. Generalized sidelobe canceller beamforming method for ultrasound imaging.

    PubMed

    Wang, Ping; Li, Na; Luo, Han-Wu; Zhu, Yong-Kun; Cui, Shi-Gang

    2017-03-01

    A modified generalized sidelobe canceller (IGSC) algorithm is proposed to enhance the resolution and noise robustness of the traditional generalized sidelobe canceller (GSC) and the coherence-factor-combined method (GSC-CF). In the GSC algorithm, the weighting vector is divided into adaptive and non-adaptive parts, but the non-adaptive part does not block all of the desired signal. In the IGSC algorithm, a modified steering vector is generated by projecting the non-adaptive vector onto the signal space constructed from the covariance matrix of the received data. The blocking matrix is then generated from the orthogonal complement of the modified steering vector, and the weighting vector is updated accordingly. The performance of IGSC was investigated by simulations and experiments. In simulations, IGSC outperformed GSC-CF in spatial resolution by 0.1 mm, with or without noise, as well as in contrast ratio. The proposed IGSC can be further improved by combining it with CF. Experiments on a dataset provided by the University of Michigan also validated the effectiveness of the proposed algorithm.
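    The blocking-matrix construction, columns spanning the orthogonal complement of the steering vector so that the lower GSC branch carries no desired signal, can be sketched as follows (the uniform-linear-array geometry and the SVD-based null space are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def steering_vector(n_elems, theta, spacing=0.5):
    """Narrowband ULA steering vector (element spacing in wavelengths)."""
    k = np.arange(n_elems)
    return np.exp(2j * np.pi * spacing * k * np.sin(theta))

def blocking_matrix(a):
    """Columns spanning the orthogonal complement of steering vector a.
    Signals arriving from the steered direction are annihilated by B^H,
    which is the role of the blocking branch in the GSC."""
    a = a.reshape(-1, 1)
    U, s, Vh = np.linalg.svd(a, full_matrices=True)
    return U[:, 1:]                 # null space of a^H
```

    For an n-element array, B has n-1 columns, so the adaptive part of the weighting vector operates only on the signal-free subspace; the IGSC modification above changes which steering vector this construction is applied to.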

  3. Time-based self-spacing techniques using cockpit display of traffic information during approach to landing in a terminal area vectoring environment

    NASA Technical Reports Server (NTRS)

    Williams, D. H.

    1983-01-01

    A simulation study was undertaken to evaluate two time-based self-spacing techniques for in-trail following during terminal area approach. An electronic traffic display was provided in the weather radarscope location. The displayed self-spacing cues allowed the simulated aircraft to follow and to maintain spacing on another aircraft which was being vectored by air traffic control (ATC) for landing in a high-density terminal area. Separation performance data indicate that the information provided on the traffic display was adequate for the test subjects to accurately follow the approach path of another aircraft without the assistance of ATC. The time-based technique with a constant-delay spacing criterion produced the most satisfactory spacing performance. Pilot comments indicate the workload associated with the self-separation task was very high and that additional spacing command information and/or aircraft autopilot functions would be desirable for operational implementation of the self-spacing task.

  4. Hypercyclic subspaces for Frechet space operators

    NASA Astrophysics Data System (ADS)

    Petersson, Henrik

    2006-07-01

    A continuous linear operator T is hypercyclic if there is an x such that the orbit {T^n x} is dense, and such a vector x is said to be hypercyclic for T. Recent progress shows that it is possible to characterize Banach space operators that have a hypercyclic subspace, i.e., an infinite-dimensional closed subspace consisting, except for zero, of hypercyclic vectors. The following is known to hold: a Banach space operator T has a hypercyclic subspace if there is a sequence (n_i) and an infinite-dimensional closed subspace E such that T is hereditarily hypercyclic for (n_i) and T^{n_i} -> 0 pointwise on E. In this note we extend this result to the setting of Frechet spaces that admit a continuous norm, and study some applications for important function spaces. As an application we also prove that any infinite-dimensional separable Frechet space with a continuous norm admits an operator with a hypercyclic subspace.

  5. A space-efficient quantum computer simulator suitable for high-speed FPGA implementation

    NASA Astrophysics Data System (ADS)

    Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel

    2009-05-01

    Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.
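    One classic form of the space-time tradeoff mentioned above is the Feynman path-sum: a single output amplitude is computed by a depth-first sum over intermediate basis states, using memory linear in the circuit depth instead of the exponential full state vector. A minimal sketch follows; the gate set and encoding are illustrative and unrelated to the paper's FPGA design.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def amplitude(gates, z):
    """Amplitude <z| G_k ... G_1 |0...0> via a depth-first Feynman path sum.

    gates: list of single-qubit entries (U, qubit) with U a 2x2 matrix,
    or ('CNOT', control, target).  Memory use is linear in the circuit
    depth; no 2**n state vector is ever stored.
    """
    def amp(k, state):
        if k == 0:
            return 1.0 + 0j if state == 0 else 0.0j
        gate = gates[k - 1]
        if isinstance(gate[0], str):           # 'CNOT' is a permutation,
            _, c, t = gate                     # so exactly one predecessor
            prev = state ^ (1 << t) if (state >> c) & 1 else state
            return amp(k - 1, prev)
        U, q = gate
        bit = (state >> q) & 1
        return sum(U[bit, b] * amp(k - 1, (state & ~(1 << q)) | (b << q))
                   for b in (0, 1))
    return amp(len(gates), z)
```

    Runtime grows exponentially with the number of branching gates, which is exactly the tradeoff: the simulator spends time to avoid the worst-case exponential space of the state-vector approach.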

  6. Wave-filter-based approach for generation of a quiet space in a rectangular cavity

    NASA Astrophysics Data System (ADS)

    Iwamoto, Hiroyuki; Tanaka, Nobuo; Sanada, Akira

    2018-02-01

    This paper is concerned with the generation of a quiet space in a rectangular cavity using active wave control methodology. It is the purpose of this paper to present a wave filtering method for a rectangular cavity using multiple microphones, together with its application to an adaptive feedforward control system. First, the transfer matrix method is introduced for describing the wave dynamics of the sound field, and feedforward control laws for eliminating transmitted waves are derived. Furthermore, some numerical simulations are conducted that show the best possible result of active wave control. This is followed by the derivation of the wave filtering equations, which reveal the structure of the wave filter. It is clarified that the wave filter consists of three parts: a modal group filter, a rearrangement filter, and a wave decomposition filter. Next, from a numerical point of view, the accuracy of the wave decomposition filter, which is expressed as a function of frequency, is investigated using condition numbers. Finally, an experiment on the adaptive feedforward control system using the wave filter is carried out, demonstrating that a quiet space is generated in the target space by the proposed method.

  7. Left ventricular hypertrophy index based on a combination of frontal and transverse planes in the ECG and VCG: Diagnostic utility of cardiac vectors

    NASA Astrophysics Data System (ADS)

    Bonomini, Maria Paula; Juan Ingallina, Fernando; Barone, Valeria; Antonucci, Ricardo; Valentinuzzi, Max; Arini, Pedro David

    2016-04-01

    The changes that left ventricular hypertrophy (LVH) induces in depolarization and repolarization vectors are well known. We analyzed the performance of the electrocardiographic and vectorcardiographic transverse planes (TP in the ECG and XZ in the VCG) and frontal planes (FP in the ECG and XY in the VCG) to discriminate LVH patients from control subjects. In an age-balanced set of 58 patients, the directions and amplitudes of QRS-complexes and T-wave vectors were studied. The repolarization vector significantly decreased in modulus from controls to LVH in the transverse plane (TP: 0.45±0.17mV vs. 0.24±0.13mV, p<0.0005; XZ: 0.43±0.16mV vs. 0.26±0.11mV, p<0.005), while the depolarization vector significantly changed in angle in the electrocardiographic frontal plane (Controls vs. LVH, FP: 48.24±33.66° vs. 46.84±35.44°, p<0.005; XY: 20.28±35.20° vs. 19.35±12.31°, NS). Several LVH indexes were proposed combining such information in both ECG and VCG spaces. A subset of all those indexes with AUC values greater than 0.7 was further studied. This subset comprised four indexes, with three of them belonging to the ECG space. Two out of the four indexes presented the best ROC curves (AUC values: 0.78 and 0.75, respectively). One index belonged to the ECG space and the other one to the VCG space. Both indexes showed a sensitivity of 86% and a specificity of 70%. In conclusion, the proposed indexes can favorably complement LVH diagnosis.

  8. Covariantized vector Galileons

    NASA Astrophysics Data System (ADS)

    Hull, Matthew; Koyama, Kazuya; Tasinato, Gianmassimo

    2016-03-01

    Vector Galileons are ghost-free systems containing higher derivative interactions of vector fields. They break the vector gauge symmetry, and the dynamics of the longitudinal vector polarizations acquire a Galileon symmetry in an appropriate decoupling limit in Minkowski space. Using an Arnowitt-Deser-Misner approach, we carefully reconsider the coupling with gravity of vector Galileons, with the aim of studying the necessary conditions to avoid the propagation of ghosts. We develop arguments that put on a more solid footing the results previously obtained in the literature. Moreover, working in analogy with the scalar counterpart, we find indications for the existence of a "beyond Horndeski" theory involving vector degrees of freedom that avoids the propagation of ghosts thanks to secondary constraints. In addition, we analyze a Higgs mechanism for generating vector Galileons through spontaneous symmetry breaking, and we present its consistent covariantization.

  9. Closedness of orbits in a space with SU(2) Poisson structure

    NASA Astrophysics Data System (ADS)

    Fatollahi, Amir H.; Shariati, Ahmad; Khorrami, Mohammad

    2014-06-01

    The closedness of orbits of central forces is addressed in a three-dimensional space in which the Poisson bracket among the coordinates is that of the SU(2) Lie algebra. In particular it is shown that among problems with spherically symmetric potential energies, it is only the Kepler problem for which all bounded orbits are closed. In analogy with the case of the ordinary space, a conserved vector (apart from the angular momentum) is explicitly constructed, which is responsible for the orbits being closed. This is the analog of the Laplace-Runge-Lenz vector. The algebra of the constants of the motion is also worked out.

  10. Structural aspects of Hamilton-Jacobi theory

    NASA Astrophysics Data System (ADS)

    Cariñena, J. F.; Gràcia, X.; Marmo, G.; Martínez, E.; Muñoz-Lecanda, M. C.; Román-Roy, N.

    2016-12-01

    In our previous papers [J. F. Cariñena, X. Gràcia, G. Marmo, E. Martínez, M. C. Muñoz-Lecanda and N. Román-Roy, Geometric Hamilton-Jacobi theory, Int. J. Geom. Meth. Mod. Phys. 3 (2006) 1417-1458; Geometric Hamilton-Jacobi theory for nonholonomic dynamical systems, Int. J. Geom. Meth. Mod. Phys. 7 (2010) 431-454] we showed that the Hamilton-Jacobi problem can be regarded as a way to describe a given dynamics on a phase space manifold in terms of a family of dynamics on a lower-dimensional manifold. We also showed how constants of the motion help to solve the Hamilton-Jacobi equation. Here we want to delve into this interpretation by considering the most general case: a dynamical system on a manifold that is described in terms of a family of dynamics (slicing vector fields) on lower-dimensional manifolds. We identify the relevant geometric structures that lead from this decomposition of the dynamics to the classical Hamilton-Jacobi theory, by considering special cases like fibered manifolds and Hamiltonian dynamics, in the symplectic framework and the Poisson one. We also show how a set of functions on a tangent bundle can determine a second-order dynamics for which they are constants of the motion.

  11. The Forest Method as a New Parallel Tree Method with the Sectional Voronoi Tessellation

    NASA Astrophysics Data System (ADS)

    Yahagi, Hideki; Mori, Masao; Yoshii, Yuzuru

    1999-09-01

    We have developed a new parallel tree method which will be called the forest method hereafter. This new method uses the sectional Voronoi tessellation (SVT) for the domain decomposition. The SVT decomposes a whole space into polyhedra and allows their flat borders to move by assigning different weights. The forest method determines these weights based on the load balancing among processors by means of the overload diffusion (OLD). Moreover, since all the borders are flat, each processor can collect enough data to calculate the gravity force with precision before receiving the data from other processors. Both the SVT and the OLD are coded in a highly vectorizable manner to run on vector parallel processors. The parallel code based on the forest method with the Message Passing Interface is run on various platforms so that wide portability is guaranteed. Extensive calculations with 15 processors of a Fujitsu VPP300/16R indicate that the code can calculate the gravity force exerted on 10^5 particles per second for an idealized dark halo. This code is found to enable an N-body simulation with 10^7 or more particles over a wide dynamic range and is therefore a very powerful tool for the study of galaxy formation and large-scale structure in the universe.

  12. Applications of wavelets in morphometric analysis of medical images

    NASA Astrophysics Data System (ADS)

    Davatzikos, Christos; Tao, Xiaodong; Shen, Dinggang

    2003-11-01

    Morphometric analysis of medical images is playing an increasingly important role in understanding brain structure and function, as well as in understanding the way in which these change during development, aging and pathology. This paper presents three wavelet-based methods with related applications in morphometric analysis of magnetic resonance (MR) brain images. The first method handles cases where very limited datasets are available for the training of statistical shape models in the deformable segmentation. The method is capable of capturing a larger range of shape variability than the standard active shape models (ASMs) can, by using the elegant spatial-frequency decomposition of the shape contours provided by wavelet transforms. The second method addresses the difficulty of finding correspondences in anatomical images, which is a key step in shape analysis and deformable registration. The detection of anatomical correspondences is completed by using wavelet-based attribute vectors as morphological signatures of voxels. The third method uses wavelets to characterize the morphological measurements obtained from all voxels in a brain image, and the entire set of wavelet coefficients is further used to build a brain classifier. Since the classification scheme operates in a very-high-dimensional space, it can determine subtle population differences with complex spatial patterns. Experimental results are provided to demonstrate the performance of the proposed methods.
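The spatial-frequency decomposition these methods rely on can be illustrated with the simplest wavelet, a one-level Haar transform of a contour signal (a toy sketch, not the paper's ASM pipeline; the function names and sample values are invented for illustration): pairwise averages capture the coarse shape, pairwise differences capture local detail, and the transform inverts exactly.

```python
import math

def haar_forward(signal):
    """One level of the Haar wavelet transform: pairwise
    averages (coarse shape) and differences (local detail)."""
    assert len(signal) % 2 == 0
    s = math.sqrt(2.0)
    coarse = [(a + b) / s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[0::2], signal[1::2])]
    return coarse, detail

def haar_inverse(coarse, detail):
    """Exact reconstruction from one level of coefficients."""
    s = math.sqrt(2.0)
    out = []
    for c, d in zip(coarse, detail):
        out.append((c + d) / s)
        out.append((c - d) / s)
    return out

contour = [4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 8.0, 2.0]
coarse, detail = haar_forward(contour)
rec = haar_inverse(coarse, detail)
```

Capturing shape variability at several scales, as the paper does, amounts to applying such a transform recursively to the coarse coefficients.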

  13. Professor Herman Burger (1893-1965), eminent teacher and scientist, who laid the theoretical foundations of vectorcardiography--and electrocardiography.

    PubMed

    van Herpen, Gerard

    2014-01-01

    Einthoven not only designed a high quality instrument, the string galvanometer, for recording the ECG, he also shaped the conceptual framework to understand it. He reduced the body to an equilateral triangle and the cardiac electric activity to a dipole, represented by an arrow (i.e. a vector) in the triangle's center. Up to the present day the interpretation of the ECG is based on the model of a dipole vector being projected on the various leads. The model is practical but intuitive, not physically founded. Burger analysed the relation between heart vector and leads according to the principles of physics. It then follows that an ECG lead must be treated as a vector (lead vector) and that the lead voltage is not simply proportional to the projection of the vector on the lead, but must be multiplied by the value (length) of the lead vector, the lead strength. Anatomical lead axis and electrical lead axis are different entities and the anatomical body space must be distinguished from electrical space. Appreciation of these underlying physical principles should contribute to a better understanding of the ECG. The development of these principles by Burger is described, together with some personal notes and a sketch of the personality of this pioneer of medical physics. Copyright © 2014. Published by Elsevier Inc.
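Burger's relation can be stated compactly: the voltage in a lead equals the scalar product of the heart vector H and the lead vector L, i.e. the projection of H on the lead direction multiplied by the lead strength |L|. A minimal numeric sketch (the vectors below are invented illustrative values, not physiological data):

```python
import math

def lead_voltage(heart, lead):
    """Burger's lead-vector relation: V = H . L, i.e. the projection
    of the heart vector on the lead direction times the lead strength."""
    return sum(h * l for h, l in zip(heart, lead))

heart = [1.2, -0.4, 0.3]   # heart (dipole) vector, arbitrary units
lead = [0.8, 0.2, 0.0]     # lead vector; |lead| is the lead strength
strength = math.sqrt(sum(l * l for l in lead))
projection = lead_voltage(heart, lead) / strength
V = lead_voltage(heart, lead)  # equals projection * strength
```

This makes the point in the abstract concrete: the lead voltage is not the bare projection, but the projection scaled by the lead strength.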

  14. Vector representation of lithium and other mica compositions

    NASA Technical Reports Server (NTRS)

    Burt, Donald M.

    1991-01-01

    In contrast to mathematics, where a vector of one component defines a line, in chemical petrology a one-component system is a point, and two components are needed to define a line, three for a plane, and four for a space. Here, an attempt is made to show how these differences in the definition of a component can be resolved, with lithium micas used as an example. In particular, the condensed composition space theoretically accessible to Li-Fe-Al micas is shown to be an irregular three-dimensional polyhedron, rather than the triangle Al(3+)-Fe(2+)-Li(+), used by some researchers. This result is demonstrated starting with the annite composition and using exchange operators graphically as vectors that generate all of the other mica compositions.

  15. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.

  16. Covariance estimation in Terms of Stokes Parameters with Application to Vector Sensor Imaging

    DTIC Science & Technology

    2016-12-15


  17. Lie theory and control systems defined on spheres

    NASA Technical Reports Server (NTRS)

    Brockett, R. W.

    1972-01-01

    It is shown that in constructing a theory for the most elementary class of control problems defined on spheres, some results from the Lie theory play a natural role. To understand controllability, optimal control, and certain properties of stochastic equations, Lie theoretic ideas are needed. The framework considered here is the most natural departure from the usual linear system/vector space problems which have dominated control systems literature. For this reason results are compared with those previously available for the finite dimensional vector space case.

  18. Space Object Classification Using Fused Features of Time Series Data

    NASA Astrophysics Data System (ADS)

    Jia, B.; Pham, K. D.; Blasch, E.; Shen, D.; Wang, Z.; Chen, G.

    In this paper, a fused feature vector consisting of raw time series and texture feature information is proposed for space object classification. The time series data includes historical orbit trajectories and asteroid light curves. The texture feature is derived from recurrence plots using Gabor filters for both unsupervised learning and supervised learning algorithms. The simulation results show that the classification algorithms using the fused feature vector achieve better performance than those using raw time series or texture features only.
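A fused feature vector of this kind is, at its simplest, a concatenation of the raw series with derived features. The sketch below is illustrative only: the paper derives its texture features from recurrence plots filtered with Gabor filters, which are not reproduced here; simple summary statistics stand in for them, and all names and values are assumptions.

```python
import math

def texture_stats(series):
    """Stand-in texture features (mean, std, mean abs diff);
    the paper instead uses Gabor-filtered recurrence plots."""
    n = len(series)
    mu = sum(series) / n
    var = sum((x - mu) ** 2 for x in series) / n
    mad = sum(abs(b - a) for a, b in zip(series, series[1:])) / (n - 1)
    return [mu, math.sqrt(var), mad]

def fuse(raw_series):
    """Fused feature vector = raw time series + texture features."""
    return list(raw_series) + texture_stats(raw_series)

light_curve = [0.9, 1.1, 1.4, 1.2, 0.8, 0.7, 1.0, 1.3]
features = fuse(light_curve)  # 8 raw values + 3 texture statistics
```

A classifier then trains on `features` rather than on either representation alone.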

  19. Secure coherent optical multi-carrier system with four-dimensional modulation space and Stokes vector scrambling.

    PubMed

    Zhang, Lijia; Liu, Bo; Xin, Xiangjun

    2015-06-15

    A secure enhanced coherent optical multi-carrier system based on Stokes vector scrambling is proposed and experimentally demonstrated. The optical signal with four-dimensional (4D) modulation space has been scrambled intra- and inter-subcarriers, where a multi-layer logistic map is adopted as the chaotic model. An experiment with 61.71-Gb/s encrypted multi-carrier signal is successfully demonstrated with the proposed method. The results indicate a promising solution for the physical secure optical communication.

  20. Using trees to compute approximate solutions to ordinary differential equations exactly

    NASA Technical Reports Server (NTRS)

    Grossman, Robert

    1991-01-01

    Some recent work is reviewed which relates families of trees to symbolic algorithms for the exact computation of series which approximate solutions of ordinary differential equations. It turns out that the vector space whose basis is the set of finite, rooted trees carries a natural multiplication related to the composition of differential operators, making the space of trees an algebra. This algebraic structure can be exploited to yield a variety of algorithms for manipulating vector fields and the series and algebras they generate.

  1. Local Gram-Schmidt and covariant Lyapunov vectors and exponents for three harmonic oscillator problems

    NASA Astrophysics Data System (ADS)

    Hoover, Wm. G.; Hoover, Carol G.

    2012-02-01

    We compare the Gram-Schmidt and covariant phase-space-basis-vector descriptions for three time-reversible harmonic oscillator problems, in two, three, and four phase-space dimensions respectively. The two-dimensional problem can be solved analytically. The three-dimensional and four-dimensional problems studied here are simultaneously chaotic, time-reversible, and dissipative. Our treatment is intended to be pedagogical, for use in an updated version of our book on Time Reversibility, Computer Simulation, and Chaos. Comments are very welcome.
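The Gram-Schmidt phase-space basis vectors referred to above are produced by sequential orthonormalization of tangent-space vectors. A minimal classical Gram-Schmidt sketch (illustrative; in Lyapunov-exponent calculations this reorthonormalization is applied repeatedly along a trajectory, and the covariant vectors additionally require a backward pass, neither of which is shown):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize a list of
    linearly independent phase-space vectors in order."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            proj = dot(w, b)
            w = [wi - proj * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])
    return basis

basis = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
```

The first basis vector keeps the direction of the first input, which is why the Gram-Schmidt vectors, unlike the covariant ones, depend on the ordering.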

  2. Sensitivity analysis of the space shuttle to ascent wind profiles

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Austin, L. D., Jr.

    1982-01-01

    A parametric sensitivity analysis of the space shuttle ascent flight to the wind profile is presented. Engineering systems parameters are obtained by flight simulations using wind profile models and samples of detailed (Jimsphere) wind profile measurements. The wind models used are the synthetic vector wind model, with and without the design gust, and a model of the vector wind change with respect to time. From these comparison analyses an insight is gained on the contribution of winds to ascent subsystems flight parameters.

  3. Vortical and acoustical mode coupling inside a porous tube with uniform wall suction.

    PubMed

    Jankowski, T. A.; Majdalani, J.

    2005-06-01

    This paper considers the oscillatory motion of gases inside a long porous tube of the closed-open type. In particular, the focus is placed on describing an analytical solution for the internal acoustico-vortical coupling that arises in the presence of appreciable wall suction. This unsteady field is driven by longitudinal oscillatory waves that are triggered by small unavoidable fluctuations in the wall suction speed. Under the assumption of small amplitude oscillations, the time-dependent governing equations are linearized through a regular perturbation of the dependent variables. Further application of the Helmholtz vector decomposition theorem enables us to discriminate between acoustical and vortical equations. After solving the wave equation for the acoustical contribution, the boundary-driven vortical field is considered. The method of matched-asymptotic expansions is then used to obtain a closed-form solution for the unsteady momentum equation developing from flow decomposition. An exact series expansion is also derived and shown to coincide with the numerical solution for the problem. The numerically verified end results suggest that the asymptotic scheme is capable of providing a sufficiently accurate solution. This is due to the error associated with the matched-asymptotic expansion being smaller than the error introduced in the Navier-Stokes linearization. A basis for comparison is established by examining the evolution of the oscillatory field in both space and time. The corresponding boundary-layer behavior is also characterized over a range of oscillation frequencies and wall suction velocities. In general, the current solution is found to exhibit features that are consistent with the laminar theory of periodic flows. By comparison to the Sexl profile in nonporous tubes, the critically damped solution obtained here exhibits a slightly smaller overshoot and depth of penetration. 
These features may be attributed to the suction effect that tends to attract the shear layers closer to the wall.
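The Helmholtz vector decomposition used above splits a field into a curl-free (acoustical) and a divergence-free (vortical) part. A minimal spectral sketch on a tiny periodic 2-D grid (illustrative only; the paper treats a compressible field in a tube, not a periodic box, and all names here are invented): each Fourier mode is projected onto its wavevector, and for the purely curl-free test field u = grad(sin x cos y) the solenoidal remainder vanishes.

```python
import cmath, math

N = 8  # tiny periodic grid on the domain [0, 2*pi)^2

def dft2(f):
    """Slow direct 2-D DFT, adequate for this toy grid size."""
    F = [[0j] * N for _ in range(N)]
    for kx in range(N):
        for ky in range(N):
            F[kx][ky] = sum(
                f[n][m] * cmath.exp(-2j * math.pi * (kx * n + ky * m) / N)
                for n in range(N) for m in range(N))
    return F

# Curl-free test field u = grad(sin x cos y) = (cos x cos y, -sin x sin y)
x = [2 * math.pi * n / N for n in range(N)]
u1 = [[math.cos(x[n]) * math.cos(x[m]) for m in range(N)] for n in range(N)]
u2 = [[-math.sin(x[n]) * math.sin(x[m]) for m in range(N)] for n in range(N)]
U1, U2 = dft2(u1), dft2(u2)

sol_energy = 0.0  # energy left in the divergence-free (vortical) part
tot_energy = 0.0
for kx in range(N):
    for ky in range(N):
        # signed wavenumbers
        k1 = kx if kx <= N // 2 else kx - N
        k2 = ky if ky <= N // 2 else ky - N
        ksq = k1 * k1 + k2 * k2
        if ksq == 0:
            continue  # mean mode carries neither curl nor divergence
        # subtract the longitudinal (curl-free) projection (k.U) k / |k|^2
        kdotU = k1 * U1[kx][ky] + k2 * U2[kx][ky]
        s1 = U1[kx][ky] - kdotU * k1 / ksq
        s2 = U2[kx][ky] - kdotU * k2 / ksq
        sol_energy += abs(s1) ** 2 + abs(s2) ** 2
        tot_energy += abs(U1[kx][ky]) ** 2 + abs(U2[kx][ky]) ** 2
```

The same mode-by-mode projection, with the boundary conditions of the tube replacing periodicity, is what lets the paper treat the acoustical and vortical equations separately.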

  4. Data-driven Climate Modeling and Prediction

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.; Chekroun, M.

    2016-12-01

    Global climate models aim to simulate a broad range of spatio-temporal scales of climate variability with a state vector having many millions of degrees of freedom. On the other hand, while detailed weather prediction out to a few days requires high numerical resolution, it is fairly clear that a major fraction of large-scale climate variability can be predicted in a much lower-dimensional phase space. Low-dimensional models can simulate and predict this fraction of climate variability, provided they are able to account for linear and nonlinear interactions between the modes representing large scales of climate dynamics, as well as their interactions with a much larger number of modes representing fast and small scales. This presentation will highlight several new applications of the Multilayered Stochastic Modeling (MSM) framework [Kondrashov, Chekroun and Ghil, 2015], which has amply proven its efficiency in the modeling and real-time forecasting of various climate phenomena. MSM is a data-driven inverse modeling technique that aims to obtain a low-order nonlinear system of prognostic equations driven by stochastic forcing; it estimates both the dynamical operator and the properties of the driving noise from multivariate time series of observations or a high-end model's simulation. MSM leads to a system of stochastic differential equations (SDEs) involving hidden (auxiliary) variables of fast-small scales ranked by layers, which interact with the macroscopic (observed) variables of large-slow scales to model the dynamics of the latter, and thus convey memory effects. New MSM climate applications focus on the development of computationally efficient low-order models by using data-adaptive decomposition methods that convey memory effects by time-embedding techniques, such as Multichannel Singular Spectrum Analysis (M-SSA) [Ghil et al. 2002] and the recently developed Data-Adaptive Harmonic (DAH) decomposition method [Chekroun and Kondrashov, 2016].
In particular, new results on DAH-MSM modeling and prediction of Arctic sea ice, as well as decadal predictions of near-surface Earth temperatures, will be presented.

  5. Representing Matrix Cracks Through Decomposition of the Deformation Gradient Tensor in Continuum Damage Mechanics Methods

    NASA Technical Reports Server (NTRS)

    Leone, Frank A., Jr.

    2015-01-01

    A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.
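The additive split of the deformation gradient can be sketched schematically. Assuming, purely for illustration, that the crack contribution takes the form (1/l0) * (jump ⊗ normal) for a displacement jump, an interface normal, and a characteristic length l0 (the paper determines the actual split by minimizing the difference between the cohesive stress and the projected bulk stress, which is not reproduced here, and all names below are invented):

```python
def outer(a, b):
    """Outer product of two vectors, a_i * b_j."""
    return [[ai * bj for bj in b] for ai in a]

def bulk_deformation(F, jump, normal, l0):
    """Schematic additive split F = F_bulk + (1/l0) * outer(jump, normal),
    solved here for the bulk part given a trial crack opening."""
    n = len(F)
    crack = outer(jump, normal)
    return [[F[i][j] - crack[i][j] / l0 for j in range(n)] for i in range(n)]

# simple shear with a small opening tangential to the interface
# whose normal is the 2-direction (illustrative numbers)
F = [[1.0, 0.3], [0.0, 1.0]]
F_b = bulk_deformation(F, jump=[0.1, 0.0], normal=[0.0, 1.0], l0=1.0)
```

Here part of the imposed shear is carried by the crack opening rather than by the bulk material, which is precisely the mechanism that reduces artificial load transfer across matrix cracks.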

  6. A new solar power output prediction based on hybrid forecast engine and decomposition model.

    PubMed

    Zhang, Weijiang; Dang, Hongshe; Simoes, Rolando

    2018-06-12

    Given the growing role of photovoltaic (PV) energy as a clean energy source in electrical networks, and its uncertain nature, PV energy prediction has been pursued by researchers in recent decades. This problem directly affects the operation of the power network and, due to the high volatility of the signal, demands an accurate prediction model. A new prediction model based on the Hilbert-Huang transform (HHT) and the integration of improved empirical mode decomposition (IEMD) with feature selection and a forecast engine is presented in this paper. The proposed approach is divided into three main sections. In the first section, the signal is decomposed by the proposed IEMD as an accurate decomposition tool. To increase the accuracy of the proposed method, a new interpolation method has been used instead of cubic spline curve (CSC) fitting in EMD. Then the obtained output is entered into the new feature selection procedure to choose the best candidate inputs. Finally, the signal is predicted by a hybrid forecast engine composed of support vector regression (SVR) based on an intelligent algorithm. The effectiveness of the proposed approach has been verified over a number of real-world engineering test cases in comparison with other well-known models. The obtained results prove the validity of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
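The decompose-then-forecast structure of such hybrids can be sketched in miniature. Everything below is a stand-in: a centered moving average replaces IEMD sifting, and a trivial last-values predictor replaces the SVR forecast engine; what the sketch preserves is the additive property (signal = sum of components) that lets each component be forecast separately and the forecasts be summed.

```python
def moving_average(signal, w):
    """Centered moving average, a crude stand-in for the slow
    'residue' that EMD-style methods extract by sifting."""
    half = w // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def decompose(signal, w=5):
    """Additive split signal = detail + trend, mirroring the additive
    property of EMD (signal = sum of IMFs + residue)."""
    trend = moving_average(signal, w)
    detail = [x - t for x, t in zip(signal, trend)]
    return detail, trend

def forecast_next(component, k=3):
    """Naive per-component predictor (mean of the last k values),
    standing in for the paper's SVR-based forecast engine."""
    return sum(component[-k:]) / k

pv = [0.0, 0.2, 0.9, 1.8, 2.4, 2.1, 1.2, 0.4, 0.1, 0.0]
detail, trend = decompose(pv)
prediction = forecast_next(detail) + forecast_next(trend)
```

The real pipeline additionally inserts a feature-selection step between the decomposition and the forecast engine.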

  7. Recent Developments In Theory Of Balanced Linear Systems

    NASA Technical Reports Server (NTRS)

    Gawronski, Wodek

    1994-01-01

    Report presents a theoretical study of some issues of controllability and observability of a system represented by a linear, time-invariant mathematical model of the form ẋ = Ax + Bu, y = Cx + Du, x(0) = x0, where x is the n-dimensional vector representing the state of the system; u is the p-dimensional vector representing the control input to the system; y is the q-dimensional vector representing the output of the system; n, p, and q are integers; x(0) is the initial (zero-time) state vector; and the set of matrices (A,B,C,D) is said to constitute the state-space representation of the system.
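The state-space model ẋ = Ax + Bu, y = Cx + Du can be simulated with a simple forward-Euler step. This sketch is illustrative only (it is not from the report, and the oscillator matrices are invented example values):

```python
def step(A, B, x, u, dt):
    """One forward-Euler step of xdot = A x + B u."""
    n = len(x)
    xdot = [sum(A[i][j] * x[j] for j in range(n)) +
            sum(B[i][k] * u[k] for k in range(len(u))) for i in range(n)]
    return [x[i] + dt * xdot[i] for i in range(n)]

def output(C, D, x, u):
    """Output equation y = C x + D u."""
    return [sum(C[i][j] * x[j] for j in range(len(x))) +
            sum(D[i][k] * u[k] for k in range(len(u))) for i in range(len(C))]

# undamped oscillator with a force input: n=2 states, p=1 input, q=1 output
A = [[0.0, 1.0], [-1.0, 0.0]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]
D = [[0.0]]
x = [1.0, 0.0]                    # x(0) = x0
x = step(A, B, x, [0.0], dt=0.01)
y = output(C, D, x, [0.0])
```

Controllability and observability, the subject of the report, are properties of the pairs (A,B) and (A,C) respectively, not of any particular simulated trajectory.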

  8. Data-adaptive harmonic spectra and multilayer Stuart-Landau models

    NASA Astrophysics Data System (ADS)

    Chekroun, Mickaël D.; Kondrashov, Dmitri

    2017-09-01

    Harmonic decompositions of multivariate time series are considered, for which we adopt an integral operator approach with periodic semigroup kernels. Spectral decomposition theorems are derived that cover the important cases of two-time statistics drawn from a mixing invariant measure. The corresponding eigenvalues can be grouped per Fourier frequency and are actually given, at each frequency, as the singular values of a cross-spectral matrix depending on the data. These eigenvalues obey, furthermore, a variational principle that allows us to define naturally a multidimensional power spectrum. The eigenmodes themselves exhibit a data-adaptive character manifested in their phase, which allows us in turn to define a multidimensional phase spectrum. The resulting data-adaptive harmonic (DAH) modes allow for reducing the data-driven modeling effort to elemental models stacked per frequency, only coupled at different frequencies by the same noise realization. In particular, the DAH decomposition extracts time-dependent coefficients stacked by Fourier frequency which can be efficiently modeled—provided the decay of temporal correlations is sufficiently well-resolved—within a class of multilayer stochastic models (MSMs) tailored here on stochastic Stuart-Landau oscillators. Applications to the Lorenz 96 model and to a stochastic heat equation driven by a space-time white noise are considered. In both cases, the DAH decomposition allows for an extraction of spatio-temporal modes revealing key features of the dynamics in the embedded phase space. The multilayer Stuart-Landau models (MSLMs) are shown to successfully model the typical patterns of the corresponding time-evolving fields, as well as their statistics of occurrence.

  9. Analysis of temporal-longitudinal-latitudinal characteristics in the global ionosphere based on tensor rank-1 decomposition

    NASA Astrophysics Data System (ADS)

    Lu, Shikun; Zhang, Hao; Li, Xihai; Li, Yihong; Niu, Chao; Yang, Xiaoyun; Liu, Daizhi

    2018-03-01

    Combining analyses of spatial and temporal characteristics of the ionosphere is of great significance for scientific research and engineering applications. Tensor decomposition is performed to explore the temporal-longitudinal-latitudinal characteristics in the ionosphere. Three-dimensional tensors are established based on the time series of ionospheric vertical total electron content maps obtained from the Centre for Orbit Determination in Europe. To obtain large-scale characteristics of the ionosphere, rank-1 decomposition is used to obtain U^{(1)}, U^{(2)}, and U^{(3)}, which are the resulting vectors for the time, longitude, and latitude modes, respectively. Our initial finding is that the correspondence between the frequency spectrum of U^{(1)} and solar variation indicates that rank-1 decomposition primarily describes large-scale temporal variations in the global ionosphere caused by the Sun. Furthermore, the time lags between the maxima of the ionospheric U^{(2)} and solar irradiation range from 1 to 3.7 h without seasonal dependence. The differences in time lags may indicate different interactions between processes in the magnetosphere-ionosphere-thermosphere system. Based on the dataset displayed in the geomagnetic coordinates, the position of the barycenter of U^{(3)} provides evidence for north-south asymmetry (NSA) in the large-scale ionospheric variations. The daily variation in such asymmetry indicates the influences of solar ionization. The diurnal geomagnetic coordinate variations in U^{(3)} show that the large-scale EIA (equatorial ionization anomaly) variations during the day and night have similar characteristics. Considering the influences of geomagnetic disturbance on ionospheric behavior, we select the geomagnetically quiet global ionospheric maps (GIMs) to construct the ionospheric tensor. The results indicate that the geomagnetic disturbances have little effect on large-scale ionospheric characteristics.
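Rank-1 decomposition approximates the three-dimensional tensor by a single outer product of mode vectors, T[i][j][k] ≈ s · U^{(1)}[i] · U^{(2)}[j] · U^{(3)}[k]. A minimal alternating-least-squares sketch on a toy time × longitude × latitude tensor (illustrative only; not the authors' code, and the values are invented):

```python
import math

def rank1_als(T, iters=20):
    """Alternating least squares for the best rank-1 fit
    T[i][j][k] ~ s * a[i] * b[j] * c[k] with unit-norm factors."""
    I, J, K = len(T), len(T[0]), len(T[0][0])
    a, b, c = [1.0] * I, [1.0] * J, [1.0] * K

    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v], n

    s = 1.0
    for _ in range(iters):
        # each mode vector is refit while the other two are held fixed
        a = [sum(T[i][j][k] * b[j] * c[k] for j in range(J) for k in range(K))
             for i in range(I)]
        a, _ = normalize(a)
        b = [sum(T[i][j][k] * a[i] * c[k] for i in range(I) for k in range(K))
             for j in range(J)]
        b, _ = normalize(b)
        c = [sum(T[i][j][k] * a[i] * b[j] for i in range(I) for j in range(J))
             for k in range(K)]
        c, s = normalize(c)
    return s, a, b, c

# exactly rank-1 toy "time x longitude x latitude" tensor
t_mode, lon, lat = [1.0, 2.0, 3.0], [2.0, -1.0], [2.0, 0.5, 1.0, 4.0]
T = [[[t * g * h for h in lat] for g in lon] for t in t_mode]
s, u1, u2, u3 = rank1_als(T)
```

For a real TEC-map tensor the fit is only approximate, and the recovered u1, u2, u3 play the roles of U^{(1)}, U^{(2)}, and U^{(3)} in the abstract.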

  10. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging.

    PubMed

    Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D; Joel, Suresh; Pekar, James J; Mostofsky, Stewart H; Caffo, Brian

    2012-01-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.

  11. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging

    PubMed Central

    Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D.; Joel, Suresh; Pekar, James J.; Mostofsky, Stewart H.; Caffo, Brian

    2012-01-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD. PMID:22969709
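The CUR decomposition named in this abstract approximates a data matrix from a subset of its actual columns and rows, so each retained column remains an interpretable feature (unlike the mixed components of an SVD). A minimal sketch on synthetic low-rank data, using squared-norm sampling as one simple selection rule (the paper's exact selection scheme is not specified here, and the data below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical rank-10 data matrix (e.g., subjects x connectivity features).
X = rng.standard_normal((50, 10)) @ rng.standard_normal((10, 200))

def cur_decompose(X, k, seed=1):
    """Simple CUR: sample k actual columns and k actual rows with
    probability proportional to their squared norms, then fit the
    middle factor U by least squares (via pseudoinverses)."""
    r = np.random.default_rng(seed)
    col_p = (X**2).sum(axis=0); col_p = col_p / col_p.sum()
    row_p = (X**2).sum(axis=1); row_p = row_p / row_p.sum()
    cols = r.choice(X.shape[1], size=k, replace=False, p=col_p)
    rows = r.choice(X.shape[0], size=k, replace=False, p=row_p)
    C, R = X[:, cols], X[rows, :]
    U = np.linalg.pinv(C) @ X @ np.linalg.pinv(R)
    return C, U, R

C, U, R = cur_decompose(X, k=10)
# Because X has rank 10, ten generic columns/rows reconstruct it exactly.
rel_err = np.linalg.norm(X - C @ U @ R) / np.linalg.norm(X)
```

On full-rank noisy data the same code gives an approximation rather than an exact reconstruction, and the sampled columns can be inspected directly — which is what makes CUR useful for diagnosing effects like the head-motion directions conjectured above.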

  12. Application of Direct Parallel Methods to Reconstruction and Forecasting Problems

    NASA Astrophysics Data System (ADS)

    Song, Changgeun

Many important physical processes in nature are represented by partial differential equations. Numerical weather prediction, in particular, requires vast computational resources. We investigate the significance of parallel processing technology for the real-world problem of atmospheric prediction. In this paper we consider the classic problem of decomposing the observed wind field into its irrotational and nondivergent components. Recognizing the fact that on a limited domain this problem has a non-unique solution, Lynch (1989) described eight different ways to accomplish the decomposition. One set of elliptic equations is associated with the decomposition--this determines the initial nondivergent state for the forecast model. It is shown that the entire decomposition problem can be solved in a fraction of a second using a multi-vector processor such as the ALLIANT FX/8. Secondly, the barotropic model is used to track hurricanes. Also, one set of elliptic equations is solved to recover the streamfunction from the forecasted vorticity. A 72 h prediction of Elena is made while it is in the Gulf of Mexico. During this time the hurricane executes a dramatic re-curvature that is captured by the model. Furthermore, an improvement in the track prediction results when a simple assimilation strategy is used. This technique makes use of the wind fields in the 24 h period immediately preceding the initial time for the prediction. In this particular application, solutions to systems of elliptic equations are at the center of the computational mechanics. We demonstrate that direct, parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to the decomposition, the forecast and adjoint assimilation.

  13. Searching for transcription factor binding sites in vector spaces

    PubMed Central

    2012-01-01

    Background Computational approaches to transcription factor binding site identification have been actively researched in the past decade. Learning from known binding sites, new binding sites of a transcription factor in unannotated sequences can be identified. A number of search methods have been introduced over the years. However, one can rarely find one single method that performs the best on all the transcription factors. Instead, to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped likelihood under positional background method, a state-of-the-art method, and the widely-used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework and two variants called the k NPV and k ODV methods benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338
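The NPV construction is described only abstractly above. One plausible minimal reading — shown here on toy one-hot encoded DNA sequences, and not necessarily the paper's exact encoding or scoring — is a query vector pointing from the centroid of negative (background) examples toward the centroid of positive (known binding site) examples, with candidates scored by inner product:

```python
import numpy as np

def one_hot(seq):
    """Encode a DNA sequence as a flat one-hot vector over (A, C, G, T)."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    v = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        v[i, idx[b]] = 1.0
    return v.ravel()

positives = ["TATAAT", "TATACT", "TACAAT"]   # hypothetical known sites
negatives = ["GGCGCC", "CCGGAA", "AGGCCT"]   # hypothetical background

# Negative-to-positive style query vector: points from the negative
# centroid toward the positive centroid in the one-hot vector space.
npv = (np.mean([one_hot(s) for s in positives], axis=0)
       - np.mean([one_hot(s) for s in negatives], axis=0))

def score(seq):
    return float(npv @ one_hot(seq))

# A TATA-like candidate should outscore a GC-rich one.
print(score("TATAAT") > score("GCGCGC"))  # → True
```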

  14. SVPWM Technique with Varying DC-Link Voltage for Common Mode Voltage Reduction in a Matrix Converter and Analytical Estimation of its Output Voltage Distortion

    NASA Astrophysics Data System (ADS)

    Padhee, Varsha

Common Mode Voltage (CMV) in any power converter has been the major contributor to premature motor failures, bearing deterioration, shaft voltage build-up and electromagnetic interference. Intelligent control methods like Space Vector Pulse Width Modulation (SVPWM) techniques provide immense potential and flexibility to reduce CMV, thereby targeting all the aforementioned problems. Other solutions like passive filters, shielded cables and EMI filters add to the volume and cost metrics of the entire system. Smart SVPWM techniques therefore come with a very important advantage of being an economical solution. This thesis discusses a modified space vector technique applied to an Indirect Matrix Converter (IMC) which results in the reduction of common mode voltages and other advanced features. The conventional indirect space vector pulse-width modulation (SVPWM) method of controlling matrix converters involves the usage of two adjacent active vectors and one zero vector for both the rectifying and inverting stages of the converter. By suitable selection of space vectors, the rectifying stage of the matrix converter can generate different levels of virtual DC-link voltage. This capability can be exploited for operation of the converter in different ranges of modulation indices for varying machine speeds. This results in lower common mode voltage and improves the harmonic spectrum of the output voltage, without increasing the number of switching transitions as compared to conventional modulation. To summarize, the responsibility of formulating output voltages with a particular magnitude and frequency is transferred solely to the rectifying stage of the IMC. Estimation of the degree of distortion in the three-phase output voltage is another facet discussed in this thesis.
A detailed understanding of the SVPWM technique and the switching sequence of the space vectors makes it possible to estimate the RMS value of the switched output voltage of any converter, which in turn aids the sizing and design of output passive filters. An analytical estimation method is presented to achieve this purpose for an IMC. Knowledge of the fundamental component of the output voltage can be used to calculate its Total Harmonic Distortion (THD). The effectiveness of the proposed SVPWM algorithms and the analytical estimation technique is substantiated by simulations in MATLAB/Simulink and experiments on a laboratory prototype of the IMC. Comparison plots are provided to contrast the performance of the proposed methods with the conventional SVPWM method. The behavior of output voltage distortion and CMV with variation in operating parameters like modulation index and output frequency is also analyzed.
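The THD computation itself can be illustrated numerically: given one exact period of a sampled output waveform, the single-sided FFT amplitudes give the fundamental and harmonic content directly. The waveform below is made up (a fundamental plus 5th and 7th harmonics); the thesis derives the distortion analytically from the switching sequence rather than from samples:

```python
import numpy as np

f0, fs = 50.0, 50_000.0            # assumed fundamental and sampling rate
t = np.arange(0, 1/f0, 1/fs)       # exactly one fundamental period (1000 samples)

# Hypothetical switched output: fundamental plus 5th and 7th harmonics.
v = (np.sin(2*np.pi*f0*t)
     + 0.20*np.sin(2*np.pi*5*f0*t)
     + 0.10*np.sin(2*np.pi*7*f0*t))

spec = np.abs(np.fft.rfft(v)) / (len(v) / 2)   # single-sided amplitudes
fund = spec[1]                                  # bin 1 = fundamental (1 period sampled)
thd = np.sqrt(np.sum(spec[2:]**2)) / fund       # RMS of harmonics over fundamental
print(round(thd, 3))                            # → 0.224 (= sqrt(0.2**2 + 0.1**2))
```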

  15. An efficient solution procedure for the thermoelastic analysis of truss space structures

    NASA Technical Reports Server (NTRS)

    Givoli, D.; Rand, O.

    1992-01-01

A solution procedure is proposed for the thermal and thermoelastic analysis of truss space structures in periodic motion. In this method, the spatial domain is first discretized using a consistent finite element formulation. The resulting semi-discrete equations in time are then solved analytically by using Fourier decomposition. Full advantage is taken of geometrical symmetry. An algorithm is presented for the calculation of the heat flux distribution. The method is demonstrated via a numerical example of a cylindrically shaped space structure.
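The "solve the semi-discrete equations analytically by Fourier decomposition" step has a one-degree-of-freedom analogue: for a linear ODE u' + λu = f with periodic forcing f, each Fourier mode decouples and the periodic solution is available in closed form. The constant λ and the forcing below are made-up stand-ins for the finite-element thermal system:

```python
import numpy as np

lam, N, T = 2.0, 256, 1.0
t = np.arange(N) * T / N
f = np.cos(2*np.pi*t) + 0.5*np.sin(6*np.pi*t)   # hypothetical periodic heat load

F = np.fft.fft(f)
omega = 2*np.pi*np.fft.fftfreq(N, d=T/N)
U = F / (lam + 1j*omega)        # each Fourier mode solved in closed form
u = np.real(np.fft.ifft(U))     # periodic steady-state response

# Check that the solution satisfies u' + lam*u = f (spectral derivative).
du = np.real(np.fft.ifft(1j*omega*U))
resid = np.max(np.abs(du + lam*u - f))
```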

  16. On the Hodge-type decomposition and cohomology groups of k-Cauchy-Fueter complexes over domains in the quaternionic space

    NASA Astrophysics Data System (ADS)

    Chang, Der-Chen; Markina, Irina; Wang, Wei

    2016-09-01

    The k-Cauchy-Fueter operator D0(k) on one dimensional quaternionic space H is the Euclidean version of spin k / 2 massless field operator on the Minkowski space in physics. The k-Cauchy-Fueter equation for k ≥ 2 is overdetermined and its compatibility condition is given by the k-Cauchy-Fueter complex. In quaternionic analysis, these complexes play the role of Dolbeault complex in several complex variables. We prove that a natural boundary value problem associated to this complex is regular. Then by using the theory of regular boundary value problems, we show the Hodge-type orthogonal decomposition, and the fact that the non-homogeneous k-Cauchy-Fueter equation D0(k) u = f on a smooth domain Ω in H is solvable if and only if f satisfies the compatibility condition and is orthogonal to the set ℋ(k)1 (Ω) of Hodge-type elements. This set is isomorphic to the first cohomology group of the k-Cauchy-Fueter complex over Ω, which is finite dimensional, while the second cohomology group is always trivial.

  17. Scale Issues in Air Quality Modeling

    EPA Science Inventory

    This presentation reviews past model evaluation studies investigating the impact of horizontal grid spacing on model performance. It also presents several examples of using a spectral decomposition technique to separate the forcings from processes operating on different time scal...

  18. Nature of Driving Force for Protein Folding: A Result From Analyzing the Statistical Potential

    NASA Astrophysics Data System (ADS)

    Li, Hao; Tang, Chao; Wingreen, Ned S.

    1997-07-01

In a statistical approach to protein structure analysis, Miyazawa and Jernigan derived a 20×20 matrix of inter-residue contact energies between different types of amino acids. Using the method of eigenvalue decomposition, we find that the Miyazawa-Jernigan matrix can be accurately reconstructed from its first two principal component vectors as Mij = C0 + C1(qi + qj) + C2 qi qj, with constants C0, C1, C2 and 20 q values associated with the 20 amino acids. This regularity is due to hydrophobic interactions and a force of demixing, the latter obeying Hildebrand's solubility theory of simple liquids.
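The two-component reconstruction is easy to verify numerically: any matrix of the form Mij = C0 + C1(qi + qj) + C2 qi qj has rank at most two, since every column lies in the span of the all-ones vector and q, so two eigen-components reconstruct it exactly. A sketch with arbitrary made-up constants and q values:

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.uniform(-1, 1, 20)          # hypothetical per-residue q values
C0, C1, C2 = -2.0, 1.5, 3.0         # arbitrary constants
one = np.ones_like(q)

# M_ij = C0 + C1*(q_i + q_j) + C2*q_i*q_j
M = (C0 * np.outer(one, one)
     + C1 * (np.outer(q, one) + np.outer(one, q))
     + C2 * np.outer(q, q))

w, V = np.linalg.eigh(M)
order = np.argsort(-np.abs(w))      # sort eigenpairs by |eigenvalue|
k = 2                               # keep the two principal components
Mk = sum(w[i] * np.outer(V[:, i], V[:, i]) for i in order[:k])
rel_err = np.linalg.norm(M - Mk) / np.linalg.norm(M)   # ~ machine precision
```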

  19. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
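The parallelization via partial fractions rests on a standard identity: a rational approximation r(z) = p(z)/q(z) with simple poles θ_j expands as Σ c_j/(z − θ_j), so r(A)v becomes a set of independent resolvent solves (A − θ_j I)x_j = v that can run concurrently. A toy illustration with a two-pole rational function (not the actual Padé or Chebyshev coefficients used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
v = rng.standard_normal(6)
I = np.eye(6)

# r(z) = 1/((z-3)(z-5)) = 0.5/(z-5) - 0.5/(z-3)   (partial fractions)
direct = np.linalg.solve((A - 3*I) @ (A - 5*I), v)

# The two resolvent solves are independent, so they can run in parallel.
x5 = np.linalg.solve(A - 5*I, v)
x3 = np.linalg.solve(A - 3*I, v)
parallel = 0.5*x5 - 0.5*x3
```

Both paths compute r(A)v; the second exposes one independent linear solve per pole, which is the time-parallelism the abstract refers to.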

  20. New wrinkles on black hole perturbations: Numerical treatment of acoustic and gravitational waves

    NASA Astrophysics Data System (ADS)

    Tenyotkin, Valery

    2009-06-01

    This thesis develops two main topics. A full relativistic calculation of quasinormal modes of an acoustic black hole is carried out. The acoustic black hole is formed by a perfect, inviscid, relativistic, ideal gas that is spherically accreting onto a Schwarzschild black hole. The second major part is the calculation of sourceless vector (electromagnetic) and tensor (gravitational) covariant field evolution equations for perturbations on a Schwarzschild background using the relatively recent [Special characters omitted.] decomposition method. Scattering calculations are carried out in Schwarzschild coordinates for electromagnetic and gravitational cases as validation of the method and the derived equations.

  1. [Exploration of influencing factors of price of herbal based on VAR model].

    PubMed

    Wang, Nuo; Liu, Shu-Zhen; Yang, Guang

    2014-10-01

Based on a vector auto-regression (VAR) model, this paper uses Granger causality tests, variance decomposition and impulse response analysis to carry out a comprehensive study of the factors influencing the price of Chinese herbal medicines, including cultivation costs, acreage, natural disasters, residents' demand and inflation. The study found Granger causality relationships between inflation and herbal prices, and between cultivation costs and herbal prices. In the total variance analysis of the Chinese herbal medicine price index, the largest contribution comes from its own fluctuations, followed by cultivation costs and inflation.
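The VAR machinery above can be sketched in a few lines: simulate a bivariate VAR(1) in which costs drive prices but not vice versa, recover the coefficient matrix by least squares, and read the Granger-style asymmetry off the cross-lag coefficients. The data and coefficients below are synthetic, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 4000
# z = [cost, price]; cost drives price (0.4) but price does not drive cost (0.0).
A = np.array([[0.5, 0.0],
              [0.4, 0.3]])
Z = np.zeros((T, 2))
for t in range(1, T):
    Z[t] = A @ Z[t-1] + 0.1 * rng.standard_normal(2)

# OLS regression of z_t on z_{t-1} recovers the VAR(1) coefficient matrix.
A_hat = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)[0].T

# For a VAR(1), the impulse response at horizon h is simply A_hat**h.
irf3 = np.linalg.matrix_power(A_hat, 3)
```

The estimated cross-lag terms show the asymmetry: A_hat[1, 0] is close to 0.4 (cost → price) while A_hat[0, 1] is close to zero (no price → cost feedback).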

  2. Image compression using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.

    2017-11-01

We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage. So we often need to apply data compression techniques to reduce the storage space consumed by the image. One approach is to apply Singular Value Decomposition (SVD) to the image matrix. In this method, the digital image is given to SVD, which refactors it into three matrices. The singular values are used to refactor the image, and at the end of this process the image is represented with a smaller set of values, reducing the storage space required. The goal is to achieve image compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or rectangular, invertible or not. Compression ratio and Mean Square Error are used as performance metrics.
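The method is compact enough to sketch directly. A synthetic 64×64 array stands in for a real grayscale image, and the Frobenius-norm form of the Eckart-Young theorem makes the MSE of the rank-k truncation exactly the mean of the discarded squared singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))             # stand-in for a grayscale image matrix

U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 8                                  # keep the 8 largest singular values
img_k = (U[:, :k] * s[:k]) @ Vt[:k]    # rank-k reconstruction

# Storage for the truncated factors vs. the full image.
compression_ratio = img.size / (k * (img.shape[0] + img.shape[1] + 1))
mse = np.mean((img - img_k) ** 2)
# Eckart-Young: mse equals the mean of the discarded squared singular values.
```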

  3. The study of Thai stock market across the 2008 financial crisis

    NASA Astrophysics Data System (ADS)

    Kanjamapornkul, K.; Pinčák, Richard; Bartoš, Erik

    2016-11-01

A cohomology theory for financial markets allows us to deform the Kolmogorov space of time series data over a time period, with an explicit definition of eight market states in a grand unified theory. The anti-de Sitter space induced by a coupling behavior field among traders during a financial market crash acts like a gravitational field in financial market spacetime. Under this hybrid mathematical superstructure, we redefine a behavior matrix using Pauli matrices and a modified Wilson loop for time series data. We use it to detect the 2008 financial market crash via the degree of the cohomology group of the sphere over a tensor field in the correlation matrix over all possible dominated stocks underlying Thai SET50 Index Futures. The empirical analysis of the financial tensor network was performed with the help of empirical mode decomposition and intrinsic time-scale decomposition of the correlation matrix, and the calculation of the closeness centrality of a planar graph.

  4. jCompoundMapper: An open source Java library and command-line tool for chemical fingerprints

    PubMed Central

    2011-01-01

    Background The decomposition of a chemical graph is a convenient approach to encode information of the corresponding organic compound. While several commercial toolkits exist to encode molecules as so-called fingerprints, only a few open source implementations are available. The aim of this work is to introduce a library for exactly defined molecular decompositions, with a strong focus on the application of these features in machine learning and data mining. It provides several options such as search depth, distance cut-offs, atom- and pharmacophore typing. Furthermore, it provides the functionality to combine, to compare, or to export the fingerprints into several formats. Results We provide a Java 1.6 library for the decomposition of chemical graphs based on the open source Chemistry Development Kit toolkit. We reimplemented popular fingerprinting algorithms such as depth-first search fingerprints, extended connectivity fingerprints, autocorrelation fingerprints (e.g. CATS2D), radial fingerprints (e.g. Molprint2D), geometrical Molprint, atom pairs, and pharmacophore fingerprints. We also implemented custom fingerprints such as the all-shortest path fingerprint that only includes the subset of shortest paths from the full set of paths of the depth-first search fingerprint. As an application of jCompoundMapper, we provide a command-line executable binary. We measured the conversion speed and number of features for each encoding and described the composition of the features in detail. The quality of the encodings was tested using the default parametrizations in combination with a support vector machine on the Sutherland QSAR data sets. Additionally, we benchmarked the fingerprint encodings on the large-scale Ames toxicity benchmark using a large-scale linear support vector machine. The results were promising and could often compete with literature results. 
On the large Ames benchmark, for example, we obtained an AUC ROC performance of 0.87 with a reimplementation of the extended connectivity fingerprint. This result is comparable to the performance achieved by a non-linear support vector machine using state-of-the-art descriptors. On the Sutherland QSAR data set, the best fingerprint encodings showed a comparable or better performance on 5 of the 8 benchmarks when compared against the results of the best descriptors published in the paper of Sutherland et al. Conclusions jCompoundMapper is a library for chemical graph fingerprints with several tweaking possibilities and exporting options for open source data mining toolkits. The quality of the data mining results, the conversion speed, the LGPL software license, the command-line interface, and the exporters should be useful for many applications in cheminformatics like benchmarks against literature methods, comparison of data mining algorithms, similarity searching, and similarity-based data mining. PMID:21219648
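The depth-first search fingerprint named above can be sketched without any cheminformatics toolkit: enumerate labelled paths up to a depth cutoff on a toy molecular graph and hash each path into a fixed-width bit vector. The graph below is a hand-built ethanol-like fragment with element labels only; jCompoundMapper's real atom typing and canonical path encoding are more elaborate:

```python
import hashlib

# Toy molecular graph for a C-C-O fragment (assumed minimal encoding).
atoms = {0: "C", 1: "C", 2: "O"}
bonds = {0: [1], 1: [0, 2], 2: [1]}

def dfs_paths(node, depth, path, seen, out):
    """Collect the labelled path ending at `node`, then extend it."""
    out.add("-".join(atoms[i] for i in path))
    if depth == 0:
        return
    for nb in bonds[node]:
        if nb not in seen:
            dfs_paths(nb, depth - 1, path + [nb], seen | {nb}, out)

def fingerprint(n_bits=64, depth=2):
    feats = set()
    for a in atoms:
        dfs_paths(a, depth, [a], {a}, feats)
    bits = [0] * n_bits
    for f in feats:                       # hash each path into the bit vector
        h = int(hashlib.md5(f.encode()).hexdigest(), 16) % n_bits
        bits[h] = 1
    return feats, bits

feats, bits = fingerprint()
print(sorted(feats))  # → ['C', 'C-C', 'C-C-O', 'C-O', 'O', 'O-C', 'O-C-C']
```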

  5. A Bag of Concepts Approach for Biomedical Document Classification Using Wikipedia Knowledge.

    PubMed

    Mouriño-García, Marcos A; Pérez-Rodríguez, Roberto; Anido-Rifón, Luis E

    2017-01-01

The ability to efficiently review the existing literature is essential for the rapid progress of research. This paper describes a classifier of text documents, represented as vectors in spaces of Wikipedia concepts, and analyses its suitability for classification of Spanish biomedical documents when only English documents are available for training. We propose the cross-language concept matching (CLCM) technique, which relies on Wikipedia interlanguage links to convert concept vectors from the Spanish to the English space. The performance of the classifier is compared to several baselines: a classifier based on machine translation, a classifier that represents documents after performing Explicit Semantic Analysis (ESA), and a classifier that uses a domain-specific semantic annotator (MetaMap). The corpus used for the experiments (Cross-Language UVigoMED) was purpose-built for this study, and it is composed of 12,832 English and 2,184 Spanish MEDLINE abstracts. The performance of our approach is superior to any other state-of-the-art classifier in the benchmark, with performance increases up to: 124% over classical machine translation, 332% over MetaMap, and 60 times over the classifier based on ESA. The results have statistical significance, showing p-values < 0.0001. Using knowledge mined from Wikipedia to represent documents as vectors in a space of Wikipedia concepts and translating vectors between language-specific concept spaces, a cross-language classifier can be built, and it performs better than several state-of-the-art classifiers. Schattauer GmbH.

  6. A Bag of Concepts Approach for Biomedical Document Classification Using Wikipedia Knowledge*. Spanish-English Cross-language Case Study.

    PubMed

    Mouriño-García, Marcos A; Pérez-Rodríguez, Roberto; Anido-Rifón, Luis E

    2017-10-26

The ability to efficiently review the existing literature is essential for the rapid progress of research. This paper describes a classifier of text documents, represented as vectors in spaces of Wikipedia concepts, and analyses its suitability for classification of Spanish biomedical documents when only English documents are available for training. We propose the cross-language concept matching (CLCM) technique, which relies on Wikipedia interlanguage links to convert concept vectors from the Spanish to the English space. The performance of the classifier is compared to several baselines: a classifier based on machine translation, a classifier that represents documents after performing Explicit Semantic Analysis (ESA), and a classifier that uses a domain-specific semantic annotator (MetaMap). The corpus used for the experiments (Cross-Language UVigoMED) was purpose-built for this study, and it is composed of 12,832 English and 2,184 Spanish MEDLINE abstracts. The performance of our approach is superior to any other state-of-the-art classifier in the benchmark, with performance increases up to: 124% over classical machine translation, 332% over MetaMap, and 60 times over the classifier based on ESA. The results have statistical significance, showing p-values < 0.0001. Using knowledge mined from Wikipedia to represent documents as vectors in a space of Wikipedia concepts and translating vectors between language-specific concept spaces, a cross-language classifier can be built, and it performs better than several state-of-the-art classifiers.

  7. Implementation of the Orbital Maneuvering Systems Engine and Thrust Vector Control for the European Service Module

    NASA Technical Reports Server (NTRS)

    Millard, Jon

    2014-01-01

    The European Space Agency (ESA) has entered into a partnership with the National Aeronautics and Space Administration (NASA) to develop and provide the Service Module (SM) for the Orion Multipurpose Crew Vehicle (MPCV) Program. The European Service Module (ESM) will provide main engine thrust by utilizing the Space Shuttle Program Orbital Maneuvering System Engine (OMS-E). Thrust Vector Control (TVC) of the OMS-E will be provided by the Orbital Maneuvering System (OMS) TVC, also used during the Space Shuttle Program. NASA will be providing the OMS-E and OMS TVC to ESA as Government Furnished Equipment (GFE) to integrate into the ESM. This presentation will describe the OMS-E and OMS TVC and discuss the implementation of the hardware for the ESM.

  8. Modal vector estimation for closely spaced frequency modes

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.; Chung, Y. T.; Blair, M.

    1982-01-01

Techniques for obtaining improved modal vector estimates for systems with closely spaced frequency modes are discussed. In describing the dynamical behavior of a complex structure, the following modal parameters are often analyzed: undamped natural frequency, mode shape, modal mass, modal stiffness and modal damping. From both an analytical standpoint and an experimental standpoint, identification of modal parameters is more difficult if the system has repeated frequencies or even closely spaced frequencies. The more complex the structure, the more likely it is to have closely spaced frequencies. This makes it difficult to determine valid mode shapes using single-shaker test methods. By employing band-selectable analysis (zoom) techniques and Kennedy-Pancu circle fitting or some multiple degree of freedom (MDOF) curve-fit procedure, the usefulness of the single-shaker approach can be extended.

  9. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory

    NASA Astrophysics Data System (ADS)

    Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.

    2017-09-01

In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.
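The "move orthogonalization outside the inner loop" point amounts to trading many vector-by-vector projections for a single blocked product: against a locked set Q, deflating a whole block W costs one pair of matrix-matrix multiplies. A sketch of just that kernel (made-up sizes, not a full J-D implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_locked, block = 100, 10, 4

Q, _ = np.linalg.qr(rng.standard_normal((n, n_locked)))  # locked eigenvectors
W = rng.standard_normal((n, block))                      # new correction block

# Blocked orthogonalization: one GEMM pair instead of column-by-column
# Gram-Schmidt sweeps against each locked vector.
W = W - Q @ (Q.T @ W)
W, _ = np.linalg.qr(W)            # orthonormalize the block internally

max_overlap = np.max(np.abs(Q.T @ W))   # residual overlap with the locked set
```

Doing this once per block, outside the inner iteration, is what keeps the orthogonalization cost from growing with the number of already-converged eigenpairs.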

  10. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory.

    PubMed

    Lee, M; Leiter, K; Eisner, C; Breuer, A; Wang, X

    2017-09-21

In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.

  11. A simple suboptimal least-squares algorithm for attitude determination with multiple sensors

    NASA Technical Reports Server (NTRS)

    Brozenec, Thomas F.; Bender, Douglas J.

    1994-01-01

Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as matrix determinant, matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough so that the approximation sin(theta) approximately equals theta holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most.
If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is faster than all but a similarly specialized version of the QUEST algorithm. We also introduce a novel measurement averaging technique which reduces the n-measurement case to the two measurement case for our particular application, a star tracker and earth sensor mounted on an earth-pointed geosynchronous communications satellite. Using this technique, many n-measurement problems reduce to less than or equal to 3 measurements; this reduces the amount of required calculation without significant degradation in accuracy. Finally, we present the results of some tests which compare the least-squares algorithm with the QUEST and FOAM algorithms in the two-measurement case. For our example case, all three algorithms performed with similar accuracy.
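    The unconstrained least-squares idea described in this record can be sketched in a few lines. The toy below is illustrative only, not the authors' implementation; the reference vectors, noise level, and rotation angle are all assumed. It stacks the vector-matching equations, solves them with NumPy's QR-based least-squares routine, and checks how close the unconstrained estimate is to an orthogonal matrix:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # True attitude: a small rotation about an arbitrary axis (Rodrigues' formula).
    theta = 0.01  # rad
    axis = np.array([1.0, 2.0, 3.0]); axis /= np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    A_true = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

    # Reference unit vectors (columns) and noisy body-frame measurements.
    R = rng.standard_normal((3, 6))
    R /= np.linalg.norm(R, axis=0)
    M = A_true @ R + 1e-4 * rng.standard_normal((3, 6))

    # Unconstrained least squares: min_A ||A R - M||_F, i.e. solve
    # R.T @ A.T = M.T (an overdetermined system handled by a QR-type solver).
    A_hat, *_ = np.linalg.lstsq(R.T, M.T, rcond=None)
    A_hat = A_hat.T

    # For noise well below the rotation magnitude, the result is nearly orthogonal.
    ortho_err = np.linalg.norm(A_hat @ A_hat.T - np.eye(3))
    print(ortho_err)
    ```

    This matches the behavior the abstract describes: orthogonality of the unconstrained estimate degrades only when the measurement noise approaches the size of the attitude rotations. If a strictly orthogonal matrix is needed afterwards, `A_hat` can be projected onto the orthogonal group with an SVD.
    
    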

  12. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill

    2015-04-02

Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
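    The snapshot-selection idea can be sketched with a deliberately simple stand-in: a Gram-Schmidt basis expansion in place of the single-pass incremental SVD the record describes, with assumed synthetic data and tolerance. A snapshot is accepted only when its projection error onto the current basis exceeds a relative tolerance, which is the error-control flavor of the method:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic trajectory: snapshots that live in a low-dimensional subspace.
    n, steps, r_true = 200, 50, 3
    U_true, _ = np.linalg.qr(rng.standard_normal((n, r_true)))
    snapshots = [U_true @ rng.standard_normal(r_true) for _ in range(steps)]

    tol = 1e-8
    basis = np.empty((n, 0))   # current POD-like basis (orthonormal columns)
    selected = 0
    for x in snapshots:
        # Projection error of the candidate snapshot onto the current basis.
        resid = x - basis @ (basis.T @ x)
        if np.linalg.norm(resid) > tol * np.linalg.norm(x):
            # Accept the snapshot and grow the basis (one pass, bounded memory).
            basis = np.hstack([basis, (resid / np.linalg.norm(resid))[:, None]])
            selected += 1

    print(basis.shape[1], selected)
    ```

    For this synthetic data only a handful of the 50 snapshots are retained, which is the point of adaptive selection: the basis stops growing once the snapshots are represented to within the tolerance.
    
    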

  13. A parallel domain decomposition-based implicit method for the Cahn–Hilliard–Cook phase-field equation in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Xiang; Yang, Chao; State Key Laboratory of Computer Science, Chinese Academy of Sciences, Beijing 100190

    2015-03-15

We present a numerical algorithm for simulating the spinodal decomposition described by the three-dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
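    The equation being solved can be illustrated with a 1D analogue of the kind of explicit scheme the authors validate against (the paper's solver is fully implicit and 3D; all parameters below are assumed). In this form the CHC equation reads u_t = Δ(u³ − u − ε²Δu) + noise, and explicit Euler stepping forces a very small time step, which is exactly why implicit solvers with adaptive time stepping are attractive:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # 1D Cahn-Hilliard-Cook on a periodic domain, explicit Euler (illustrative only).
    N, L = 64, 1.0
    dx = L / N
    eps2 = 1e-3          # interface-width parameter epsilon^2 (assumed)
    noise_amp = 1e-4     # thermal-fluctuation amplitude (assumed)
    dt = 1e-7            # stability-limited step for the fourth-order operator
    u = 0.01 * rng.standard_normal(N)   # small perturbation of the mixed state

    def lap(v):
        # Second-order periodic finite-difference Laplacian.
        return (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2

    for _ in range(200):
        mu = u**3 - u - eps2 * lap(u)   # chemical potential
        u = u + dt * lap(mu) + np.sqrt(dt) * noise_amp * rng.standard_normal(N)

    print(float(u.min()), float(u.max()))
    ```

    Even this tiny 1D problem needs dt on the order of dx⁴/ε² for stability; an implicit Newton-Krylov-Schwarz solver removes that restriction and is what makes the 3D problem tractable in parallel.
    
    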

  14. Cosmology in generalized Proca theories

    NASA Astrophysics Data System (ADS)

    De Felice, Antonio; Heisenberg, Lavinia; Kase, Ryotaro; Mukohyama, Shinji; Tsujikawa, Shinji; Zhang, Ying-li

    2016-06-01

    We consider a massive vector field with derivative interactions that propagates only the 3 desired polarizations (besides two tensor polarizations from gravity) with second-order equations of motion in curved space-time. The cosmological implications of such generalized Proca theories are investigated for both the background and the linear perturbation by taking into account the Lagrangian up to quintic order. In the presence of a matter fluid with a temporal component of the vector field, we derive the background equations of motion and show the existence of de Sitter solutions relevant to the late-time cosmic acceleration. We also obtain conditions for the absence of ghosts and Laplacian instabilities of tensor, vector, and scalar perturbations in the small-scale limit. Our results are applied to concrete examples of the general functions in the theory, which encompass vector Galileons as a specific case. In such examples, we show that the de Sitter fixed point is always a stable attractor and study viable parameter spaces in which the no-ghost and stability conditions are satisfied during the cosmic expansion history.
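    For reference, the Lagrangian structure referred to above ("up to quintic order") is commonly written as follows in the generalized Proca literature. This is a hedged sketch of the standard form (sign and normalization conventions vary between papers), not a transcription from this particular article; here $X \equiv -\tfrac12 A_\mu A^\mu$ and $F_{\mu\nu} = \nabla_\mu A_\nu - \nabla_\nu A_\mu$:

    ```latex
    S = \int \mathrm{d}^4x \sqrt{-g}\,\Big(-\tfrac14 F_{\mu\nu}F^{\mu\nu}
          + \sum_{i=2}^{5}\mathcal{L}_i\Big),
    \qquad
    \begin{aligned}
    \mathcal{L}_2 &= G_2(X), \\
    \mathcal{L}_3 &= G_3(X)\,\nabla_\mu A^\mu, \\
    \mathcal{L}_4 &= G_4(X)R
        + G_{4,X}\big[(\nabla_\mu A^\mu)^2 - \nabla_\mu A_\nu \nabla^\nu A^\mu\big], \\
    \mathcal{L}_5 &= G_5(X)G_{\mu\nu}\nabla^\mu A^\nu
        - \tfrac16 G_{5,X}\big[(\nabla_\mu A^\mu)^3
        - 3(\nabla_\mu A^\mu)\nabla_\rho A_\sigma \nabla^\sigma A^\rho
        + 2\nabla_\rho A_\sigma \nabla^\sigma A^\gamma \nabla_\gamma A^\rho\big].
    \end{aligned}
    ```

    The vector Galileons mentioned in the abstract correspond to the particular choice of the free functions $G_i(X)$ being linear in $X$.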

  15. GNSS Single Frequency, Single Epoch Reliable Attitude Determination Method with Baseline Vector Constraint.

    PubMed

    Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong

    2015-12-02

For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of the baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then the search strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal one, which ensures that the correct ambiguity candidates lie within it and allows the search to be carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Some of the vector candidates are further eliminated by a derived approximate inequality, which accelerates the search. Experimental results show that, compared to the traditional method with only a baseline length constraint, the new method can use a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. The experiments also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is moderate.
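    The baseline-constrained ambiguity search can be illustrated with a toy brute-force version: a stand-in for the LAMBDA-style ellipsoidal search, in which integer candidates are screened by a baseline-length inequality before their residuals are compared. The observation model, wavelength, and thresholds below are all assumed and far simpler than a real GNSS model:

    ```python
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(3)

    # Toy single-epoch model: carrier phases y = G @ b_true + lam * a_true + noise,
    # with geometry G, wavelength lam, baseline b_true, integer ambiguities a_true.
    lam = 0.19                                  # roughly the GPS L1 wavelength (m)
    G = rng.standard_normal((6, 3))
    b_true = np.array([0.8, 0.2, 0.1])
    L0 = np.linalg.norm(b_true)                 # prior baseline length (constraint)
    a_true = rng.integers(-2, 3, size=6)
    y = G @ b_true + lam * a_true + 1e-4 * rng.standard_normal(6)

    # Brute-force search over a small integer box (a stand-in for the gradually
    # enlarged ellipsoidal LAMBDA search described in the abstract).
    Gp = np.linalg.pinv(G)
    best_a, best_res = None, np.inf
    for cand in product(range(-2, 3), repeat=6):
        a = np.asarray(cand)
        b = Gp @ (y - lam * a)                  # least-squares baseline estimate
        if abs(np.linalg.norm(b) - L0) > 0.05:  # baseline-length screening inequality
            continue                            # reject cheaply, before residual test
        res = np.linalg.norm(y - lam * a - G @ b)
        if res < best_res:
            best_a, best_res = a, res

    print(best_a, best_res)
    ```

    The screening inequality is what accelerates the search: most candidates are rejected on a single norm comparison, and only the survivors compete on least-squares residual.
    
    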

  16. In Situ Probes of Capture and Decomposition of Chemical Warfare Agent Simulants by Zr-Based Metal Organic Frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plonka, Anna M.; Wang, Qi; Gordon, Wesley O.

Recently, Zr-based metal organic frameworks (MOFs) were shown to be among the fastest catalysts of nerve-agent hydrolysis in solution. Here, we report a detailed study of the adsorption and decomposition of a nerve-agent simulant, dimethyl methylphosphonate (DMMP), on UiO-66, UiO-67, MOF-808, and NU-1000 using synchrotron-based X-ray powder diffraction, X-ray absorption, and infrared spectroscopy, which reveals key aspects of the reaction mechanism. The diffraction measurements indicate that all four MOFs adsorb DMMP (introduced at atmospheric pressures through a flow of helium or air) within the pore space. In addition, the combination of X-ray absorption and infrared spectra suggests direct coordination of DMMP to the Zr6 cores of all MOFs, which ultimately leads to decomposition to phosphonate products. Our experimental probes into the mechanism of adsorption and decomposition of chemical warfare agent simulants on Zr-based MOFs open new opportunities in rational design of new and superior decontamination materials.

  17. In Situ Probes of Capture and Decomposition of Chemical Warfare Agent Simulants by Zr-Based Metal Organic Frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plonka, Anna M.; Wang, Qi; Gordon, Wesley O.

Zr-based metal organic frameworks (MOFs) have been recently shown to be among the fastest catalysts of nerve-agent hydrolysis in solution. We report a detailed study of the adsorption and decomposition of a nerve-agent simulant, dimethyl methylphosphonate (DMMP), on UiO-66, UiO-67, MOF-808, and NU-1000 using synchrotron-based X-ray powder diffraction, X-ray absorption, and infrared spectroscopy, which reveals key aspects of the reaction mechanism. The diffraction measurements indicate that all four MOFs adsorb DMMP (introduced at atmospheric pressures through a flow of helium or air) within the pore space. In addition, the combination of X-ray absorption and infrared spectra suggests direct coordination of DMMP to the Zr6 cores of all MOFs, which ultimately leads to decomposition to phosphonate products. These experimental probes into the mechanism of adsorption and decomposition of chemical warfare agent simulants on Zr-based MOFs open new opportunities in rational design of new and superior decontamination materials.

  18. In Situ Probes of Capture and Decomposition of Chemical Warfare Agent Simulants by Zr-Based Metal Organic Frameworks

    DOE PAGES

    Plonka, Anna M.; Wang, Qi; Gordon, Wesley O.; ...

    2016-12-30

Recently, Zr-based metal organic frameworks (MOFs) were shown to be among the fastest catalysts of nerve-agent hydrolysis in solution. Here, we report a detailed study of the adsorption and decomposition of a nerve-agent simulant, dimethyl methylphosphonate (DMMP), on UiO-66, UiO-67, MOF-808, and NU-1000 using synchrotron-based X-ray powder diffraction, X-ray absorption, and infrared spectroscopy, which reveals key aspects of the reaction mechanism. The diffraction measurements indicate that all four MOFs adsorb DMMP (introduced at atmospheric pressures through a flow of helium or air) within the pore space. In addition, the combination of X-ray absorption and infrared spectra suggests direct coordination of DMMP to the Zr6 cores of all MOFs, which ultimately leads to decomposition to phosphonate products. Our experimental probes into the mechanism of adsorption and decomposition of chemical warfare agent simulants on Zr-based MOFs open new opportunities in rational design of new and superior decontamination materials.

  19. Use of digital control theory state space formalism for feedback at SLC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Himel, T.; Hendrickson, L.; Rouse, F.

The algorithms used in the database-driven SLC fast-feedback system are based on the state space formalism of digital control theory. These are implemented as a set of matrix equations which use a Kalman filter to estimate a vector of states from a vector of measurements, and then apply a gain matrix to determine the actuator settings from the state vector. The matrices used in the calculation are derived offline using Linear Quadratic Gaussian minimization. For a given noise spectrum, this procedure minimizes the rms of the states (e.g., the position or energy of the beam). The offline program also allows simulation of the loop's response to arbitrary inputs, and calculates its frequency response. 3 refs., 3 figs.
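    The measurement-to-state-estimate-to-actuator pipeline described in this record can be sketched as follows. The matrices here are illustrative placeholders, not the SLC loop's actual LQG design; the gain matrix in particular is hand-picked rather than derived from a Linear Quadratic Gaussian minimization:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Illustrative two-state loop (e.g. beam position and trajectory slope).
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])               # we measure position only
    Q = 1e-5 * np.eye(2)                     # process noise covariance
    R = np.array([[1e-2]])                   # measurement noise covariance
    Gain = np.array([[0.5, 0.8]])            # actuator gain (stands in for LQG result)

    x = np.array([1.0, 0.1])                 # true state: an initial disturbance
    x_hat = np.zeros(2)                      # filter's state estimate
    P = np.eye(2)                            # estimate covariance

    for _ in range(100):
        # Kalman filter: predict, then correct with the new measurement.
        x_hat = F @ x_hat
        P = F @ P @ F.T + Q
        z = H @ x + rng.normal(0.0, 0.1, 1)          # noisy measurement of position
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_hat = x_hat + K @ (z - H @ x_hat)
        P = (np.eye(2) - K @ H) @ P
        # Actuator setting from the state estimate, applied through the slope.
        u = -(Gain @ x_hat)
        x = F @ x + np.array([0.0, 1.0]) * u

    print(np.linalg.norm(x))
    ```

    With a stabilizing gain the disturbance is driven down to a noise floor set by the measurement and process noise, which is the quantity an LQG design would minimize in the rms sense.
    
    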

  20. A novel double fine guide sensor design on space telescope

    NASA Astrophysics Data System (ADS)

    Zhang, Xu-xu; Yin, Da-yi

    2018-02-01

To obtain a high-precision attitude for a space telescope, a double marginal-FOV (field of view) FGS (fine guide sensor) is proposed. It is composed of two large-area APS CMOS sensors that share the same lens along the main line of sight. More star vectors can be obtained from the two sensors and used for high-precision attitude determination. To improve star-identification speed, a vector cross product formulation of the inter-star angles, suited to the small marginal FOV and different from the traditional approach, is elaborated, and parallel processing is applied to the pyramid algorithm. The star vectors from the two sensors are then fused into an attitude estimate with the traditional QUEST algorithm. Simulation results show that the system achieves high-accuracy three-axis attitude and that the scheme is feasible.
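    QUEST solves Wahba's problem for the fused star vectors. The closely related Davenport q-method, sketched below, computes the same optimal quaternion by an explicit eigendecomposition (QUEST instead finds the largest eigenvalue via a characteristic-polynomial shortcut). The star vectors here are simulated with an identity attitude, so the expected quaternion is (0, 0, 0, ±1):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def q_method(body, ref, weights):
        """Davenport q-method: optimal quaternion (x, y, z, w) for Wahba's problem."""
        B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body, ref))
        z = sum(w * np.cross(b, r) for w, b, r in zip(weights, body, ref))
        sigma = np.trace(B)
        K = np.zeros((4, 4))
        K[:3, :3] = B + B.T - sigma * np.eye(3)
        K[:3, 3] = z
        K[3, :3] = z
        K[3, 3] = sigma
        # The optimal quaternion is the eigenvector of the largest eigenvalue of K.
        vals, vecs = np.linalg.eigh(K)      # eigh returns eigenvalues in ascending order
        return vecs[:, -1]

    # Simulated guide-star unit vectors, seen identically in both frames.
    stars = rng.standard_normal((5, 3))
    stars /= np.linalg.norm(stars, axis=1, keepdims=True)
    q = q_method(stars, stars, np.ones(5) / 5)
    print(q)
    ```

    Fusing the measurements from the two sensors amounts to passing all star-vector pairs from both detectors into one such solve, with weights reflecting each sensor's measurement accuracy.
    
    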
